Based on sun photometer measurements at three stations (AWIPEV/Koldewey in Ny-Ålesund (78.923° N, 11.923° E), 1995–2008; the 35th North Pole drifting station NP-35 (84.3–85.5° N, 41.7–56.6° E), March/April 2008; Sodankylä (67.37° N, 26.65° E), 2004–2007), this thesis investigates aerosol variability in the European Arctic and its causes. The focus is on the relationship between the aerosol parameters measured at the stations (aerosol optical depth, Ångström coefficient, etc.) and aerosol transport, on both short time scales (days) and long time scales (months, years). To establish this relationship, 5-day backward trajectories are computed for the short time scales with the trajectory model PEP-Tracer at three starting altitudes (850 hPa, 700 hPa, 500 hPa) for 00, 06, 12 and 18 UTC. Using the non-hierarchical cluster method k-means, the computed backward trajectories are then grouped and assigned to specific source regions and to the measured aerosol optical depths. This assignment of aerosol optical depth to source region reveals no clear relationship between the transport of polluted air masses from Europe, Russia or Asia and increased aerosol optical depth. Nevertheless, for one specific case (March 2008) a direct link between aerosol transport and high aerosol optical depths can be demonstrated: forest fire aerosol from southwestern Russia reached the Arctic and was observed both on NP-35 and in Ny-Ålesund. In a further step, an EOF analysis is used to examine to what extent large-scale atmospheric circulation patterns are responsible for the aerosol variability in the European Arctic. As with the trajectory analysis, the connection between the atmospheric circulation and the photometer measurements at the stations is generally weak.
An exception emerges when the annual cycles of surface pressure and aerosol optical depth are considered. High aerosol optical depths occur in spring, on the one hand, when the Icelandic Low and the Siberian High steer air masses from Europe or Russia/Asia into the Arctic, and, on the other hand, when a strong high-pressure system sits over Greenland and large parts of the Arctic. The transition between spring and summer is likewise shown to be at least partly caused by the change from the stable polar high in winter and spring to an Arctic atmosphere dominated more strongly by low-pressure systems in summer. The lower aerosol concentration in summer can be partly explained by an increase in wet deposition as an aerosol sink. For Ny-Ålesund, in addition to the transport patterns, the chemical composition of the aerosol is derived from impactor measurements at the Zeppelin station on Zeppelin Mountain (474 m a.s.l.) near Ny-Ålesund. The positive correlation of aerosol optical depth with the concentrations of sulfate ions and soot is very clear; both substances enter the atmosphere largely through anthropogenic emissions. This demonstrably anthropogenic composition of the Arctic aerosol stands in contrast to the lack of a clear link to aerosol transport from industrial regions. It can only be explained by one or more transformation processes (e.g., nucleation of sulfuric acid particles) occurring during transport from the source regions (Europe, Russia).
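The trajectory grouping described above can be sketched with a minimal k-means clustering in Python. The trajectory data, the feature layout (flattened latitude/longitude samples) and the cluster count are illustrative assumptions, not the actual PEP-Tracer output:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Minimal k-means: iteratively assign points to the nearest centre
    and move each centre to the mean of its assigned points."""
    centres = X[:: max(1, len(X) // k)][:k].copy()   # crude, deterministic seeding
    for _ in range(n_iter):
        # distance of every trajectory to every centre
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centres = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels, centres

# Hypothetical 5-day back-trajectories sampled 6-hourly: 20 (lat, lon)
# pairs flattened to a 40-dimensional feature vector per trajectory.
rng = np.random.default_rng(1)
midlat = rng.normal(60.0, 2.0, size=(30, 40))   # synthetic mid-latitude source
arctic = rng.normal(82.0, 2.0, size=(30, 40))   # synthetic Arctic-internal source
trajectories = np.vstack([midlat, arctic])
labels, centres = kmeans(trajectories, k=2)
```

Each cluster's mean trajectory then points to a candidate source region, which can be compared against the aerosol optical depths measured on the cluster's member days.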
Introduction: The available empirical data show that, while there is broad agreement in the field about the effect of water immersion on the resting organism (metabolic and endocrine), divergent conclusions have been drawn for immersion combined with exercise (hemodynamic, metabolic and endocrine). How does physical strain differ on land and in water? Do the generally accepted recommendations for controlling desired training and load effects on land also apply to aquatic forms of exercise and training? Results and discussion: Heart rate, systolic blood pressure and oxygen consumption were similar on land and in water at rest (baseline), at the anaerobic threshold and at maximal exertion. The respiratory quotient was slightly reduced when the subjects exercised in water. Glucose and lactate concentrations were lowered, whereas the free fatty acid concentration increased with exercise in water. Water immersion lowered adrenaline and noradrenaline concentrations and increased ANP production during exercise. Exercise-induced increases in endocrine parameters (the catecholamines adrenaline and noradrenaline) are thus less pronounced in water than on land. With regard to metabolic regulation, ANP was observed to play a role in the regulation of fat metabolism. The results suggest that exercise in water elicits, above all, a specific humoral and metabolic response of the organism. Immersion and exercise effects appear to be partly opposing stimuli. Further experimental studies are therefore needed to clarify the regulatory mechanisms by which the organism compensates for the increased venous return during immersion without and, above all, with exercise.
Given the small differences in the body's hemodynamic response to comparable physical exercise on land versus in water, the generally accepted recommendations for controlling desired training and load effects on land can also serve as a guide for aquatic forms of exercise and training.
Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations (e.g., the Sarbanes-Oxley Act of 2002), internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, it leads to loss of competitive advantage and thus loss of market share. Unlike the classical, domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements may appear due to changes in laws or the adoption of new policies. Compliance requirements are imposed or enforced by different entities that pursue different objectives with these requirements. Finally, compliance requirements may affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks into tools. Rather, a repeatable process of modeling compliance rules and checking them automatically against business processes is needed. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped to a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they may stem from divergent sources. The thesis also contributes a way to check compliance requirements against process models automatically using model checking. We show that extra domain knowledge, beyond what is expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
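As an illustration of the pattern idea, a control-flow compliance rule such as the precedence pattern ("a payment may only occur after an approval") corresponds to a temporal logic formula and can be evaluated over execution traces. The activity names and rules below are invented for illustration and are not taken from the thesis, which checks such formulas against whole process models rather than single traces:

```python
def satisfies_precedence(trace, before, act):
    """Precedence pattern: `act` must not occur before the first `before`
    (a finite-trace reading of the LTL formula  not-act W before)."""
    seen_before = False
    for activity in trace:
        if activity == before:
            seen_before = True
        elif activity == act and not seen_before:
            return False
    return True

def satisfies_response(trace, trigger, target):
    """Response pattern: every `trigger` is eventually followed by `target`
    (finite-trace reading of  G(trigger -> F target))."""
    return all(target in trace[i + 1:]
               for i, activity in enumerate(trace) if activity == trigger)

compliant     = ["receive_order", "approve", "pay", "ship"]
non_compliant = ["receive_order", "pay", "approve", "ship"]
```

A design-time checker in the spirit of the thesis would verify the corresponding formulas against all possible runs of the process model with a model checker, instead of enumerating individual traces as done here.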
The widespread use of products containing volatile organic compounds (VOCs) has led to general human exposure to these chemicals in workplaces and homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms behind the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) had not been investigated. This information is indispensable, however, when trying to ascertain a mechanism of VOC action at the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 was used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied to the analysis of inflammatory responses to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line at this scale, which may foster future work with this frequently used cellular model. The partitioning of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phases of the exposure model was analyzed by gas chromatography; the VOC partitioning obtained was in agreement with available literature data. Subsequently, the adapted in vitro system was successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, cell toxicity was assessed in order to ensure that most of the concentrations used in the subsequent proteomic approach were not cytotoxic.
Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells were detected following styrene exposure. All affected proteins were identified using mass spectrometry and their main cellular functions were assigned. Validation experiments at the protein and transcript levels confirmed the results of the 2-DE experiments. From the results, two main cellular pathways induced by styrene were identified: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8-oxide (SO) were identified. In particular, the SO adducts observed at both reactive centers of thioredoxin reductase 1, a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach was carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells were detected following exposure to subtoxic concentrations of CB and 1,2-DCB. As in the styrene experiment, the proteins were identified by mass spectrometry, their main cellular functions were assigned, and the results indicated two main pathways affected in the presence of chlorinated benzenes: cell death signaling and the oxidative stress response. The strong induction of pro-apoptotic signaling was confirmed for both treatments by detection of the cleavage of caspase 3.
Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs were investigated (Chapter 6). A similar fraction (4.6-6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Notably, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Large-scale volcanic deformation recently detected by radar interferometry (InSAR) provides new information, and thus new scientific challenges, for understanding volcano-tectonic activity and magmatic systems. The destabilization of such a system at depth noticeably affects the surrounding environment through magma injection, ground displacement and volcanic eruptions. To determine the spatiotemporal evolution of the Lazufre volcanic area in the central Andes, we combined short-term ground displacement acquired by InSAR with long-term geological observations. Ground displacement was first detected using InSAR in 1997. By 2008, this displacement affected 1800 km² of the surface, an area comparable in size to the deformation observed at caldera systems. The original displacement was followed in 2000 by a second, small-scale, neighbouring deformation on the Lastarria volcano. We performed a detailed analysis of the volcanic structures at Lazufre and found relationships with the deformation observed by InSAR. We infer that both observations are likely the surface expression of a long-lived magmatic system evolving at depth. It is not yet clear whether Lazufre will trigger larger unrest or volcanic eruptions; however, the second deformation detected at Lastarria and the clear increase of the large-scale deformation rate make this an area of particular interest for closer continuous monitoring.
This work presents new concepts for fast switching elements based on the principles of photonics. These elements are built on waveguides operating in the visible and infrared ranges. As materials for the fabrication of the waveguides, transparent polymers doped with dye molecules possessing second-order nonlinear optical properties are proposed. The work shows how nonlinear optical processes in such structures can be controlled by electro-optical and opto-optical circuit signals. The complete fabrication cycle of several types of integrated photonic elements is considered. A theoretical analysis of high-intensity beam propagation in media with second-order optical nonlinearity is performed, and quantitative estimates are made of the conditions necessary for second-order nonlinear optical phenomena to occur, taking into account the properties of the materials used. The various stages of manufacturing the basic structure of integrated photonics, the planar waveguide, are described. Using the finite element method, the structure of the electromagnetic field inside the waveguide was analyzed for different modes. A separate part of the work deals with the creation of composite organic materials with high optical nonlinearity. Using the methods of quantum chemistry, the dependence of the nonlinear properties of dye molecules on their structure was investigated in detail. In addition, various methods of inducing optical nonlinearity in dye-doped polymer films are discussed. For the first time, this work proposes spatially modulating the nonlinear properties of a waveguide according to the Fibonacci law, which allows several different nonlinear optical processes to be exploited simultaneously. The final part of the work describes various designs of integrated optical modulators and switches constructed from organic nonlinear optical waveguides.
A practical design of an optical modulator based on a Mach-Zehnder interferometer, fabricated by photolithography on a polymer film, is presented.
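The Fibonacci modulation mentioned above can be made concrete with the standard substitution rule that generates the Fibonacci word; here the two letters are merely placeholders for waveguide segments with two different values of the second-order susceptibility:

```python
def fibonacci_word(n):
    """n-th Fibonacci word via the substitution A -> AB, B -> A,
    i.e., repeated concatenation of the two previous words."""
    word, prev = "A", "B"
    for _ in range(n):
        word, prev = word + prev, word
    return word

# Segment sequence for a quasi-periodically modulated waveguide:
segments = fibonacci_word(6)   # 'ABAABABAABAAB...', 21 segments
```

Because the Fibonacci word is quasi-periodic, its Fourier spectrum contains a dense set of reciprocal vectors, which is what allows several quasi-phase-matching conditions, and hence several nonlinear optical processes, to be satisfied simultaneously.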
The parallel seismic method serves primarily for the retrospective length measurement of foundation piles and similar deep foundation elements. Such a measurement becomes necessary, for example, when a building is to be strengthened, raised or put to a new use but no documentation of the foundations survives. The measuring principle of this method, known for several decades, is relatively simple: an impact on the pile head, usually a hammer blow, generates a stress wave that travels down the pile, radiating energy into the soil. The radiated waves are recorded by sensors in a borehole drilled parallel to the pile. From the travel times, the material-specific wave velocities in the pile and in the soil as well as the pile length can be determined. Until now, a very simple data evaluation procedure has mostly been used, which systematically overestimates the length of the piles. In this dissertation, the mathematical and physical foundations were examined and the wave propagation in pile and soil was studied in detail by computer simulation. Further simulations clarified the influence of various measurement and structural parameters, for example soil layering or defects in the pile. It could thus be established in which cases the parallel seismic method yields good results (e.g., foundations in sand or clay) and where it reaches its limits (e.g., foundations in rock). Based on these results, a new mathematical formalism for evaluating the travel times was developed. Combined with a data inversion procedure, i.e., the automatic fitting of the unknowns in the equations to the measured data, much more accurate pile lengths can be determined than with all previously published methods. Moreover, relatively large distances between borehole and pile (2–3 m) can now be used.
The method was tested extensively on simulated data. The measuring method and the new evaluation procedure were then tested in a series of practical applications, almost always successfully. Only in one case of complicated foundation geometry combined with very high accuracy requirements was it clear from simulations alone that an application would not be worthwhile. On the other hand, it turned out that the length of pile walls and sheet pile walls can also be determined. The parallel seismic method is the only available method for determining foundation lengths that works in most soil types, on metallic and non-metallic foundations alike, and requires no calibration. It is now much more widely applicable and delivers far more accurate results. The simulations also revealed potential for extensions, for example the use of special sensors that can receive and distinguish additional wave types.
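The travel-time geometry behind the method can be sketched numerically: a first arrival at receiver depth z combines a leg down the pile with the fastest straight leg through the soil, and the change of apparent slope near the pile tip is what carries the length information. All numbers below (pile length, borehole distance, velocities) are illustrative assumptions, and the model is a strong simplification of the formalism developed in the thesis:

```python
import numpy as np

L = 10.0                          # assumed pile length [m]
d = 2.0                           # pile-to-borehole distance [m]
v_pile, v_soil = 4000.0, 1500.0   # P-wave velocities [m/s], illustrative

def first_arrival(z, n=2001):
    """First-arrival time at receiver depth z: the wave runs down the pile
    to some exit depth and then takes a straight ray through the soil;
    the minimum over all exit depths gives the first arrival."""
    exit_depths = np.linspace(0.0, L, n)
    t = exit_depths / v_pile + np.hypot(d, z - exit_depths) / v_soil
    return t.min()

receiver_depths = np.linspace(0.5, 20.0, 40)
times = np.array([first_arrival(z) for z in receiver_depths])

# Apparent slowness (slope of the travel-time curve): close to 1/v_pile
# above the pile tip, bending towards 1/v_soil below it -- the kink
# locates the tip and hence the pile length.
slopes = np.gradient(times, receiver_depths)
```

The naive evaluation criticized in the thesis fits two straight lines to such a curve and intersects them, which overestimates L because the lower branch only approaches the slope 1/v_soil asymptotically; the inversion scheme developed in the thesis fits a full forward model to the travel times instead.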
The subject of this study is the evaluation of a municipal sports project. The research arose from the growing recognition that it is no longer enough to develop and carry out municipal or social projects; it is increasingly important to evaluate the project work in order to assess its influence on municipal, social and personal development and, as a consequence, to optimize its implementation. The various steps in defining the theoretical framework, analyzing the data and formulating the evaluative recommendations were undertaken with the intention of serving as a model, and thus of setting standards for future evaluation projects. The basic idea of the municipal sports project "Straßenfußball für Toleranz" (street football for tolerance) is innovative: girls and boys take possession of public space by playing football together. They play without a referee and according to special rules. The project is explicitly aimed at socially disadvantaged adolescents and includes boys and girls equally.
Foraging in space and time (2010)
All animals are adapted to the environmental conditions of the habitat they choose to live in. The aim of this PhD project was to show which behavioral strategies are expressed as mechanisms to cope with the constraints that contribute to the natural selection pressure acting on individuals. For this purpose, small mammals were exposed to different levels and types of predation risk while actively foraging. Individuals were either exposed to different predator types (airborne or ground), or combinations of both, or to indirect predators (nest predators). Risk was assumed to be distributed homogeneously, so changing habitat or temporal adaptations were not regarded as potential options. The results show that wild-caught voles have strategic answers to this homogeneously distributed risk, which is perceived via tactile, olfactory or acoustic cues. Thus, they do not have to know an absolute quality (e.g., the food provisioning and risk levels of all possible habitats); instead, they can adapt their behavior to the actual circumstances. Deriving uniform risk levels from cues and adjusting activity to the perceived risk is a way to deal with predators of the same size or with unforeseeable attack rates. The experiments showed that as long as there are no safe places or times, it is best to reduce activity and behave as inconspicuously as possible, as long as the costs of missed opportunities do not exceed the benefits of a higher survival probability. Tests showed that these costs apparently grow faster for males than for females, especially in times of inactivity. This is supported by the strong predatory pressure on the most active groups of rodents (young males, sexually active individuals, dispersers), which leads to extremely female-biased operational sex ratios in natural populations. Other groups of animals, such as those with parental duties like nest guarding, have to deal with the actual risk in their habitat as well.
Strategies against indirect predation pressure were tested using bank vole mothers confronted with a nest predator (Sorex araneus) that posed no actual threat to themselves but did to their young. The mothers reduced travelling and concentrated their effort in the presence of shrews, independent of the different nutritional provisioning caused by seasonally varying resource levels. Additionally, they exhibited nest-guarding strategies, avoiding foraging in the vicinity of the nest site in order to reduce conspicuous scent marks. Repeating the experiment in summer and autumn showed that changing environmental constraints can have a severe impact on the results of outdoor studies; in our case, changing resource levels changed the type of interaction between the two species. The experiments show that it is important to analyze decision making and optimality models at the individual level, and, when that is not possible (for instance because of the constraints of field work), groups of animals should be classified using the least common denominator that can be identified (such as sex, age, origin or kinship). This controls for the effects of sex, stage of life history, and the individual's reproductive and nutritional status on decision making, and narrows the wide behavioral variability associated with the complex term of optimality.
Rising energy prices can lead to a prolonged increase in the cost of freight transport. What effects do rising transport costs have on the development of urban systems? Such an increase in transport costs occurred in the Russian Federation after the price liberalization of 1992 in real terms, i.e., relative to the prices of other groups of goods. At the same time, the population statistics of the Russian Federation provide data with which hypotheses on the development of urban systems under the influence of rising transport costs can be tested. These data are analyzed comprehensively in this study. The theoretical background is provided by a model of an urban system with a linear spatial structure within the framework of the New Economic Geography. This creates a tool that can also be applied to extensive urban systems with a pronounced band structure. The detailed exposition of the underlying theoretical approach, given here for the first time, is intended as a supplement to the standard textbooks of spatial economics. The results of the empirical investigation confirm the model's prediction that, in large countries or regions resembling the assumed spatial structure, an increase in transport costs promotes concentration in the centers, while the peripheral regions become increasingly decoupled.
Temporal gravimeter observations, used in geodesy and geophysics to study variations of the Earth's gravity field, are influenced by local water storage changes (WSC), which from this perspective add noise to the gravimeter records. At the same time, the part of the gravity signal caused by WSC may provide substantial information for hydrologists. Water storage is the fundamental state variable of hydrological systems, but comprehensive data on total WSC are practically inaccessible and their quantification is associated with a high level of uncertainty at the field scale. This study investigates the relationship between temporal gravity measurements and WSC for the superconducting gravimeter (SG) of the Geodetic Observatory Wettzell, Germany, in order to reduce the hydrological interfering signal in temporal gravity measurements and to explore the value of such measurements for hydrology. A 4D forward model with a spatially nested discretization domain was developed to simulate and calculate the local hydrological effect on the temporal gravity observations. An intensive measurement system was installed at the Geodetic Observatory Wettzell, and WSC were measured in all relevant storage components, namely groundwater, saprolite, soil, topsoil and snow. The monitoring system also comprised a suction-controlled, weighable, monolith-filled lysimeter, allowing a first-ever direct comparison of a lysimeter and a gravimeter. Lysimeter data were used to estimate WSC at the field scale in combination with complementary observations and a 1D hydrological model. Total local WSC were derived, their uncertainties were assessed, and the hydrological gravity response was calculated from the WSC. A simple conceptual hydrological model was calibrated and evaluated against the records of the superconducting gravimeter and soil moisture and groundwater time series.
The model was evaluated by a split-sample test and validated against independently estimated WSC from the lysimeter-based approach. A simulation of the hydrological gravity effect showed that a WSC of one meter of water distributed along the topography causes a gravity response of 52 µGal, whereas on flat terrain, as generally assumed in geodesy, the same water mass variation causes a gravity change of only 42 µGal (Bouguer approximation). The radius of influence of local water storage variations can be limited to 1000 m, and 50% to 80% of the local hydrological gravity signal is generated within a radius of 50 m around the gravimeter. At the Geodetic Observatory Wettzell, WSC in the snow pack, topsoil, unsaturated saprolite and fractured aquifer are all important terms of the local water budget. With the exception of snow, all storage components produce gravity responses of the same order of magnitude and are therefore relevant for gravity observations. The comparison of the total hydrological gravity response with the gravity residuals obtained from the SG showed similarities in both short-term and seasonal dynamics. However, the results also demonstrated the limitations of estimating total local WSC from hydrological point measurements. The results of the lysimeter-based approach showed that gravity residuals are caused to a larger extent by local WSC than previously estimated. A comparison with other methods used in the past to correct temporal gravity observations for the local hydrological influence showed that the lysimeter measurements significantly improved the independent estimation of WSC and thus provided a better way of estimating the local hydrological gravity effect. In the context of hydrological noise reduction, the installation of a lysimeter in combination with complementary hydrological measurements is recommended at sites where temporal gravity observations are used for geophysical studies beyond local hydrology.
From the hydrological point of view, using gravimeter data as a calibration constraint improved the model results compared to hydrological point measurements. Thanks to their capacity to integrate over different storage components and a larger area, gravimeters provide generalized information on total WSC at the field scale. Due to this integrative nature, gravity data must be interpreted with great care in hydrological studies. Nevertheless, gravimeters can serve as a novel measurement instrument for hydrology, and the deployment of gravimeters specifically designed to address open research questions in hydrology is recommended.
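The flat-terrain figure of about 42 µGal per meter of water quoted above follows directly from the Bouguer plate approximation, Δg = 2πGρh, for an infinite horizontal slab of water:

```python
from math import pi

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
rho = 1000.0       # density of water [kg m^-3]
h = 1.0            # water layer thickness [m]

delta_g = 2 * pi * G * rho * h      # attraction of an infinite slab [m s^-2]
delta_g_ugal = delta_g * 1e8        # 1 m s^-2 = 1e8 microGal; ~41.9 microGal
```

The larger value of 52 µGal found in the thesis reflects the surrounding topography, which distributes the water mass differently from an idealized flat slab.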
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons, strongly localized solitary waves with tails decaying faster than exponentially, exist and play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which find various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices; here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, describing, for example, coupled optical waveguides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method for solving the traveling wave equation, which yields a quasi-exact solution (up to numerical errors): the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, in which the lattice is described by a continuous medium. Here, compactons are found analytically and are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by direct numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability in collisions. We show that collisions are not exactly elastic: a small part of the energy remains at the location of the collision. In finite lattices, this remaining part triggers a multiple scattering process resulting in a chaotic state.
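A minimal numerical experiment illustrates the granular-chain setting: with Hertzian contacts and no precompression the inter-particle force is purely nonlinear, so a struck chain supports no linear sound and instead emits a strongly localized solitary wave. The parameters (normalized units) and the simple symplectic-Euler integrator below are illustrative assumptions, not the solver used in the thesis:

```python
import numpy as np

N, dt, t_end = 200, 1e-3, 40.0   # beads, time step, simulated time
u = np.zeros(N)                  # displacements
v = np.zeros(N)                  # velocities
v[0] = 1.0                       # strike the first bead

def accelerations(u):
    """Hertzian contact forces (overlap)_+^{3/2}; zero when beads separate."""
    overlap = np.maximum(u[:-1] - u[1:], 0.0) ** 1.5
    a = np.zeros_like(u)
    a[:-1] -= overlap            # reaction pushing the left bead back
    a[1:] += overlap             # contact pushing the right bead forward
    return a

for _ in range(int(t_end / dt)):     # symplectic Euler time stepping
    v += dt * accelerations(u)
    u += dt * v

kinetic = 0.5 * v ** 2
overlap = np.maximum(u[:-1] - u[1:], 0.0)
energy = kinetic.sum() + 0.4 * (overlap ** 2.5).sum()   # (2/5) * overlap^{5/2}
peak = kinetic.argmax()          # position of the travelling pulse
```

Ahead of the propagating front the chain remains essentially at rest, with only superexponentially small precursors; this sharp localization is the hallmark of the compacton-like waves studied in the thesis.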
The health-promoting properties of green tea are widely accepted. Numerous beneficial effects are attributed to the tea catechins, in particular epigallocatechin-3-gallate (EGCG), e.g. antioxidant, anticarcinogenic and anti-inflammatory activity as well as blood-pressure and cholesterol lowering. The mechanisms leading to the reduction in body and fat mass described in animal studies are not sufficiently understood. The aim of this work was to investigate the short- and medium-term effects of TEAVIGO® application (at least 94 % EGCG) in a mouse model with respect to energy and fat metabolism and the expression of the genes involved in important organs and tissues. In several animal experiments, male C57BL/6 mice were fed a high-fat diet (HFD) with and without supplementation (oral or dietary) of the decaffeinated green tea extract TEAVIGO® at different doses. Both short- and medium-term effects of EGCG on energy balance (including indirect animal calorimetry) and body composition (NMR) as well as on exogenous substrate oxidation (stable-isotope techniques: breath tests, incorporation of naturally 13C-enriched triglycerides from corn oil into various organs and tissues) and gene expression (quantitative real-time PCR) were examined. The form and duration of application produced different effects. Mice receiving dietary supplementation showed a reduced body fat mass after a short time, which upon continued administration also led to a reduction in body mass. Both forms of application resulted, independently of the duration of the intervention, in increased energy excretion, whereas food and energy intake were not affected by EGCG. The energy loss was accompanied by increased fat and nitrogen excretion, the cause of which may be the interaction with and inhibition of digestive enzymes described in the literature.
Particularly under postprandial conditions, EGCG-treated mice showed reduced triglyceride and glycogen contents in the liver, pointing to a limited intestinal absorption of nutrients. Transcript analyses revealed a reduced expression of fatty-acid transporters in the intestine, whereas the expression of glucose transporters was increased by EGCG. Furthermore, after switching from a standard diet to a corn-oil-based high-fat diet, EGCG reduced the incorporation of naturally 13C-enriched triglycerides into various organs and tissues, in particular liver, visceral and brown adipose tissue, and skeletal muscle. Analysis of the 13C enrichment in the breath of the mice together with the energy-expenditure measurements showed an increased fat oxidation after short application, which switched to an increased carbohydrate oxidation in the further course of the intervention. Moreover, oral application of EGCG during simultaneous feeding of a high-fat diet was accompanied by macroscopic and microscopic degenerative changes of the liver. These effects were not observed after dietary supplementation of the high-fat diet with EGCG. In summary, the results show that the loss of body weight and adipose tissue caused by dietary EGCG can be explained by a reduced digestibility of the food. This led to various short- and medium-term changes in fat distribution and fat metabolism.
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that, after intercalation of Au, graphene on Ni(111) is spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented.
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex-core polarization, a process which is beyond the limit of detection in the present experiments.
Fluorescence-spectroscopic methods have proven particularly valuable for studying processes in biological systems at the molecular level. The ability to observe single molecules has led to considerable progress in the understanding of elementary biochemical processes. One of the best-known single-molecule techniques is fluorescence correlation spectroscopy (FCS), which allows intramolecular and diffusion-controlled processes to be studied on time scales from µs to ms. By using so-called fluorescent probes, information about their molecular microenvironment can be obtained. Confocal microscopy and single-molecule spectroscopy in particular require fluorescent dyes with high photostability and high fluorescence quantum yield. Because of their high fluorescence quantum yield and the possibility of designing "tailor-made" dyes with absorption and fluorescence across a broad spectral range, cyanine dyes are of special interest for bioanalytical applications. As fluorescent labels, these dyes are used in particular in clinical diagnostics and the life sciences. The dyes DY-635 and DY-647 used in this work are two typical representatives of this dye class. Through modification, the dyes can be bound covalently to biologically relevant molecules. Owing to their absorption maxima above 630 nm, they are employed especially in bioanalytics. In the present work, the spectroscopic properties of the cyanine dyes DY-635 and DY-647 were investigated in biomimetic and biological model systems. In addition to absorption spectroscopy, fluorescence-spectroscopic methods in particular were used for characterization.
These include time-correlated single-photon counting to determine the fluorescence decay behaviour, fluorescence correlation spectroscopy (FCS) to observe diffusion and photophysical deactivation processes, and time-resolved fluorescence anisotropy to study the rotational dynamics and mobility of the dyes in the respective model system. The biotin-streptavidin system was used as a model for studying protein-ligand interactions, since its binding mechanism is largely understood. Upon binding of the dyes to streptavidin, a considerable change in the absorption and fluorescence properties was observed. These spectral changes are assumed to be caused by the interaction of neighbouring dye molecules bound to one streptavidin tetramer and the formation of H-dimers. For the biotin-streptavidin system it is known that a conformational change occurs during binding of the ligand (biotin) to the protein. Time-resolved fluorescence anisotropy measurements in this work showed that these structural changes lead to a strong restriction of the mobility of the dye DY-635B. If a mixture of free and streptavidin-bound dye is present, the anisotropy decay curves cannot be fitted with an exponential model. It could be shown in this work that, in this case, evaluation with the associative anisotropy model is possible, which allows the contributions from the two different microenvironments to be distinguished. As the second model system of this work, micelles of the non-ionic surfactant Tween-20 were used. Micelles are one of the simplest systems for mimicking the microenvironment of a biological membrane. When the dyes are incorporated into the micelles, the micelle size does not change.
The diffusion coefficients determined for the micelle-incorporated dyes therefore reflect the translational motion of the Tween-20 micelles. The mobility of the dyes within the Tween-20 micelles was studied by time-resolved fluorescence anisotropy measurements. In addition to the "wobbling" motion described by the wobble-in-a-cone model, the lateral diffusion of the dyes along the micelle surface is also described.
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectrics. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with the aim of ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature cannot, however, be much higher than 60 °C. Recently developed polyethylene-terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better thermal stabilities of d33, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as the foaming agent because of its good solubility in several polymers. Poly(ethylene naphthalate) (PEN) is a polyester with slightly better properties than PET. A “voiding + inflation + stretching” process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (Gas-Diffusion Expansion or GDE) is applied in order to adjust the void dimensions.
Additional biaxial stretching decreases the void heights, since lens-shaped voids are known to lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in a higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1 − 12 MPa, anti-resonance frequencies of 0.2 − 0.8 MHz, and electromechanical coupling factors of 0.016 − 0.069. As expected, PEN foams show better thermal stability than PP and PET. Samples charged at room temperature can be utilized up to 80 − 100 °C. Annealing after charging, or charging at elevated temperatures, may improve the thermal stability. Samples charged at suitable elevated temperatures show working temperatures as high as 110 − 120 °C. Acoustic measurements at frequencies of 2 Hz − 20 kHz show that PEN foams are well suited for applications in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding. Piezoelectric d33 coefficients up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable for long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e.
on size, shape, and distribution of the voids. On the other hand, the controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-defined uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stabilities than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110 − 120 °C. Acoustic measurements at frequencies of 200 Hz − 20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
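The resonance quantities quoted in these abstracts are mutually consistent, which can be checked with the standard thickness-mode relations f_a ≈ √(c33/ρ)/(2t) for the anti-resonance frequency and k ≈ d33·√(c33/ε33) for the coupling factor. In the sketch below, the foam density, film thickness and effective permittivity are assumed, illustrative values, not numbers from the theses:

```python
import math

# Illustrative parameters for a PEN foam (assumed, except c33,
# which lies in the 1-12 MPa range reported above):
c33 = 3.0e6        # elastic modulus, Pa
rho = 550.0        # foam density, kg/m^3 (assumed)
t = 70e-6          # film thickness, m (assumed)

# Thickness-mode anti-resonance frequency of a free-standing film.
f_a = math.sqrt(c33 / rho) / (2 * t)

# Coupling factor of the FEP layer system: k = d33 * sqrt(c33 / eps33).
d33 = 350e-12                  # C/N (from the abstract)
c33_fep = 0.3e6                # Pa (from the abstract)
eps33 = 1.1 * 8.854e-12        # effective permittivity, F/m (assumed)
k = d33 * math.sqrt(c33_fep / eps33)

print(f"f_a = {f_a/1e6:.2f} MHz, k = {k:.3f}")
```

With these assumptions, f_a falls inside the 0.2 − 0.8 MHz window and k comes out close to the 0.05 coupling factor quoted for the FEP layer systems.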
As the interface between the local and state levels, the German counties (Landkreise) have been able to preserve and extend their position in Germany's political system. A need for optimization has arisen from socio-economic, technological and demographic change as well as from the public finance crisis. Models for county territorial reform and the expansion of inter-municipal task cooperation dominate the reform debate. Besides the constitutional requirements and the acceptance of the reform among those affected, the success of a reform depends essentially on the quality of the implementation strategy. The active involvement of the staff in the reform process is of particular importance here.
Investigation of PEG-based thermo-responsive polymer surfaces for controlling cell adhesion
(2010)
Thanks to continuous development, modern methods for single-cell analysis are becoming ever more sensitive. At the same time, however, the demands on the sample material increase. Many preparation protocols for adherent cells involve enzymatic cleavage of surface proteins to allow detachment from the cell-culture substrate. Methods such as the patch-clamp technique or flow cytometry based on labelling extracellular domains of membrane proteins can then only be used to a limited extent. The establishment of new cell-detachment methods is therefore urgently needed. In the present work, PEG-based thermo-responsive surfaces are used successfully for cell culture for the first time. Non-destructive detachment of various cell lines from the surfaces is achieved by lowering the temperature. The functionality of the surfaces is optimized by varying the polymer structure and the concentration of the coating solution, by coating the surfaces with a cell-adhesion-promoting protein (fibronectin) and by adsorption of cell-adhesion-mediating peptides (RGD). To examine the detachment process in more detail, the direct cell contact with thermo-responsive surfaces is visualized here for the first time by surface-sensitive microscopy (TIRAF). This technique enables the exact quantification and analysis of the reduction of the cell-adhesion area during cooling. Depending on the cell line, differences in cell behaviour during detachment are observed: cells such as a breast-cancer cell line and an ovary cell line, which are known to interact more strongly with their environment, increase the distance between cell membrane and surface over the observation period but hardly reduce their cell-substrate contact area. Mouse fibroblasts, in contrast, drastically reduce their cell-adhesion area.
The detachment process is presumably actively controlled by the cells. This assumption is supported by two observations: first, the reduction of the cell-adhesion area is delayed when cell metabolism is restricted by lowering the temperature to 4 °C; second, the cells leave traces that remain on the surfaces after detachment. By combining TIRAF and TIRF microscopy, the cell-adhesion area and the actin structure are observed simultaneously. Linking the two methods offers a new way of correlating intracellular processes with cell detachment from thermo-responsive surfaces.
‘Heterosis’ is a term used in genetics and breeding referring to hybrid vigour, i.e. the superiority of hybrids over their parents in traits such as size, growth rate, biomass, fertility, yield, nutrient content, disease resistance or tolerance to biotic and abiotic stress. Parental plants, two different inbred (pure) lines carrying the desired traits, are crossed to obtain hybrids. Maximum heterosis is observed in the first generation (F1) of crosses. Heterosis has been utilised in plant and animal breeding programmes for at least 90 years: by the end of the 20th century, 65% of worldwide maize production was hybrid-based. Generally, it is believed that an understanding of the molecular basis of heterosis will allow the creation of new superior genotypes which could either be used directly as F1 hybrids or form the basis for future breeding selection programmes. Two selected accessions of the model plant Arabidopsis thaliana (thale cress) were crossed to obtain hybrids. These typically exhibited a 60-80% increase in biomass compared to the average weight of both parents. This PhD project focused on investigating the role of selected regulatory genes given their potentially key involvement in heterosis. In the first part of the project, the most appropriate developmental stage for this heterosis study was determined by metabolite measurements and growth observations in parents and hybrids. At the selected stage, around 60 candidate regulatory genes (i.e. genes differentially expressed in hybrids compared to parents) were identified. The majority of these were transcription factors, genes that coordinate the expression of other genes. Subsequent expression analyses of the candidate genes in biomass-heterotic hybrids of other Arabidopsis accessions revealed differential expression in a subset of genes, highlighting their relevance for heterosis.
Moreover, a fraction of the candidate regulatory genes were found within DNA regions closely linked to the genes that underlie biomass or growth heterosis. Additional analyses to validate selected candidate regulatory genes appeared insufficient to establish their role in heterosis. This uncovered a need for novel approaches, as discussed in the thesis. Taken together, the work provided an insight into the molecular mechanisms underlying heterosis. Although studies on heterosis date back more than one hundred years, this project, like many others, revealed that further investigations will be needed to unravel this phenomenon.
Pectate lyase (Pel-15) from the alkaliphilic soil bacterium Bacillus spec. KSM-P15 is, with 197 amino acids, one of the smallest known β-3-solenoid proteins. It cleaves polygalacturonic acid derivatives in a Ca2+-dependent β-elimination process. As in all proteins of this enzyme family, the polypeptide chain of Pel-15 is wound into a single-stranded, right-handed, parallel β-helix. In this structural motif, each turn contains three β-strands, each connected by flexible loop regions. In Pel-15, a total of eight turns stack on top of each other and form flat, parallel β-sheets along the helix axis. Within these β-sheets there is an extended network of hydrogen bonds that shields the hydrophobic core inside the β-helix from the surrounding solvent. The special capping structures at both ends of the β-helix that are typically formed by other members of this structural class are not observed in Pel-15. Instead, the terminal regions of the β-helix are stabilized by salt bridges and hydrophobic side-chain contacts. In the present dissertation, the pectate lyase Pel-15 was characterized with respect to its folding equilibrium, its enzymatic activity and the kinetics of its structure formation. Destabilizing mutations were introduced into an evolutionarily conserved helix turn, and their effects were analysed by spectroscopic methods. The results show that, in the presence of the denaturant guanidinium hydrochloride, Pel-15 populates a hyperfluorescent equilibrium state (HF) which, according to measurements of folding and unfolding kinetics, represents a conformational ensemble of the states HFslow and HFfast. These HF states are separated from each other by a high activation barrier.
In refolding experiments, only about 80 % of the folding molecules populate the intermediate state HFslow, which converts to HFfast with a time constant of about 100 s. The denaturant dependence of this reaction is very small, suggesting a trans/cis prolyl isomerization as the rate-limiting step. The existence of a cis peptide bond in the native structure makes it necessary to regard the denatured state as an ensemble of kinetically separated conformations (DSE), populated by the species Ufast and Uslow. According to the "minimal model of Pel-15 folding" established in this work, the HF species (HFslow, HFfast) form a thermodynamic cycle with the conformations of the DSE. The model places HFfast and the native conformation N on the "native side" of the activation barrier, accounting for the fact that equilibration between these species is too fast to be captured by manual techniques. Owing to the high-affinity binding of Ca2+ (Kd = 10 µM), the folding equilibrium is shifted so far towards the native state even in the presence of 1 mM CaCl2 that HFfast is no longer detectable. Contrary to initial assumptions, a local, evolutionarily conserved disulfide bridge in the centre of the β-helix has an important stabilizing function. The disulfide bridge is located in a short loop region of the β-helix close to the active site. Although replacing it with the residues Val and Ala reduces the free energy of stabilization of the protein by about 10 kJ/mol, the structure in the region of the mutation site shows no serious change. The catalytically relevant Ca2+ binding affinity also remains unaffected; nevertheless, enzyme activity assays for the VA mutants indicate a reduction of the enzymatic activity by almost 50 %.
According to the present results, the evolutionarily conserved helix turn in general, and the disulfide bridge it contains in particular, must therefore play a central role both for the structure of the catalytic centre and for the structure formation of the β-helix during the folding reaction. In several respects, the results of this work echo folding properties described for other β-helix proteins. Above all, however, they predestine Pel-15 as a new β-helical model protein. Because of its simple topology, its low number of turns and its high thermodynamic stability, Pel-15 is very well suited for studying the determinants of stability and structure formation of the parallel β-helix motif at a resolution that has not been available so far owing to the complexity of existing β-helical model systems.
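The reported shift of the folding equilibrium at 1 mM CaCl2 is consistent with simple mass-action saturation of a binding site with Kd = 10 µM. The following is a minimal estimate assuming a single independent binding site, an illustration rather than the thesis' full thermodynamic cycle:

```python
# Fraction of molecules with Ca2+ bound at a single independent site,
# by simple mass action: f = [Ca2+] / ([Ca2+] + Kd).
Kd = 10e-6      # dissociation constant, M (from the abstract)
ca = 1e-3       # Ca2+ concentration, M (1 mM, as in the abstract)

bound_fraction = ca / (ca + Kd)
print(f"bound fraction at 1 mM Ca2+: {bound_fraction:.3f}")
```

At 100-fold excess over Kd the site is about 99 % occupied, which is why the native state dominates and HFfast is no longer detectable under these conditions.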
The availability of large data sets has allowed researchers to uncover complex properties in complex systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, power grids and ecosystems, can be represented as large complex networks. Dynamics on and of complex networks has attracted increasing interest among researchers. In this thesis, first, I introduced a simple but effective dynamical optimization coupling scheme which can realize complete synchronization in networks with undelayed and delayed couplings and enhance the synchronizability of small-world and scale-free networks. Second, I showed that the robustness of scale-free networks with community structure was enhanced by the existence of communities in the networks, and some of the response patterns were found to coincide with topological communities. My results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, as an important kind of node dynamics in complex networks, detailed human correspondence dynamics was studied using both data and a model. A new and general type of human correspondence pattern was found, and an interacting priority-queue model was introduced to explain it. The model can also embrace a range of realistic social interacting systems such as email and letter communication. My findings provide insight into various human activities at both the individual and the network level. Fourth, I presented clear new evidence that human comment behaviour in on-line social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction was introduced to explain it. These results are helpful for discovering regular patterns of human behaviour in on-line society and for understanding the evolution of public opinion in virtual as well as real society.
Finally, conclusions and an outlook on human dynamics and complex networks are given.
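The priority-queue mechanism behind such correspondence models can be sketched in a few lines. The version below is a hedged illustration of the classic single-queue variant with assumed parameters, deliberately simpler than the interacting-queues model of the thesis: tasks are mostly executed highest-priority-first, which already produces the heavy-tailed waiting times characteristic of human correspondence.

```python
import random

def priority_queue_waits(n_tasks=2, p=0.9, steps=20000, seed=42):
    """Single priority queue: with probability p execute the
    highest-priority task, otherwise a random one; each executed
    task is replaced by a fresh task with random priority."""
    rng = random.Random(seed)
    priority = [rng.random() for _ in range(n_tasks)]
    age = [0] * n_tasks              # steps each current task has waited
    waits = []
    for _ in range(steps):
        if rng.random() < p:
            k = max(range(n_tasks), key=priority.__getitem__)
        else:
            k = rng.randrange(n_tasks)
        waits.append(age[k] + 1)     # record waiting time of executed task
        priority[k] = rng.random()   # replace it with a new task
        age[k] = 0
        for i in range(n_tasks):
            if i != k:
                age[i] += 1
    return waits
```

Most tasks are served almost immediately, while low-priority tasks can be trapped for very long times; as the selection becomes more deterministic (p → 1), the waiting-time distribution develops the power-law tail associated with bursty human activity.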
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and has gained increasing interest in science and industry. Natural aqueous nanofluidic systems are very complex: liquid interfaces often predominate, or the fluid contains charged or differently shaped colloids. The effects promoted by these additives are far from completely understood, and interesting questions arise with regard to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires suitable experimental model nanochannels with well-defined characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can easily be adjusted via the electrolyte concentration in the film-forming solution. In this way, channel dimensions from 5 to 100 nm are possible, a high flexibility for an experimental system. TLFs have liquid interfaces of different charge and properties, and they offer the possibility of confining differently shaped ions and molecules to very small spaces, or of subjecting them to controlled forces. This makes foam films a unique “device” for obtaining information about fluidic systems of nanometre dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, to extract information about natural systems and to deepen the understanding of the physico-chemical conditions. The presented work showed that foam films can be used as experimental models to understand the behaviour of liquids in nanosized confinement.
In the first part of the thesis, we studied the thinning of thin liquid films stabilized with the non-ionic surfactant n-dodecyl-β-maltoside (β-C₁₂G₂), with primary interest in interfacial diffusion processes during thinning as a function of surfactant concentration. The surfactant concentration in the film-forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analysed by combining previously developed theoretical approaches. Qualitative information about the mobility of the surfactant molecules at the film surfaces was obtained. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and behaved as non-deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non-deformable disks. In the second part of the thesis, we designed a TLF nanofluidic system containing rod-like multivalent ions and compared this system to films containing monovalent ions. We presented first results which recognized, for the first time, the existence of an additional attractive force in foam films based on the electrostatic interaction between rod-like ions and oppositely charged surfaces. We may speculate that this is an ion-bridging component of the disjoining pressure. The results show that for films prepared in the presence of spermidine the transformation of the thicker common film (CF) to the thinnest Newton black film (NBF) is more probable than for films prepared with NaCl under similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of any of the ions at the fluid surfaces, and it does not lead to any changes in the equilibrium properties of the CF and NBF. Our hypothesis was supported using the trivalent ion Y3+, which does not show ion bridging.
The experimental results are compared to theoretical predictions, and quantitative agreement on the system's energy gain for the change from CF to NBF could be obtained. In the third part of the work, the behaviour of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity by an unexpectedly large amount, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces owing to the stabilizer molecules present in the bulk solution. Finally, the location of the particles with respect to their lateral and vertical arrangement in the film was studied with advanced reflectivity and scattering methods. Neutron reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the interface. For the first time, TLFs were studied using grazing-incidence small-angle X-ray scattering (GISAXS), a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on the lateral ordering of particles in the film.
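The Reynolds limit invoked for films with immobile surfaces can be made concrete. For drainage between two rigid disks, the thinning velocity is V = 2h³ΔP/(3ηR²); the parameters below are assumed order-of-magnitude values for an aqueous foam film, not measured values from the thesis:

```python
# Reynolds drainage velocity of a thin liquid film between two rigid,
# immobile (non-deformable) surfaces: V = 2 h^3 dP / (3 eta R^2).
# All parameters are assumed order-of-magnitude values.
h = 50e-9       # film thickness, m
dP = 50.0       # driving (capillary) pressure, Pa
eta = 1.0e-3    # viscosity of water, Pa*s
R = 50e-6       # film radius, m

V = 2 * h**3 * dP / (3 * eta * R**2)
print(f"drainage velocity V = {V*1e9:.1f} nm/s")
```

The strong h³ dependence explains why drainage slows down dramatically as the film thins, and why immobilizing the surfaces above the limiting surfactant concentration decelerates thinning toward this rigid-disk bound.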
Trying to do two things at once often decreases performance on one or both tasks compared to performing each task by itself. The present thesis deals with the question of why and in which cases these dual-task costs emerge, and moreover, whether there are cases in which people are able to process two cognitive tasks at the same time without costs. Four experiments examine the influence of stimulus-response (S-R) compatibility, S-R modality pairings, interindividual differences, and practice on the ability to process two tasks in parallel. Results show that parallel processing is possible. Nevertheless, dual-task costs emerge when the personal processing strategy is serial, when the two tasks have not been practiced together, when S-R compatibility of both tasks is low (e.g., when a left target has to be responded to with a right key press while, in the other task, an auditorily presented "A" has to be responded to by saying "B"), and when the modality pairings of both tasks are non-standard (i.e., visual-spatial stimuli are responded to vocally whereas auditory-verbal stimuli are responded to manually). Results are explained with respect to executive-based (S-R compatibility) and content-based crosstalk (S-R modality pairings) between tasks. Finally, an alternative information-processing account of the central stage of response selection (i.e., the translation of the stimulus into the response) is presented.
1. Problem statement and relevance of the topic The German higher education landscape has had to cope with numerous changes in recent years and continues to face major challenges, through which competition-like features are increasingly taking hold in this sector: • conversion to internationally comparable degree programmes • new rules for the allocation of study places • introduction of tuition fees in some federal states • performance indicators for the distribution of state budget funds • demographic change An educational institution has several stakeholder groups: the students, who demand educational services; the state, which pays for these services; the public, which is interested in basic research; and finally industry, which recruits graduates (cf. Berthold 2001, p. 431). Higher education institutions increasingly compete with one another for qualified (and, where applicable, fee-paying) students, for funding from the state or the private sector, and for renowned researchers. They must now adapt to the changed conditions in order to remain viable in national and international competition. In principle, they can take their lead from marketing instruments that have been applied successfully in the private sector. 2. Objectives and structure of the thesis Following an analysis of the framework conditions outlined above, the first part of this thesis shows which insights from marketing can be transferred to higher education institutions. Both strategic questions and the instruments of the marketing mix are presented. A subsequent empirical study identifies factors that have a positive effect on the state of development of marketing activities at higher education institutions. Using the Berlin/Brandenburg region as an example, six different types of higher education institution could be identified.
Depending on the characteristics of the respective institutions, these types exhibit different stages of development or a different understanding of higher education marketing. Accordingly, different marketing strategies appear advisable for each of them. The largest role in the differentiated status quo of higher education marketing at institutions in Berlin and Brandenburg is played by the strength of the external pressure an institution is under to secure its capacity utilisation and the necessary funding. Furthermore, the institutions' leaderships differ considerably in their commitment and their willingness to meet these challenges with marketing instruments. Despite the growing number of contributions on the need to introduce economic considerations into higher education management as well, there are many critics who prophesy the end of freedom of research and teaching if marketing thinking increasingly enters educational institutions. It is undisputed that management approaches from the private sector cannot simply be transferred to a higher education institution. The greater danger to the freedom and success of research and teaching, however, probably lies in ignoring these current trends (cf. Tutt 2006, p. 171)!
This thesis is concerned with the extinction of populations composed of different types of individuals, and with their behavior before extinction and in the case of a very late extinction. We approach this question first from a strictly probabilistic viewpoint, and second from the standpoint of risk analysis related to the extinction of a particular model of population dynamics. In this context we propose several statistical tools. The population size is modeled by a branching process, which is either a continuous-time multitype Bienaymé-Galton-Watson process (BGWc) or its continuous-state counterpart, the multitype Feller diffusion process. We are interested in different kinds of conditioning on non-extinction and in the associated equilibrium states. These ways of conditioning have been widely studied in the monotype case. However, the literature on multitype processes is much less extensive, and there is no systematic work establishing connections between the results for BGWc processes and those for Feller diffusion processes. In the first part of this thesis, we investigate the behavior of the population before its extinction by conditioning the associated branching process X_t on non-extinction (X_t≠0), or more generally on non-extinction in a near future 0≤θ<∞ (X_{t+θ}≠0), and by letting t tend to infinity. We prove the result, new in the multitype framework and for θ>0, that this limit exists and is non-degenerate. This reflects a stationary behavior of the population dynamics conditioned on non-extinction, and provides a generalization of the so-called Yaglom limit, corresponding to the case θ=0. In a second step we study the behavior of the population in the case of a very late extinction, obtained as the limit, as θ tends to infinity, of the process conditioned on X_{t+θ}≠0.
The resulting conditioned process is a known object in the monotype case (sometimes referred to as the Q-process), and has also been studied when X_t is a multitype Feller diffusion process. We investigate the not yet considered case where X_t is a multitype BGWc process and prove the existence of the associated Q-process. In addition, we examine its properties, including asymptotic ones, and propose several interpretations of the process. Finally, we are interested in interchanging the limits in t and θ, as well as in the not yet studied commutativity of these limits with respect to the high-density-type relationship between BGWc processes and Feller processes. We establish an original and exhaustive account of all possible interchanges of limits (long-time limit in t, increasing extinction delay θ, diffusion limit). The second part of this work is devoted to the risk analysis related both to the extinction of a population and to its very late extinction. We consider a branching population model (arising notably in the epidemiological context) for which a parameter related to the first moments of the offspring distribution is unknown. We build several estimators adapted to different stages of the population's evolution (growth phase, decay phase, and decay phase when extinction is expected very late), and moreover prove their asymptotic properties (consistency, normality). In particular, we build a least squares estimator adapted to the Q-process, allowing a prediction of the population's development in the case of a very late extinction. This would correspond to the best- or to the worst-case scenario, depending on whether the population is threatened or invasive. These tools enable us to study the extinction phase of the Bovine Spongiform Encephalopathy epidemic in Great Britain, for which we estimate the infection parameter corresponding to a possible source of horizontal infection persisting after the removal in 1988 of the major route of infection (meat and bone meal).
This allows us to predict the evolution of the spread of the disease, including the year of extinction, the number of future cases and the number of infected animals. In particular, we produce a very fine analysis of the evolution of the epidemic in the unlikely event of a very late extinction.
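The conditioning behind the Yaglom limit discussed above can be illustrated numerically. The sketch below simplifies the thesis's continuous-time multitype BGWc setting to a discrete-time, single-type Galton-Watson process with Poisson offspring; the offspring mean, horizon, and path count are arbitrary illustrative choices, not values from the thesis.

```python
import numpy as np

def yaglom_sample(mean_offspring=0.9, generations=25, n_paths=20000, seed=1):
    """Simulate a subcritical single-type Galton-Watson process with
    Poisson offspring and return the population sizes at the final
    generation among the paths that have not yet gone extinct.
    This is a simplified stand-in for the multitype BGWc processes
    studied in the thesis."""
    rng = np.random.default_rng(seed)
    pop = np.ones(n_paths, dtype=np.int64)
    for _ in range(generations):
        alive = pop > 0
        # total offspring of n parents ~ Poisson(n * mean_offspring)
        pop[alive] = rng.poisson(mean_offspring * pop[alive])
    return pop[pop > 0]

survivors = yaglom_sample()
# conditioned on survival, the sizes approximate a non-degenerate
# (Yaglom-type) distribution, even though the unconditioned process
# dies out with probability one
```

The empirical distribution of `survivors` approximates the quasi-stationary law that the Yaglom limit formalizes; the thesis extends this kind of statement to multitype processes and to conditioning on survival at a delayed horizon θ.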
Processing negative imperatives in Bulgarian : evidence from normal, aphasic and child language
(2010)
The incremental nature of sentence processing raises questions about the way the information of incoming functional elements is accessed and subsequently employed in building the syntactic structure which sustains interpretation processes. The present work approaches these questions by investigating the negative particle ne used for sentential negation in Bulgarian and its impact on the overt realisation and the interpretation of imperative inflexion, bound aspectual morphemes and clitic pronouns in child, adult and aphasic language. In contrast to other Slavic languages, Bulgarian negative imperatives (NI) are grammatical only with imperfective verbs. We argue that NI are instantiations of overt aspectual coercion induced by the presence of negation as a temporally sensitive sentential operator. The scope relation between imperative mood, negation, and aspect yields the configuration of the imperfective present, which in Bulgarian has to be overtly expressed and prompts the imperfective marking of the predicate. The regular and transparent application of the imperfectivising mechanism relates to the organisation of the TAM categories in Bulgarian, which not only promotes the representation of fine perspective shifts but also provides for their distinct morphological expression. Using an elicitation task with NI, we investigated the way 3- and 4-year-old children represent negation in deontic contexts, as reflected in their use of aspectually appropriate predicates. Our findings suggest that children are sensitive to the imperfectivity requirement in NI from early on. The imperfectivisation strategies reveal some differences from the target morphological realisation. The relatively low production of target imperfectivised prefixed verbs cannot be explained by morphological processing deficits, but rather indicates that up to the age of five children experience difficulties in applying a progressive viewpoint to accomplishments.
Two self-paced reading studies present evidence that neurologically unimpaired Bulgarian speakers profit from the syntactic and prosodic properties of negation during online sentence comprehension. The imperfectivity requirement negation imposes on the predicate speeds up lexical access to imperfective verbs. Similarly, clitic pronouns are more accessible after negation due to the phono-syntactic properties of clitic clusters. As the experimental stimuli do not provide external discourse referents, personal pronouns are parsed as object agreement markers. Without subsequent resolution, personal pronouns appear to be less resource demanding than reflexive clitics. This finding is indicative of the syntax-driven co-reference establishment processes triggered through the lexical specification of reflexive clitics. The results obtained from Bulgarian Broca's aphasics show that they exhibit processing patterns similar to those of the control group. Notwithstanding their slow processing speed, the agrammatic group showed no impairment of negation as reflected by their sensitivity to the aspectual requirements of NI, and to the prosodic constraints on clitic placement. The aphasics were able to parse the structural dependency between mood, negation and aspect as functional categories and to represent it morphologically. The prolonged reaction times (RT) elicited by prefixed verbs indicate increasing processing costs due to the semantic integration of prefixes as perfectivity markers into an overall imperfective construal. This inference is supported by the slower RT to reflexive clitics, which undergo a structurally triggered resolution. Evaluated against cross-linguistic findings, the obtained result strongly suggests that aphasic performance with pronouns depends on the interpretation efforts associated with co-reference establishment and varies due to availability of discourse referents. 
The investigation of normal and agrammatic processing of Bulgarian NI presents support for the hypothesis that the comprehension deficits in Broca's aphasia result from a slowed-down implementation of syntactic operations. The protracted structure building consumes processing resources and causes temporal mismatches with other processes sustaining sentence comprehension. The investigation of the way Bulgarian children and aphasic speakers process NI reveals that both groups are highly sensitive to the imperfective constraint on the aspectual construal imposed by the presence of negation. The imperfective interpretation requires access to morphologically complex verb forms which contain aspectual morphemes with conflicting semantic information – perfective prefixes and imperfective suffixes. Across modalities, both populations exhibit difficulties in processing prefixed imperfectivised verbs which as predicates of negative imperative sentences reflect the inner perspective the speaker and the addressee need to take towards a potentially bounded situation description.
Production of regular and non-regular verbs : evidence for a lexical entry complexity account
(2010)
The incredible productivity and creativity of language depend on two fundamental resources: a mental lexicon and a mental grammar. Rules of grammar enable us to produce and understand complex phrases we have not encountered before, and at the same time constrain the computation of complex expressions. The concepts of the mental lexicon and the mental grammar have been thoroughly tested by comparing the use of regular versus non-regular word forms. Regular verbs (e.g. walk-walked) are computed using a suffixation rule in a neural system for grammatical processing; non-regular verbs (run-ran) are retrieved from associative memory. The role of regularity has only been explored for the past tense, where regularity is overtly visible. To explore the representation and encoding of regularity as well as the inflectional processes involved in the production of regular and non-regular verbs, this dissertation investigated three groups of German verbs: regular, irregular and hybrid verbs. Hybrid verbs in German have a completely regular conjugation in the present tense and an irregular conjugation in the past tense. Articulation latencies were measured while participants named pictures of actions, producing the 3rd person singular of regular, hybrid, and irregular verbs in present and past tense. Studying the production of German verbs in past and present tense, this dissertation explored the complexity of lexical entries as a decisive factor in the production of verbs.
The immigration and integration of immigrants from the CIS states in Germany is a significant political and legal topic. So far it has received little scholarly attention in Germany and has been treated only in part. The present work therefore examines the following questions and aspects: an analysis of immigration from the CIS states; a description of the immigrant groups from the CIS states and their legal status; the waves of immigration, covering immigrants from the CIS states including (late) ethnic German resettlers and Jewish immigrants, Chechen asylum seekers, family members, students, qualified workers, etc.; an analysis of the integration programmes and integration measures for immigrants from the CIS states; a description of the opportunities for and obstacles to integration, using the example of immigrants from the CIS states, including the right to recognition of academic and professional qualifications, the right to work, and others; and an analysis and assessment of the Russian return and (re)integration programmes for immigrants from the CIS states living in Germany. A further distinctive feature of this publication is that the author bases her academic discussion of the legal position of CIS immigrants, of the successes of their integration, and also of the obstacles and problems of integration on a sociological survey of immigrants from the CIS states in the federal states of Brandenburg and Berlin. The empirical study covers the period from 1991 to 2009.
Twenty years have now passed since the peaceful protest movement led to the resignation of the old regime of the German Democratic Republic. The following year the two German states were reunified. Owing to the particular circumstances in Germany, the subsequent transformation process is unique among the former socialist states of Central and Eastern Europe. This thesis focuses on the transformation of the manufacturing sector in the federal states of Berlin and Brandenburg. With the reunification of the two German states, the situation for firms in the formerly socialist part changed dramatically. The effects are analysed using macro- and microdata. The objects of study are various economic indicators, such as the number of establishments and jobs, structures (by size and industry), turnover (domestic and foreign), and investment. Comparing Brandenburg and East Berlin with West Berlin makes it possible to assess how far the transformation process has progressed. In addition to data from the national accounts of the federal states, the data basis of this thesis consists of various establishment-level surveys of official statistics. The observation period covers the years 1991 to 2005. For the analysis of establishment and employment figures and their dynamics, a full census is even available for the years 1991 to 2000. A particular focus of this thesis is the role of exports in firm development. German economic policy supports firms in entering foreign markets, since exports are expected to stimulate growth.
For such support to have lasting positive effects, exports must, on the one hand, have a positive influence on the productivity growth of the firm concerned and, on the other hand, export behaviour must show a certain persistence. Both conditions are examined in detail in this thesis.
The aim of this thesis is to overcome a discrepancy between the theory of the phase, or of phase dynamics, and its application in time series analysis: while the theoretical phase is uniquely determined and invariant under coordinate transformations, i.e. with respect to the chosen observable, the standard methods for estimating the phase from given time series yield results that depend on the chosen observables and thus do not describe the underlying system in a unique and invariant way. To make this discrepancy explicit, the terminological distinction between phase and protophase is introduced: the term phase is used only for variables that correspond to the theoretical concept of the phase and therefore characterise the system in an invariant way, whereas the observable-dependent phase estimates obtained from time series are called protophases. The central subject of this thesis is the development of a deterministic transformation that leads from any protophase of a self-sustained oscillator to the uniquely determined phase. This then allows an invariant description of coupled oscillators and their interaction. The application and effect of the transformation are demonstrated both on numerical examples (in one example, the phase transformation is extended to the case of three coupled oscillators) and on multivariate measurements of the ECG, the pulse, and respiration, from which phase models of the cardiorespiratory interaction are reconstructed. Finally, the phase transformation for autonomous oscillators is extended to the case of a non-negligible amplitude dependence of the protophase, which makes it possible, for example, to determine numerically the isochrones of the chaotic Rössler system.
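The defining property of the protophase-to-phase transformation is that the resulting phase grows uniformly, so its values are uniformly distributed regardless of the observable. A minimal sketch of this idea uses the empirical probability-integral transform; the thesis's actual transformation is based on a smoother (e.g. Fourier-series) estimate of the protophase density, so the function name and all parameters here are illustrative assumptions.

```python
import numpy as np

def protophase_to_phase(theta):
    """Map an observable-dependent protophase (values in [0, 2*pi))
    to a phase whose values are uniformly distributed, via the
    empirical cumulative distribution of the protophase.  Minimal
    rank-based version of the density-based transformation."""
    theta = np.mod(theta, 2 * np.pi)
    order = np.argsort(theta)
    cdf = np.empty_like(theta)
    cdf[order] = (np.arange(len(theta)) + 0.5) / len(theta)
    return 2 * np.pi * cdf

# A protophase from a distorting observable: the true phase is uniform,
# but the observable produces a skewed (monotone) distortion of it.
rng = np.random.default_rng(0)
true_phase = rng.uniform(0, 2 * np.pi, 5000)
proto = np.mod(true_phase + 0.8 * np.sin(true_phase), 2 * np.pi)
phi = protophase_to_phase(proto)
counts, _ = np.histogram(phi, bins=10, range=(0, 2 * np.pi))
# after the transformation the phase distribution is uniform
```

Because the transform is monotone in the protophase, it removes the observable-dependent distortion without changing the ordering of events, which is exactly what an invariant phase description requires.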
This thesis investigates the possibilities and limits of circular dichroism measurements with synchrotron radiation. For this purpose, a measurement set-up for circular dichroism was used at two beamlines of the Berlin electron storage ring for synchrotron radiation that are suited to measurements in the ultraviolet range. Properties of the beamlines and of the set-up were compared with commercial circular dichroism spectrometers on several important points. The focus was on extending the accessible wavelength range below 180 nm in order to study the circular dichroism of proteins in this region. Here it is not only the light source but above all the absorption of light by water that limits the measurement range for biological samples in aqueous solution. Conditions were found under which the measurement range could be extended to about 160 nm, and in some cases down to 130 nm. To this end the path length had to be reduced considerably, and various sample cells were tested. The influence of the resulting stress-induced birefringence in the sample cells on the measured signal could be reduced substantially with an alternative set-up. Systematic errors in the measured signal and radiation damage, however, limit the reliability of the measured spectra. For protein films the absorption of water hardly restricts the measurement range, but in most cases clear differences were found between the spectra of protein films and the spectra of proteins in aqueous solution. As long as these differences cannot be minimised, protein films are not a practicable alternative to measurements in aqueous solution.
Nanofibrous mats are interesting scaffold materials for biomedical applications such as tissue engineering due to their interconnectivity and their size dimensions, which mimic the native cell environment. Electrospinning provides a simple route to such fiber meshes. This thesis addresses the structural and functional control of electrospun fiber mats. In the first section, it is shown that fiber meshes with a bimodal size distribution can be obtained by electrospinning in a single-step process. A standard single-syringe set-up was used to spin concentrated poly(ε-caprolactone) (PCL) and poly(lactic-co-glycolic acid) (PLGA) solutions in chloroform, and meshes with a bimodal fiber size distribution could be obtained directly by reducing the spinning rate at elevated humidity. Scanning electron microscopy (SEM) and mercury porosimetry of the meshes suggested a pore size distribution suitable for effective cell infiltration. The bimodal fiber meshes, together with unimodal fiber meshes, were evaluated for cellular infiltration. While the micrometer fibers in the mixed meshes generate an open pore structure, the submicrometer fibers support cell adhesion and facilitate cell bridging across the large pores. This was revealed by initial cell penetration studies showing superior ingrowth of epithelial cells into the bimodal meshes compared to a mesh composed of unimodal 1.5 μm fibers. The bimodal fiber meshes, together with electrospun nano- and microfiber meshes, were further used for the fabrication of inorganic/organic hybrids of PCL with calcium carbonate or calcium phosphate, two biorelevant minerals. Such composite structures are attractive for the potential improvement of properties such as stiffness or bioactivity. It was possible to encapsulate nano- and mixed-sized plasma-treated PCL meshes over areas > 1 mm² with calcium carbonate using three different mineralization methods, including the use of poly(acrylic acid).
The additive appeared to stabilize amorphous calcium carbonate, which effectively filled the space between the electrospun fibers and resulted in composite structures. Micro-, nano- and mixed-sized fiber meshes were successfully coated within hours by fiber-directed crystallization of calcium phosphate using a ten-fold concentrated simulated body fluid. It was shown that nanofibers accelerated the calcium phosphate crystallization compared to microfibers. In addition, crystallizations performed under static conditions led to hydroxyapatite formation, whereas under dynamic conditions brushite coexisted. In the second section, nanofiber functionalization strategies are investigated. First, a one-step process was introduced in which a peptide-polymer conjugate (PLLA-b-CGGRGDS) was co-spun with PLGA in such a way that the peptide is enriched at the surface. It was shown that by adding methanol to the chloroform/blend solution, a dramatic increase of the peptide concentration at the fiber surface could be achieved, as determined by X-ray photoelectron spectroscopy (XPS). Peptide accessibility was demonstrated via a contact angle comparison of pure PLGA and RGD-functionalized fiber meshes. In addition, the electrostatic attraction between an RGD-functionalized fiber and a silica bead at pH ~ 4 confirmed the accessibility of the peptide. The bioactivity of these RGD-functionalized fiber meshes was demonstrated using blends containing 18 wt% bioconjugate. These meshes promoted the adhesion of fibroblasts compared to pure PLGA meshes. In a second functionalization approach, a modular strategy was investigated: reactive fiber meshes were fabricated in a single step and then functionalized with bioactive molecules. While the electrospinning of the pure reactive polymer poly(pentafluorophenyl methacrylate) (PPFPMA) was feasible, the inherent brittleness of PPFPMA made it necessary to spin a PCL blend. Blends and pure PPFPMA showed two-step functionalization kinetics.
An initial fast reaction of the pentafluorophenyl esters with aminoethanol as a model substance was followed by a slow conversion upon further hydrophilization. This was analysed by UV/Vis spectroscopy of the pentafluorophenol released upon nucleophilic substitution with the amines. The conversion was confirmed by the increased hydrophilicity of the resulting meshes. The PCL/PPFPMA fiber meshes were then functionalized with more complex molecules such as saccharides. Amino-functionalized D-mannose or D-galactose was reacted with the active pentafluorophenyl esters, as monitored by UV/Vis spectroscopy and XPS. The functionality was shown to be bioactive in macrophage cell culture: the meshes functionalized with D-mannose specifically stimulated the cytokine production of macrophages when lipopolysaccharides were added, in contrast to D-galactose- or aminoethanol-functionalized and unfunctionalized PCL/PPFPMA fiber mats.
The Ca2+/calmodulin-activated serine/threonine phosphatase calcineurin is a key molecule of the T-cell receptor-dependent signalling network. Calcineurin activates the transcription factors of the NFATc family by dephosphorylation and thereby regulates the expression of important cytokines and surface proteins. The activity of calcineurin is modulated by numerous endogenous proteins and is the target of the immunosuppressive substances cyclosporin A and FK506. In this work, the alternative low-molecular-weight calcineurin-NFATc inhibitor NCI3 was characterised with respect to its effects on T-cell receptor-dependent signalling pathways. The results show that the pyrazolopyrimidine derivative NCI3 is non-toxic and cell-membrane-permeable. In T-cell receptor-stimulated primary human TH cells, NCI3 suppresses proliferation and IL-2 production (IC50 value ~4 µM), since the dephosphorylation of NFATc and its subsequent nuclear translocation are inhibited. NCI3 inhibits calcineurin-dependent NFAT- and NF-κB-controlled, but not AP-1-controlled, reporter gene expression at micromolar concentrations (IC50 values of 2 and 7 µM, respectively). In contrast to cyclosporin A, NCI3 does not interfere with the phosphatase activity of calcineurin but with the calcineurin-NFATc binding. An important endogenous modulator protein of calcineurin activity is RCAN1, which presumably regulates the calcineurin-NFATc signalling pathway via a negative feedback mechanism. Here it was shown that RCAN1 is expressed in human TH cells. The splice variant RCAN1-1 is basally expressed in resting T cells, and its expression is not altered by T-cell receptor stimulation. RCAN1-4, by contrast, is barely detectable in resting cells and is induced in a stimulation-dependent manner. Using calcineurin-NFATc-specific inhibitors such as NCI3, it was shown that RCAN1-4 induction is limited by this signalling pathway.
The data and insights gained in this work contribute to a deeper understanding of the function and regulation of calcineurin in T cells.
Fire-prone Mediterranean-type vegetation systems like those in the Mediterranean Basin and South-Western Australia are global hot spots for plant species diversity. To ensure that management programs act to maintain these highly diverse plant communities, it is necessary to gain a profound understanding of the crucial mechanisms of coexistence. Several mechanisms are discussed in the current literature. The objective of my thesis is to systematically explore, by modelling, the importance of potential mechanisms for maintaining multi-species, fire-prone vegetation. The model I developed is spatially explicit, stochastic, rule- and individual-based. It is parameterised with population dynamics data collected over 18 years in the Mediterranean-type shrublands of Eneabba, Western Australia. For the 156 woody species of the area, seven plant traits were identified as relevant for this study: regeneration mode, annual maximum seed production, seed size, maximum crown diameter, drought tolerance, dispersal mode and seed bank type. Trait sets are used for the definition of plant functional types (PFTs). The PFT dynamics are simulated annually by iterating life history processes. In the first part of my thesis I investigate the importance of trade-offs for the maintenance of high diversity in multi-species systems with 288 virtual PFTs. Simulation results show that the trade-off concept can be helpful in identifying non-viable combinations of plant traits. However, the Shannon diversity index of modelled communities can be high despite the presence of 'supertypes'. I conclude that trade-offs between two traits are less important for explaining multi-species coexistence and high diversity than more conceptual models predict. Several studies show that seed immigration from the regional seed pool is essential for maintaining local species diversity. However, systematic studies of the seed rain composition reaching multi-species communities are missing.
The results of the simulation experiments, as presented in part two of this thesis, show clearly that, without seed immigration, the local species community found in Eneabba drifts towards a state with few coexisting PFTs. With increasing immigration rates, the number of simulated coexisting PFTs and the Shannon diversity quickly approach values observed in the field. Including the regional seed input in the model thus makes it possible to explain more aggregated measures of local plant community structure such as species richness and diversity. Hence, the seed rain composition should be included in future studies. In the third part of my thesis I test the sensitivity of Eneabba PFTs to four different climate change scenarios, considering their impact on both local and regional processes. The results show that climate change clearly has the potential to alter the number of dispersed seeds for most of the Eneabba PFTs, and therefore the source of the 'immigrants' at the community level. A classification tree analysis shows that, in general, the response to climate change was PFT-specific. In the Eneabba sand plains, the sensitivity of a PFT to climate change depends on its specific trait combination and on the scenario of environmental change, i.e. the projected amount of rainfall and the fire frequency. This result emphasizes that PFT-specific responses and the regional process of seed immigration should not be ignored in studies dealing with the impact of climate change on future species distribution. The results of the three chapters are finally analysed in a general discussion. The model is discussed, and improvements and suggestions are made for future research. My work leads to the following conclusions: i) It is necessary to support modelling with empirical work to explain coexistence in species-rich plant communities. ii) The chosen modelling approach allows the complexity of coexistence to be considered and improves the understanding of coexistence mechanisms.
iii) Assumptions grounded in field research concerning environmental conditions and plant life histories can put the importance of more hypothetical coexistence theories in species-rich systems into perspective. In consequence, trade-offs can play a smaller role than conceptual models predict. iv) Seed immigration is a key process for local coexistence. Its alteration by climate change should be considered in prognoses of coexistence. Field studies should be carried out to obtain data on seed rain composition.
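The drift toward few coexisting types in a closed community, contrasted with maintenance of diversity under immigration, can be illustrated with a deliberately simplified neutral lottery model. This is not the thesis model; all parameters and the mechanism are invented for illustration:

```python
import random

def simulate(n_types=20, community_size=200, years=500, immigration=0.0, seed=1):
    """Neutral lottery model: each year every site is refilled either by an
    immigrant of a random regional type (with probability `immigration`)
    or by copying a randomly chosen local resident.
    Returns the number of types surviving at the end."""
    rng = random.Random(seed)
    community = [rng.randrange(n_types) for _ in range(community_size)]
    for _ in range(years):
        community = [
            rng.randrange(n_types) if rng.random() < immigration
            else rng.choice(community)
            for _ in range(community_size)
        ]
    return len(set(community))

closed = simulate(immigration=0.0)
open_ = simulate(immigration=0.05)
print(closed, open_)  # drift erodes richness in the closed community
```

Even this toy model reproduces the qualitative result: without immigration, stochastic drift alone collapses local richness, while a modest immigration rate maintains it.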
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy requirements analogous to the original ones, now on a correspondingly higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another at the conical exit to infinity. However, concerning the conical exit to infinity, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.
Xanthine dehydrogenase from Rhodobacter capsulatus is a cytoplasmic enzyme that forms an (αβ)₂ heterotetramer of 275 kDa. Its three cofactors (Moco, 2[2Fe2S], FAD) are bound on two different polypeptide chains: the two spectroscopically distinguishable iron-sulfur centres and the FAD are bound in the XdhA subunit, and the Moco in the XdhB subunit. The first part of this work investigated why R. capsulatus XDH forms a dimer and whether intramolecular electron transfer exists. For this purpose, a chimeric XDH variant [(α)₂(β₁wt/β₂E730A)] carrying one active and one inactive XdhB subunit was generated. Reduction spectra and the kinetic parameters determined for the substrates xanthine and NAD+ showed that the chimeric XDH variant was catalytically half as active as wild-type XDH purified in the same way. This demonstrates that the remaining active subunit of the chimera binds and hydroxylates substrate autonomously and independently, and that no intramolecular electron transfer takes place between the two XdhB subunits. A further aim was the functional characterisation of Mus musculus AOX1 and human AOX1 with respect to their substrate specificities and biophysical properties, as well as the characterisation of the conserved amino acids in the active site of mAOX1. Since no heterologous expression system for an active and stable recombinant AO protein had existed so far, an E. coli expression system with simultaneous expression of the corresponding Moco sulfurase was established for mAOX1 and hAOX1 in this work. With this coexpression, the activity of recombinant mAOX1 was increased by 50%, even though the proportion of sulfurated Moco was only 20%. 
To characterise the conserved active-site amino acids with respect to their function in substrate binding, the following variants were generated: V806E, M884R, V806E/M884R and E1265Q. Kinetic substrate studies showed that the two amino acids Val806 and Met884 are essential for the recognition and stabilisation of aldehydes and N-heterocycles. Exchanging them for glutamate and arginine, respectively (as in R. capsulatus XDH), however, yielded no conversion of xanthine or hypoxanthine. For Glu1265, its role as the amino acid initiating catalysis was likewise confirmed.
The European Monetary Union (EMU) today comprises 16 states with a total of 321 million inhabitants; with a gross domestic product of 22.9 trillion euros it is one of the largest economic areas in the world. In the coming years the EMU will continue to grow through the accession of the new EU countries that joined in 2004 and 2007. Since accession depends on fulfilment of the Maastricht criteria, enlargement will, in contrast to the fifth EU enlargement round, take place not as a bloc but sequentially. After the accessions of Slovenia on 1 January 2007 and Slovakia on 1 January 2009, the accession of a first large country is imminent in the coming years. The effects of such an accession have therefore attracted broad interest in the economic literature for some time. The research aim of this dissertation is to map the theoretical transmission mechanisms of an accession of the new member states to the European Monetary Union. To this end, possible consequences for stabilisation policy as well as the effects of accession on the geographic economic structure and the growth of these countries are derived in theoretical models. The direct effects of accession are also quantified in an applied theoretical model. Overall, accession is analysed from three perspectives: first, the consequences of monetary union for the stabilisation policy of the new member states are examined within a New Keynesian model; second, the gains associated with lower transaction costs are quantified in an applied general equilibrium model; third, the growth effects of financial market integration are investigated in a dynamic general equilibrium model. 
Since the three aspects of macroeconomic stability, transaction cost reduction and the dynamic effects of financial market integration arise largely independently of one another, using different models entails little cost. In its overall assessment of EMU accession by the new EU countries, this work reaches a different conclusion than previous studies. The stabilisation-policy consequences identified in part one are either neutral or imply greater stability upon joining the monetary union. The static and dynamic gains of accession identified in parts two and three are, moreover, substantial, so that rapid accession to the monetary union is advantageous for the new EU member states. Taking into account the goals of the European Economic and Monetary Union (EMU), however, two conditions must be met. On the one hand, sufficiently developed financial markets are necessary to achieve convergence between the new and old EU member states. On the other hand, the currency area as a whole will benefit from stronger financial market integration and lower transaction costs, but will become more unstable through the transmission of shocks from the new member states. Accession of the new member states to the EMU can therefore be negative for the area as a whole. These costs are justified only if the faster development of the new member states yields greater stability of the currency area. The New Keynesian growth model suggests that such a development could occur.
The North Island brown kiwi (Apteryx mantelli) occurs in the wild only on the North Island of New Zealand. Given the threatened status of the wild population, a self-sustaining zoo population is important. Knowledge of the birds' behaviour helps in understanding their requirements and can also indicate whether an animal's well-being is ensured. The study of incubation activity was intended to provide an overview of the general course of incubation and to establish activity patterns for the Berlin male, in order to assess, and possibly positively influence, the course of future breeding attempts. This was complemented by a study of the daily activity of a female and by behavioural observations. These served as an inventory of the behaviours shown and, together with the activity data, were to form the basis for assessing whether the kiwis' requirements are met at Zoo Berlin and for deriving suggestions to improve husbandry. The incubation activity of the male was documented in detail over three breeding seasons and showed that variability can be so large, not only within the species but within a single individual under similar conditions, that it is unsuitable for predicting breeding success. The activity of the female showed no anomalies suggesting a general disturbance of the birds or accounting for an impairment of incubation. As far as can be concluded from field observations, the kiwis in the zoo appear to show largely natural behaviour, and the husbandry conditions appear to meet the birds' requirements. Strategies to improve conditions for incubation, and thus for breeding success, could be developed only to a limited extent, since the male's incubation activity proved unexpectedly variable from year to year. 
For a further understanding of incubation behaviour and a possible improvement of conditions, a study of the influence of various environmental factors on the male's incubation activity would be desirable.
On 22 October 1565, Duke Julius of Braunschweig-Wolfenbüttel commissioned his preacher Martin Chemnitz to track down the literary oeuvre of Magister Cyriacus Spangenberg on the book market, have it sumptuously bound and add it to the ducal library. By that time the Mansfeld general dean Spangenberg had already written 64 works comprising a good 6,000 pages, and his clerical colleagues in the Saxon county had published 64 books of their own. By the time Spangenberg left Mansfeld in 1574, the number of clerical publications of Mansfeld provenance had doubled. Although widely received during its authors' lifetimes, the publishing output of the clerical "printing metropolis" of Mansfeld has received little attention in history and church history. This dissertation aims to close that research gap. The Mansfeld preachers produced doctrinal sermons, festival sermons, consolatory sermons, catechisms, theological disputations, historical treatises and religious plays in great numbers and, skilfully exploiting the mechanisms of the early modern book market, published them throughout the Empire. Their publications were addressed to theologians, "children of the world" and the "simple". They generated connections to the churches and potentates of northern and southern Germany, France and the Netherlands, and led to controversies with the great educational centres of central Germany, Wittenberg, Leipzig and Jena, and their territorial rulers. The question of what motivated the Mansfeld preachers' engagement on the book market stands at the centre of the study, which, in a synoptic procedure, identifies the desire to participate in shaping ecclesiastical, governmental, social and communicative structures as the writing theologians' central motive, but also demonstrates the authors' intention to establish, through the medium of the book, the county of Mansfeld's standing as a Lutheran centre of learning in Europe.
This dissertation investigates the development of the prosodic structure of simplex words and compounds in German. It evaluates longitudinal production data from four monolingual children aged 12 to 26 months. Four developmental stages are assumed, within which, however, no uniform outputs are produced. The asymmetries between different words are systematically traced back to the structure of the target word. An optimality-theoretic analysis shows that the developmental stages result from the reranking of constraints, and that the same ranking predicts the variation between word types at a given developmental stage.
In 1915, Alfred Wegener published his hypothesis of continental drift, which revolutionised geology. Since then, many scientists have studied the evolution of continents and especially the geologic structure of orogens, the most visible consequence of tectonic processes. Although the morphology and landscape evolution of mountain belts can be observed through surface processes, the driving forces and dynamics at lithosphere scale are less well understood, despite the fact that rocks from deeper levels of orogenic belts are in places exposed at the surface. In this thesis, such formerly deeply buried (ultra-)high-pressure rocks, in particular eclogite facies series, have been studied in order to reveal details of their formation and exhumation conditions and rates, and thus to provide insights into the geodynamics of the most spectacular orogenic belt in the world: the Himalaya. The specific area investigated was the Kaghan Valley in Pakistan (NW Himalaya). Following closure of the Tethyan Ocean by ca. 55-50 Ma, the northward subduction of the leading edge of India beneath the Eurasian Plate and the subsequent collision initiated a long-lived process of intracrustal thrusting that continues today. The continental crust of India – granitic basement, Paleozoic and Mesozoic cover series and Permo-Triassic dykes, sills and lavas – has been buried partly to mantle depths. Today, these rocks crop out as eclogites, amphibolites and gneisses within the Higher Himalayan Crystalline, between low-grade metamorphosed rocks (600-640°C/ca. 5 kbar) of the Lesser Himalaya and Tethyan sediments. Besides tectonically driven exhumation mechanisms, the channel flow model, which describes denudation-focused ductile extrusion of low-viscosity material developed in the middle to lower crust beneath the Tibetan Plateau, has been postulated. 
To gain insight into the lithospheric and crustal processes that initiated and drove the exhumation of these (ultra-)high-pressure rocks, mineralogical, petrological and isotope-geochemical investigations were performed. They provide insights into 1) the depths and temperatures to which these rocks were buried, 2) the pressures and temperatures the rocks experienced during their exhumation, 3) the timing of these processes, and 4) the velocity with which these rocks were brought back to the surface. In detail, microscopic studies, the identification of key minerals, microprobe analyses, standard geothermobarometry and modelling using an effective bulk rock composition have shown that published exhumation paths are incomplete. In particular, the eclogites of the northern Kaghan Valley were buried to depths of 140-100 km (36-30 kbar) at 790-640°C. Cooling during decompression (exhumation) towards 40-35 km (17-10 kbar) and 630-580°C was then followed by a phase of reheating to about 720-650°C at roughly the same depth before final exhumation took place. In the southernmost part of the study area, amphibolite facies assemblages with formation conditions similar to the deduced reheating phase indicate a juxtaposition of both areas after the eclogite facies stage and thus a stacking of Indian Plate units. Radiometric dating of zircon, titanite and rutile by U-Pb, and of amphibole and micas by Ar-Ar, reveals peak pressure conditions at 47-48 Ma. With a maximum exhumation rate of 14 cm/a, these rocks reached the crust-mantle boundary at 40-35 km within 1 Ma. Subsequent exhumation (46-41 Ma, 40-35 km) decelerated to ca. 1 mm/a at the base of the continental crust but rose again to about 2 mm/a in the period 41-31 Ma, equivalent to 35-20 km depth. 
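The exhumation rates quoted above follow from simple depth-over-time arithmetic; a sketch using the depths and durations given in the text:

```python
def exhumation_rate_cm_per_a(depth_from_km, depth_to_km, duration_ma):
    """Average exhumation rate in cm/a from a depth change over a duration."""
    km_per_ma = (depth_from_km - depth_to_km) / duration_ma
    return km_per_ma * 1e5 / 1e6  # 1 km/Ma = 0.1 cm/a

# First stage: from ~140 km to the crust-mantle boundary (~40 km) within ~1 Ma
print(exhumation_rate_cm_per_a(140, 40, 1))   # 10.0 cm/a (maximum estimate: 14 cm/a)
# Later stage: 35 km to 20 km between 41 and 31 Ma
print(exhumation_rate_cm_per_a(35, 20, 10))   # 0.15 cm/a = 1.5 mm/a
```

These are average rates between two depth-time points; the text's ca. 2 mm/a for the later stage reflects the uncertainty ranges of the depth estimates.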
Apatite fission track (AFT) and (U-Th)/He ages from eclogites, amphibolites, micaschists and gneisses yielded moderate Oligocene to Miocene cooling rates of about 10°C/Ma in the high-altitude northern parts of the Kaghan Valley using the mineral-pair method. AFT ages range from 24.5±3.8 to 15.6±2.1 Ma, whereas apatite (U-Th)/He analyses yielded ages between 21.0±0.6 and 5.3±0.2 Ma. The southernmost part of the valley is dominated by younger, late Miocene to Pliocene apatite fission track ages of 7.6±2.1 and 4.0±0.5 Ma, which support earlier tectonic and petrological findings of a juxtaposition and stacking of Indian Plate units. As this nappe is tectonically lowermost, a later, distinct phase of exhumation and uplift driven by thrusting along the Main Boundary Thrust is inferred. A multi-stage exhumation path is evident from petrological, isotope-geochemical and low-temperature thermochronological investigations. Buoyancy-driven exhumation caused an initial rapid exhumation, as fast as recent normal plate movements (ca. 10 cm/a). As the exhuming units reached the crust-mantle boundary, the process slowed down due to changes in buoyancy. Most likely, this exhumation pause initiated the reheating event that is petrologically evident (e.g. glaucophane rimmed by hornblende, ilmenite overgrowths on rutile). Late-stage processes involved widespread thrusting and folding accompanied by regional greenschist facies metamorphism, whereby contemporaneous thrusting on the Batal Thrust (seen by some authors as equivalent to the MCT) and back-sliding of the Kohistan Arc along the inversely reactivated Main Mantle Thrust caused the final exposure of these rocks. Similar circumstances have been observed at Tso Morari, Ladakh, India, 200 km further east, where comparable rock assemblages occur. In conclusion, as exhumation was essentially complete well before the initiation of the monsoonal system, climate-dependent effects (erosion) appear negligible in comparison to far-field tectonic effects.
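The mineral-pair method derives a mean cooling rate from the closure-temperature difference between two thermochronometers divided by their age difference. A sketch; the closure temperatures below (~110°C for apatite fission track, ~70°C for apatite (U-Th)/He) are typical literature values, not taken from the thesis:

```python
def cooling_rate(tc_high_c, tc_low_c, age_high_ma, age_low_ma):
    """Mean cooling rate (°C/Ma) between two thermochronometers:
    closure-temperature difference divided by age difference."""
    return (tc_high_c - tc_low_c) / (age_high_ma - age_low_ma)

# Hypothetical pair from the quoted age ranges: AFT age 24.5 Ma (Tc ~110°C)
# and apatite (U-Th)/He age 21.0 Ma (Tc ~70°C)
print(round(cooling_rate(110, 70, 24.5, 21.0), 1))  # ≈ 11.4 °C/Ma
```

This is consistent in magnitude with the ca. 10°C/Ma quoted for the northern Kaghan Valley.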
Lake ecosystems across the globe have responded to climate warming of recent decades. However, correctly attributing observed changes to altered climatic conditions is complicated by multiple anthropogenic influences on lakes. This thesis contributes to a better understanding of climate impacts on freshwater phytoplankton, which forms the basis of the food chain and decisively influences water quality. The analyses were, for the most part, based on a long-term data set of physical, chemical and biological variables of a shallow, polymictic lake in north-eastern Germany (Müggelsee), which was subject to a simultaneous change in climate and trophic state during the past three decades. Data analysis included constructing a dynamic simulation model, implementing a genetic algorithm to parameterize models, and applying statistical techniques of classification tree and time-series analysis. Model results indicated that climatic factors and trophic state interactively determine the timing of the phytoplankton spring bloom (phenology) in shallow lakes. Under equally mild spring conditions, the phytoplankton spring bloom collapsed earlier under high than under low nutrient availability, due to a switch from a bottom-up driven to a top-down driven collapse. A novel approach to model phenology proved useful to assess the timings of population peaks in an artificially forced zooplankton-phytoplankton system. Mimicking climate warming by lengthening the growing period advanced algal blooms and consequently also peaks in zooplankton abundance. Investigating the reasons for the contrasting development of cyanobacteria during two recent summer heat wave events revealed that anomalously hot weather did not always, as often hypothesized, promote cyanobacteria in the nutrient-rich lake studied. The seasonal timing and duration of heat waves determined whether critical thresholds of thermal stratification, decisive for cyanobacterial bloom formation, were crossed. 
In addition, the temporal patterns of heat wave events influenced the summer abundance of some zooplankton species, which as predators may serve as a buffer by suppressing phytoplankton bloom formation. This thesis adds to the growing body of evidence that lake ecosystems have strongly responded to climatic changes of recent decades. It reaches beyond many previous studies of climate impacts on lakes by focusing on underlying mechanisms and explicitly considering multiple environmental changes. Key findings show that climate impacts are more severe in nutrient-rich than in nutrient-poor lakes. Hence, to develop lake management plans for the future, limnologists need to seek a comprehensive, mechanistic understanding of overlapping effects of the multi-faceted human footprint on aquatic ecosystems.
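The genetic algorithm used above to parameterize the simulation model is not specified in detail here; a generic minimal GA for fitting model parameters to observations might look like the following sketch. The toy logistic model, all settings and parameter values are illustrative, not the thesis implementation:

```python
import random

def fit_ga(model, observed, bounds, pop_size=40, generations=60, seed=0):
    """Minimal genetic algorithm: evolve parameter vectors within `bounds`
    to minimise the sum of squared errors between model output and data."""
    rng = random.Random(seed)
    dim = len(bounds)

    def error(params):
        return sum((m - o) ** 2 for m, o in zip(model(params), observed))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            i = rng.randrange(dim)                       # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = survivors + children
    return min(pop, key=error)

# Toy example: recover growth rate r and capacity K of a logistic growth curve
def logistic(params, steps=10):
    r, k = params
    x, out = 1.0, []
    for _ in range(steps):
        x += r * x * (1 - x / k)
        out.append(x)
    return out

observed = logistic((0.5, 50.0))
best = fit_ga(logistic, observed, bounds=[(0.01, 2.0), (10.0, 100.0)])
print([round(v, 2) for v in best])
```

Elitist truncation selection keeps the current best parameter set in every generation, so the fit error is monotonically non-increasing over generations.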
The homotrimeric tailspike adhesin of bacteriophage P22 is an established model system whose folding, assembly and stability have been comprehensively characterised in vivo and in vitro. The protein's central structural motif is a parallel beta-helix of 13 coils, flanked by an N-terminal capsid-binding domain and a C-terminal trimerisation domain. Each coil comprises three short beta-strands connected by turns and loops of varying length. Owing to this structurally repetitive, coiled architecture, beta-strands of neighbouring coils form elongated beta-sheets. The lumen of the beta-helix contains mostly hydrophobic side chains, stacked linearly and very regularly along the long axis. A highly repetitive structure, extended beta-sheets and the regular arrangement of similar or identical side chains along the beta-sheet axis are also typical hallmarks of amyloid fibrils, which form in protein-folding diseases such as Alzheimer's disease, Creutzfeldt-Jakob disease, Huntington's disease and type II diabetes. The high stability of the tailspike protein, and likewise that of amyloid fibrils, is thought to result from side-chain stacking, an ordered network of hydrogen bonds and the rigid oligomeric assembly. To investigate the influence of side-chain stacking on the stability, folding and structure of the P22 tailspike protein, seven valines in a side-chain stack buried in the lumen of the beta-helix were substituted by the smaller and less hydrophobic alanine and by the bulkier leucine. The influence of the mutations was analysed in two tailspike variants: the trimeric, N-terminally truncated TSPdeltaN construct and the monomeric, isolated beta-helix domain. In general, the experiments made clear that mutations to alanine cause stronger effects than mutations to leucine. 
The dense, hydrophobic packing in the core of the beta-helix thus forms the basis for the protein's stability and folding. High-resolution crystal structures of two alanine and two leucine mutants showed that the parallel beta-helix structural motif is highly malleable: mutation-induced changes in side-chain volume are compensated by small, local shifts of main and side chains, so that potential cavities are filled and steric strain is relieved. Many mutants showed a temperature-sensitive folding (tsf) phenotype in vivo and in vitro, i.e. at elevated temperature the yields of the N-terminally truncated trimer were clearly reduced compared with the wild type. Further experiments showed that the tsf phenotype arose either from effects on different stages of the maturation process or from a reduction of the kinetic stability of the native trimer. Studies of the full-length and the N-terminally truncated wild-type protein showed that the unfolding reaction of the tailspike trimer is complex. Although the kinetics follow apparent two-state behaviour, the unfolding limbs of the chevron plot show denaturant dependences of the rate constants that are not linear but curved in opposite directions. This behaviour could be caused by a high-energy unfolding intermediate, a broad transition-state region or parallel unfolding pathways. Using the monomeric, isolated beta-helix domain, in which the N-terminal capsid-binding domain and the C-terminal trimerisation domain are deleted and which acts as an independent folding unit, it was shown that in urea-induced equilibrium all mutants follow two-state behaviour, analogous to the wild-type protein and with comparable cooperativities. 
The conformational stabilities of alanine and leucine mutants located centrally in the beta-helix are strongly reduced, whereas mutations in outer regions of the domain have no influence on the stability of the beta-helix. When the incubation times of the equilibrium experiments were extended, slow formation of aggregates was detected in the transition region of the destabilised mutants. The findings of this work suggest that the isolated beta-helix closely resembles a thermolabile monomeric folding intermediate that is decisive for the maturation of the tailspike protein. In this intermediate, a central core comprising coils 4 to 7 and the "dorsal fin" determines stability. This core could serve as a folding nucleus onto which further helix coils dock sequentially and compact in the course of "monomer maturation".
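The urea-induced two-state equilibrium analysis mentioned above is conventionally evaluated with the linear extrapolation model, in which the unfolding free energy decreases linearly with denaturant concentration. A sketch; the ΔG and m values are invented for illustration, not measured values from the thesis:

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol·K)

def fraction_unfolded(urea_m, dg_h2o_kj, m_kj, temp_k=298.15):
    """Two-state linear extrapolation model: ΔG(urea) = ΔG(H2O) - m·[urea];
    the fraction unfolded follows from K = exp(-ΔG/RT)."""
    dg = dg_h2o_kj - m_kj * urea_m
    k_eq = math.exp(-dg / (R * temp_k))
    return k_eq / (1.0 + k_eq)

# Hypothetical stability: ΔG(H2O) = 25 kJ/mol, m = 5 kJ/(mol·M).
# The transition midpoint then lies at [urea] = ΔG/m = 5 M, where f = 0.5.
for c in (3.0, 5.0, 7.0):
    print(c, round(fraction_unfolded(c, 25.0, 5.0), 3))
```

The m-value sets the steepness of the transition and is the measure of cooperativity compared between mutants and wild type in such analyses.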
Whether fiscal transfers have positive or negative implications depends on the incentives that transfer systems create for both central and local governments. The complexity and ambiguity of the relationship between fiscal transfers and local government tax revenues is one of the main reasons why research projects, even within the same country, reach different results. This investigation seriously questions the often-claimed substitution effect based solely on analyses of aggregated data and, in the qualitative part of the research (using survey techniques), ultimately rejects a substitution effect in the majority of the assessed municipalities. While most theories model governments as tax maximizers (Leviathan) or as prone to fiscal laziness, this investigation shows that mayors react to a whole set of incentives. Most mayors react rationally and rather pragmatically to the incentives and constraints established by the particular context of a municipality, by the central government and by their own personality, identity and interests. While the yield of the property tax in Peru is low, there are no signs that increases in transfers have had, on average, a negative impact on revenue generation. On an individual basis there are mayors who are revenue maximizers, others who substitute revenues and others who show apathy; many engage with the property tax. While rural and small municipalities have limited potential, property taxes are the main revenue source for Peruvian urban municipalities, rising on average by 10% during the last five years. The property tax in Peru accounts for less than 0.2% of GDP, which, compared to the Latin American average, is extremely low. In 2002, the property tax collected about 10% of the overall budget of local governments nationwide; in 2006, the share was closer to 6% due to windfall transfers. 
The property tax can enhance accountability at the local level and has important impacts on urban spatial development. It is also important because most charges and transfers are earmarked, so that property tax yields can cover discretionary spending. The intergovernmental fiscal transfers can be described as a patchwork of past political liabilities rather than as serving coherent compensation or service-improvement functions. The fiscal base of local governments in Peru remains small, and the incentive structure for enhancing property tax revenues is far from optimal. The central government and sector institutions, which in the Peruvian institutional design of the property tax are responsible for the enabling environment, could reinforce local tax efforts. In the past the central government constantly changed the rules of the game, leaving municipalities with reduced predictability of policy choices. There are no relevant signs that a stronger property tax would be captured by Peruvian interest groups. Since the central government is responsible for tax regulation and partly for valuation, there has been little debate about financial issues on the local political agenda, and most council members are therefore not familiar with tax issues. If the central government did not set the tax rate and valuation, there would probably be a more vigorous public debate and an electorate better informed about local politics. Elected mayors (as political and administrative leaders) are not counterbalanced and held in check by an active council or by vigorous local political parties. Local politics are concentrated on the mayor; electoral rules, the institutional design and the political culture are all unhelpful in increasing the degree of influence that citizens and associations have on collective decision-making at the local level. 
The many alternations between democracy and autocracy have not been helpful in building strong institutions at the local level. Property tax revenues react slowly, and the institutional context matters because an effective tax system, as a public good, can only be created if actors have long time horizons. The property tax has substantial revenue potential; however, since municipalities are going through a transfer bonanza, it is especially difficult to make the case for increasing their own revenue base. Local governments should be the proponents of property tax reform, but in Peru they have little policy clout because the municipal associations are dispersed and little relevant information exists on important local policy issues.
The Tibetan Plateau is the largest elevated landmass in the world and profoundly influences atmospheric circulation patterns such as the Asian monsoon system. This area has therefore increasingly become the focus of palaeoenvironmental studies. This thesis evaluates the applicability of organic biomarkers for palaeolimnological purposes on the Tibetan Plateau, focusing on biomarkers derived from aquatic macrophytes. Submerged aquatic macrophytes must be considered a significant influence on sediment organic matter owing to their high abundance in many Tibetan lakes. Because of their carbon metabolism they can show highly 13C-enriched biomass, so for the interpretation of δ13C values in sediment cores it is crucial to understand to what extent aquatic macrophytes contribute to the isotopic signal of the sediments in Tibetan lakes and how variations can be explained in a palaeolimnological context. In addition, the high abundance of macrophytes makes them interesting as potential recorders of lake water δD. Hydrogen isotope analysis of biomarkers is a rapidly evolving approach to reconstructing past hydrological conditions and is of special relevance on the Tibetan Plateau owing to the direct linkage between variations in monsoon intensity and changes in regional precipitation/evaporation balances. A set of surface sediment and aquatic macrophyte samples from the central and eastern Tibetan Plateau was analysed for the composition as well as the carbon and hydrogen isotopes of n-alkanes. The analyses showed how variable the δ13C values of bulk organic matter and leaf lipids can be in submerged macrophytes, even within a single species, and how strongly these parameters affect the corresponding sediments. The contribution of the macrophytes, estimated by means of a binary isotopic model, was calculated to be up to 60% (mean: 40%) of total organic carbon and up to 100% (mean: 66%) of mid-chain n-alkanes. 
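The binary isotopic mixing model mentioned above estimates the fraction of macrophyte-derived carbon from two end-member δ13C values; a minimal sketch, with end-member and sample values invented for illustration:

```python
def mixing_fraction(delta_sample, delta_end1, delta_end2):
    """Two-end-member isotopic mixing: fraction of end-member 1 in the sample,
    f = (δ_sample - δ_end2) / (δ_end1 - δ_end2)."""
    return (delta_sample - delta_end2) / (delta_end1 - delta_end2)

# Hypothetical example: 13C-enriched macrophyte end member (-15‰),
# phytoplankton/terrestrial end member (-30‰), sediment sample at -24‰
f = mixing_fraction(-24.0, -15.0, -30.0)
print(round(f, 2))  # 0.4 → 40% macrophyte-derived contribution
```

The same two-end-member balance applies whether the tracer is bulk δ13C or compound-specific values of mid-chain n-alkanes.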
Hydrogen isotopes of n-alkanes proved to record the δD of meteoric water from summer precipitation. The apparent enrichment factor between water and n-alkanes was in the range of previously reported values (≈-130‰) at the most humid sites, but smaller (average: -86‰) at sites with a negative moisture budget. This indicates an influence of evaporation and evapotranspiration on the δD of the source water of aquatic and terrestrial plants. The offset between the δD of mid- and long-chain n-alkanes was close to zero in most of the samples, suggesting that lake water as well as soil and leaf water are affected to a similar extent by these effects. To apply biomarkers in a palaeolimnological context, the aliphatic biomarker fraction of a sediment core from Lake Koucha (34.0° N; 97.2° E; eastern Tibetan Plateau) was analysed for compound concentrations, δ13C and δD values. Before ca. 8 cal ka BP, the lake was dominated by aquatic macrophyte-derived mid-chain n-alkanes, while after 6 cal ka BP high concentrations of a C20 highly branched isoprenoid indicate a predominance of phytoplankton. These two fundamentally different states of the lake were linked by a transition period with high abundances of microbial biomarkers. δ13C values were relatively constant for long-chain n-alkanes, while mid-chain n-alkanes varied between -23.5 and -12.6‰. The highest values were observed for the assumed period of maximum macrophyte growth during the late glacial and for the phytoplankton maximum during the middle and late Holocene. The enriched values were therefore interpreted as the result of carbon limitation, induced in turn by high macrophyte and primary productivity, respectively. Hydrogen isotope signatures of mid-chain n-alkanes were able to track a previously deduced episode of reduced moisture availability between ca. 10 and 7 cal ka BP, indicated by a 20‰ shift towards higher δD values. 
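The apparent enrichment factor between source water and n-alkanes quoted above is conventionally computed from the two δD values; a sketch, with illustrative δD inputs:

```python
def enrichment_factor(delta_lipid, delta_water):
    """Apparent isotopic enrichment ε_lipid/water in ‰:
    ε = ((1000 + δ_lipid) / (1000 + δ_water) - 1) * 1000."""
    return ((1000.0 + delta_lipid) / (1000.0 + delta_water) - 1.0) * 1000.0

# Illustrative values: n-alkane δD of -190‰ against meteoric water of -70‰
print(round(enrichment_factor(-190.0, -70.0), 1))  # ≈ -129.0 ‰
```

A less negative ε, as found at the sites with a negative moisture budget, thus implies either D-enriched source water or a smaller biosynthetic fractionation.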
Indications of cooler episodes at 6.0, 3.1 and 1.8 cal ka BP were gained from drops in biomarker concentrations, especially of microbially derived hopanoids, and from coincident shifts towards lower δ13C values. These episodes correspond well with cool events reported from other locations on the Tibetan Plateau as well as elsewhere in the Northern Hemisphere. In conclusion, the study of recent sediments and plants improved the understanding of the factors affecting the composition and isotopic signatures of aliphatic biomarkers in sediments. Concentrations and isotopic signatures of the biomarkers in Lake Koucha could be interpreted in a palaeolimnological context and contribute to the knowledge of the history of the lake. Mid-chain n-alkanes derived from aquatic macrophytes were especially useful because of their high abundance in many Tibetan lakes and their ability to record major changes in lake productivity and palaeohydrological conditions. They therefore have the potential to contribute to a fuller understanding of past climate variability in this key region for atmospheric circulation systems.
Ghana is a prime example of a developing country that has managed to find its way to good governance. Many studies today attest that, compared with other African countries, Ghana is a pioneer in this respect. This is the starting point of the present study, which pursues the question: "Which causes, patterns, and conditions lead to the emergence of good governance?" At the centre of the study, as the guiding research question indicates, is an empirical investigation of the emergence of good governance and thus of a transformation process. This process is deliberately examined over a very long period (more than half a century) in order to capture long-term developments as well. The study follows a mixed-methods approach, drawing on both quantitative and qualitative methods, which in retrospect proved very fruitful. First, the quality of governance over the entire period is measured by means of six indicators. Then the reasons for progress and setbacks are analysed qualitatively. Recurring patterns can be identified, such as circular dynamics that for many years blocked the path to good governance until the respective cycles could be broken. This holds for democratic and rule-of-law development as well as for the provision of public goods and economic development. Although the individual dimensions of good governance are first examined separately, the interactions between the components become clearly visible. For example, it emerges clearly that the rule of law affects both the stability of political systems and economic development, and that these in turn influence corruption. Similar linkages can be traced for all other dimensions.
The development of a country can therefore only be understood and explained by taking a complex governance system into account, in which the interactions can be either constructive or destructive. The interconnections of the individual dimensions are captured first in a negative and then in a positive scenario. This construction of ideal types condenses the findings of the present work and serves the analytical understanding of the processes under investigation. The study shows how good governance can emerge from the interplay of various factors, and that it is scientifically very fruitful to extend transformation research to a complex governance system. The many empirical findings on the individual transformations are brought together into complex, interlocking overall scenarios. Since no explicit research on transformations towards good governance has existed so far, this study takes a first step in that direction. It also becomes clear that a transformation to good governance cannot be achieved by short-term changes to the framework conditions: it is a matter of cultural change, of learning processes, of long-term developments, which the study analyses using the example of Ghana. Many previous transformation studies have neglected this temporal component. Ghana has already taken many steps towards a path into the future and towards good governance; examining these steps is the core of the present work. Ghana's journey, however, is not yet complete.
This thesis is concerned with the development of numerical methods using finite difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as ADM, BSSN, NOR, and the generalized harmonic formulation. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including the case where some first-order derivatives are approximated with off-centered finite difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analysed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case (outflow, inflow, and completely inflow) are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also treated with the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
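The distinction between centered and off-centered first-derivative operators can be illustrated with a generic second-order example (not the 2n-order operators of the thesis): the fully one-sided stencil, usable where no grid point exists on one side, carries a truncation error roughly twice that of the centered one.

```python
import numpy as np

def d1_centered(f, h):
    """Second-order centered first-derivative stencil (interior points):
    f'(x_i) ~ (f[i+1] - f[i-1]) / (2h), error ~ (h^2/6) f'''."""
    return (f[2:] - f[:-2]) / (2.0 * h)

def d1_offcentered(f, h):
    """Second-order fully one-sided stencil, e.g. at an inflow boundary:
    f'(x_i) ~ (-3 f[i] + 4 f[i+1] - f[i+2]) / (2h), error ~ (h^2/3) f'''."""
    return (-3.0 * f[:-2] + 4.0 * f[1:-1] - f[2:]) / (2.0 * h)

# Both stencils approximate d/dx sin(x) = cos(x) to O(h^2).
h = 1.0e-3
x = np.arange(0.0, 1.0, h)
err_centered = np.max(np.abs(d1_centered(np.sin(x), h) - np.cos(x[1:-1])))
err_offcentered = np.max(np.abs(d1_offcentered(np.sin(x), h) - np.cos(x[:-2])))
```

Despite the larger truncation error, the one-sided form is indispensable near boundaries, and the thesis shows that modest off-centering can even improve the overall accuracy in some regimes.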
Owing to its high expression and broad tissue distribution, SULT1A1 plays a special role in human xenobiotic metabolism. While human SULT1A1 is expressed in a large number of tissues, murine SULT1A1 has been found mainly in liver, lung, and colon. Besides the tissue distribution, polymorphisms in the human SULT1A1 gene are also of major importance. The most common polymorphism in this gene leads to an amino acid substitution from arginine to histidine at position 213. The gene variant carrying histidine (designated SULT1A1*2) encodes a protein with low enzyme activity and a reduced enzyme level in thrombocytes. Little is known so far about the influence of these allelic variants in other tissues. Previous epidemiological studies investigated possible correlations between the gene variants and carcinogenesis in various tissues; these data, however, yield contradictory results on cancer risk. Because of the controversial epidemiological data, animal models were to be generated in order to examine the most common SULT1A1 alleles with respect to sensitivity towards dietary and environmental carcinogens. To generate transgenic (tg) mouse lines, the coding region of the gene together with large flanking human sequences upstream and downstream was integrated into the mouse genome by microinjection. Several mouse lines were established. Two of them, line 31 carrying the SULT1A1*1 allele and line 28 carrying the SULT1A1*2 allele, were analysed in detail. An identical transgene copy number was determined in both lines. Protein biochemical characterisation showed that in line (Li) 28 the tissue distribution as well as the cellular and subcellular localisation of human SULT1A1 largely corresponded to that in humans.
In Li 31, differences from Li 28 were found both in the tissue distribution and in the cellular localisation of the expressed human protein. At the protein level, expression in the SULT1A1*2-tg line was generally stronger than in the SULT1A1*1 line. This result was surprising, because in human thrombocytes the SULT1A1*1 allele leads to a higher SULT1A1 protein content than the SULT1A1*2 allele. To analyse the different protein expression in the tg mouse lines, the cDNA and the 5'-flanking region of the SULT1A1 gene were sequenced. In both tg lines, the cDNA sequence corresponded to the reference sequence from the gene database (PubMed). In the 5'-flanking region, known polymorphisms were analysed and different haplotypes were identified in the tg lines at positions -624 and -396. In Li 31, the haplotype described in the literature as being associated with higher SULT1A1 enzyme activity was detected. The possible relationship between transcription rate and protein expression was examined by RNA expression analyses of the coding region and the 5'-non-coding region (containing the alternative exons 1B and 1A). In the coding region, higher RNA expression was found in Li 28 than in Li 31 in the organs examined, whereas for exon 1B identical RNA expression was detected except in the lung. RNA containing exon 1A was found in all examined organs of Li 28, but only in the lung of Li 31. In both tg lines, however, larger PCR products were also detected with the exon 1A primers. This difference in exon 1A, together with possible splice variants, could thus be responsible for the different expression of the human SULT1A1 protein in the two tg mouse lines. The tg mouse models generated and characterised in this work were employed in a toxicological study.
The heterocyclic aromatic amine 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) was used. PhIP is formed when meat and fish are heated or fried and may be linked to the increased incidence of colon cancer in the Western world. Using 32P-postlabelling, the influence of the additional expression of the human SULT proteins on PhIP-DNA adduct formation was analysed. More DNA adducts were found in the tg animals than in wild-type mice, and the concentration of the DNA adducts formed correlated with the expression level of the human SULT1A1 protein in the tg mice. In the tg mouse lines generated in this work, carrying the most common allelic variants of the SULT1A1 gene, differences at the RNA and protein level could thus be identified. Moreover, it could be shown that expression of human SULT1A1 affects both the extent and the target tissue of DNA adduct formation in vivo.
The origin and evolution of granites have been widely studied because granitoid rocks constitute a major portion of the Earth's crust. The formation of granitic magma is, besides temperature, mainly controlled by the water content of these rocks. The presence of water in magmas plays an important role because aqueous fluids are able to change the chemical composition of the magma. The exsolution of aqueous fluids from melts is closely linked to a fractionation of elements between the two phases. The aqueous fluids then migrate to shallower parts of the Earth's crust because of their lower density compared to that of melts and adjacent rocks. This process separates fluids and melts; furthermore, during the ascent, aqueous fluids can react with the adjacent rocks and alter their chemical signature. This is particularly important during the formation of magmatic-hydrothermal ore deposits and in the late stages of the evolution of magmatic complexes. For a deeper insight into these processes, it is essential to improve our knowledge of element behavior in such systems. Trace elements in particular are used for these studies and for petrogenetic interpretations because, unlike major elements, they are not essential for the stability of the phases involved and often reflect magmatic processes with less ambiguity. However, for the majority of important trace elements, the dependence of the geochemical behavior on temperature, pressure, and in particular on the composition of the system is only incompletely studied experimentally, or not at all. Former studies often focus on the determination of fluid−melt partition coefficients (Df/m = cfluid/cmelt) of economically interesting elements, e.g., Mo, Sn, and Cu, and some partitioning data are available for elements that are also commonly used for petrological interpretations.
At present, no systematic experimental data on trace element behavior in fluid−melt systems as a function of pressure, temperature, and chemical composition are available. Additionally, almost all existing data are based on the analysis of quenched phases. This results in substantial uncertainties, particularly for the quenched aqueous fluid, because trace element concentrations may change upon cooling. The objective of this PhD thesis was to study fluid−melt partition coefficients between aqueous solutions and granitic melts for different trace elements (Rb, Sr, Ba, La, Y, and Yb) as a function of temperature, pressure, salinity of the fluid, composition of the melt, and experimental and analytical approach. The latter included the refinement of an existing method to measure trace element concentrations in fluids equilibrated with silicate melts directly at elevated pressures and temperatures, using a hydrothermal diamond-anvil cell and synchrotron radiation X-ray fluorescence microanalysis. The application of this in-situ method makes it possible to avoid the main source of error in data from quench experiments, i.e., the trace element concentration in the fluid. A comparison of the in-situ results to data from conventional quench experiments allows a critical evaluation of the quench data from this study and from the literature. In detail, the starting materials consisted of a suite of trace-element-doped haplogranitic glasses with ASI varying between 0.8 and 1.4 and H2O or a chloridic solution with m NaCl/KCl = 1 and different salinities (1.16 to 3.56 m (NaCl+KCl)). Experiments were performed at 750 to 950 °C and 0.2 or 0.5 GPa using conventional quench devices (externally and internally heated pressure vessels) with different quench rates, and at 750 °C and 0.2 to 1.4 GPa with in-situ analysis of the trace element concentration in the fluids. The fluid−melt partitioning data of all studied trace elements show (1) a preference for the melt (Df/m < 1) at all studied conditions; (2) one to two orders of magnitude higher Df/m using chloridic solutions compared to experiments with H2O; (3) a clear dependence on the melt composition for the fluid−melt partitioning of Sr, Ba, La, Y, and Yb in experiments using chloridic solutions; (4) quench-rate-related differences in the fluid−melt partition coefficients of Rb and Sr; and (5) distinctly higher fluid−melt partitioning data obtained from in-situ experiments than from comparable quench runs, particularly in the case of H2O as the starting solution. The data point to a preference of all studied trace elements for the melt even at fairly high salinities, which contrasts with other experimental studies but is supported by data from studies of natural co-genetically trapped fluid and melt inclusions. The in-situ measurements of trace element concentrations in the fluid verify that aqueous fluids change their composition upon cooling, which is particularly important for Cl-free systems. The distinct differences between the in-situ results and the quench data of this study, as well as data from the literature, underline the importance of careful fluid sampling and analysis. The direct measurement of trace element contents in fluids equilibrated with silicate melts at elevated PT conditions therefore represents an important development towards more reliable fluid−melt partition coefficients. For further improvement, both the aqueous fluid and the silicate melt need to be analysed in situ, because partitioning data based on the direct measurement of the trace element content in the fluid combined with the analysis of a quenched melt are still not completely free of quench effects. At present, all available data on element complexation in aqueous fluids in equilibrium with silicate melts at high PT are derived indirectly from partitioning data, which in these experiments involves assumptions on the species present in the fluid.
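The Nernst-type partition coefficient used throughout is simply a concentration ratio. A toy sketch with hypothetical concentrations, chosen only to mimic the reported melt preference (Df/m < 1) and the order-of-magnitude effect of chloridic fluids:

```python
def partition_coefficient(c_fluid_ppm, c_melt_ppm):
    """Nernst-type fluid-melt partition coefficient D(f/m) = c_fluid / c_melt.

    D < 1 means the element prefers the melt; D > 1 the fluid.
    Units cancel, so any consistent concentration unit works.
    """
    return c_fluid_ppm / c_melt_ppm

# Hypothetical concentrations for one trace element in two runs:
d_h2o = partition_coefficient(c_fluid_ppm=0.5, c_melt_ppm=100.0)    # pure-H2O fluid
d_nacl = partition_coefficient(c_fluid_ppm=30.0, c_melt_ppm=100.0)  # chloridic fluid
print(d_h2o, d_nacl)  # 0.005 0.3 -> both < 1, ratio 60x
```

Because D is a ratio of two measured concentrations, any quench-induced change in the fluid concentration propagates directly into D, which is why the in-situ measurement of c_fluid matters so much here.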
However, the activities of the chemical components in these partitioning experiments are not well constrained, although they are required for the definition of exchange equilibria between melt and fluid species. For example, the melt-dependent variation of the partition coefficient observed for Sr implies that this element cannot be complexed solely by Cl−, as suggested previously; the data indicate a more complicated complexation of Sr in the aqueous fluid. To verify this hypothesis, the in-situ setup was also used to determine strontium complexation in fluids equilibrated with silicate melts at the desired PT conditions by applying X-ray absorption near edge structure (XANES) spectroscopy. First results show a strong effect of both fluid and melt composition on the resulting XANES spectra, which indicates different complexation environments for Sr.
Insect photoreceptors are epithelial cells with a characteristic, highly polar morphology and organisation. The molecular components of the phototransduction cascade are located in the rhabdomere, a seam of densely packed microvilli along the photoreceptor cell. As early as the 1970s it was described that the microvilli along a photoreceptor cell differ in their orientation, in other words, that the rhabdomeres are twisted along the longitudinal axis of the cell. In photoreceptors R1-R6 of dipteran flies (Calliphora, Drosophila), the microvilli in the distal and proximal parts of a rhabdomere are oriented roughly at right angles to each other. This phenomenon is referred to in the literature as rhabdomere twisting and reduces the sensitivity to polarised light. For the Drosophila eye it was shown that this structural asymmetry of the photoreceptor cells is accompanied by a molecular asymmetry in the distribution of phosphotyrosine-containing proteins at the stalk membrane (a non-microvillar region of the apical plasma membrane). It was further shown that immunocytochemical labelling with anti-phosphotyrosine (anti-PY) can be used as a light-microscopic marker for rhabdomere twisting. So far, mainly the physiological significance of rhabdomere twisting has been studied; little is known about its developmental and cell-biological basis. The aim of the present work was to clarify the identity of the phosphotyrosine-containing proteins at the stalk membrane and to analyse their functional role in the development of rhabdomere twisting. In addition, the influence of the inner photoreceptors R7 and R8 on the twisting of the rhabdomeres of R1-R6 was to be examined. For the two protein kinases Rolled (ERK) and Basket (JNK), members of the mitogen-activated protein kinase (MAPK) family, I could show that in their activated (phosphorylated) forms (pERK and pJNK, respectively) they exhibit an asymmetric distribution at the stalk membrane comparable to the anti-PY labelling. Furthermore, this asymmetric distribution of pERK and pJNK, like that of PY, was only established shortly before eclosion of the flies (at about 90% of pupal development). Preincubation experiments with anti-PY abolished the labelling with anti-pERK and anti-pJNK. These results indicate that pERK and pJNK belong to the proteins recognised by anti-PY at the stalk membrane. Since ERK and JNK are kinases, it is plausible that they are involved in the development of rhabdomere twisting. This hypothesis was tested by analysing hypermorphic (rl SEM) and hypomorphic (rl 1/rl 10a) Rolled mutants. In the rl SEM mutant with increased kinase activity, the asymmetric positioning of pERK at the stalk membrane as well as the tilting of the microvilli occurred earlier in pupal development; in the adult eye, the anti-PY labelling was more intense in the distal part of the photoreceptor cells and the tilt angle was increased. In the rl 1/rl 10a mutant with reduced kinase activity, the anti-PY labelling and the tilt angle were reduced in the proximal part of the cells. The protein kinase ERK thus influences both the temporal establishment of rhabdomere twisting and its extent in the adult animal. Rhabdomere twisting and the change in the anti-PY labelling pattern occur rather abruptly in photoreceptors R1-R6 at half the length of the ommatidium, where the rhabdomere of R7 ends and that of R8 begins. This raised the question of whether rhabdomere twisting in R1-R6 is influenced by photoreceptor R7 and/or R8. To address this question, mutants lacking the R7 photoreceptors, the R8 photoreceptors, or both were analysed.
The most important result of these investigations was that in the absence of R8, rhabdomere twisting in R1-R6 follows no discernible rules. R8 is therefore a prerequisite for the establishment of rhabdomere twisting in R1-R6. Based on this and further results, the following model was developed: in the third larval stage, R8 recruits the photoreceptor pairs R2/R5, R3/R4 and R1/R6, whereby R1-R6 are "polarised" through their contact with R8. Finally, R7 is recruited by R8, which leads to a fixation of the polarity of R1-R6 by R7. The tilting of the microvilli according to the established polarity takes place in the late pupal phase. The protein kinase ERK is involved in this final morphogenetic process.
Molecular photoswitches have attracted much attention lately, mostly because of their possible applications in nanotechnology and their role in biology. One of the most widely studied representatives of photochromic molecules is azobenzene (AB). With light, a static electric field, or tunneling electrons, this species can be "switched" from the flat and energetically more stable trans form into the compact cis form; the back reaction can be induced optically or thermally. Quantum chemical calculations, mostly based on density functional theory, on the AB molecule, AB derivatives, and related systems are presented. All calculations were done for isolated species, but with implications for recent experimental results aiming at the switching of surface-mounted ABs. In some of these experiments it is assumed that the switching process is substrate mediated, by attaching an electron or a hole to the adsorbate and thus forming short-lived anion or cation resonances. Therefore, cationic and anionic ABs were also calculated in this work. The influence of external electric fields on the potential energy surfaces was also studied. Furthermore, through the type, number, and positioning of various substituent groups, systematic changes in the activation energies and rates of the thermal cis-to-trans isomerization can be induced. The nature of the transition state for the ground-state isomerization was investigated. Applying Eyring's transition state theory, trends in activation energies and rates were predicted and are, where a comparison was possible, in good agreement with experimental data. Thermal isomerization was further studied in solution, for which a polarizable continuum model was employed. The influence of substitution and of an environment also leaves its traces on the structural properties of the molecules and on the quantitative appearance of the calculated UV/Vis spectra.
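Eyring's transition state theory, used above to predict isomerization rates, relates a rate constant to the free energy of activation via k = (kB·T/h)·exp(−ΔG‡/RT). A minimal sketch with a hypothetical barrier (not a value from the thesis):

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # molar gas constant, J/(mol*K)

def eyring_rate(delta_g_kjmol, temperature_k):
    """Eyring TST rate constant k = (kB*T/h) * exp(-dG/(R*T)), in 1/s,
    for a free energy of activation given in kJ/mol."""
    return (KB * temperature_k / H) * math.exp(
        -delta_g_kjmol * 1.0e3 / (R * temperature_k))

# Hypothetical ~100 kJ/mol barrier for a thermal cis-to-trans isomerization:
k_298 = eyring_rate(100.0, 298.15)  # on the order of 1e-5 per second
```

The exponential dependence on ΔG‡ is why modest substituent-induced barrier changes of a few kJ/mol translate into order-of-magnitude changes in the thermal back-reaction rate.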
Finally, an explicit treatment of a solid substrate was demonstrated for the conformational switching, by a scanning tunneling microscope, of a 1,5-cyclooctadiene (COD) molecule at a Si(001) surface, treated with a cluster model. First, we studied the energetics and potential energy surfaces along the relevant switching coordinates by quantum chemical calculations, followed by the switching dynamics using wave packet methods. We show that, in spite of the simplicity of the model, our calculations support the switching of adsorbed COD by inelastic electron tunneling at low temperatures.
About 2,000 of the more than 27,000 genes of the genetic model plant Arabidopsis thaliana encode transcription factors (TFs), proteins that bind DNA in the promoter region of their target genes and thus act as transcriptional activators and repressors. Since TFs play essential roles in nearly all biological processes, they are of great scientific and biotechnological interest. This thesis concentrated on the functional characterisation of four selected members of the Arabidopsis DOF family, namely DOF1.2, DOF3.1, DOF3.5 and DOF5.2, which were selected because of their specific expression pattern in the root tip, a region that comprises the stem cell niche and cells for the perception of environmental stimuli. DOF1.2, DOF3.1 and DOF3.5 are previously uncharacterised members of the Arabidopsis DOF family, while DOF5.2 has been shown to be involved in the phototrophic flowering response; its role in root development, however, has not been described so far. To identify in detail the biological processes regulated by the four DOF proteins, molecular and physiological characterisation of transgenic plants with modified levels of DOF1.2, DOF3.1, DOF3.5 and DOF5.2 expression (constitutive and inducible over-expression, artificial microRNA) was performed. Additionally, the expression patterns of the TFs and their target genes were analysed using promoter-GUS lines and publicly available microarray data. Finally, putative protein-protein interaction partners and upstream regulating TFs were identified using the yeast two-hybrid and one-hybrid systems. This combinatorial approach revealed distinct biological functions of DOF1.2, DOF3.1, DOF3.5 and DOF5.2 in the context of root development. DOF1.2 and DOF3.5 are specifically and exclusively expressed in the root cap, including the central root cap (columella) and the lateral root cap, organs which are essential for directing oriented root growth.
It could be demonstrated that both genes act in the signaling pathway of the plant hormone auxin and have an impact on distal cell differentiation. Altered levels of gene expression lead to changes in auxin distribution, abnormal cell division patterns, and altered root growth orientation. DOF3.1 and DOF5.2 share a specific expression pattern in the organising centre of the root stem cell niche, called the quiescent centre (QC). Both genes redundantly control cell differentiation in the root's proximal meristem and reveal a novel transcriptional regulation pathway for genes enriched in the QC cells. Furthermore, this work revealed a novel bipartite nuclear localisation signal present in the protein sequences of the DOF TF family in all sequenced plant species. In summary, this work provides an important contribution to our knowledge of the role of DOF TFs during root development. Future work will concentrate on revealing the exact regulatory networks of DOF1.2, DOF3.1, DOF3.5 and DOF5.2 and their possible biotechnological applications.
For the conservation of endangered parrots (Psittaciformes), captive breeding is of great importance alongside the preservation of wild populations; the reproduction of certain species, however, remains insufficient. The main reason is considered to be forced pairing within breeding programmes (for example the European Endangered Species Programme, EEP), in which breeding pairs are put together mainly according to genetic criteria. In most parrot species, which live in permanent pair bonds (perennial monogamy), reproductive success is closely correlated with the pair bond. Free mate choice is therefore of great importance for breeding in captivity, but is rarely possible within conservation breeding programmes. The aim of this study was to develop a scientifically founded method for assessing the reproductive potential of breeding pairs of the genus Ara on the basis of their pair bond. To this end, the significance of the quality of the pair bond for lifetime reproductive success (LRS) was examined. Data were collected at the breeding centre 'La Vera' of the Loro Parque Fundación on Tenerife, Spain, where 21 breeding pairs of the genus Ara were studied in 2006 and 2007. The pair bond was characterised on the one hand by typical pair-bonding behaviour and on the other by the physiological synchronisation of the breeding pairs, represented by the secretion of the steroid hormone testosterone. Pair-bonding behaviour comprised the 'synchronisation of daily activity', 'contact behaviour' and 'social interactions'. The synchronisation of daily activity covered the behaviours resting, sitting, feeding, preening, occupation and locomotion. Under contact behaviour, the crossing of the individual distance during particular behaviours and the division of roles between the sexes were examined.
'Social interactions' comprised the duration and frequency of allopreening and the social index. For allopreening, the duration and frequency of bouts were recorded, as well as the initiator of each interaction. In addition, it was examined which sex took an active part in allopreening, how often, and for how long. From these observations the social index was calculated, which describes the ratio of socio-positive to agonistic interactions for each individual and for the pair as a whole. To measure the testosterone secretion of the partner birds, faecal samples were collected once a week for each individual over a period of nine weeks from September to November 2007. The analysis of the samples was carried out by the Veterinär-Physiologisch-Chemisches Institut of the Universität Leipzig under the direction of Prof. Dr. Almuth Einspanier. The hormone content of the faecal samples was determined with a competitive double-antibody enzyme immunoassay (EIA). The reproductive potential was described by the number of eggs, clutches and chicks as well as by the clutch size. Relative to the duration of the pair bond, these data provide information on the productivity of a breeding pair, from which a productivity coefficient was additionally calculated. Furthermore, the number of chicks reared independently by a breeding pair was taken as a measure of the capacity for cooperative chick rearing. To examine the significance of pair-bond quality, discriminant function analyses and regression analyses were performed, for which the breeding pairs were divided into groups according to their reproductive potential. The results of the study showed that the reproductive potential of breeding pairs depends on several criteria characterising the quality of the pair bond.
A distinction must be made here between productivity and the capacity for cooperative chick rearing. With regard to synchronised daily activity, the productivity of a pair was positively influenced by resting synchronously with the partner, as well as by the frequency and duration of allopreening initiated by the female. Breeding pairs with high productivity were also characterised by a high 'intra-pair fluctuation' of the steroid hormone testosterone. The breeding pairs able to rear their young cooperatively likewise showed a high proportion of resting phases synchronised with the partner, frequent resting in body contact with the partner, and a high investment of time by the males in initiating and performing allopreening. Moreover, males contributing to cooperative chick rearing showed a considerably lower average testosterone concentration over the study period than males belonging to pairs not capable of rearing their young on their own. This result reflects the role of testosterone in parental care and provides a starting point for further investigations. The study demonstrated that it is possible and worthwhile to use the individual behaviour of captive animals for the conservation of endangered species. Further investigations building on this study should aim at reliably identifying those breeding pairs that have good reproductive potential. In this way, the insufficient reproductive success of endangered parrot species in captivity resulting from forced pairing can be minimised.
Calibration of the global hydrological model WGHM with water mass variations from GRACE gravity data
(2010)
Since the start of the GRACE (Gravity Recovery And Climate Experiment) mission in 2002, time-dependent global maps of the Earth's gravity field have been available to study geophysical and climatologically driven mass redistributions on the Earth's surface. In particular, GRACE observations of total water storage variations (TWSV) provide a comprehensive data set for analysing the water cycle on large scales. They are therefore invaluable for the validation and calibration of large-scale hydrological models such as the WaterGAP Global Hydrology Model (WGHM), which simulates the continental water cycle including its most important components, such as soil, snow, canopy, surface water and groundwater. Hitherto, WGHM has exhibited significant differences to GRACE, especially in the seasonal amplitude of TWSV. The need to validate hydrological models is further highlighted by large differences between several global models, e.g. WGHM, the Global Land Data Assimilation System (GLDAS) and the Land Dynamics model (LaD). In this context, GRACE links geodetic and hydrological research. This link demands the development of adequate data integration methods on both sides, which form the main objectives of this work. They include the derivation of accurate GRACE-based water storage changes, the development of strategies to integrate GRACE data into a global hydrological model as well as of a calibration method, followed by the re-calibration of WGHM in order to analyse process and model responses. To achieve these aims, GRACE filter tools for the derivation of regionally averaged TWSV were evaluated for specific river basins. Here, a decorrelation filter that uses the GRACE orbits in its design proved the most efficient of the tested methods. Consistency between the data sets and equal spatial resolution of observed and simulated TWSV were achieved by including the most important hydrological processes and by filtering both data sets identically.
Appropriate calibration parameters were derived from a sensitivity analysis of WGHM against TWSV. Finally, a multi-objective calibration framework was developed to constrain model predictions by both river discharge and GRACE TWSV, realised with an evolutionary method, the ε-Non-dominated Sorting Genetic Algorithm II (ε-NSGAII). The model was calibrated for the 28 largest river basins worldwide, and for most of them improved simulation results were achieved with regard to both objectives. The multi-objective approach yielded more reliable and consistent simulations of TWSV within the continental water cycle and revealed possible model structure errors or mis-modelled processes for specific river basins. For tropical regions in particular, the seasonal amplitude of simulated water mass variations increased. The findings lead to an improved understanding of hydrological processes and their representation in the global model. Finally, the robustness of the results is analysed with respect to GRACE and runoff measurement errors. A main conclusion obtained from the results is that not only soil water and snow storage but also groundwater and surface water storage have to be included when comparing modelled and GRACE-derived total water budget data. Regarding model calibration, the regionally varying distribution of parameter sensitivity suggests tuning only the parameters of important processes within each region. Furthermore, observations of single storage components besides runoff are necessary to improve the signal amplitudes and timing of simulated TWSV as well as to evaluate them with higher accuracy. The results of this work highlight the value of GRACE data when merged into large-scale hydrological modelling and present methods to improve large-scale hydrological models.
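At the heart of the multi-objective calibration against discharge and GRACE TWSV is the Pareto non-domination test that ε-NSGAII builds on. A minimal sketch, assuming two error objectives to be minimised; the candidate objective pairs are hypothetical, and the full algorithm additionally uses ε-boxes, non-dominated sorting and genetic operators.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of candidate parameter sets."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (discharge error, GRACE TWSV error) pairs for candidate parameter sets
candidates = [(0.8, 0.3), (0.5, 0.5), (0.9, 0.6), (0.4, 0.9)]
print(pareto_front(candidates))  # (0.9, 0.6) is dominated by (0.5, 0.5)
```

The trade-off surface returned here is exactly what a bi-objective calibration exposes: no single parameter set is best for both discharge and TWSV at once.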
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic liquid crystal (LC). The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film. They are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Due to the anchoring effect of the LC molecules, this relaxation process is slowed down considerably, when compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LCs is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it was shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful e.g. in sensing and visualization applications. 
Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface which affects the electro-optical effect of the PDLC films and prevents a larger increase in optical transmission.
This work describes the development and characterisation of new 'smart' redox hydrogels with three distinct functional properties, and their successful use for the electrochemical wiring of oxidoreductases. These new redox polymers (1) carry covalently integrated redox centres surrounded by a hydrophilic polymer matrix, (2) bear reactive coupling groups for building self-assembled polymer layers on electrode surfaces, and (3) allow their redox activity to be controlled by external stimuli through the use of 'intelligent' polymers. The redox hydrogels were synthesised in simple one-step syntheses following a modular building-block approach. To this end, various redox centres (ferrocene, 1,10-phenanthroline-5,6-dione and 4-carboxy-2,5,7-trinitro-9-fluorenone), reactive coupling groups (epoxy, amino, thiol or disulfide functions) and polymer matrices (poly(N-isopropylacrylamide) (PNIPAM) and poly(ethylene glycol methacrylate) (PEGMA)) were copolymerised in different compositions. The polymers were deposited as thin films on electrode surfaces via their repeating functionalities and characterised physico- and electrochemically. This type of polymer attachment, demonstrated here for the first time, yields three-dimensional, hydrophilic self-assembled polymer layers. The electron-transfer pathways are short and the electron transfer is efficient. These polymer-modified electrodes were used to wire two exemplarily chosen oxidoreductases: nicotinamide adenine dinucleotide-dependent glucose dehydrogenase (NAD-GDH), which uses a freely diffusing coenzyme, and pyrroloquinoline quinone-dependent glucose dehydrogenase (PQQ-GDH), which uses a prosthetic coenzyme. The redox activities of the PNIPAM-Foxy and PEGMA-Fc polymers could be controlled by external stimuli in the form of temperature and calcium concentration.
A model was presented for the complexation of calcium ions by the PEG side chains, forming crown-ether-like structures and thereby enhancing the electron transfer.
This thesis presents methods for the automated synthesis of flexible chip multiprocessor systems from parallel programs targeted at FPGAs, exploiting both task-level parallelism and architecture customization. Automated synthesis is necessitated by the complexity of the design space. A detailed description of the design space is provided in order to determine which parameters should be modeled to facilitate automated synthesis by optimizing a cost function, the emphasis being placed on inclusive modeling of parameters from the application, architectural and physical subspaces, as well as their joint coverage, in order to avoid pre-constraining the design space. Given a parallel program and an IP library, the automated synthesis problem is to simultaneously (i) select processors, (ii) map and schedule tasks onto them, and (iii) select one or several networks for inter-task communication such that design constraints and optimization objectives are met. The research objective of this thesis is to find a suitable model for automated synthesis and to evaluate methods of using the model for architectural optimizations. Our contributions are a holistic approach to the design of such systems, corresponding models to facilitate automated synthesis, an evaluation of optimization methods using state-of-the-art integer linear programming and answer set programming, as well as the development of synthesis heuristics to address runtime challenges.
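The mapping-and-scheduling step (ii) can be illustrated with a classic greedy heuristic: assign the longest tasks first, each to the currently least-loaded processor. This is a stand-in sketch for intuition only; the thesis uses exact ILP/ASP formulations and its own heuristics, and the task runtimes below are hypothetical.

```python
def greedy_map(task_costs, n_procs):
    """Longest-processing-time-first mapping heuristic.

    task_costs: dict of task name -> runtime (hypothetical units).
    Returns (task -> processor mapping, makespan)."""
    loads = [0] * n_procs
    mapping = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        p = loads.index(min(loads))  # least-loaded processor
        loads[p] += cost
        mapping[task] = p
    return mapping, max(loads)

tasks = {"t0": 4, "t1": 3, "t2": 3, "t3": 2}  # hypothetical task runtimes
mapping, makespan = greedy_map(tasks, 2)
print(makespan)  # 6: an optimal balance for this instance
```

Unlike this heuristic, an ILP or ASP solver can prove optimality and handle communication networks and design constraints jointly, which is why exact methods are evaluated in the thesis.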
The analysis compares installations by Bruce Nauman and Olafur Eliasson, starting from the question of how the artistic strategies of performativity of the 1960s/70s and those of contemporary art differ in their impact and effects. The positions of the two artists are regarded as paradigmatic for an aesthetics of the performative. Besides the comparison of the artists, the focus is on the theoretical engagement with the discursive figure of performativity and its methodological applicability in art history. While installations of the 1960s/70s are characterised in particular by their psycho-physical impact on the viewer's sensory perception, sometimes provoking outright shock effects, contemporary artistic practice is concerned primarily with visual and poetic effects that demand a contemplative mode of reception from the viewer. Bruce Nauman was concerned with questioning the traditional status of the artwork as an object of contemplation, graspable through concepts such as form, origin and originality, and instead with making a real bodily experience accessible to the viewer. Artists such as Olafur Eliasson are concerned in their artistic productions above all with the perception of perception and with the generation of effects of presence. With the emergence of such procedures it became clear that performative installations demanded other forms of description, and that these can be grasped by an aesthetics of the performative. How exactly does the shift from the performative strategies of the 1960s/70s to those of contemporary installation artists take place? Does it run from shock to poetry?
The genome can be considered the blueprint of an organism. Composed of DNA, it harbours all organism-specific instructions for the synthesis of all structural components and their associated functions. The role of carrier of actual molecular structure and function was long believed to be assumed exclusively by proteins encoded in particular segments of the genome, the genes. In the process of converting the information stored in genes into functional proteins, RNA – a third major class of molecules – was discovered early on to act as a messenger, copying the genomic information and relaying it to the protein-synthesizing machinery. Furthermore, RNA molecules were identified to assist in the assembly of amino acids into native proteins. For a long time, these rather passive roles were thought to be the sole purpose of RNA. In recent years, however, new discoveries have led to a radical revision of this view. First, RNA molecules with catalytic functions – thought to be the exclusive domain of proteins – were discovered. Then, scientists realized that much more of the genomic sequence is transcribed into RNA molecules than is accounted for by protein-coding genes, raising the question of what the function of all these molecules is. Furthermore, very short and altogether new types of RNA molecules, seemingly playing a critical role in orchestrating cellular processes, were discovered. Thus, RNA has become a central research topic in molecular biology, even to the extent that some researchers dub cells "RNA machines". This thesis aims to contribute to our understanding of RNA-related phenomena by applying bioinformatics methods. First, we performed a genome-wide screen to identify sites at which the chemical composition of DNA (the genotype) critically influences phenotypic traits (the phenotype) of the model plant Arabidopsis thaliana. Whole-genome hybridisation arrays were used, and an informatics strategy was developed to identify polymorphic sites from hybridisation to genomic DNA.
Following this approach, genotype-phenotype associations were discovered across the entire Arabidopsis genome, including in regions not currently known to encode proteins, which thus represent candidate sites for novel functional RNA molecules. By statistically associating them with phenotypic traits, clues as to their particular functions were obtained. Furthermore, these candidate regions were subjected to a novel RNA functional-class prediction method developed as part of this thesis. While determining the chemical structure (the sequence) of candidate RNA molecules is relatively straightforward, elucidating their structure-function relationship is much more challenging. Towards this end, we devised and implemented a novel algorithmic approach to predict the structural and, thereby, functional class of RNA molecules. In this algorithm, the concept of treating RNA molecule structures as graphs was introduced. We demonstrate that this abstraction of the actual structure leads to meaningful results that may greatly assist in the characterization of novel RNA molecules. Furthermore, by using graph-theoretic properties as descriptors of structure, we identified particular structural features of RNA molecules that may determine their function, thus providing new insights into the structure-function relationships of RNA. The method (termed Grapple) has been made available to the scientific community as a web-based service. RNA has taken centre stage in molecular biology research, and novel discoveries can be expected to further solidify the central role of RNA in the origin and support of life on Earth. As illustrated by this thesis, bioinformatics methods will continue to play an essential role in these discoveries.
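The graph abstraction of RNA structure can be illustrated with a minimal sketch that turns a secondary structure in dot-bracket notation into a graph: backbone edges connect neighbouring nucleotides, and base-pair edges connect matched brackets. The encoding below is a common convention chosen for illustration, not necessarily the exact representation used by Grapple.

```python
def dotbracket_to_graph(structure):
    """Edge list of a graph built from an RNA secondary structure in
    dot-bracket notation: '(' and ')' mark base pairs, '.' unpaired."""
    edges = [(i, i + 1) for i in range(len(structure) - 1)]  # backbone
    stack = []
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            edges.append((stack.pop(), i))  # base-pair edge
    return edges

# Hairpin: three base pairs enclosing a 3-nucleotide loop
edges = dotbracket_to_graph("(((...)))")
print(len(edges))  # 8 backbone edges + 3 base-pair edges = 11
```

Graph-theoretic descriptors (degree distribution, cycles, connectivity) computed on such graphs can then serve as features for classifying the structural family of a molecule.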
Visual communication is an efficient method for describing dynamic phenomena. Perceiving information objects precisely and enabling fast access to structured, relevant information requires analysis and presentation methods that are consistent and designed according to the formal principle of minimalism. Because of their static system structure and the lack of conceptual optimisation, geographic information systems can model the spatial and temporal information of dynamic spatial phenomena only to a limited extent. The research in this thesis therefore focuses on three interdisciplinary approaches. The first approach is near-real-time data acquisition, managed in geodatabases in a time-oriented manner. The second approach considers analysis and simulation methods that analyse and predict dynamic behaviour. The third approach designs visualisation methods that depict dynamic processes in particular. The symbolisation of the processes adapts as needed to the different development phases, depending on the course of the process and the interaction between databases and simulation models. Dynamic aspects can thus be developed and visualised promptly with modular tools, using proven functions from GIScience. The analysis, overlay and data-management functions are intended to serve as an alternative to static-map methods in terms of usage and evaluation potential. Important for the temporal component is the combination of new technologies, e.g. simulation and animation, based on a structured temporal database in conjunction with statistical methods. Methodologically, model approaches and visualisation techniques are developed and transferred to the transport domain.
Traffic-dynamic phenomena that cannot be represented coherently and comprehensively are separated modularly in a service-oriented architecture in order to present them visually, in space and time, on different layers. Past developments and future forecasts are modelled and visually analysed using various computation methods. Coupling a microsimulation (representation of individual vehicles) with a network-controlled macrosimulation (representation of an entire road network) enables scale-independent simulation and visualisation of mobility behaviour without time-consuming assessment-model computations. In the future, the visual analysis of spatio-temporal change will be an efficient means of supporting planning decisions by making information available across domains, clearly structured and purpose-oriented. The added value of visual geo-analyses integrated modularly in one system is the flexible evaluation of measurement data by temporal and spatial characteristics.
Fiscal federalism has been an important topic among public finance theorists over the last four decades. A series of arguments holds that the decentralization of government enhances growth by improving allocative efficiency. However, empirical studies have shown mixed results for industrialized and developing countries, and some have demonstrated that there might be a threshold level of economic development below which decentralization is not effective. Developing and transition countries have developed a variety of forms of fiscal decentralization as a possible strategy to achieve effective and efficient governmental structures. Owing to country-specific circumstances, a generalized principle of decentralization does not exist. Decentralization has therefore taken place in different forms in various countries at different times, and even exactly the same extent of decentralization may have had different impacts under different conditions. The purpose of this study is to investigate the current state of fiscal decentralization in Mongolia and to develop policy recommendations for an efficient and effective system of intergovernmental fiscal relations for Mongolia. From this perspective, the analysis concentrates on the scope and structure of the public sector, the assignment of expenditures and revenues, and the design of intergovernmental transfers and sub-national borrowing. The study is based on data for the twenty-one provinces and the capital city of Mongolia for the period from 2000 to 2009. As a former socialist country, Mongolia had a highly centralized governmental sector. The analysis revealed that over the last decade Mongolia introduced a number of decentralization measures which followed a top-down approach and were implemented slowly, without any integrated decentralization strategy. As a result, Mongolia became a de-concentrated state with continued fiscal centralization.
The revenue assignment lacks a crucial element, namely significant revenue autonomy for sub-national governments, which is vital for efficient service delivery at the local level. Under the current assignment of expenditure and revenue responsibilities, most of the provinces are unable to provide a certain national standard of public goods supply. Hence, intergovernmental transfers from the central jurisdiction to the sub-national jurisdictions play an important role in equalizing the vertical and horizontal imbalances in Mongolia. The critical problem associated with intergovernmental transfers is the absence of a stable, predictable and transparent system of transfer allocation. The amount of transfers to sub-national governments is determined largely by political decisions on an ad hoc basis and disregards local differences in needs and fiscal capacity. A fiscal equalization system based on the fiscal needs of the provinces should therefore be implemented. The equalization transfers would at least partly offset the regional disparities in revenues and enable the sub-national governments to provide a national minimum standard of local public goods.
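A needs-based equalization transfer of the kind recommended above can be sketched as a gap-filling rule: a province whose own fiscal capacity falls short of a national standard of need is topped up, and one with a surplus receives nothing. This is a stylised illustration with hypothetical figures, not the formula proposed in the thesis.

```python
def equalization_transfer(fiscal_need, fiscal_capacity):
    """Gap-filling transfer: the shortfall between a province's
    standardised fiscal need and its own revenue capacity,
    floored at zero (stylised illustration)."""
    return max(0.0, fiscal_need - fiscal_capacity)

# Hypothetical figures (e.g. billion MNT) for two provinces
print(equalization_transfer(10.0, 6.5))  # 3.5 -> deficit province is topped up
print(equalization_transfer(8.0, 9.2))   # 0.0 -> surplus province gets nothing
```

Because the rule depends only on measured need and capacity, it is stable and predictable, in contrast to ad hoc political allocation.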
In the first section of the thesis, graphitic carbon nitride was synthesised for the first time by the high-temperature condensation of dicyandiamide (DCDA) – a simple molecular precursor – in a eutectic salt melt of lithium chloride and potassium chloride. The extent of condensation, namely next-to-complete conversion of all reactive end groups, was verified by elemental microanalysis and vibrational spectroscopy. TEM and SEM measurements gave detailed insight into the well-defined morphology of these organic crystals, which are based not on 0D or 1D constituents like known molecular or short-chain polymeric crystals but on the packing motif of extended 2D frameworks. The proposed crystal structure of this g-C3N4 species was derived in analogy to graphite by means of extensive powder XRD studies, indexing and refinement. It is based on sheets of hexagonally arranged s-heptazine (C6N7) units that are held together by covalent bonds between C and N atoms. These sheets stack in a graphitic, staggered fashion adopting an AB motif, as corroborated by powder X-ray diffractometry and high-resolution transmission electron microscopy. This study is contrasted with one of the many popular – yet unsuccessful – approaches of the last 30 years of the scientific literature, namely attempting the condensation of an extended carbon nitride species through synthesis in the bulk. The second section expands the repertoire of available salt melts, introducing the lithium bromide and potassium bromide eutectic as an excellent medium to obtain a new phase of graphitic carbon nitride. The combination of SEM, TEM, PXRD and electron diffraction reveals that the new graphitic carbon nitride phase stacks in an ABA' motif, forming unprecedentedly large crystals. This section takes up the notion of the preceding chapter that condensation in a eutectic salt melt is the key to a high degree of conversion, mainly through a solvatory effect.
At the close of this chapter, ionothermal synthesis is established as a powerful tool to overcome the inherent kinetic problems of solid-state reactions, such as incomplete polymerisation and condensation in the bulk, especially when the temperature requirement of the reaction in question falls into the proverbial "no man's land" of classical solvents, i.e. above 250 to 300 °C. The following section puts to the test the claim that the crystalline carbon nitrides obtained from a salt melt are indeed graphitic. A typical property of graphite – namely the accessibility of its interplanar space for guest molecules – is transferred to the graphitic carbon nitride system. Metallic potassium and graphitic carbon nitride are converted to give the potassium intercalation compound K(C6N8)3, designated according to its stoichiometry and proposed crystal structure. Reaction of the intercalate with aqueous solvents triggers the exfoliation of the graphitic carbon nitride material and – for the first time – gives access to single (or few-layer) carbon nitride sheets analogous to graphene, as seen in the formation of sheets, bundles and scrolls of carbon nitride in TEM imaging. The exfoliated sheets form a stable, strongly fluorescent solution in aqueous media, which shows no sign in UV/Vis spectroscopy that the aromaticity of the individual sheets was degraded. The final section expands on the mechanism underlying the formation of graphitic carbon nitride by literally expanding the distance between the covalently linked heptazine units which constitute these materials. A close examination of all reaction mechanisms proposed to date, in the light of exhaustive DSC/MS experiments, highlights the possibility that the heptazine unit can be formed from smaller molecules, even if some of the designated leaving groups (such as ammonia) are substituted by an element, R, which later remains linked to the nascent heptazine.
Furthermore, it is suggested that the key functional groups in the process are the triazine- (Tz) and the carbonitrile- (CN) group. On the basis of these assumptions, molecular precursors are tailored which encompass all necessary functional groups to form a central heptazine unit of threefold, planar symmetry and then still retain outward functionalities for self-propagated condensation in all three directions. Two model systems based on a para-aryl (ArCNTz) and para-biphenyl (BiPhCNTz) precursors are devised via a facile synthetic procedure and then condensed in an ionothermal process to yield the heptazine based frameworks, HBF-1 and HBF-2. Due to the structural motifs of their molecular precursors, individual sheets of HBF-1 and HBF-2 span cavities of 14.2 Å and 23.0 Å respectively which makes both materials attractive as potential organic zeolites. Crystallographic analysis confirms the formation of ABA’ layered, graphitic systems, and the extent of condensation is confirmed as next-to-perfect by elemental analysis and vibrational spectroscopy.
Due to its unique environmental conditions and various feedback mechanisms, the Arctic region is especially sensitive to climate change. The influence of clouds on the radiation budget is substantial, but difficult to quantify and parameterize in models. In the framework of this PhD project, elastic backscatter and depolarization lidar observations of Arctic clouds were performed during the international Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) from Svalbard in March and April 2007. Clouds were probed above the inaccessible Arctic Ocean with a combination of airborne instruments: the Airborne Mobile Aerosol Lidar (AMALi) of the Alfred Wegener Institute for Polar and Marine Research provided information on the vertical and horizontal extent of clouds along the flight track, optical properties (backscatter coefficient), and cloud thermodynamic phase. From the data obtained by the spectral albedometer (University of Mainz), the cloud phase and cloud optical thickness were deduced. Furthermore, in situ observations with the Polar Nephelometer, Cloud Particle Imager and Forward Scattering Spectrometer Probe (Laboratoire de Météorologie Physique, France) provided information on the microphysical properties: cloud particle size and shape, concentration, extinction, and liquid and ice water content. In the thesis, a data set of four flights is analyzed and interpreted. The lidar observations served to detect atmospheric structures of interest, which were then probed by the in situ instruments. With this method, an optically subvisible ice cloud was characterized by the ensemble of instruments (10 April 2007). Radiative transfer simulations based on the lidar, radiation and in situ measurements allowed the calculation of the cloud forcing, amounting to -0.4 W m⁻². This slight surface cooling is negligible on a local scale.
However, thin Arctic clouds have been reported more frequently in winter, when the clouds' effect on longwave radiation (a surface warming of 2.8 W m⁻²) is not balanced by the reduced shortwave radiation (surface cooling). Boundary-layer mixed-phase clouds were analyzed for two days (8 and 9 April 2007). The typical structure, consisting of a predominantly liquid water layer at cloud top and ice crystals below, was confirmed by all instruments. The lidar observations were compared to European Centre for Medium-Range Weather Forecasts (ECMWF) meteorological analyses. A change of air masses along the flight track was evidenced in the airborne data by a small, completely glaciated cloud part within the mixed-phase cloud system. This indicates that the updraft necessary for the formation of new cloud droplets at cloud top is disturbed by the mixing processes. The measurements served to quantify the shortcomings of the ECMWF model in describing mixed-phase clouds. As the partitioning of cloud condensate into liquid and ice water is done by a diagnostic equation based on temperature, cloud structures consisting of a liquid cloud-top layer with ice below could not be reproduced correctly. A small amount of liquid water was calculated only for the lowest (and warmest) part of the cloud. Furthermore, the liquid water content was underestimated by an order of magnitude compared to the in situ observations. The airborne lidar observations of 9 April 2007 were compared to spaceborne lidar data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite. The systems agreed on the increase of cloud-top height along the same flight track. However, during the time delay of 1 h between the lidar measurements, advection and cloud processing took place, and a detailed comparison of small-scale cloud structures was not possible.
A double layer cloud at an altitude of 4 km was observed with lidar at the West coast in the direct vicinity of Svalbard (14 April 2007). The cloud system consisted of two geometrically thin liquid cloud layers (each 150 m thick) with ice below each layer. While the upper one was possibly formed by orographic lifting under the influence of westerly winds, or by the vertical wind shear shown by ECMWF analyses, the lower one might be the result of evaporating precipitation out of the upper layer. The existence of ice precipitation between the two layers supports the hypothesis that humidity released from evaporating precipitation was cooled and consequently condensed as it experienced the radiative cooling from the upper layer. In summary, a unique data set characterizing tropospheric Arctic clouds was collected with lidar, in situ and radiation instruments. The joint evaluation with meteorological analyses allowed a detailed insight in cloud properties, cloud evolution processes and radiative effects.
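The net surface cloud radiative forcing discussed above is simply the sum of a shortwave (cooling, negative) and a longwave (warming, positive) term. A minimal sketch; the component split below is hypothetical and chosen only to illustrate how a small net value like -0.4 W m⁻² can arise from two nearly compensating terms.

```python
def net_cloud_forcing(shortwave, longwave):
    """Net surface cloud radiative forcing in W/m^2:
    shortwave term (negative, cooling) + longwave term (positive, warming)."""
    return shortwave + longwave

# Hypothetical component values yielding a small net cooling
print(net_cloud_forcing(-3.2, 2.8))  # approximately -0.4
```

In winter, when the shortwave term vanishes (polar night), the longwave warming dominates, which is why thin clouds then warm the surface.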
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the good condition of the satellite so far, the planned 5-year mission has been extended to 2009. The satellite continuously provides a large quantity of measurement data for the study of the Earth. The measurements of the magnetic field are undertaken by two Fluxgate Magnetometers (vector magnetometers, FGM) and one Overhauser Magnetometer (scalar magnetometer, OVM) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference, and its readings are compared with the readings of the vector magnetometer. The readings of the vector magnetometer are corrected by the parameters derived from this comparison, which is called scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic disturbances originating from the satellite itself. Most of them were characterized during tests before launch, among them the remanent magnetization of the spacecraft and fields generated by currents; they are all considered to be constant over the mission life. The 8 years of operational experience allow us to investigate the long-term behaviour of the magnetometers and the satellite systems. This investigation found, for example, that the scale factors of the FGM show obvious long-term changes which can be described by logarithmic functions, while the other parameters (offsets and angles between the three components) can be considered constant. If these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and the FGM readings stays within ±1 nT over the whole mission.
This demonstrates that the magnetometers on CHAMP exhibit very good stability. However, a daily correction of the Z-component offset of the FGM improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality: it exhibits a linear relationship with the standard deviation of the disagreement between the OVM and the FGM readings. After the Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections for the spacecraft field, which are not taken into account in the routine processing. Such disturbance fields, e.g. from the power-supply system of the satellite, introduce systematic errors into the FGM data and are misinterpreted in the 9-parameter calibration, producing spurious local-time-dependent variations of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents; this non-linear model is derived by an inversion technique. If the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains about 0.1 nT. Additionally, in order to keep the OVM readings as a reliable standard, the imperfect coefficients of the torquer-current correction for the OVM were redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field was also investigated; it was found that the average magnetic moment of the magneto-torquers reflects the moment of the satellite well, which allows for a continuous correction of the spacecraft field. The reasons for possible remaining unknown systematic errors are discussed in this thesis; in particular, both temperature uncertainties and timing errors influence the FGM data. Based on the results of this thesis, the data processing of future magnetic-field missions can be designed in an improved way.
In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
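The scalar calibration described above, adjusting the vector-magnetometer parameters so that the modulus of the corrected vector readings matches the scalar reference, can be linearized and solved by least squares. The following is a minimal, noise-free sketch with hypothetical scale factors and offsets; it covers only the 6-parameter core (scales and offsets), not the full 9-parameter CHAMP processing, which additionally estimates the angles between the sensor axes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ambient field vectors (nT), roughly Earth-field magnitude.
B_true = rng.normal(0.0, 15000.0, size=(500, 3)) + np.array([20000.0, 0.0, 40000.0])

# Hypothetical instrument parameters: per-axis scale factors and offsets.
s_true = np.array([1.001, 0.999, 1.002])
o_true = np.array([3.0, -5.0, 10.0])

# Vector magnetometer (FGM) raw readings, defined so that B = s * (raw - o);
# the scalar magnetometer (OVM) delivers the field modulus as the reference.
raw = B_true / s_true + o_true
scalar = np.linalg.norm(B_true, axis=1)

# Linearization: |B|^2 = sum_k a_k raw_k^2 - 2 b_k raw_k + c
# with a_k = s_k^2, b_k = s_k^2 o_k, c = sum_k s_k^2 o_k^2.
A = np.hstack([raw**2, -2.0 * raw, np.ones((raw.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, scalar**2, rcond=None)
a, b = coef[:3], coef[3:6]
s_est, o_est = np.sqrt(a), b / a
```

In this idealized setting the recovered `s_est` and `o_est` reproduce the assumed instrument parameters; in practice the fit is repeated over 15-day batches of orbit data, which is how the logarithmic drift of the scale factors becomes visible.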
The present thesis introduces a process-based model for species range dynamics that can be fitted to abundance data. For this purpose, the well-studied Proteaceae species of the South African Cape Floristic Region (CFR) provide an excellent data set. These species are subject to wildflower harvesting and to environmental threats such as habitat loss and climate change. The general introduction of this thesis briefly presents the available models for species distribution modelling, then discusses the feasibility of process-based modelling, and finally introduces the study system as well as the objectives and layout. In Chapter 1, I present the process-based model for range dynamics and a statistical framework to fit it to abundance distribution data. The model has a spatially explicit demographic submodel (describing dispersal, reproduction, mortality and local extinction) and an observation submodel (describing imperfect detection of individuals). The demographic submodel links species-specific habitat models describing the suitable habitat with process-based demographic models that consider local dynamics and anemochoric (wind-driven) seed dispersal between populations. After testing the fitting framework with simulated data, I applied it to eight Proteaceae species with different demographic properties. Moreover, I assessed the role of two further demographic mechanisms: positive (Allee effects) and negative density dependence. The results indicate that Allee effects and overcompensatory local dynamics (including chaotic behaviour) seem to be important for several species. Most parameter estimates agreed quantitatively with independent data. Hence, the presented approach proved suitable for investigating non-equilibrium scenarios involving wildflower harvesting (Chapter 2) and environmental change (Chapter 3). Chapter 2 addresses the impacts of wildflower harvesting.
The chapter includes a sensitivity analysis over multiple spatial scales and demographic properties (dispersal ability, strength of Allee effects, maximum reproductive rate, adult mortality, local extinction probability and carrying capacity). Subsequently, harvesting effects are investigated for the real case-study species. The plant response to harvesting showed abrupt threshold behaviour: species with short-distance seed dispersal, strong Allee effects, low maximum reproductive rate, high mortality and high local extinction probability are most affected by harvesting. Larger spatial scales improve the species' response, but the thresholds become sharper. The three case-study species supported very low to moderate harvesting rates. In summary, demographic knowledge about the study system and careful identification of the spatial scale of interest should guide harvesting assessments and the conservation of exploited species. The results of the sensitivity analysis can be used to qualitatively assess harvesting impacts for poorly studied species. In Chapter 3, I investigated the consequences of past habitat loss, future climate change and their interaction on the plant response, using the species-specific estimates of the best model describing local dynamics obtained in Chapter 1. Both habitat loss and climate change had strong negative impacts on species dynamics. Climate change affected mainly range size and range filling, due to habitat reductions and shifts combined with low colonization; habitat loss affected mostly local abundances. The scenario with both habitat loss and climate change was the worst for most species; however, its impact was smaller than expected from simply summing the separate effects of habitat loss and climate change, which is explained by ranges shifting to areas less affected by humans. The range-size response was well predicted by the strength of environmental change, whereas range filling and local abundance responses were better explained by demographic properties.
Hence, risk assessments under global change should consider demographic properties. Most surviving populations were restricted to refugia, which should serve as a key conservation focus. The findings obtained for the study system, as well as the advantages, limitations and potentials of the model presented here, are further discussed in the General Discussion. In summary, the results indicate that 1) process-based demographic models for range dynamics can be fitted to data; 2) demographic processes improve species distribution models; 3) different species are subject to different processes and respond differently to environmental change and exploitation; 4) the type of density regulation and Allee effects should be considered when investigating species range dynamics; 5) the consequences of wildflower harvesting, habitat loss and climate change could be disastrous for some species, but impacts vary depending on demographic properties; 6) wildflower harvesting impacts vary over spatial scales; 7) the effects of habitat loss and climate change are not always additive.
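The kind of overcompensatory (Ricker-type) local dynamics with a demographic Allee effect that this abstract highlights can be sketched as a one-step map. The parameter values below are arbitrary illustrations, not the fitted Proteaceae estimates:

```python
import numpy as np

def local_dynamics(n, r=0.5, K=100.0, A=10.0):
    """One time step of Ricker-type (overcompensatory) growth with a
    demographic Allee effect; r, K, A are illustrative values only."""
    density_regulation = np.exp(r * (1.0 - n / K))  # overcompensatory for large r
    allee = n / (n + A)                             # depressed success at low density
    return n * density_regulation * allee

# Below the Allee threshold the population collapses ...
n = 5.0
for _ in range(50):
    n = local_dynamics(n)

# ... while a sufficiently large population settles near an equilibrium below K.
m = 50.0
for _ in range(100):
    m = local_dynamics(m)
```

With larger `r` the same map produces the oscillatory and chaotic behaviour mentioned above, which is why the type of density regulation matters for range-dynamic predictions.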
Central stars of planetary nebulae are low-mass stars on the brink of their final evolution towards white dwarfs. Because of their surface temperatures above 25,000 K, their UV radiation ionizes the surrounding material, which was ejected in an earlier phase of their evolution. Such fluorescent circumstellar gas is called a "Planetary Nebula". About one tenth of the Galactic central stars are hydrogen-deficient. Generally, the surface of these central stars is a mixture of helium, carbon, and oxygen resulting from partial helium burning. Moreover, most of them have a strong stellar wind, similar to massive Pop-I Wolf-Rayet stars, and are, in analogy, classified as [WC]; the brackets distinguish this special type from the massive WC stars. Qualitative spectral analyses of [WC] stars led to the assumption of an evolutionary sequence from the cooler, so-called late-type [WCL] stars to the very hot, early-type [WCE] stars. Quantitative analyses of the winds of [WC] stars became possible by means of computer programs that solve the radiative transfer in the co-moving frame, together with the statistical equilibrium equations for the population numbers. First analyses employing models without iron-line blanketing resulted in systematically different abundances for [WCL] and [WCE] stars: while the mass ratio He:C is roughly 40:50 for [WCL] stars, it is on average 60:30 for [WCE] stars. The postulated evolution from [WCL] to [WCE], however, could only lead to an increase of carbon, since heavier elements are built up by nuclear fusion. In the present work, improved models are used to re-analyze the [WCE] stars and to confirm their He:C abundance ratio. The refined models, calculated with the Potsdam WR model atmosphere code (PoWR), now account for line blanketing by iron-group elements, small-scale wind inhomogeneities, and complex model atoms for He, C, O, H, P, N, and Ne.
Referring to stellar evolutionary models for the hydrogen-deficient [WC] stars, the Ne and N abundances are of particular interest. Only one of three different evolutionary channels, the VLTP scenario, leads to a Ne and N overabundance of a few percent by mass. A VLTP, a very late thermal pulse, is a rapid increase of the energy production of the helium-burning shell while hydrogen burning has already ceased. Subsequently, the hydrogen envelope is mixed with deeper layers and completely burnt in the presence of C, He, and O, which results in the formation of N and Ne. A sample of eleven [WCE] stars has been analyzed. For three of them, PB 6, NGC 5189, and [S71d]3, a N overabundance of 1.5% has been found, while for three other [WCE] stars such high abundances of N can be excluded. In the case of NGC 5189, strong spectral lines of Ne can be reproduced qualitatively by our models. At present, the Ne mass fraction can only be roughly estimated from the Ne emission lines and seems to be on the order of a few percent by mass. Furthermore, using a diagnostic He-C line pair, the He:C abundance ratio of 60:30 for [WCE] stars is confirmed. Within the framework of the analysis, a new class of hydrogen-deficient central stars has been discovered, with PB 8 as its first member. Its atmospheric mixture resembles that of the massive WNL stars rather than that of the [WC] stars. The determined mass fractions H:He:C:N:O are 40:55:1.3:2:1.3. As the wind of PB 8 contains significant amounts of O and C, in contrast to WN stars, a classification as [WN/WC] is suggested.
Foreland-basin systems are excellent archives to decipher the feedbacks between surface and tectonic processes in orogens. The sedimentary architecture of a foreland-basin system reflects the balance between tectonic subsidence, which creates long-term accommodation space, and sediment influx, which corresponds to the efficiency of erosion and mass-redistribution processes. In order to explore the effects of climatic and tectonic forcing in such a system, I investigated the Oligo-Miocene foreland-basin sediments of the southern Alborz mountains, an intracontinental orogen in northern Iran related to the Arabia-Eurasia continental collision. This work includes absolute dating methods such as 40Ar/39Ar and zircon (U-Th)/He thermochronology, magnetostratigraphy, sedimentological analysis, sandstone and conglomerate provenance study, carbon and oxygen isotope analysis, and clay mineralogy study. Results show a systematic correlation between coarsening-upward cycles and sediment accumulation rates in the basin on 10^5 to 10^6 yr time scales. During thrust-loading phases, the coarse-grained fraction supplied by the uplifting range is stored in the proximal part of the basin (sedimentary facies retrogradation), while fine-grained sediments are deposited in distal sectors. Variations in sediment provenance during these phases of enhanced tectonic activity give evidence for erosional unroofing phases and/or drainage-reorganization events. In addition, enhanced tectonic activity promoted the growth of topography and associated orographic barrier effects, as demonstrated by sedimentologic indicators and the analysis of stable C and O isotopes from calcareous paleosols and lacustrine/palustrine samples. Extensive progradation of coarse-grained deposits occurs during phases of decreased subsidence, when the coarse-grained fraction supplied by the uplifting range cannot be completely stored in the proximal part of the basin.
In this environment, a reduction in basin subsidence is associated with laterally stacked fluvial channel deposits, and is related to intra-foreland uplift, as documented by growth strata, tectonic tilting, and sediment reworking. An increase in sediment accumulation rate associated with progradation of vertically stacked coarse-grained fluvial channels also occurs. Paleosol O-isotope data show that this increase is related to wetter climatic phases, suggesting that surface processes are more efficient and exhumation rates increase, giving rise to a positive feedback. Furthermore, isotopic and sedimentologic data show that starting from 10-9 Ma, the climate became less arid, with an increase in seasonality of precipitation. Because important changes were also recorded in the Mediterranean Sea and Asia at that time, the evidence for climatic variability observed in the Alborz mountains most likely reflects changes in Northern Hemisphere atmospheric circulation patterns. This study has additional implications for the evolution of the Alborz mountains and the Arabia-Eurasia continental collision zone. At the orogenic scale, the locus of deformation did not move steadily southward, but stepped forward and backward since Oligocene time. In particular, from ~ 17.5 to 6.2 Ma the orogen grew by a combination of frontal accretion and wedge-internal deformation on time scales of ca. 0.7 to 2 m.y. Moreover, the provenance data suggest that prior to 10-9 Ma the shortening direction changed from NW-SE to NNE-SSW, in agreement with structural data. On the scale of the entire collision zone, the evolution of the studied basins and adjacent mountain ranges suggests a new geodynamic model for the evolution of the Arabia-Eurasia continental collision zone. Numerous sedimentary basins in the Alborz mountains and in other locations of the Arabia-Eurasia collision zone record a change from a tensional (transtensional) to a compressional (transpressional) tectonic setting by ~ 36 Ma.
I interpret this to reflect the onset of subduction of the stretched Arabian continental lithosphere beneath central Iran, leading to moderate plate coupling and lower- and upper-plate deformation (soft continental collision). The increase in deformation rates in the southern Alborz mountains from ~ 17.5 Ma suggests that significant upper-plate deformation must have started by the early Miocene, most likely in response to an increase in the degree of plate coupling. I suggest that this was related to the subduction of thicker Arabian continental lithosphere and the consequent onset of hard continental collision. This model reconciles the apparent lag time of 15-20 m.y. between the late Eocene to early Oligocene age for the initial Arabia-Eurasia continental collision and the onset of widespread deformation across the collision zone to the north in early to late Miocene time.
The serotonergic system is of great importance for the control and modulation of many physiological processes and behaviors in both invertebrates and vertebrates. In the honeybee Apis mellifera, serotonin (5-hydroxytryptamine, 5-HT) plays an important role in division of labor and in learning. The 5-HT receptors, most of which belong to the family of G-protein-coupled receptors (GPCRs), hold a key position for understanding the molecular mechanisms of serotonergic signal transduction. The aim of this work was to characterize 5-HT receptors of the honeybee: identifying their molecular structure, determining their intracellular signaling pathways, establishing pharmacological profiles, and determining the expression patterns and physiological functions of the receptors. With the help of information from the Honey Bee Genome Project, three receptor cDNAs were cloned. Comparisons of the deduced amino acid sequences with those of previously characterized receptors suggested that they encode one 5-HT1 receptor (Am5-HT1) and two 5-HT2 receptors (Am5-HT2α and Am5-HT2β). Structural analysis of the deduced amino acid sequences of these receptors predicts the characteristic heptahelical architecture of GPCRs and reveals strongly conserved motifs that are important for ligand binding, receptor activation and coupling to G proteins. For the two 5-HT2 receptors, alternative splicing was also demonstrated. HEK293 cells were stably transfected with the cDNAs of the Am5-HT1 and Am5-HT2α receptors, and the receptors were then analyzed functionally and pharmacologically. Upon activation, Am5-HT1 inhibits cAMP production in a 5-HT concentration-dependent manner. The substances 5-methoxytryptamine (5-MT) and 5-carboxamidotryptamine were identified as agonists.
Methiothepin, in contrast, completely blocks the effect of 5-HT. Prazosin and WAY100635 are partial antagonists of the Am5-HT1 receptor. Upon activation, the Am5-HT2α receptor stimulates the synthesis of the second messenger inositol trisphosphate, which in turn leads to a measurable increase in the intracellular Ca2+ concentration. 5-MT and 8-OH-DPAT show a clear agonistic effect on Am5-HT2α, whereas clozapine, methiothepin, mianserin and cyproheptadine are able to reduce the effect of 5-HT by 51-64%. The aforementioned alternative splice variant of Am5-HT2α was also expressed and analyzed in HEK293 cells, but does not appear to be functional on its own. A polyclonal antiserum was raised against the third cytoplasmic loop (CPL3). In Western blot analyses it recognizes a protein with a mass of about 50 kDa. The distribution of the receptor in the bee brain was examined in more detail by immunohistochemistry. The optical neuropils, especially the lamina and the ocellar nerves, consistently showed strong labeling. In addition, the receptor is expressed in the α- and β-lobes as well as in the lip, basal ring and peduncle of the mushroom bodies. Double labeling consistently shows close proximity between serotonergic fibers and the Am5-HT1 receptor. Furthermore, it could be shown that the Am5-HT1 receptor is very likely involved in the regulation of the phototactic behavior of the honeybee. Feeding 5-HT has a clearly negative effect on phototactic behavior, and this effect can be mimicked by the Am5-HT1 receptor agonist 5-CT. Finally, it was shown that the Am5-HT1 antagonist prazosin can markedly reduce the effect of 5-HT.
With the rise of electronic integration between organizations, the need for a precise specification of interaction behavior increases. Information systems, replacing interaction previously carried out by humans via phone, faxes and emails, require a precise specification for handling all possible situations. Such interaction behavior is described in process choreographies. Choreographies enumerate the roles involved, the allowed interactions, the message contents and the behavioral dependencies between interactions. Choreographies serve as interaction contract and are the starting point for adapting existing business processes and systems or for implementing new software components. As a thorough analysis and comparison of choreography modeling languages is missing in the literature, this thesis introduces a requirements framework for choreography languages and uses it for comparing current choreography languages. Language proposals for overcoming the limitations are given for choreography modeling on the conceptual and on the technical level. Using an interconnection modeling style, behavioral dependencies are defined on a per-role basis and different roles are interconnected using message flow. This thesis reveals a number of modeling "anti-patterns" for interconnection modeling, motivating further investigations on choreography languages following the interaction modeling style. Here, interactions are seen as atomic building blocks and the behavioral dependencies between them are defined globally. Two novel language proposals are put forward for this modeling style which have already influenced industrial standardization initiatives. While avoiding many of the pitfalls of interconnection modeling, new anomalies can arise in interaction models. A choreography might not be realizable, i.e. there does not exist a set of interacting roles that collectively realize the specified behavior. This thesis investigates different dimensions of realizability.
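A flavor of the realizability problem mentioned at the end: in an interaction model, a global ordering constraint between two interactions that share no role cannot be enforced by the local roles. The following is a naive trace-based sketch of this one sequencing aspect of realizability (names and representation are hypothetical, not the thesis's formalism):

```python
from itertools import permutations

def realizable(spec):
    """Check one global trace of interactions (sender, receiver, message):
    the composed roles admit every reordering of the trace that preserves
    each role's local view, so the choreography is enforceable (for this
    trace) only if no other ordering is admitted."""
    def local(trace, role):
        # A role observes exactly the interactions it sends or receives.
        return [e for e in trace if role in (e[0], e[1])]

    roles = {r for snd, rcv, _ in spec for r in (snd, rcv)}
    admitted = {p for p in permutations(spec)
                if all(local(p, r) == local(spec, r) for r in roles)}
    return admitted == {tuple(spec)}
```

For example, `[("A", "B", "x"), ("B", "C", "y")]` is realizable because B participates in both interactions and enforces their order, while `[("A", "B", "x"), ("C", "D", "y")]` is not: no role can prevent the second interaction from happening first.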
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there is great interest in covering these applications with diode laser emission. Despite modern band-gap engineering, not all wavelengths can be accessed with diode laser radiation; in particular, no diode laser emission is yet available in the visible spectral range between 480 nm and 630 nm. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation with extraordinary spatial, temporal and spectral quality is required to pump frequency conversion. Broad-area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They belong to the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized. However, these new diode laser structures demand additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to that of a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow stripe emitters placed in close proximity. Due to the overlap of the fields of neighboring emitters, or the presence of leaky waves, strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, in the free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization.
This leads to power fluctuations, spectral instabilities and poor beam quality. Thus, it was necessary to study the emission properties of these stripe-arrays in order to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation at high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose, a comprehensive analysis of the stripe-array devices was carried out. The physical effects that give rise to the emission characteristics were investigated theoretically and experimentally; in this context, numerical models could be verified and extended, and good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on mathematical simulations and experimental work, it was possible to design novel external cavities that select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz and a very good beam quality with M² < 1.5, which represents a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external-cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion and constituted a substantial improvement over existing concepts. In the next step, newly available periodically poled crystals were used for second harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion.
This was feasible because of the improvement made with the infrared stripe-array diode lasers due to the development of new physical concepts.
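As background to the quoted conversion result: for single-pass SHG in the undepleted-pump regime, the generated second-harmonic power grows quadratically with the fundamental power, which is why the brightness and spectral purity of the pump are decisive. In standard notation (a textbook relation, not a formula taken from this thesis):

```latex
P_{2\omega} \;=\; \eta_{\mathrm{nor}}\, P_{\omega}^{2}
```

Here \eta_{\mathrm{nor}} is the normalized conversion efficiency of the periodically poled crystal, set by its effective nonlinear coefficient, its length and the focusing conditions. The quadratic dependence means that doubling the usable single-mode pump power quadruples the blue output, provided the pump stays within the acceptance bandwidth of the quasi-phase-matching grating.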
The present study shows the steady growth in the scope and importance of state duties of protection as an independent function of fundamental rights. With every advance and development in the modern world, new areas constantly arise in society that require statutory regulation. The task of the state is therefore clear: it must implement the principles laid down in the constitution through legislation and continually improve that legislation. The state is thus called upon to protect individuals both repressively and preventively. The dissertation examines the problem of state duties of protection within the framework of the fundamental rights of the Georgian constitution of 24 August 1995, in comparison with the human rights and fundamental freedoms of the European Convention on Human Rights. The work takes up a fundamental-rights problem that proves particularly important in situations of legal and political upheaval such as the one Georgia, as a successor state of the collapsed Soviet Union, is living through. On the way to the dogmatic development of a fundamental-rights duty of protection, the European Convention on Human Rights (ECHR) is used as a kind of guiding model. This choice is explained by the nature of the ECHR, which presents itself as a kind of constitution for Europe and has been in force in Georgia since 1999. The work also refers to the German doctrine of duties of protection, drawing on the debate that has been conducted in Germany for some 30 years and is still not concluded, but from which remarkable and controversial results can be derived.
The work shows that the Georgian constitution provides numerous starting points for state duties of protection, of both a general and a specific nature, which have already been taken up on various occasions, above all in the case law of the Georgian Constitutional Court, in part with recourse to statements of the European Convention on Human Rights (ECHR) and of the European Court of Human Rights (ECtHR). Illuminating the area of fundamental-rights duties of protection under the Georgian constitution is important for the relatively young rule of law of a post-Soviet state, in order to initiate an urgently needed debate.
High performance demands in competitive sport require a high physical load capacity from athletes. The possibilities for increasing training volumes and intensities have in part been exhausted, so efforts continue to find new ways of tapping potential performance reserves. Electrotherapy methods have proven themselves in everyday clinical practice, for example in the treatment of trauma, and are frequently used for analgesia, improvement of tissue perfusion and muscle stimulation. Their use as an adjuvant to training has so far been described only sporadically. The present study investigated the effects of an electromagnetic application on selected psycho-physical parameters (control-group comparison with a placebo-controlled design) in order to derive statements on practically relevant approaches to training support. The question was whether an intervention (15 sessions over 4 weeks) with frequency-modulated alternating currents in a predominantly low-frequency spectrum (0-10000 Hz, 5 μA/cm², CellVAS®) would influence the parameters studied and thereby achieve lasting performance-enhancing or performance-reducing effects. Furthermore, it was to be examined to what extent the parameters assessed (PWC170, squat jump, lateral flexion of the spine and SF36®) are sufficiently meaningful. The efficacy of the application was analyzed in a pre-post comparison before (T1), after (T2) and 4 weeks after completion of (T3, sustainability) the intervention. Participants in the control group received comparable applications in placebo mode. The sample consisted of healthy competitive athletes whose sports involved a high strength-endurance component (n=127). Group allocation was partially randomized into a main group (HG) and a control group (KG); in addition, the groups were separated by sex.
Over the course of the study, changes in the performance parameters PWC170 and squat jump were observed. To what extent these deviations can be attributed to the influence of the intervention with frequency-modulated alternating currents in the low-frequency spectrum could not be clarified unambiguously in this study. The observed effects could not be statistically validated according to the underlying scientific standards, and scientific proof of a possible performance change could not be conclusively provided. In the therapeutic field, the application studied has found its use on the basis of the existing body of studies and can be used without concern. For its use as a supportive method in sporting practice, however, there remains a need for valid, randomized studies that durably demonstrate the efficacy of the application on psycho-physical parameters of athletes before it should be applied in sporting practice.
Leaves are the main photosynthetic organs of vascular plants, and leaf development depends on a proper control of gene expression. Transcription factors (TFs) are global regulators of gene expression that play essential roles in almost all biological processes among eukaryotes. This PhD project focused on the characterization of the sink-to-source transition of Arabidopsis leaves and on the analysis of TFs that play a role in early leaf development. The sink-to-source transition occurs when the young emerging leaves (net carbon importers) acquire a positive photosynthetic balance and start exporting photoassimilates. We established molecular and physiological markers (i.e., CAB1 and CAB2 expression levels, AtSUC2 and AtCHoR expression patterns, chlorophyll and starch levels, and photosynthetic electron transport rates) to identify the starting point of the transition, especially because the sink-to-source transition is not accompanied by a visual phenotype, in contrast to other developmental transitions such as the mature-to-senescent transition of leaves. The sink-to-source transition can be divided into two different processes: one light-dependent, related to photosynthesis and light responses; and one light-independent or light-impaired, related to the changes in the vascular tissue that occur when leaves change from an import to an export mode. Furthermore, starch, but not sucrose, was identified as one of the potential signalling molecules for this transition. The expression level of 1880 TFs during early leaf development was assessed by qRT-PCR, and 153 TFs were found to exhibit differential expression levels of at least 5-fold. GRF, MYB and SRS are TF families that are overrepresented among the differentially expressed TFs. Additionally, processes like cell-identity acquisition, formation of the epidermis and leaf development are overrepresented among the differentially expressed TFs, which helps to validate the results obtained.
Two of these TFs were further characterized. bZIP21 is a gene up-regulated during the sink-to-source and mature-to-senescent transitions. Its expression pattern in leaves overlaps with the one observed for AtCHoR, and it therefore constitutes a good marker for the sink-to-source transition. Homozygous null mutants of bZIP21 could not be obtained, indicating that the total absence of bZIP21 function may be lethal to the plant. Phylogenetic analyses indicate that bZIP21 is an orthologue of Liguleless2 from maize. In these analyses, we found that the whole set of bZIPs in plants originated from four founder genes, and that all bZIPs from angiosperms can be classified into 13 groups of homologues and 34 Possible Groups of Orthologues (PoGOs). bHLH64 is a gene highly expressed in early sink leaves; its expression is down-regulated during the mature-to-senescent transition. Null mutants of bHLH64 are characterized by delayed bolting compared to the wild-type, indicating a possible delay in the sink-to-source transition or the retention of a juvenile identity. A third TF, Dof4, was also characterized. Dof4 is differentially expressed neither during the sink-to-source nor during the mature-to-senescent transition, but a null mutant of Dof4 develops bigger leaves than the wild-type and forms a greater number of siliques. The Dof4 null mutant has proven to be a good background for biomass accumulation analysis. Though not overrepresented during the sink-to-source transition, NAC transcription factors seem to contribute significantly to the mature-to-senescent transition. Twenty-two NACs from Arabidopsis and 44 from rice are differentially expressed during late stages of leaf development. Phylogenetic analyses revealed that most of these NACs cluster into three big groups of homologues, indicating functional conservation between eudicots and monocots. To test the functional conservation of orthologues, the expression of ten NAC genes of barley was analysed. 
Eight of the ten NAC genes were found to be differentially expressed during senescence. The use of evolutionary approaches combined with functional studies is thus expected to support the transfer of current knowledge of gene control gained in model species to crops.
The acinar salivary glands of the cockroach Periplaneta americana are richly innervated by serotonergic, dopaminergic and GABAergic fibres. The biogenic amines serotonin (5-HT) and dopamine (DA) induce the secretion of a NaCl-rich primary saliva. The physiological role of the GABAergic innervation of the gland complex was previously unknown. It had also been suspected that tyramine (TA) and octopamine (OA) are involved in saliva formation. The effects of GABA, TA and OA in the salivary gland complex were therefore examined by intracellular recordings from secretory acinar cells with and without stimulation of the salivary gland nerve (SDN). Intracellular recordings from acinar cells showed that both DA and 5-HT induce biphasic changes in membrane potential, consisting of an initial hyperpolarisation followed by a transient depolarisation. Stimulation of the SDN with a suction electrode likewise caused biphasic changes in the membrane potential of the acinar cells that were kinetically identical to the DA- and 5-HT-induced changes. This result showed that electrical stimulation of the SDN in the nerve/salivary gland preparation is a reliable method for studying the effects of neuromodulators on dopaminergic and/or serotonergic neurotransmission. The hyperpolarising phase of the DA-induced potential changes was caused by intracellular Ca2+ release and the opening of basolaterally located Ca2+-gated K+ channels. The DA- and 5-HT-induced depolarisation depended critically on the activity of a basolaterally located Na+-K+-2Cl- cotransporter. GABA, TA and OA potentiated the electrical responses of the acinar cells when these were evoked by SDN stimulation, with OA being more potent than TA. This result showed that these substances act presynaptically in the gland complex as excitatory neuromodulators. 
Pharmacological experiments revealed that the excitatory effect of GABA is mediated by a G protein-coupled GABAB receptor. Measurements of the fluid and protein secretion rates induced by SDN stimulation showed that both parameters were enhanced in the presence of GABA. This pointed to an enhanced serotonergic neurotransmission, since only 5-HT causes the formation of a protein-rich saliva. Immunocytochemical studies showed that the glands receive tyraminergic and octopaminergic innervation. Furthermore, the first characterised TA receptor of the cockroach (PeaTYR1) was labelled on a paired nerve, running laterally to the gland, that also contains tyraminergic fibres. The present work contributes to the understanding of the complex functioning of the cockroach salivary gland and extends the fragmentary knowledge of the neuronal control of exocrine glands in insects.
This thesis considers, on the one hand, the construction of point processes via conditional intensities, motivated by a partial integration of the Campbell measure of a point process. Under certain assumptions on the intensity, the existence of such a point process is shown. A fundamental example turns out to be the Pólya sum process, whose conditional intensity generalises the Pólya urn dynamics; a Cox process representation for that point process is derived. On the other hand, a Poisson process of Gaussian loops is considered, which represents a non-interacting particle system derived from the discussion of indistinguishable particles. Both processes are used to define particle systems locally, for which thermodynamic limits are determined.
This study presents noble gas compositions (He, Ne, Ar, Kr, and Xe) of lavas from several Hawaiian volcanoes. Lavas from the Hawaii Scientific Drilling Project (HSDP) core, surface samples from Mauna Kea, Mauna Loa, Kilauea, Hualalai, Kohala and Haleakala, as well as lavas from a deep well on the summit of Kilauea were investigated. Noble gases, especially helium, are used as tracers for mantle reservoirs, based on the assumption that high 3He/4He ratios (>8 RA) represent material from the deep and supposedly less degassed mantle, whereas lower ratios (~8 RA) are thought to represent the upper mantle. Shield-stage Mauna Kea, Kohala and Kilauea lavas yielded MORB-like to moderately high 3He/4He ratios, while 3He/4He ratios in post-shield-stage Haleakala lavas are MORB-like. Only a few samples show 20Ne/22Ne and 21Ne/22Ne ratios different from the atmospheric values; however, Mauna Kea and Kilauea lavas with excess mantle Ne agree well with the Loihi-Kilauea line in a neon three-isotope plot, whereas one Kohala sample plots on the MORB correlation line. The values in the 4He/40Ar* (40Ar* denotes radiogenic Ar) versus 4He diagram imply open-system fractionation of He from Ar, with a deficiency in 4He. Calculated 4He/40Ar*, 3He/22NeS (22NeS denotes solar Ne) and 4He/21Ne ratios for the sample suite are lower than the respective production and primordial ratios, supporting the observation of a fractionation of He from the heavier noble gases, with a depletion of He with respect to Ne and Ar. The depletion of He is interpreted to be partly due to solubility-controlled gas loss during magma ascent. However, the preferential He loss suggests that He is more incompatible than Ne and Ar during magmatic processes. 
In a binary mixing model, the isotopic He and Ne patterns are best explained by a mixture of a MORB-like end-member with a plume-like or primordial end-member, with a fractionation in 3He/22Ne represented by a curvature parameter r of 15 (r=(³He/²²Ne)MORB/(³He/²²Ne)PLUME or PRIMORDIAL). Whether the high 3He/4He ratios in Hawaiian lavas are indicative of a primitive component within the Hawaiian plume or are rather a product of crystal-melt partitioning behavior during partial melting remains to be resolved.
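The curvature parameter r controls how strongly the mixing curve between the two end-members bends. A toy sketch of such a two-end-member mixture; the absolute He and Ne abundances below are invented for illustration (chosen only so that r evaluates to the quoted value of 15) and are not the thesis data:

```python
# Toy two-end-member mixing sketch; all abundances are illustrative
# (arbitrary units), not measured values from this work.
def mix(f, morb, plume):
    """Linear mixing of absolute abundances; f = fraction of MORB."""
    return {k: f * morb[k] + (1 - f) * plume[k] for k in morb}

# illustrative end-members: a ~8 RA-like MORB and a high-3He/4He plume
morb  = {"he3": 15.0, "he4": 15.0 / 8.0,  "ne22": 1.0}
plume = {"he3": 1.0,  "he4": 1.0 / 30.0,  "ne22": 1.0}

# curvature parameter: ratio of the end-member 3He/22Ne values
r = (morb["he3"] / morb["ne22"]) / (plume["he3"] / plume["ne22"])
print(f"curvature parameter r = {r:.0f}")  # -> 15 for these numbers

# a 50/50 mixture of the two end-members
m = mix(0.5, morb, plume)
print(f"mixed 3He/4He (relative units): {m['he3'] / m['he4']:.2f}")
```

Because mixing is linear in abundances but the plotted quantities are ratios, any r different from 1 produces the hyperbolic mixing curves used to interpret the He-Ne isotope data.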
Analytical ultracentrifugation (AUC) has made an important contribution to polymer and particle characterization since its invention by Svedberg in 1923 (Svedberg and Nichols 1923; Svedberg and Pedersen 1940). In 1926, Svedberg won the Nobel Prize for his scientific work on disperse systems, including work with AUC. The first important discovery made with AUC was the demonstration of the existence of macromolecules. Since that time, AUC has become an important tool for studying polymers in biophysics and biochemistry. AUC is an absolute technique that does not need any standard. Molar masses between 200 and 10^14 g/mol and particle sizes between 1 and 5000 nm can be detected by AUC. A sample can be fractionated into its components according to molar mass, particle size, structure or density, without any stationary phase as required in chromatographic techniques. This very property earns AUC an important status in the analysis of polymers and particles: distributions of molar mass, particle size and density can be measured via the fractionation. Different types of experiments give complementary physicochemical parameters; for example, sedimentation equilibrium experiments give access to pure thermodynamics. For complex mixtures, AUC is the main method capable of analyzing the system. Interactions between molecules can be studied at different concentrations without destroying the chemical equilibrium (Kim et al. 1977), and biologically relevant weak interactions can also be monitored (K ≈ 10-100 M^-1). 
An analytical ultracentrifuge experiment can yield the following information:
• molecular weight of the sample
• number of components, if the sample is not a single component
• homogeneity of the sample
• molecular weight distribution, if the sample is not a single component
• size and shape of macromolecules and particles
• aggregation and interaction of macromolecules
• conformational changes of macromolecules
• sedimentation coefficient and density distributions
This extremely wide application range allows the investigation of any sample consisting of a solvent and a dispersed or dissolved substance, including gels, microgels, dispersions, emulsions and solutions. Moreover, the method imposes no solvent or pH limitations. Although the technique is 80 years old, new application areas are still flourishing. In the 1970s, about 1500 analytical ultracentrifuges were operational throughout the world. At that time, due to limitations in detection technology, experimental results were recorded photographically. As time passed, faster techniques such as size exclusion chromatography (SEC), light scattering (LS) and SDS gel electrophoresis entered the same research fields, and AUC began to lose its importance; in the 1980s, only a few instruments remained in use worldwide. In the early 1990s a modern AUC, the Optima XL-A, was released by Beckman Instruments (Giebeler 1992), equipped with a modern computerized scanning absorption detector. Rayleigh interference optics were later added, yielding the XL-I AUC. Furthermore, major developments in computing made data analysis easier with the help of new analysis software. Today, about 400 XL-I instruments exist worldwide. AUC is used in the pharmaceutical, biopharmaceutical and polymer industries as well as in academic fields such as biochemistry, biophysics, molecular biology and materials science. 
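The absolute molar mass determination mentioned above rests on the Svedberg equation, M = sRT / (D(1 - v̄ρ)), which combines the sedimentation coefficient s and the diffusion coefficient D without any calibration standard. A minimal sketch with textbook-style input values for a BSA-like protein (the numbers are illustrative, not measurements from this work):

```python
# Svedberg equation sketch; input values are textbook-style numbers
# for a BSA-like protein in water, used only as an illustration.
R = 8.314          # gas constant, J mol^-1 K^-1
T = 293.15         # temperature, K
s = 4.3e-13        # sedimentation coefficient, s (i.e. 4.3 S)
D = 5.9e-11        # diffusion coefficient, m^2 s^-1
vbar = 0.733e-3    # partial specific volume, m^3 kg^-1
rho = 998.0        # solvent density (water, 20 C), kg m^-3

M = s * R * T / (D * (1.0 - vbar * rho))  # molar mass, kg mol^-1
print(f"M = {M * 1000:.0f} g/mol")  # ~66 kDa for these inputs
```

With these inputs the result lands near the ~66 kDa expected for BSA, illustrating why AUC needs no external standard: a measured (s, D) pair fixes the molar mass directly.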
About 350 core scientific publications using analytical ultracentrifugation appear every year (source: SciFinder 2008), with an increasing number of references (436 references in 2008). Tremendous progress has been made in methods and analysis software since the digitalization of experimental data with the release of the XL-I; compared to the previous decade, data analysis has become more efficient and reliable. Today, AUC labs can routinely use sophisticated data analysis methods for the determination of sedimentation coefficient distributions (Demeler and van Holde 2004; Schuck 2000; Stafford 1992), molar mass distributions (Brookes and Demeler 2008; Brookes et al. 2006; Brown and Schuck 2006), interaction constants (Cao and Demeler 2008; Schuck 1998; Stafford and Sherwood 2004), particle size distributions with Angstrom resolution (Cölfen and Pauck 1997) and the simultaneous determination of size and shape distributions from sedimentation velocity experiments (Brookes and Demeler 2005; Brookes et al. 2006). These methods are available in powerful software packages that combine various approaches, such as Ultrascan (Demeler 2005), Sedfit/Sedphat (Schuck 1998; Vistica et al. 2004) and Sedanal (Stafford and Sherwood 2004). All these powerful packages are free of charge. Furthermore, Ultrascan's source code is licensed under the GNU Public License (http://www.gnu.org/copyleft/gpl.html); thus, Ultrascan can be further improved by any research group. Workshops are organized to support these software packages. Despite the tremendous developments in data analysis, the hardware of the system has not advanced much. Although there are various user-developed detectors in research laboratories, they are not commercially available. Since 1992, only one new optical system, the fluorescence optics (Schmidt and Reisner 1992; MacGregor et al. 2004; MacGregor 2006; Laue and Kroe, in press), has been commercialized. 
However, apart from that, there has been no commercially available improvement in the optical system. The striking fact about the current XL-I hardware is that it is 20 years old, although microelectronics, software and optical systems have developed enormously over the last 20 years and could be exploited for improved detectors. As examples of user-developed detectors, Bhattacharyya (Bhattacharyya 2006) described a multiwavelength analytical ultracentrifuge (MWL-AUC), a Raman detector and a small-angle laser light scattering detector in his PhD thesis. The MWL-AUC became operational, but a very high noise level prevented work with real samples. Tests with the Raman detector were not successful due to the low light intensity and the resulting need for long integration times. The small-angle laser light scattering detector could detect latex particles but failed to detect smaller particles and molecules, owing to the low sensitivity of the photodiode used as detector. The primary motivation of this work is to construct a detector that can measure new physico-chemical properties with AUC on a well-fractionated sample in the cell. The final goal is a multiwavelength detector for the AUC that measures complementary quantities. Instrument development is an option for a scientist only when there is a large potential benefit but no commercial enterprise develops appropriate equipment, or when there is not enough financial support to buy it. The first case was our motivation for developing detectors for AUC. Our aim is to use today's technological advances in microelectronics, programming and mechanics to develop new detectors for AUC and to bring the existing MWL detector to routine operation. The project has mechanical, electronic, optical, software, hardware, chemical, industrial and biological aspects; by its nature it is a multidisciplinary project. 
Again by its nature, it contains the structural problem of its kind: the problem of determining the exact discipline to follow at each new step, with the risk of becoming lost in one direction. With that in mind, we have chosen the simplest possible solution to any optical, mechanical, electronic, software or hardware problem we encountered, and we have always tried to keep the overall picture in view. In this research, we designed and tested the CCD-C-AUC (CCD camera UV/Vis absorption detector for AUC) and the SLS-AUC (static light scattering detector for AUC). One of the SLS-AUC designs produced successful test results, but the design could not be brought to the operational stage. However, an operational multiwavelength analytical ultracentrifuge (MWL-AUC) has been developed, an important detector for chemistry, biology and industry, and it is introduced in this thesis. Consequently, three applications of the MWL-AUC to the aforementioned disciplines are presented. First, an application of the MWL-AUC to a biological system, a mixture of the proteins IgG, aldolase and BSA, is presented. Second, an application of the MWL-AUC to a mass-produced industrial sample (β-carotene gelatin composite particles manufactured by BASF AG) is presented. Finally, it is shown how the MWL-AUC will impact nanoparticle science by investigating the quantum size effect of CdTe and its growth mechanism. In this thesis, mainly the relation between new technological developments and detector development for AUC is investigated. Pioneering results are obtained that indicate a possible direction for the future of AUC. For example, each MWL-AUC data set contains thousands of wavelengths, with spectral information at each radial point. 
The data can be separated into single-wavelength files and analyzed classically with existing software packages. All existing packages, including Ultrascan, Sedfit and Sedanal, can analyze only single-wavelength data, so new software developments are needed. As a first attempt, Emre Brookes and Borries Demeler developed a multiwavelength module to analyze MWL-AUC data; this module analyzes each wavelength separately and independently. We thank Emre Brookes and Borries Demeler for their important contribution to the development of the software. Unfortunately, this module requires a huge amount of computing power and does not take the spectral information into account during the analysis. New software algorithms are needed that exploit the spectral information and analyze all wavelengths accordingly. We would also like to invite the developers of Ultrascan, Sedfit, Sedanal and the other programs to develop new algorithms in this direction.
For the elucidation of the dynamics of signal transduction processes that are induced by cellular interactions, defined events along the signal transduction cascade and subsequent activation steps have to be analyzed and then also correlated with each other. This cannot be achieved by ensemble measurements, because averaging biological data ignores the variability in timing and response patterns of individual cells and leads to highly blurred results. Instead, only a multi-parameter analysis at the single-cell level can exploit the information that is crucially needed for deducing the signaling pathways involved. The aim of this work was to develop a process line that allows the initiation of cell-cell or cell-particle interactions while at the same time the induced cellular reactions can be analyzed at various stages along the signal transduction cascade and correlated with each other. As this approach requires the gentle handling of individually addressable cells, a dielectrophoresis (DEP)-based microfluidic system was employed that provides the manipulation of microscale objects with very high spatiotemporal precision and without the need of contacting the cell membrane. The system offers a high potential for automation and parallelization. This is essential for achieving a high level of robustness and reproducibility, which are key requirements in order to qualify this approach for a biomedical application. As an example process for intercellular communication, T cell activation was chosen. The activation of the single T cells was triggered by contacting them individually with microbeads coated with antibodies directed against specific cell surface proteins, such as the T cell receptor-associated CD3 complex and the costimulatory molecule CD28 (CD: cluster of differentiation). 
The stimulation of the cells with the functionalized beads led to a rapid rise of their cytosolic Ca2+ concentration, which was analyzed by a dual-wavelength ratiometric fluorescence measurement of the Ca2+-sensitive dye Fura-2. After Ca2+ imaging, the cells were isolated individually from the microfluidic system and cultivated further. Cell division and expression of the marker molecule CD69, a late activation event of great significance, were analyzed the following day and correlated with the previously recorded Ca2+ traces for each individual cell. It turned out that the temporal profiles of the Ca2+ traces differed significantly between activated and non-activated cells as well as between dividing and non-dividing cells. This shows that the pattern of Ca2+ signals in T cells can provide early information about a later reaction of the cell. As isolated cells are highly delicate objects, a precondition for these experiments was the successful adaptation of the system to maintain the vitality of single cells during and after manipulation. In this context, the influences of the microfluidic environment as well as the applied electric fields on the vitality of the cells and on the cytosolic Ca2+ concentration, as crucially important physiological parameters, were thoroughly investigated. While a short-term DEP manipulation did not affect the vitality of the cells, they showed irregular Ca2+ transients upon exposure to the DEP field alone. The rate and the strength of these Ca2+ signals depended on exposure time, electric field strength and field frequency. By minimizing their occurrence rate, experimental conditions were identified that caused the least interference with the physiology of the cell. 
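Dual-wavelength ratiometric Fura-2 measurements are conventionally converted to free Ca2+ concentrations with the Grynkiewicz equation, [Ca2+] = Kd · β · (R - Rmin)/(Rmax - R). A minimal sketch; the calibration constants below are illustrative placeholders, not values measured in this work:

```python
# Grynkiewicz conversion of a Fura-2 ratio R (F340/F380) to free [Ca2+].
# Calibration constants are illustrative, not from this thesis.
def fura2_ca(R, Kd=224.0, Rmin=0.3, Rmax=8.0, beta=9.0):
    """Free [Ca2+] in nM.
    Kd: Fura-2/Ca2+ dissociation constant (nM);
    Rmin/Rmax: ratios at zero and saturating Ca2+;
    beta: F380 of free dye / F380 of Ca-saturated dye."""
    return Kd * beta * (R - Rmin) / (Rmax - R)

print(f"{fura2_ca(1.0):.0f} nM")  # resting-like ratio
print(f"{fura2_ca(3.0):.0f} nM")  # stimulated-like ratio
```

The ratio R cancels dye concentration and path length, which is what makes the measurement robust in single cells of varying dye loading.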
The possibility to precisely control the exact time point of stimulus application, to simultaneously analyze short-term reactions and to correlate them with later events of the signal transduction cascade on the level of individual cells makes this approach unique among previously described applications and offers new possibilities to unravel the mechanisms underlying intercellular communication.
This work presents the synthesis and the self-assembly of symmetrical amphiphilic ABA and BAB triblock copolymers in dilute, semi-concentrated and highly concentrated aqueous solution. A series of new bifunctional bistrithiocarbonates as RAFT agents was used to synthesise these triblock copolymers, which are characterised by a long hydrophilic middle block and relatively small, but strongly hydrophobic end blocks. As hydrophilic A blocks, poly(N-isopropylacrylamide) (PNIPAM) and poly(methoxy diethylene glycol acrylate) (PMDEGA) were employed, while as hydrophobic B blocks, poly(4-tert-butyl styrene), polystyrene, poly(3,5-dibromo benzyl acrylate), poly(2-ethylhexyl acrylate), and poly(octadecyl acrylate) were explored as building blocks with different hydrophobicities and glass transition temperatures. The five bifunctional trithiocarbonates synthesised belong to two classes: the first are RAFT agents that position the active group of the growing polymer chain at the outer ends of the polymer (Z-C(=S)-S-R-S-C(=S)-Z, type I); the second class places the active groups in the middle of the growing polymer chain (R-S-C(=S)-Z-C(=S)-S-R, type II). These RAFT agents enable the straightforward synthesis of amphiphilic triblock copolymers in only two steps, allowing the nature of the hydrophobic blocks as well as the lengths of the hydrophobic and hydrophilic blocks to be varied broadly, with good molar mass control and narrow polydispersities. Specific side reactions were observed for some RAFT agents, including the elimination of ethylenetrithiocarbonate in the early stage of the polymerisation of styrene mediated by certain agents of type II, while the use of the RAFT agents of type I resulted in retardation of the chain extension of PNIPAM with styrene. These results underline the need for a careful choice of RAFT agents for a given task. The various copolymers self-assemble in dilute and semi-concentrated aqueous solution into small flower-like micelles. 
No indication for the formation of micellar clusters was found; only at high concentration are physical hydrogels formed. The reversible thermoresponsive behaviour of the ABA and BAB type copolymer solutions in water, with A made of PNIPAM, was examined by turbidimetry and dynamic light scattering (DLS). The cloud point of the copolymers was nearly identical to the cloud point of the homopolymer and varied between 28-32 °C at concentrations from 0.01 to 50 wt%. This is attributed to the formation of micelles in which the hydrophobic blocks are shielded from direct contact with water, so that the hydrophobic interactions of the copolymers are nearly the same as for pure PNIPAM. Dynamic light scattering measurements showed the presence of small micelles at ambient temperature. The aggregate size increased dramatically above the cloud point, indicating a change of aggregate morphology into clusters due to the thermosensitivity of the PNIPAM block. The rheological behaviour of the amphiphilic BAB triblock copolymers demonstrated the formation of hydrogels at high concentrations, typically above 30-35 wt%. The minimum concentration needed to induce hydrogels decreased with increasing glass transition temperature and increasing length of the end blocks. The weak tendency to form hydrogels was attributed to only a small share of bridged micelles, a consequence of the strong-segregation regime. In order to learn about the role of the nature of the thermoresponsive block in the aggregation, a new BAB triblock copolymer consisting of short polystyrene end blocks and PMDEGA as stimuli-responsive middle block was prepared and investigated. Contrary to PNIPAM, dilute aqueous solutions of PMDEGA and of its block copolymers showed reversible phase transition temperatures characterised by a strong dependence on the polymer composition. Moreover, the PMDEGA block copolymer allowed the formation of physical hydrogels at lower concentration, i.e. from 20 wt%. 
This result suggests that PMDEGA has a higher degree of water-swellability than PNIPAM.
In molecular diagnostics there is a need for fast and specific test systems that can be used either in laboratory diagnostics or in point-of-care settings. To reach this goal, research interest focuses on miniaturisation and parallelisation. The leading method in DNA analytics is currently real-time PCR. This technology faces technical hurdles with respect to multiplexing, since at present no more than four parameters can be analysed in parallel in a single assay. Microarrays, in contrast, provide the prerequisites to serve as tools for multi-parameter analysis in a wide range of applications. One focus of this work was to develop multiplex PCRs and diagnostic microarrays that enable fast and reliable multi-parameter analytics for analytical questions, avoiding the limitations of current detection methods. Two applications were developed: a detection system for eight relevant poultry pathogens for surveillance in poultry farming, and a detection system for the identification of potentially allergenic food ingredients. Besides the development of suitable PCR and multiplex PCR protocols and of specific microarrays for detecting the target sequences, a further focus of this work was the deeper integration of DNA amplification and microarray technology. On-chip amplification offers a way to integrate DNA amplification and detection in a single reaction step. Accordingly, the PCR and multiplex PCR assays developed in this work for the detection of potentially allergenic food ingredients were adapted for on-chip amplification, and reaction conditions enabling multi-parameter analysis on the chip were tested. 
The developed on-chip PCR assays showed high specificity in both single and multiplex on-chip PCR. A sensitivity of 10 copies, corresponding to <10 ppm, was demonstrated in single on-chip PCRs for the detection of allergenic food ingredients. In multiplex on-chip PCRs, allergenic contaminations of 10-100 ppm were specifically detected in different foods. A further step towards possible point-of-care use is the application of an isothermal amplification method. The advantage of such a method is that the otherwise required thermocycling can be dispensed with. This simplifies the integration of on-chip amplification into mobile analysers or lab-on-chip systems and qualifies the approach for point-of-care settings. In this work, a still young isothermal amplification method, helicase-dependent amplification (HDA), was tested for its suitability for integration on a microarray. This resulted in the first on-chip HDA for single and duplex detection of pathogens.
Dispersal behavior plays an important role in the geographical distribution and population structure of any given species. An individual's fitness, reproductive and competitive ability, and dispersal behavior can depend on its age, and age-dependent as well as density-dependent dispersal patterns are common in many bird species. In this thesis, I first present age-dependent breeding ability and natal site fidelity in white storks (Ciconia ciconia), migratory birds breeding in large parts of Europe. I predicted that both the proportion of breeding birds and natal site fidelity increase with age. After the 1970s, following a steep population decline, a recovery of the white stork population was observed in many regions of Europe. The increasing density of the white stork population in Eastern Germany, especially after 1983, allowed density- as well as age-dependent breeding dispersal patterns to be examined. Second, therefore, I examine whether young birds show breeding dispersal more often and over longer distances than old birds, and whether the frequency of dispersal events increases with population density, especially in young storks. Third, I present age- and density-dependent dispersal direction preferences in the given population, asking whether and how the major spring migration direction interacts with the dispersal directions of white storks at different ages and under different population densities. The proportion of breeding individuals increased over the first 22 years of life and then decreased, suggesting senescent decay in aging storks. Young storks were more faithful to their natal sites than old storks, probably due to their innate migratory direction and distance. Young storks dispersed more frequently than old storks in general, but not over longer distances. 
The proportion of dispersing individuals increased significantly with increasing population density, indicating density-dependent dispersal behavior in white storks. Moreover, a significant interaction effect between the age of dispersing birds and year (1980–2006) suggests that older birds dispersed more from their previous nest sites over time, due to increased competition. Both young and old storks dispersed along their spring migration direction; however, the directional preferences of young and old storks differed. Young storks tended to settle down before reaching their previous nest sites (leading to south-eastward dispersal), while old birds tended to keep migrating along the migration direction after reaching their previous nest sites (leading to north-westward dispersal). The cues triggering dispersal events may thus be age-dependent. Changes in dispersal direction over time were also observed: the directional preference became obscured during the second half of the observation period (1993–2006). Increased competition may affect dispersal behavior in storks. I discuss the potential role of age in the observed age-dependent dispersal behavior, and of competition in the density-dependent dispersal behavior. This PhD thesis contributes significantly to the understanding of the population structure and geographical distribution of white storks. Moreover, the presented age- and density (competition)-dependent dispersal behavior helps in understanding the mechanisms underpinning dispersal behavior in bird species.
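Directional preferences of the kind described above are typically summarised with circular statistics, since compass bearings cannot be averaged arithmetically (the arithmetic mean of 350° and 10° is 180°, not north). A minimal sketch of the circular mean; the example bearings are invented, not the stork data:

```python
# Circular mean of compass bearings; example values are invented.
import math

def circular_mean_deg(bearings_deg):
    """Mean direction (degrees, 0-360) of a list of compass bearings,
    computed from the vector sum of unit vectors."""
    s = sum(math.sin(math.radians(b)) for b in bearings_deg)
    c = sum(math.cos(math.radians(b)) for b in bearings_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

# bearings clustered around a NW (315 deg) spring-migration-like axis
print(round(circular_mean_deg([300, 315, 330, 310, 320])))  # -> 315
# naive arithmetic averaging fails across north
print(round(circular_mean_deg([350, 10])))  # -> 0, not 180
```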
After the epoch of reionisation, the intergalactic medium (IGM) is kept at a high photoionisation level by the cosmic UV background radiation field. Composed primarily of the integrated contribution of quasars and young star-forming galaxies, this field is subject to spatial and temporal fluctuations in its intensity. In the vicinity of luminous quasars in particular, the UV radiation intensity grows by several orders of magnitude. Due to the enhanced UV radiation up to a few Mpc from the quasar, the ionised hydrogen fraction increases significantly and becomes visible as a reduced level of absorption in the HI Lyman alpha (Ly-alpha) forest. This phenomenon is known as the proximity effect, and it is the main focus of this thesis. By modelling the influence of the quasar radiation on the IGM, one can determine the UV background intensity at a specific frequency (J_nu_0) or, equivalently, its photoionisation rate (Gamma_b). This is of crucial importance for both theoretical and observational cosmology. Thus far, the proximity effect has been investigated primarily by combining the signal of large samples of quasars, as it has been regarded as a statistical phenomenon. Only a handful of studies have tried to measure its signature on individual lines of sight, each focusing on one sight line only. Our aim is to perform a systematic investigation of large samples of quasars searching for the signature of the proximity effect, with a particular emphasis on its detection on individual lines of sight. We begin this survey with a sample of 40 high-resolution (R~45000), high signal-to-noise ratio (S/N~70) quasar spectra at redshifts 2.1<z<4.7, publicly available in the European Southern Observatory (ESO) archive. The extraordinary quality of this data set enables us to detect the proximity effect signature not only in the combined quasar sample, but also along each individual sight line.
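The modelling step mentioned above is conventionally formulated (e.g. following Bajtlik, Duncan & Ostriker 1988; the abstract itself does not spell out the formula) as a reduction of the Ly-alpha absorber line density near the quasar:

```latex
% Standard proximity-effect model: the absorber line density per unit
% redshift is suppressed near the quasar according to
\frac{\mathrm{d}N}{\mathrm{d}z} \propto (1+z)^{\gamma}\,
  \bigl[\,1+\omega(z)\,\bigr]^{1-\beta},
\qquad
\omega(z) \;=\; \frac{\Gamma_\mathrm{q}(z)}{\Gamma_\mathrm{b}} ,
```

where beta is the slope of the HI column density distribution and omega(z) is the ratio of the quasar's local photoionisation rate to that of the background. Fitting the observed absorption deficit for a quasar of known luminosity then yields Gamma_b, or equivalently J_nu_0.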
This allows us to determine not only the UV background intensity at the mean redshift of this sample, but also to estimate its intensity in small (Delta z~0.2) redshift intervals in the range 2<z<4. Our estimates (J_nu_0~ 3x10^{-22} erg s^{-1} cm^{-2} Hz^{-1} sr^{-1}) are for the first time in very good agreement with constraints on its evolution obtained from theoretical predictions and numerical simulations. We continue this systematic analysis of the proximity effect with the largest search to date, using the Sloan Digital Sky Survey (SDSS) data set. The sample consists of 1733 quasars at redshifts z>2.3. In spite of the low resolution and limited S/N, we detect the proximity effect in about 98% of the quasars at a high significance level. Thereby we are able to determine the evolution of the UV background photoionisation rate within the redshift range 2<z<5, finding Gamma_b~ 1.6x10^{-12} s^{-1}. With these new measurements we explore literature estimates of the quasar luminosity function and predict the stellar luminosity density up to a redshift of about z~5. Our results are globally in good agreement with recent determinations inferred from deep surveys of high redshift galaxies. We then compare our measurements of the UV background photoionisation rate inferred from the two samples at high and low resolution. Although the two data sets differ greatly in quality, our determinations are in considerable agreement at z<3.3, while they agree less well at higher redshifts. We suspect that this may be caused either by the small number of high resolution quasar spectra at the highest redshifts considered or by some systematic effect due to the limited data quality of SDSS. Complementary to the observational investigation of the proximity effect in high redshift quasars, we explore some theoretical aspects of this phenomenon based on our results.
We employ complex numerical simulations of structure formation to achieve a better representation of the Ly-alpha forest. Modelling the signature of the proximity effect on randomly selected sight lines, we demonstrate the advantages of dealing with individual lines of sight instead of combining their signal to investigate this phenomenon. Furthermore, we develop and test novel techniques aimed at a more precise determination of the proximity effect signal. With this investigation we demonstrate that the technique developed and employed in this thesis is the most accurate adopted thus far. Tighter determinations of the UV background certainly rest on suitable methods to detect its signature, but also on a deeper understanding of the environments in which quasars form and evolve. We initiate an investigation with complex numerical simulations that include radiative transfer in order to model the proximity effect in more detail. Such simulations may lead to a characterisation of the quasar environment based on a comparison between the observed and simulated statistical properties of the proximity effect signature.
To date, positive relationships between diversity and community biomass have mainly been found, especially in terrestrial ecosystems, due to complementarity and/or dominance effects. In this thesis, the effect of diversity on the performance of terrestrial plant and phytoplankton communities was investigated to gain a better understanding of the underlying mechanisms in the biodiversity-ecosystem functioning context. In a large grassland biodiversity experiment, the Jena Experiment, the effect of community diversity on individual plant performance was investigated for all species. The species pool consisted of 60 plant species belonging to 4 functional groups (grasses, small herbs, tall herbs, legumes). The experiment included 82 large plots which differed in species richness (1-60), functional richness (1-4), and community composition. Individual plant height increased with increasing species richness, suggesting stronger competition for light in more diverse communities. The aboveground biomass of individual plants decreased with increasing species richness, indicating stronger competition in more species-rich communities. Moreover, in more species-rich communities plant individuals were less likely to flower and had fewer inflorescences, which may result from a trade-off between resource allocation to vegetative height growth and to reproduction. Responses to changing species richness differed strongly between functional groups and between species of the same functional group. In conclusion, individual plant performance can depend to a large extent on the diversity of the surrounding community. Positive diversity effects on biomass have mainly been found for substrate-bound plant communities. Therefore, the effect of diversity on the community biomass of phytoplankton was studied using microcosms.
The communities consisted of 8 algal species belonging to 4 functional groups (green algae, diatoms, cyanobacteria, phytoflagellates) and were grown at different functional richness levels (1-4). Functional richness and community biomass were negatively correlated, and all community biomasses were lower than the average monoculture biomasses of the component species, revealing community underyielding. This was mainly caused by the dominance of a fast-growing species which built up low biomass in both monoculture and mixture. A trade-off between biomass and growth rate in monoculture was found for all species: fast-growing species built up low biomasses and slow-growing species reached high biomasses in monoculture. As the fast-growing, low-productive species monopolised nutrients in the mixtures, they became dominant, resulting in the observed community underyielding. These findings suggest community overyielding when the biomasses of the component species are positively correlated with their growth rates in monoculture. Aquatic microcosm experiments with an extensive design were performed to obtain a broad range of community responses. The phytoplankton communities differed in species diversity (1, 2, 4, 8, and 12), functional diversity (1, 2, 3, and 4), and community composition. Species and functional diversity positively affected community biomass, revealing overyielding in most of the communities. This was mainly caused by a positive complementarity effect, which can be attributed to resource use complementarity and/or facilitative interactions among the species. Overyielding of more diverse communities occurred when the biomass of the component species was positively correlated with their growth rates in monoculture and thus fast-growing, high-productive species were dominant in mixtures.
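The complementarity effect invoked above is commonly separated from dominance/selection effects with the additive partition of Loreau & Hector (2001). Whether the thesis used exactly this partition is not stated in the abstract; the sketch below illustrates the standard calculation from monoculture biomasses, expected relative yields (e.g. sowing proportions), and observed mixture biomasses:

```python
def additive_partition(observed_mix, expected_ry, monoculture):
    """Loreau & Hector (2001) partition of the net biodiversity effect.
    observed_mix: dict species -> biomass in mixture
    expected_ry:  dict species -> expected relative yield (e.g. sown proportion)
    monoculture:  dict species -> monoculture biomass
    Returns (net effect, complementarity effect, selection effect)."""
    species = list(monoculture)
    n = len(species)
    # deviation of observed relative yield from expectation, per species
    d_ry = [observed_mix[s] / monoculture[s] - expected_ry[s] for s in species]
    m = [monoculture[s] for s in species]
    mean_dry = sum(d_ry) / n
    mean_m = sum(m) / n
    # population covariance between delta-RY and monoculture biomass
    cov = sum((d - mean_dry) * (mm - mean_m) for d, mm in zip(d_ry, m)) / n
    complementarity = n * mean_dry * mean_m
    selection = n * cov
    return complementarity + selection, complementarity, selection
```

A positive complementarity term with a negative selection term would match the pattern described: fast-growing, low-productive species dominating mixtures.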
This and the study mentioned above generate an emergent pattern for community overyielding and underyielding from the relationship between biomass and growth rate in monoculture, as long as the initial community structure prevails. Invasive species can largely affect ecosystem processes, while invasion is in turn influenced by diversity. To date, studies have revealed both negative and positive diversity effects on invasibility (the susceptibility of a community to invasion by new species). The effects of productivity (nutrient concentrations ranging from 10 to 640 µg P L-1), herbivory (presence/absence of a generalist feeder), and diversity (3, 4, or 6 species randomly chosen from the resident species pool) on the invasibility of phytoplankton communities consisting of 10 resident species were investigated using semi-continuous microcosms. Two functionally diverse invaders were chosen: the filamentous, less-edible cyanobacterium C. raciborskii and the unicellular, well-edible phytoflagellate Cryptomonas sp. The phytoflagellate indirectly benefited from the grazing pressure of herbivores, whereas C. raciborskii suffered more from it. Diversity did not affect the invasibility of the phytoplankton communities. Rather, invasibility was strongly influenced by the functional traits of the resident and invasive species.
Despite general concern that the massive deposits of methane stored beneath terrestrial and submarine permafrost could be released into the atmosphere due to rising temperatures attributed to global climate change, little is known about the methanogenic microorganisms in permafrost sediments, their role in methane emissions, and their phylogeny. The aim of this thesis was to increase knowledge of uncultivated methanogenic microorganisms in submarine and terrestrial permafrost deposits, their community composition, the role they play with regard to methane emissions, and their phylogeny. It is assumed that methanogenic communities in warmer submarine permafrost may serve as a model to anticipate the response of methanogenic communities in colder terrestrial permafrost to rising temperatures. The compositions of methanogenic communities were examined in terrestrial and submarine permafrost sediment samples. The submarine permafrost studied in this research was 10°C warmer than the terrestrial permafrost. DNA was extracted from each of the samples and analyzed by molecular microbiological methods such as PCR-DGGE, RT-PCR, and cloning. Furthermore, these samples were used for in vitro experiments and FISH. Analysis of the isotope composition of CH4 in submarine permafrost suggested a relationship between methane content and in situ active methanogenesis. Furthermore, active methanogenesis was proven using 13C-isotope measurements of methane in submarine permafrost sediment with a high TOC value and a high methane concentration. The molecular-microbiological studies found uncultivated lineages of Methanosarcina, Methanomicrobiales, and Methanobacteriaceae, as well as Group 1.3 and the Marine Benthic Group of the Crenarchaeota, in all submarine and terrestrial permafrost samples. Methanosarcina was the dominant group of the Archaea in all submarine and terrestrial permafrost samples.
The archaeal community composition, and in particular the methanogenic community composition, varied with temperature. Furthermore, cell counts of methanogens in submarine permafrost were ten times higher than in terrestrial permafrost. In vitro experiments showed that methanogens adapt quickly and well to higher temperatures. If temperatures rise due to climate change, an increase in methanogenic activity can be expected as long as organic material is sufficiently available and of adequate quality.
Throughout its empirical history, eye movement research has been aware of the differences in reading behavior induced by individual differences and task demands. This work introduces a novel, comprehensive concept of reading strategy, comprising individual differences in reading style and reading skill as well as reader goals. In a series of sentence reading experiments recording eye movements, the influence of reading strategies on reader- and word-level effects was investigated, assuming distributed processing. The results provide evidence for strategic, top-down influences on eye movement control that extend our understanding of eye guidance in reading.
Flood hazard estimations are conducted with a variety of methods. These include flood frequency analysis (FFA), hydrologic and hydraulic modelling, probable maximum discharges, as well as climate scenarios. However, most of these methods assume stationarity of the time series used, i.e., the series must not exhibit trends. Against the background of climate change and proven significant trends in atmospheric circulation patterns, it is questionable whether these changes are also reflected in the discharge data. The aim of this PhD thesis is therefore to clarify, in a spatially explicit manner, whether the available discharge data derived from selected German catchments exhibit trends. Concerning the flood hazard, the suitability of the currently used stationary FFA approaches is evaluated for the discharge data. Moreover, dynamics in atmospheric circulation patterns are studied and the link between trends in these patterns and discharges is investigated. To tackle this research topic, a number of different analyses are conducted. The first part of the PhD thesis comprises the study and trend testing of 145 discharge series from catchments covering most of Germany for the period 1951–2002. The seasonality and trend patterns of eight flood indicators, such as maximum series and peak-over-threshold series, are analyzed in a spatially explicit manner. Analyses are performed on different spatial scales: at the local scale, through gauge-specific analyses, and on the catchment-wide and basin scales. Besides the analysis of discharge series, data on atmospheric circulation patterns (CP) are an important source of information upon which conclusions about the flood hazard can be drawn. The analyses of these circulation patterns (after Hess and Brezowsky) and the study of their link to peak discharges form the second part of the thesis. For this, daily data on the dominant CP across Europe are studied; these are represented by different indicators, which are tested for trend.
Moreover, analyses are performed to extract flood-triggering circulation patterns and to estimate the flood potential of CPs. Correlations between discharge series and CP indicators are calculated to assess a possible link between them. For this research topic, data from 122 meso-scale catchments in the period 1951–2002 are used. In a third part, the Mulde catchment, a meso-scale sub-catchment of the Elbe basin, is studied in more detail. Fifteen discharge series of different lengths in the period 1910–2002 are available for the seasonally differentiated analysis of the flood potential of CPs and of flood-influencing landscape parameters. For the trend tests of discharge and CP data, different methods are used. The Mann-Kendall test is applied at a significance level of 10%, ensuring statistically sound results. Besides testing the entire series for trend, multiple time-varying trend tests are performed with the help of a resampling approach in order to better differentiate short-term fluctuations from long-lasting trends. Calculations of the field significance complement the flood hazard assessment for the studied regions. The present thesis shows that the flood hazard is indeed significantly increasing in selected regions of Germany during the winter season. Especially affected are the middle mountain ranges in Central Germany. This increase in the flood hazard is attributed to a longer persistence of selected CPs during winter. Increasing trends in summer floods are found in the Rhine and Danube catchments, decreasing trends in the Elbe and Weser catchments. Finally, a significant trend towards a reduced diversity of CPs is found, causing fewer patterns with longer persistence to dominate the weather over Europe. The detailed study of the Mulde catchment reveals a flood regime with frequent low winter floods and fewer summer floods, which bear, however, the potential of becoming extreme.
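The Mann-Kendall test applied above can be sketched as follows. This minimal version uses the normal approximation without tie correction and without the resampling extension described in the thesis:

```python
import math

def mann_kendall(series, alpha=0.10):
    """Two-sided Mann-Kendall trend test (normal approximation, no tie
    correction). Returns (S statistic, z score, significant at level alpha)."""
    n = len(series)
    # S counts concordant minus discordant pairs
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    return s, z, p < alpha
```

For hydrological series with tied values (common in discharge data), a tie-corrected variance would be needed; the thesis additionally assesses field significance via resampling, which this sketch does not cover.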
Based on these results, the use of non-stationary approaches for flood hazard estimation is recommended in order to account for the trends detected in many of the series. This methodology makes it possible to consider temporal changes in flood series directly, which in turn reduces the risk of large under- or overestimation of extreme discharges.
This thesis describes the observations of the Galactic center Quintuplet cluster and the spectral analysis of the cluster's Wolf-Rayet stars of the nitrogen sequence to determine their fundamental stellar parameters, and discusses the obtained results in a general context. The Quintuplet cluster was discovered in one of the first infrared surveys of the Galactic center region (Okuda et al. 1987, 1989) and was observed for this project with the ESO-VLT near-infrared integral field instrument SINFONI-SPIFFI. The subsequent data reduction was performed in part with a self-written pipeline to obtain flux-calibrated spectra of all objects detected in the imaged field of view. First results of the observation were compiled and published in a spectral catalog of 160 flux-calibrated $K$-band spectra in the range of 1.95 to 2.45\,$\mu$m, containing 85 early-type (OB) stars, 62 late-type (KM) stars, and 13 Wolf-Rayet stars. About 100 of these stars are cataloged for the first time. The main part of the thesis project concentrated on the analysis of the WR stars of the nitrogen sequence and one further identified emission line star (Of/WN) with tailored Potsdam Wolf-Rayet (PoWR) models for expanding atmospheres (Hamann et al. 1995), which are applied to derive the stellar parameters of these stars. For this purpose, the atomic input data of the PoWR models had to be extended by further line transitions in the near-infrared spectral range to enable adequate model spectra to be calculated. These models were then fitted to the observed spectra, revealing typical parameters for this class of stars. A significant amount of hydrogen of up to $X_\text{H} \sim 0.2$ by mass fraction is still present in their stellar atmospheres. The stars are also found to be very luminous ($\log{(L/L_\odot)} > 6.0$) and show mass-loss rates and wind characteristics typical of radiation-driven winds. By comparison with stellar evolutionary models (Meynet \& Maeder 2003a; Langer et al.
1994), the initial masses were estimated; they indicate that the Quintuplet WN stars are descendants of the most massive O stars with $M_\text{init} > 60 M_\odot$, and their ages correspond to a cluster age of 3-5\,million years. The analysis of the individual WN stars revealed an average extinction of $A_K = 3.1 \pm 0.5$\,mag ($A_V = 27 \pm 4$) towards the Quintuplet cluster. This extinction was applied to derive the stellar luminosities of the remaining early-type and late-type stars in the catalog, and a Hertzsprung-Russell diagram could be compiled. Surprisingly, two stellar populations are found: a group of main sequence OB stars and a group of evolved late-type stars, i.e. red supergiants (RSGs). The main sequence stars indicate a cluster age of 4 million years, which would be too young for red supergiants to be present already. A star formation event lasting for a few million years might possibly explain the Quintuplet's population, and the cluster would then still be considered coeval. However, the unexpected simultaneous presence of red supergiants and Wolf-Rayet stars in the cluster indicates that the details of star formation and cluster evolution are not yet well understood for the Quintuplet cluster.
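Deriving stellar luminosities from the measured $K$-band magnitudes and the average extinction proceeds, in outline, via the distance modulus. The Galactic-centre distance of roughly 8\,kpc assumed below is not quoted in the abstract and serves only to illustrate the step:

```latex
% Dereddened absolute magnitude from the apparent K-band magnitude m_K,
% the derived extinction A_K, and an assumed Galactic-centre distance
% d ~ 8 kpc (distance modulus ~ 14.5 mag):
M_K = m_K - A_K - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
    \approx m_K - 3.1 - 14.5 .
```

Together with a bolometric correction appropriate to each spectral type, $M_K$ then yields the stellar luminosity entering the Hertzsprung-Russell diagram.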
Recent years have witnessed the rapid rise of stalagmites as palaeoclimate archives. The multitude of geochemical and physical proxies and the promise of a precise and accurate age model greatly appeal to palaeoclimatologists. Although substantial progress has been made in speleothem-based palaeoclimate research, and despite high-resolution records from low-latitude regions proving that palaeo-environmental changes can be archived on sub-annual to millennial time scales, our comprehension of climate dynamics is still fragmentary. This is particularly true for the summer monsoon system on the Indian subcontinent. The Indian summer monsoon (ISM) is an integral part of the intertropical convergence zone (ITCZ). As this rainfall belt migrates northward during boreal summer, it brings monsoonal rainfall. ISM strength depends, however, on a variety of factors, including snow cover in Central Asia and oceanic conditions in the Indian and Pacific Oceans. Presently, many of the factors influencing the ISM are known, though their exact forcing mechanisms and mutual relations remain ambiguous. Attempts to make accurate predictions of rainfall intensity and frequency and of drought recurrence, which is extremely important for South Asian countries, resemble a puzzle: all interactions need to fall into the right place to obtain a complete picture. My thesis aims to create a faithful picture of climate change in India covering the last 11,000 years. NE India represents a key region for the Bay of Bengal (BoB) branch of the ISM, as it is here that the monsoon splits into a northwestward and a northeastward directed arm. The Meghalaya Plateau is the first barrier for northward moving air masses and receives excessive summer rainfall, while the winter season is very dry. The proximity of Meghalaya to the Tibetan Plateau on the one hand and to the BoB on the other makes the study area a key location for investigating the interaction between the different forcings that govern the ISM.
A basis for the interpretation of palaeoclimate records, and a first important outcome of my thesis, is a conceptual model which explains the observed pattern of seasonal changes in stable isotopes (δ18O and δ2H) in rainfall. I show that although in tropical and subtropical regions the amount effect is commonly invoked to explain strongly depleted isotope values during enhanced rainfall, it alone cannot account for the observed rainwater isotope variability in Meghalaya. Monitoring of rainwater isotopes shows none of the expected negative correlation between precipitation amount and δ18O of rainfall. Instead, I find evidence that runoff from high elevations carries an inherited isotopic signature into the BoB, where during the ISM season the freshwater builds a strongly depleted plume on top of the marine water. The vapor originating from this plume is likely to 'memorize' and transmit these very negative δ18O values. The lack of data does not allow a quantification of this 'plume effect' on isotopes in rainfall over Meghalaya, but I suggest that it varies on seasonal to millennial timescales, depending on the runoff amount and source characteristics. The focal point of my thesis is the extraction of climatic signals archived in stalagmites from NE India. High uranium concentrations in the stalagmites ensured the excellent age control required for successful high-resolution climate reconstructions. Stable isotope (δ18O and δ13C) and grey-scale data allow unprecedented insights into millennial- to seasonal-scale dynamics of the summer and winter monsoon in NE India. ISM strength (i.e. rainfall amount) is recorded in changes in δ18O of the stalagmites. The δ13C signal, reflecting drip rate changes, provides a powerful proxy for dry season conditions and shows similarities to temperature-related changes on the Tibetan Plateau. A sub-annual grey-scale profile supports the concept of lower drip rate and slower stalagmite growth during dry conditions.
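The δ notation used throughout is the standard per-mil deviation of a sample's isotope ratio from a reference standard (VSMOW for water, VPDB for carbonate):

```latex
% Standard delta notation, in per mil: R is the heavy-to-light isotope
% ratio (e.g. ^{18}O/^{16}O), referenced to VSMOW for water samples
% and to VPDB for carbonate samples.
\delta^{18}\mathrm{O} =
  \left(\frac{R_\mathrm{sample}}{R_\mathrm{standard}} - 1\right)
  \times 10^{3} .
```

More negative δ18O thus means a sample depleted in the heavy isotope, the signature associated here with the freshwater plume and with enhanced monsoon rainfall.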
During the Holocene, the ISM followed a millennial-scale decrease in insolation, with decadal to centennial failures resulting from atmospheric changes. The period of maximum rainfall and enhanced seasonality corresponds to the Holocene Thermal Optimum observed in Europe. After a phase of rather stable conditions, 4.5 kyr ago, the strengthening ENSO system came to dominate the ISM. Strong El Niño events weakened the ISM, especially when acting in concert with positive Indian Ocean Dipole events. The strongest droughts of the last 11 kyr are recorded during the past 2 kyr. Taking advantage of a well-dated stalagmite record, I tested the application of laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to detect sub-annual to sub-decadal changes in element concentrations in stalagmites. The development of a large ablation cell allows sample slabs of up to 22 cm total length to be ablated. Each analyzed element is a potential proxy for different climatic parameters. Combining my previous results with the LA-ICP-MS-generated data shows that element concentration depends not only on rainfall amount and the associated leaching from the soil. Additional factors, such as biological activity and hydrogeochemical conditions in the soil and vadose zone, can also affect the element content in drip water and in stalagmites. I present a theoretical conceptual model for my study site to explain how climatic signals can be transmitted and archived in stalagmite carbonate. Further, I establish a first 1500-year-long element record reconstructing rainfall variability. Additionally, I hypothesize that volcanic eruptions producing large amounts of sulfuric acid can influence soil acidity and hence element mobilization.
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton. They are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which transport cargo such as organelles along cytoskeletal filaments. Filaments of the cytoskeleton are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy, such that they can be viewed as elastic rods on the nanometer scale which exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors; this happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2 we briefly review the buckling instability problem of rods on the macroscopic scale and introduce an analytical model for the buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability in the presence of thermal fluctuations based on a renormalization-like procedure in terms of the non-linear sigma model, in which we integrate out short-wavelength fluctuations in order to obtain an effective theory for the longest-wavelength mode governing the buckling instability. We calculate the resulting shift of the critical force by fluctuation effects and find that, in two spatial dimensions, thermal fluctuations increase this force.
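The macroscopic instability reviewed in chapter 2 has a classical threshold. For a rod with hinged ends (an assumption made here for illustration, since the abstract does not state the boundary conditions), the Euler critical force reads:

```latex
% Classical Euler buckling threshold for a hinged rod of length L and
% bending rigidity kappa; for a semiflexible filament the rigidity is
% set by the persistence length l_p via kappa = k_B T l_p.
F_c = \frac{\pi^2 \kappa}{L^2},
\qquad
\kappa = k_\mathrm{B} T \, \ell_p .
```

The fluctuation-induced shift of $F_c$ calculated in the thesis is a correction to this zero-temperature threshold.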
Furthermore, in the buckled state, thermal fluctuations lead to an increase in the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which becomes rounded by thermal fluctuations. Our main result is the observation that a buckled filament is stretched by thermal fluctuations, i.e., its mean projected length in the direction of the applied force increases with thermal fluctuations. Our analytical results are confirmed by Monte Carlo simulations of the buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the increase in projected length by thermal fluctuations is less pronounced than in two dimensions and depends strongly on the choice of boundary conditions. In the second part of this work, we present a model for the buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as a cargo. The cargo filament is pushed against the wall and eventually buckles. The force-generating motors can stochastically unbind from and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first passage time for the unbinding of all linking motors, which corresponds, in a mean-field model, to the transition back to the unbuckled state of the cargo filament. Our results show that for sufficiently short microtubules the movement of kinesin-1 motors is affected by the load force generated by the cargo filament. Our predictions could be tested in future experiments.
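The mean first passage time for complete unbinding can be illustrated with a simple birth-death model: of N motors, n are bound, each bound motor unbinds at rate eps and each unbound motor rebinds at rate pi_ad. The constant, load-independent rates are a simplifying assumption of this sketch; the thesis's model couples the rates to the buckling force:

```python
def mfpt_all_unbound(n_max, eps, pi_ad):
    """Mean first-passage time from n_max bound motors to zero bound motors
    in a birth-death chain: state n has unbinding (death) rate n*eps and
    rebinding (birth) rate (n_max - n)*pi_ad.
    Uses the exact recursion d_n*u_n = 1 + b_n*u_{n+1} for the increments
    u_n = T_n - T_{n-1}, solved downward from n = n_max (where b_n = 0)."""
    u = [0.0] * (n_max + 2)
    for n in range(n_max, 0, -1):
        d_n = n * eps              # total unbinding rate in state n
        b_n = (n_max - n) * pi_ad  # total rebinding rate in state n
        u[n] = (1.0 + b_n * u[n + 1]) / d_n
    # T_{n_max} is the sum of all increments from state 1 up to n_max
    return sum(u[1 : n_max + 1])
```

With pi_ad = 0 the result reduces to the sum of sequential single-motor unbinding times, 1/(N eps) + ... + 1/eps; rebinding (pi_ad > 0) prolongs the escape, stabilizing the buckled state.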
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2's information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure and the main accent in a sentence is placed on an unimportant word, information transfer within the discourse may be inappropriate and misunderstandings may arise. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across languages, which can also differ in the way they express focus.
In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to benefit fully from information structure in the L2. Knowledge of the information structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information structural devices in the L2. The general question is whether the various means to mark focus in the learners' native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the level of the learners' appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners' ability to communicate effectively in the L2. The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions.
Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences, and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners' memory.

Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3).
Therefore, it is expected that prosodically marked words are better recognised than unmarked words, and that listeners can exploit accent structure for accurate word recognition better in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance between focus conditions is expected in L1 listening (narrow focus > broad focus). Results of Experiment 1 show that words were better recognized in native than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of the experiments with spliced materials suggest that it was the surrounding prosodic sentence contour, and not the local prosodic realization of the word, that made listeners remember a target word. Prosody does indeed seem to direct listeners' attention to the focus of the sentence (see Cutler, 1976). Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage for items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003), in an English L2 production study, observed a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 kept sentence length within acceptable limits, mainly to keep the task in the nonnative language condition feasible. Word length had previously shown an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003). Therefore, word length was included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition.
Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is given first and novel information later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers important information, and listeners might have reacted according to this convention.

Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners' attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times. On the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to reliance on 'safe' subject-prominent SVO structures.
The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when the focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference is interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help the German learners in native or in nonnative word recall. This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964) and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993).

Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which triggers context-dependent interpretation. Therefore, lexical focus marking is not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles is expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual usage is expected, resulting in better recall of focused words. Results indicate that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1.
The presence of focus particles is likely to instantiate a complex discourse model which makes the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 or English L1. This could be because focus particles open up sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of focus-identifying means to achieve a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues that accent placement is extremely important for L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech.
There is evidence that additional context facilitates the processing of complex syntactic structures, but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede word recall, particularly in the L2. Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for listeners. A methodological consideration concerns the order in which the language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could introduce a learning bias, with performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side it illustrates the benefits, and also the need, of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise the forms and meanings of the L2.
Information structure thus emerges as a valuable resource to advance proficiency in a second language.
Although the basic structure of biological membranes is provided by the lipid bilayer, most of their specific functions are carried out by membrane proteins (MPs) such as channels, ion pumps and receptors. Additionally, it is known that mutations in MPs are directly or indirectly involved in many diseases. Thus, structure determination of MPs is of major interest not only in structural biology but also in pharmacology, especially for drug development. Advances in the structural biology of MPs have been strongly supported by the success of three leading techniques: X-ray crystallography, electron microscopy and solution NMR spectroscopy. However, X-ray crystallography and electron microscopy require highly diffracting 3D or 2D crystals, respectively. Today, structure determination of non-crystalline solid protein preparations has been made possible through the rapid progress of solid-state MAS NMR methodology for biological systems. Castellani et al. solved and refined the first structure of a microcrystalline protein using only solid-state MAS NMR spectroscopy. This successful application opens up perspectives for accessing systems that are difficult to crystallise or that form large heterogeneous complexes and insoluble aggregates, for example ligands bound to an MP receptor, protein fibrils and heterogeneous protein aggregates. Solid-state MAS NMR spectroscopy is in principle well suited to studying MPs at atomic resolution. In this thesis, different types of MP preparations were tested for their suitability to be studied by solid-state MAS NMR. Proteoliposomes, poorly diffracting 2D crystals and a PEG precipitate of the outer membrane protein G (OmpG) were prepared as a model system for large MPs. Results from this work, combined with data found in the literature, show that highly diffracting crystalline material is not a prerequisite for the structural analysis of MPs by solid-state MAS NMR.
Instead, it is possible to use non-diffracting 3D crystals, MP precipitates, poorly diffracting 2D crystals and proteoliposomes. For the latter two types of preparation, the MP is reconstituted into a lipid bilayer, which allows structural investigation in a quasi-native environment. In addition, to prepare an MP sample for solid-state MAS NMR it is possible to use screening methods that are well established for the 3D and 2D crystallisation of MPs. Hopefully, these findings will establish a fourth method for the structural investigation of MPs. The prerequisite for structural studies by NMR in general, and the most time-consuming step, is always the assignment of resonances to specific nuclei within the protein. In recent years, an ever-increasing number of assignments from solid-state MAS NMR of uniformly carbon- and nitrogen-labelled samples has been reported, mostly for small proteins of up to around 150 amino acids in length. However, the complexity of the spectra increases with the molecular weight of the protein, so the conventional assignment strategies developed for small proteins do not yield a sufficiently high degree of assignment for the large MP OmpG (281 amino acids). Therefore, a new assignment strategy for finding starting points in large MPs was devised. The assignment procedure is based on a sample with [2,3-13C, 15N]-labelled Tyr and Phe and uniformly labelled alanine and glycine. This labelling pattern reduces the spectral overlap as well as the number of assignment possibilities. In order to extend the assignment, four other specifically labelled OmpG samples were used. The assignment procedure starts with the identification of the spin systems of each labelled amino acid using 2D 13C-13C and 3D NCACX correlation experiments. In a second step, 2D and 3D NCOCX-type experiments are used for the sequential assignment of the observed resonances to specific nuclei in the OmpG amino acid sequence.
Additionally, it was shown in this work that biosynthetically site-directed labelled samples, which are normally used to observe long-range correlations, were helpful for confirming the assignment. Another approach to finding assignment starting points in large protein systems is the use of spectroscopic filtering techniques. A filtering block that selects methyl resonances was used to find further assignment starting points for OmpG. Combining all these techniques, it was possible to assign nearly 50 % of the observed signals to the OmpG sequence. Using this information, a prediction of the secondary structure elements of OmpG was possible. Most of the calculated motifs were in good agreement with the crystal structures of OmpG. The approaches presented here should be applicable to a wide variety of MPs and MP complexes and should thus open a new avenue for the structural biology of MPs.
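The sequential-assignment logic described above rests on a simple combinatorial fact: in an NCOCX-type spectrum, the amide nitrogen of residue i correlates with the carbonyl carbon of residue i-1, so a pair of directly adjacent labelled residues yields an unambiguous sequential anchor in the sequence. A minimal sketch of that pair search (the sequence fragment and helper name are hypothetical illustrations, not the thesis' code):

```python
def sequential_anchors(sequence, labelled):
    """Find positions where a labelled residue directly follows another
    labelled residue. In an NCOCX experiment the amide N of residue i
    correlates with the carbonyl C of residue i-1, so such pairs give
    sequential assignment starting points. Toy sketch only."""
    anchors = []
    for i in range(1, len(sequence)):
        if sequence[i] in labelled and sequence[i - 1] in labelled:
            # record position of the first residue of the pair and the dipeptide
            anchors.append((i - 1, sequence[i - 1] + sequence[i]))
    return anchors

# hypothetical fragment; the real OmpG sequence has 281 residues
frag = "MKAGYFAYNG"
anchors = sequential_anchors(frag, set("YFAG"))  # Tyr/Phe/Ala/Gly labelled
```

Dipeptides that occur only once in the full sequence would then pin the corresponding spin-system pair to a unique position, which is the sense in which the labelling scheme reduces the number of assignment possibilities.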
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics from the formation process, and whether they behave similarly to the Sun's field or very differently, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides us with the possibility of indirectly observing surface topologies on distant stars by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method for retrieving magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions could be reconstructed by ZDI. However, the implementation of this method often relies on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of the newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis (PCA) and Artificial Neural Networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as this one are potentially ill-posed and require a regularization method. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of the reconstruction of localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on PCA. In contrast to other multi-line techniques, which extract a kind of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus enables an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so, we validated the capability of our inversion code and also assessed the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This gives, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
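The core idea of a PCA-based multi-line denoising scheme, as described above, is that an ensemble of spectral line profiles shares a few dominant shape components while photon noise is spread across all components; truncating the decomposition therefore suppresses noise while retaining each individual line. A minimal numerical sketch of that idea (the function name and toy data are illustrative assumptions, not the iMap implementation):

```python
import numpy as np

def pca_denoise(spectra, n_components):
    """Denoise a matrix of line profiles (one profile per row) by
    projecting onto the leading principal components of the ensemble.
    Noise is spread over all components, so truncation suppresses it.
    Illustrative sketch; the thesis' actual algorithm may differ."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # SVD of the centered ensemble yields the principal components
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:n_components] = s[:n_components]  # keep leading components only
    return U @ np.diag(s_trunc) @ Vt + mean

# toy ensemble: 100 noisy realisations of a Gaussian absorption line
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
line = 1.0 - 0.5 * np.exp(-x**2)
noisy = line + 0.05 * rng.standard_normal((100, 200))
clean = pca_denoise(noisy, n_components=3)
```

Because each row is reconstructed individually, the denoised profiles remain distinct spectral lines rather than being collapsed into a single mean profile, which is the property that distinguishes this approach from classical multi-line averaging.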
Biogenic amines are small organic compounds that act as neurotransmitters, neuromodulators and/or neurohormones in both vertebrates and invertebrates. They form an important group of messenger molecules and exert their effects primarily by binding to G protein-coupled receptors. A multitude of effects of biogenic amines has been described in insects. This led early on to the assumption that insects (like other invertebrates) possess a repertoire of aminergic receptors as diverse as that of vertebrates. For a comprehensive understanding of the complex physiological effects of biogenic amines, however, important information was lacking on the molecular identity of the corresponding receptor proteins and their pharmacological properties, their localization, and their intracellular reaction partners. Many (neuro)physiological processes and behaviours that are well studied in cockroaches are controlled or modulated by serotonin and dopamine. Comparatively little is known, however, about the receptors involved. The cloning and characterization of serotonin and dopamine receptors of the American cockroach P. americana is thus a long overdue step towards a comprehensive understanding of the manifold effects of biogenic amines in insects. By applying different cloning strategies, cDNAs could be isolated that encode putative serotonin receptors and a dopamine receptor. The sequences show the greatest similarity to members of the 5-HT1 and 5-HT7 receptor classes and to invertebrate-type dopamine receptors, respectively. The isolated receptors of the American cockroach were accordingly named Pea (Periplaneta americana) 5-HT1, Pea5-HT7 and PeaDop2. The hydropathy profile of these receptors predicts the characteristic heptahelical architecture of G protein-coupled receptors.
The deduced amino acid sequences display typical features of aminergic receptors: amino acids that are important for ligand binding, receptor activation and coupling to G proteins are conserved in the receptors. Expression studies revealed a strikingly high expression of all three receptor mRNAs in the brain and in the salivary glands. In the course of this work, polyclonal antibodies against the Pea5-HT1 receptor and the PeaDop2 receptor were produced. The anti-Pea5-HT1 antibody detects the glycosylated form of the receptor in homogenates of cockroach brains, salivary glands and Pea5-HT1-expressing HEK 293 cells. In brain sections, the anti-Pea5-HT1 antibody specifically labels several cell bodies in the pars intercerebralis and their axons, which project into the corpora cardiaca nerve I. The PeaDop2 receptor was detected by the specific anti-PeaDop2 antibody in neurons with somata at the anterior margin of the medulla. These neurons innervate the optic lobes and project into the ventrolateral protocerebrum. The intracellular signalling pathways of the heterologously expressed Pea5-HT1 and PeaDop2 receptors were investigated in HEK 293 cells. Activation of the Pea5-HT1 receptor by serotonin leads to an inhibition of cAMP synthesis. Furthermore, the receptor was shown to possess constitutive activity. WAY 100635, a highly selective 5-HT1A receptor antagonist, was identified as a potent inverse agonist at the Pea5-HT1 receptor. The stably expressed PeaDop2 receptor responds to activation by dopamine with an increase in cAMP concentration. A C-terminally truncated variant of this receptor is not functional on its own. The results of the present work indicate that the aminergic receptors investigated participate in information processing in the central nervous system of the cockroach and regulate various physiological processes in peripheral organs.
With the cloning and functional characterization of the first serotonin receptors and of a dopamine receptor, an important basis has thus been laid for the investigation of their functions.
The ubiquitously expressed, multifunctional glucose transporter GLUT8 belongs to class III of the family of facilitative glucose transporters, which comprises 14 proteins in total. The five members of class III differ slightly in structure from the members of classes I and II (Joost and Thorens, 2001). GLUT8 carries an N-terminal dileucine motif, part of a [DE]XXXL[LI] motif, which is responsible for sorting the transporter to late endosomes and lysosomes (Augustin et al., 2005). Since no signal triggering translocation of the transporter to the plasma membrane has been identified to date, an intracellular function of GLUT8 is assumed (Widmer et al., 2005). In the present work, the intracellular function of the transporter in the regulation of whole-body glucose homeostasis was investigated by analysing a Slc2a8 knockout mouse. Homozygous deletion of the transporter yielded viable offspring that were outwardly indistinguishable from their wild-type littermates. In matings of heterozygous mice, however, a reduced number of Slc2a8-/- offspring was observed, deviating significantly from the expected Mendelian ratio. Since Slc2a8 showed its highest mRNA expression in the testes, and fertility tests with various homozygous matings ruled out an impairment of female reproduction, the spermatozoa of Slc2a8-/- mice were examined more closely. The cause of the reduced number of Slc2a8-/- births was a lower percentage of motile Slc2a8-/- sperm, which resulted from insufficient mitochondrial condensation in the sperm. This alteration was associated with a reduced mitochondrial membrane potential, which in turn entailed decreased ATP production.
GLUT8 thus appears to participate in an intracellular transport process in sperm that influences mitochondrial oxidative phosphorylation. In the brain, Slc2a8 was expressed particularly strongly in the hippocampus, which plays a role in the regulation of physical activity, exploratory behaviour, memory and learning processes, and fear and stress responses. GLUT8 was also detected in the hypothalamus, which is involved, among other things, in the regulation of food intake. Compared to their Slc2a8+/+ littermates, Slc2a8-/- mice showed significantly increased physical activity, which, together with the increased cell proliferation in the hippocampus published by Membrez et al. (2006), points to a nutrient undersupply of this area. Food intake was unchanged in the absence of GLUT8, which, together with the only slightly lower body weight of the Slc2a8-/- mice, rules out a function of GLUT8 in the glucose sensing of the brain's glucose-sensitive neurons. The slightly reduced body weight of the Slc2a8-/- mice could not be attributed to any particular organ or tissue type but appeared to result from a marginal weight reduction across all tissues examined. Together with the lowered blood glucose levels and an apparently increased life expectancy, the Slc2a8-/- mice showed symptoms of a mild nutrient undersupply. GLUT8 therefore appears to participate in the transport of sugar derivatives that arise during the lysosomal/endosomal degradation of glycoproteins. The sugars recycled in this way evidently serve the body as an additional energy source.
The adaptive evolutionary potential of a species or population to cope with omnipresent environmental challenges is based on its genetic variation. Variability at immune genes, such as the genes of the major histocompatibility complex (MHC), is assumed to be a very powerful and effective tool for keeping pace with diverse and rapidly evolving pathogens. In my thesis, I studied natural levels of variation at the MHC genes, which have a key role in immune defence, and parasite burden in different small mammal species. I assessed the importance of MHC variation for parasite burden in small mammal populations in their natural environment. To understand the processes shaping different patterns of MHC variation, I focused on evidence of selection through pathogens upon the host. Further, I addressed the issue of low MHC diversity in populations or species, which could potentially arise as a result of habitat fragmentation and isolation. Despite the key role of marsupials in mammalian evolution, the marsupial MHC has rarely been investigated. Studies on primarily captive or laboratory-bred individuals indicated very little or even no polymorphism at the marsupial MHC class II genes. However, natural levels of marsupial MHC diversity and selection are unknown to date, as studies on wild populations are virtually absent. I investigated MHC II variation in two Neotropical marsupial species endemic to the threatened Brazilian Atlantic Forest (Gracilinanus microtarsus, Marmosops incanus) to test whether the predicted low marsupial MHC class II polymorphism proves to be true under natural conditions. For the first time in marsupials, I confirmed characteristics of MHC selection that were so far only known from eutherian mammals, birds, and fish: positive selection on specific codon sites, recombination, and trans-species polymorphism. Beyond that, the two marsupial species revealed considerable differences in their MHC class II diversity. Diversity was rather low in M.
incanus but tenfold higher in G. microtarsus, disproving the predicted generally low marsupial MHC class II variation. As pathogens are believed to be very powerful drivers of MHC diversity, I studied parasite burden in both host species to understand the reasons for the remarkable differences in MHC diversity. In both marsupial species, specific MHC class II variants were associated with either high or low parasite load, highlighting the importance of the marsupial MHC class II in pathogen defence. I developed two alternative scenarios with regard to MHC variation, parasite load, and parasite diversity. In the 'evolutionary equilibrium' scenario, I assumed the species with low MHC diversity, M. incanus, to be under relaxed pathogenic selection and expected low parasite diversity. Alternatively, low MHC diversity could be the result of a recent loss of genetic variation through a genetic bottleneck event. Under this 'unbalanced situation' scenario, I assumed a high parasite burden in M. incanus due to a lack of resistance alleles. The parasitological results clearly reject the first scenario and point to the second, as M. incanus is distinctly more heavily parasitised while parasite diversity is roughly equal in the two species. Hence, I suggest that the parasite load in M. incanus is rather the consequence than the cause of its low MHC diversity. MHC variation and its associations with parasite burden have typically been studied within single populations, but MHC variation between populations was rarely taken into account. To gain scientific insight into this issue, I chose a common European rodent species, the yellow-necked mouse (Apodemus flavicollis), and investigated the effects of genetic diversity on parasite load not at the individual but at the population level. I included populations which possess different levels of variation at the MHC as well as at neutrally evolving genetic markers (microsatellites).
I was able to show that mouse populations with a high MHC allele diversity are better armed against high parasite burdens, highlighting the significance of adaptive genetic diversity in the field of conservation genetics. An individual itself will not directly benefit from its population's large MHC allele pool in terms of parasite resistance. But confronted with the multitude of pathogens present in the wild, a population with a large MHC allele reservoir is more likely to possess individuals with resistance alleles. These results deepen our understanding of the complex causes and processes of evolutionary adaptations between hosts and pathogens.
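The population-level association just described (higher MHC allele diversity going with lower parasite burden) is the kind of relationship a nonparametric rank correlation can capture. A minimal sketch with invented numbers, purely to illustrate the test logic rather than the thesis' actual data or analysis:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no-ties case). Illustrative sketch of a population-level
    diversity-vs-burden test; the thesis' analysis may differ."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# hypothetical populations: MHC allele counts vs mean parasite load
alleles = np.array([4, 6, 9, 11, 14])
load = np.array([8.2, 7.1, 5.0, 4.4, 3.1])
rho = spearman_rho(alleles, load)  # negative rho = more alleles, less burden
```

A strongly negative rho across populations would correspond to the reported pattern; in practice one would of course also need a significance test and control for confounds such as population density.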
The comprehension of figurative language : electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERPs), brain activity during the comprehension of ironic utterances was measured and analysed in relation to equivalent non-ironic utterances. Moreover, the impact of various language-accompanying cues, e.g. prosody or the use of punctuation marks, as well as non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e. the 'standard pragmatic model', the 'graded salience hypothesis' and the 'direct access view', are discussed.
Classical semiconductor physics has continuously improved electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric ones, are a comparatively young field of research; the first light-emitting diode based on conjugated polymers was demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Due to their rather simple device structure, mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize than small-molecule organic devices: functions such as charge injection and transport are usually handled by the same material, which thus needs to be highly optimized. The present work contributes to expanding the knowledge of the physical mechanisms determining device performance by analysing the role of charge injection and transport in the device efficiency of blue- and white-emitting devices based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the understanding of how charges drift through the polymer layer to finally find neutral emissive trap states, and thus allows a quantitative prediction of the emission colour of multichromophoric systems, compatible with the colour shifts observed upon variation of driving voltage and temperature as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately bound species as well as to determine trapped charge densities. 
This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction by linearly increasing voltage (CELIV) is investigated in some detail, correcting errors in the published approach and highlighting the role of the non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of the charge mobility more accurately. Finally, it is shown that neglecting charge recombination has led to a misinterpretation of experimental results in terms of a time-dependent mobility relaxation.
When, in a burnt-out building in which unwelcome subtenants have made themselves at home, those subtenants begin to dismantle the remaining masonry stone by stone in order to brick up the remaining windows, it is time to expel, with more or less friendly words, those occupants of the house who only prevent new visitors from approaching the grounds. For this, of course, old and new hangings must be taken down from the walls; and precisely by banishing everything that does not belong in this place, a friendly image can be preserved of the poet Friedrich Hölderlin, who stood in the way of several important personalities of his time not only poetically but also personally, which probably proved his undoing in both respects, because he knew how to defend himself against his scheming surroundings neither within his family nor in the poetic trade. The somewhat lengthy essay „So sind die Zeichen in der Welt" is intended to create neither a new devotional image nor a freshly unveiled pop star to be adored; rather, it seeks carefully to uncover some frescoes of the intellectual edifice Hölderlin for those whose image of the poet has not yet been completely whitewashed over by the notion of the poet gone mad. Although the study thus deliberately aligns itself with Pierre Bertaux's Hölderlin studies, it also engages critically with this line of reception, employing not only biographical commentary but also methods of stylistic and ideological criticism in order to render the sometimes confusing state of the sources somewhat more transparent than has hitherto been the case. 
Beyond such a presentation, which may appear unorthodox in its details, the study seeks to suggest how Friedrich Hölderlin can be treated in German lessons at the Gymnasium, since even recent material about him indicates how far the marginalisation of this poet has to do with the fact that, over a long stretch of literary history, he was also held responsible for things he neither wrote nor meant. The overall intention of the study lies in the idea that Hölderlin's thought must be removed from the broad current of a conservative tradition of reception (to which, for example, E. Jelinek's dramatic Hölderlin adaptation Wolken.Heim. can also be counted, even if it was meant to be unruly), and that this poet should be restored as a realistic thinker who withdrew from the literary scene of his time because, even in his conduct of life, he grasped very early the movements against the French Enlightenment and Revolution – movements whose massive rejection strikes Hölderlin to this day. Since Friedrich Hölderlin, however, is exposed not only to ideological contemplation, critique and falsification, but is also regularly the object of extensive biographical-psychological speculation, this aspect was examined not only with reference to the history of his reception but on the subject itself. In this context it was possible not only to reconstruct a hitherto neglected relationship between Hölderlin and Sophie Mereau and to reject the suspicion that a homoerotic relationship with Isaak Sinclair existed at the same time; it was also possible to demonstrate that the poet's relationship with Susette Gontard was neither singular nor without rivals, which is why an unambiguous identification of this woman with the poetic figure of Diotima is ruled out. 
To this end, the assumption that the love affair of the Frankfurt period was conducted platonically was demythologised on the one hand, while on the other this affair was placed alongside Hölderlin's relationships with other women, with whom Frau Gontard – ultimately without success – tried to come to terms by occupying the role of Diotima. In the end, the suspicion could be substantiated that the poet's most stable bond with a woman was that with his own sister Heinrike, with whom he exchanged letters from Bordeaux until the rupture – irregularly, but repeatedly emotionally effusive. It is not without irony that precisely in what is perhaps Hölderlin's best-known poem, „Hälfte des Lebens", in which a significant philosophical design is regularly seen, rudiments of a text are contained that – unambiguously erotically connoted – is addressed to his own sister.
Vitamin E is still regarded as the most important lipophilic antioxidant in biological membranes. In recent years, however, the focus of vitamin E research has shifted towards its non-antioxidative functions. Particular interest centres on α-tocopherol, the most abundant form of vitamin E in mammalian tissue, and its role in the regulation of gene expression. The aim of this dissertation was to investigate the gene-regulatory functions of α-tocopherol and to identify α-tocopherol-sensitive genes in vivo. To this end, mice were fed different amounts of α-tocopherol. Analysis of hepatic gene expression using DNA microarrays identified 387 α-tocopherol-sensitive genes. Functional cluster analyses of the differentially expressed genes revealed an influence of α-tocopherol on cellular transport processes. In particular, genes involved in vesicular transport were largely up-regulated by α-tocopherol. Increased expression of syntaxin 1C, vesicle-associated membrane protein 1, N-ethylmaleimide-sensitive factor and syntaxin binding protein 1 was confirmed by real-time PCR. A functional influence of α-tocopherol on vesicular transport processes was demonstrated using the in vitro β-hexosaminidase assay in the secretory mast cell line RBL-2H3. Incubation of the cells with α-tocopherol resulted in a concentration-dependent increase in the PMA/ionomycin-stimulated secretion of β-hexosaminidase. Increased expression of selected genes involved in degranulation was not observed, making a direct gene-regulatory effect of α-tocopherol seem unlikely. 
Since increased secretion was also found with β-tocopherol but not with Trolox, a hydrophilic vitamin E analogue, it was hypothesised that α-tocopherol might influence degranulation through its localisation in the membrane. Incubation of the cells with α-tocopherol resulted in an altered distribution of the ganglioside GM1, a lipid raft marker. These membrane microdomains are thought to act as platforms for signal transduction. A possible influence of vitamin E on the recruitment/translocation of signalling proteins into membrane microdomains could explain the observed effects. A role of α-tocopherol in vesicular transport could affect not only its own absorption and transport but could also offer an explanation for the neuronal dysfunctions occurring in severe vitamin E deficiency. In the second part of the thesis, the α-tocopherol transfer protein (Ttpa) knockout mouse was used as a genetic model of vitamin E deficiency in order to analyse the effect of Ttpa on gene expression and on the tissue distribution of α-tocopherol. Ttpa is a cytosolic protein responsible for the selective retention of α-tocopherol in the liver. Ttpa deficiency resulted in very low α-tocopherol concentrations in plasma and extrahepatic tissues. Analysis of α-tocopherol levels in the brain pointed to a role of Ttpa in the uptake of α-tocopherol into the brain.
The interdependence of formal and informal structures in the light of Niklas Luhmann's systems theory
(2009)
Most people today spend the greater part of their existence in organisations. They are increasingly born in organisations (hospital), socialised in organisations (kindergartens, schools, etc.), depend for their livelihood on wage payments from organisations, and increasingly they spend the end of their lives in organisations (hospital, nursing home, etc.). From a sociological point of view, organisations are therefore particularly interesting and deserve special attention in the analysis of society. This study does not focus on the triumph of the organisation in the sociocultural evolution of society, but on the question: how does the drifting (Maturana, Varela, 1991) of the organisation come about? If one assumes that in evolution extinction is the rule and adaptation the exception, the drifting of organised social systems seems to deserve particular attention. Reading the published figures on corporate insolvencies in Germany, especially in the present times of economic and financial crisis, the continued existence of an organisation once founded appears uncertain rather than assured. Furthermore, organisations appear to be subject to certain life cycles (Küpper, Felsch). Older organisation theories still assumed a unitary purpose governing the entire structuring of the organisation: all members of the organisation are, in intent, to shape their actions rationally with a view to realising this specific purpose. In organisational analysis, however, it was found that displacements of purpose within formal organisations are the rule rather than the exception (Mayntz, 1963, among others). This problem of the rationally designed organisation was thus attributed to the members of the organisation. 
As the other side of the formal organisation, as it were, the members of the formal organisation act in the informal organisation as micro-politicians (Bosetzky, Heinrich, 1989) who undermine the formal structures in order to advance their personal utility maximisation. If one adopts this perspective on the formal organisation, it is hard to resist the assumption that the members are fundamentally hostile to the organisation. With this perspective, one would fail to do justice to all the voluntary members of aid organisations, social associations and the like. The analysis carried out here adopts the perspective of Luhmann's systems theory. The members of the organisation are thus not eliminated from the theoretical consideration; on the contrary, they are located in the environment of the organised social systems. This has the decisive advantage that the theoretical perspective grants the members of the organisation more freedom than actor-centred theories do, for system formation always means the elimination of at least one degree of freedom (Foerster von, 1997). With Luhmann's systems theory it is further assumed that, as it were unobserved behind the backs of those present, a network weaves itself: a social system forms. All social systems ultimately rest on the distinction between consciousness and communication. Communication itself cannot be observed, only inferred. As long as it runs undisturbed, it remains unconscious to those present. Only when the flow of communication is disturbed does it make itself felt, although it almost never becomes conscious to those present. For communication trains the human onto the human precisely because it eludes perception (Fuchs, 1998). The autopoiesis of communication depends on the presence of two psychic systems, i.e. systems of consciousness. 
It is they that make possible in the first place the space, or domain of phenomena, in which the autopoiesis of social systems is possible (Luhmann, 1990). Accordingly, the autopoiesis of communication always presupposes interaction among those present. In the interaction itself, those present become aware of one another in a particular way and can accordingly assert themselves differently than under the structural constraints of a formal organisation. Communication itself provides the participants with certain means of modulation, e.g. switching various operative displacements on and off (Fuchs, 1993), in order to enable its undisturbed course and avoid corresponding ruptures, for instance the seamless transition from one topic to another. The interaction itself is understood as a temporally unstable contact system (Luhmann, 1997) that is extinguished when the participants part. The significance of communication in Luhmann's systems theory, briefly sketched here, explains why it was given such ample space in the analysis. Organisations are social systems of a different type and accordingly possess quite different emergent properties. They can do nothing with the diffuse communication of interaction. Their operations are based on decisions. Every decision connects to a decision communication, yet the decision itself is the condensation of meaning of that communication. And precisely this constitutes its efficiency, its advantage in tempo over all other types of social systems. Only when the organisation succeeds in linking decisions to decisions is it able to establish its own network of its own decisions. Only in the form of the decision can it connect its system elements (decisions), which for the organisation itself cannot be gone behind, to one another, producing decisions on the basis of decisions. 
If the organisation succeeds in this, the decisions gain relevance for one another and can mutually support, prepare and relieve each other. Every decision must now take into account its own predecessor decision and the respective context of other decisions. A nexus of decisions forms that grounds and marks the boundaries of the system. Since every organisation realises itself only in the moment of its deciding, it acquires a problem of time: one must not merely decide, one must, with reference to the nexus of decisions, decide correctly and in time, before the problem to be decided has resolved itself to the organisation's disadvantage. Everything that is now to be regarded as relevant in the organisation must take the form of a decision. This does not mean that no influence can be exerted on the decision within the decision communication; but on the one hand, because of the pressure to decide, one will try to shorten the decision communication as far as possible, e.g. by programming, and on the other hand, the decision does not reveal its decision communication – one can only guess at it. Organisations prefer to communicate with organisations in their environment, since these are compelled to produce decisions themselves that one can work with: one can either take them over into one's own nexus of decisions or reject them with a decision of one's own. But every decision the organisation takes confirms or changes its structures. This line of thought led to the consideration that informal structures must themselves be organised interaction systems. They must already organise themselves in some form; they stand under the law of reunion. 
The social contacts will recur within a foreseeable horizon of time and interest, condense and confirm themselves (Luhmann, 1997), and this already requires a certain degree of organisation: the next meetings must be planned, a topic chosen, and so on. Ultimately, informal structures produce decisions that the formal organisation can work with. This is one of the reasons why the formal organisation increasingly secures access to informal structures.