In the present work, the properties of closed fluid membranes, so-called vesicles, are investigated at finite temperatures. This comprises considerations of the shape of free vesicles, a study of the adhesion behaviour of vesicles on planar substrates, and a study of the properties of fluid vesicles in confined geometries. These investigations were carried out by means of Monte Carlo simulations of a triangulated vesicle surface. The statistical properties of the fluctuating fluid vesicles were analysed in part by means of free-energy profiles; in this context, a novel histogram method was developed. The shape of a free fluid vesicle with freely variable volume that minimizes the configurational-energy functional is, at vanishing temperature, a sphere. Using Monte Carlo simulations and an analytically tractable model system, it was shown that this result does not generalize to finite temperatures; instead, slightly prolate and oblate vesicle shapes predominate over the spherical shape, the probability of a prolate shape being slightly larger than that of an oblate one. This spontaneous asphericity is of entropic origin and does not occur for two-dimensional vesicles. Osmotic pressures inside the vesicle that exceed those of the surrounding liquid can reduce or even compensate the asphericity. In the absence of osmotically active particles, the transitions between the observed prolate and oblate shapes occur on the millisecond scale; in the presence of such particles, transition times are on the order of seconds.
In the study of the adhesion of fluid vesicles to planar, homogeneous substrates, Monte Carlo simulations showed that the properties of the vesicle contact area depend strongly on which forces bring about the contact. For a dominant attractive interaction between substrate and vesicle membrane, and likewise for a mass-density difference between the liquids inside and outside the vesicle that makes the vesicle sink onto the substrate, the distribution of the distance between vesicle membrane and substrate within the contact area is position-independent. If, disregarding osmotic effects, the vesicle presses against the substrate because of a difference between the mass densities of the membrane and of the surrounding liquid, the distance distribution between vesicle membrane and substrate varies with the distance from the edge of the contact area; this effect is moreover temperature-dependent. The adhesion of fluid vesicles to chemically structured planar substrates was studied as well. The interplay of entropic effects and configurational energies produces a complex dependence of the vesicle shape on bending rigidity, osmotic conditions, and the geometry of the attractive domains. The existing methods for determining the bending rigidity of vesicle membranes yield strongly divergent results. In the present work, Monte Carlo simulations of the determination of the bending rigidity by the micropipette method of Evans showed that this method essentially reproduces the bending rigidity prescribed a priori in the simulation. With regard to medical and pharmaceutical applications, the passage of fluid vesicles through narrow pores is relevant.
Monte Carlo simulations showed that spontaneous transport of a vesicle can be induced by a concentration gradient of osmotically active substances corresponding to physiological conditions. The osmotic conditions required for this and the characteristic time scales were estimated; in a real experiment, entry times into a narrow pore of a few minutes are to be expected. It was further observed that vesicles with a homogeneous, positive spontaneous curvature deform towards prolate shapes more easily than vesicles without spontaneous curvature. This effect entails a reduction of the energy barrier for entering a pore whose radius is only slightly smaller than the vesicle radius.
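The free-energy profiles used in this analysis follow from Boltzmann inversion of a sampled histogram, F(x) = -kT ln P(x). A minimal sketch of this standard analysis (not the thesis's novel histogram method; the function name and parameters are illustrative):

```python
import numpy as np

def free_energy_profile(samples, bins=50, kT=1.0):
    """Boltzmann-invert a histogram of a fluctuating order parameter
    (e.g. a vesicle shape parameter sampled along a Monte Carlo run)
    into a free-energy profile F(x) = -kT * ln P(x)."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                    # keep only visited bins
    F = -kT * np.log(hist[mask])
    F -= F.min()                       # fix the zero of the profile
    return centers[mask], F
```

For a harmonically fluctuating quantity the recovered profile is a parabola whose curvature gives the effective stiffness of the mode.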
About 24 % of the land surface in the northern hemisphere is underlain by permafrost in various states. Permafrost aggradation occurs under special environmental conditions with overall low annual precipitation rates and very low mean annual temperatures. Because general permafrost occurrence is driven mainly by large-scale climatic conditions, the distribution of permafrost deposits can be considered an important climate indicator. The region with the most extensive continuous permafrost is Siberia. In northeast Siberia, the ice- and organic-rich permafrost deposits of the Ice Complex are widely distributed. These deposits consist mostly of silty to fine-grained sandy sediments that accumulated during the Late Pleistocene in an extensive plain on the then subaerial Laptev Sea shelf. One important precondition for Ice Complex sedimentation was that the Laptev Sea shelf was not glaciated during the Late Pleistocene, resulting in a mostly continuous accumulation of permafrost sediments for at least this period. This shelf landscape was inundated and eroded in large parts by the Holocene marine transgression after the Last Glacial Maximum; remnants of it are preserved only in the present-day coastal areas. Because the Ice Complex deposits contain a wide variety of palaeo-environmental proxies, they are an excellent palaeo-climate archive for the Late Quaternary in the region. Furthermore, the ice-rich Ice Complex deposits are sensitive to climatic change, i.e. climate warming. Because of the large-scale climatic changes at the transition from the Pleistocene to the Holocene, the Ice Complex has been subject to extensive thermokarst processes since the Early Holocene. Permafrost deposits are not only an environmental indicator, but also an important climate factor.
Tundra wetlands, which have developed in environments with aggrading permafrost, are considered a net sink for carbon, as organic matter is stored in peat or is syn-sedimentarily frozen during permafrost aggradation. In contrast, the Holocene thermokarst development resulted in permafrost degradation and thus in the release of formerly stored organic carbon. Modern tundra wetlands are also considered an important source of the climate-driving gas methane, originating mainly from microbial activity in the seasonal active layer. Most scenarios for future global climate development predict a strong warming trend, especially in the Arctic. Consequently, to understand how permafrost deposits will react and contribute to such scenarios, it is necessary to investigate and evaluate ice-rich permafrost deposits like the widespread Ice Complex as both climate indicator and climate factor during the Late Quaternary. Such investigations are a precondition for the precise modelling of future developments in permafrost distribution and of the influence of permafrost degradation on global climate. The focus of this work, which was conducted within the frame of the multi-disciplinary joint German-Russian research projects "Laptev Sea 2000" (1998-2002) and "Dynamics of Permafrost" (2003-2005), was twofold. First, the possibilities of using remote sensing and terrain modelling techniques for observing periglacial landscapes in Northeast Siberia in their present state were evaluated and applied to key sites in the Laptev Sea coastal lowlands. The key sites were situated in the eastern Laptev Sea (Bykovsky Peninsula and Khorogor Valley) and the western Laptev Sea (Cape Mamontovy Klyk region). For this task, techniques using CORONA satellite imagery, Landsat-7 satellite imagery, and digital elevation models were developed for mapping periglacial structures, which are especially indicative of permafrost degradation.
The major goals were to quantify the extent of permafrost degradation structures and their distribution in the investigated key areas, and to establish techniques that can also be used for investigating other regions with thermokarst occurrence. Geographical information systems were employed for the mapping, the spatial analysis, and the enhancement of classification results by rule-based stratification. The results from the key sites show that thermokarst and related processes and structures completely re-shaped the former accumulation plain into a strongly degraded landscape, characterised by extensive deep depressions and erosional remnants of the Late Pleistocene surface. As a result of this rapid process, which in large parts happened within a short period during the Early Holocene, the hydrological and sedimentological regime was completely changed on a large scale. These events also resulted in a release of large amounts of organic carbon. Thermokarst is now the major component of the modern periglacial landscapes, in terms of spatial extent but also in its influence on hydrology, sedimentation, and the development of vegetation assemblages. Second, the possibilities of using remote sensing and terrain modelling as supplementary tools for palaeo-environmental reconstructions in the investigated regions were explored. For this task, a comprehensive cryolithological field database was additionally developed for the Bykovsky Peninsula and the Khorogor Valley; it contains previously published data from boreholes, outcrop sections, and subsurface samples, as well as our own additional field data. The period covered by this database is mainly the Late Pleistocene and the Holocene, but the basal deposits of the sedimentary sequence, interpreted as Pliocene to Early Pleistocene, are also contained.
Remote sensing was applied for the observation of periglacial structures, which were then successfully related to distinct landscape development stages or time intervals in the investigation area. Terrain modelling was used to provide a general context for the landscape development. Finally, a scheme was developed describing mainly the Late Quaternary landscape evolution in this area. A major finding was the possibility of connecting periglacial surface structures to distinct landscape development stages, and thus of using them as an additional palaeo-environmental indicator together with other proxies for area-related palaeo-environmental reconstructions. In the landscape evolution scheme, i.e. of the genesis of the Late Pleistocene Ice Complex and the Holocene thermokarst development, some new aspects are presented in terms of sediment source and general sedimentation conditions. These findings also apply to other sites in the Laptev Sea region.
This thesis examines the consequences that Turkish EU membership would have for European security relations. The security situation inside and outside Turkey's borders is analysed. On the basis of the rational-choice theory of actor-centred institutionalism, it is shown which challenges the European Union would face, and the question is addressed whether a so-called privileged partnership could be a viable alternative to full membership.
Reversible addition-fragmentation chain transfer (RAFT) was used as a controlling technique for studying aqueous heterophase polymerization. The polymerization rates obtained by calorimetric investigation of ab initio emulsion polymerization of styrene revealed the strong influence of the type and combination of RAFT agent and initiator on the polymerization rate and its profile. Studies in all-glass reactors on the evolution of characteristic data such as average molecular weight, molecular weight distribution, and average particle size during the polymerization revealed the importance of the peculiarities of the heterophase system, such as compartmentalization, swelling, and phase transfer. These results illustrated the important role of the water solubility of the initiator in determining the main loci of polymerization and the crucial role of the hydrophobicity of the RAFT agent for efficient transport to the polymer particles. For optimum control during ab initio batch heterophase polymerization of styrene with RAFT, the RAFT agent must have a certain hydrophilicity and the initiator must be water soluble in order to minimize reactions in the monomer phase. An analytical method was developed for quantitative measurement of the sorption of RAFT agents to the polymer particles, based on the absorption of visible light by the RAFT agent. Polymer nanoparticles, temperature, and stirring were employed to simulate the conditions of a typical aqueous heterophase polymerization system. The results confirmed the role of the hydrophilicity of the RAFT agent in the effectiveness of the control, owing to its fast transport to the polymer particles during the initial period of polymerization after particle nucleation.
As the presence of the polymer particles was essential for the transport of the RAFT agents into the polymer dispersion, it was concluded that in an ab initio emulsion polymerization the transport of the hydrophobic RAFT agent only takes place after the nucleation and formation of the polymer particles. As the polymerization proceeds and the particles grow, the rate of transport of the RAFT agent increases with conversion until the free monomer phase disappears. The degradation of the RAFT agent by addition of KPS initiator provided unambiguous evidence on the mechanism of entry in heterophase polymerization. These results showed that even extremely hydrophilic primary radicals, such as the sulfate ion radical stemming from the KPS initiator, can enter the polymer particles without necessarily having propagated and reached a certain chain length. Moreover, these results recommend the employment of azo-initiators instead of persulfates for application in seeded heterophase polymerization with RAFT agents. The significantly slower rate of transport of the RAFT agent to the polymer particles when its solvent (styrene) was replaced with a more hydrophilic monomer (methyl methacrylate) led to the conclusion that a complicated cooperative and competitive interplay of solubility parameters and interaction parameters with the particles exists, determining the effective transport of organic molecules to the polymer particles through the aqueous phase. The choice of proper solutions can provide the opportunity for the sorption of even the most hydrophobic organic molecules into the polymer particles. Examples supporting this idea were given by loading the extremely stiff fluorescent molecule pentacene and the very hydrophobic dye Sudan IV into the polymer particles. Finally, the first application of RAFT in room-temperature heterophase polymerization is reported.
The results show that the RAFT process is effective at ambient temperature; however, the rate of fragmentation is significantly slower. Raising the reaction temperature in the presence of the RAFT agent resulted in faster polymerization and higher molar mass, suggesting that the fragmentation rate coefficient and its temperature dependence are responsible for the observed retardation.
This book is about inventing successes and good practices of governments that are "closer to the people". Numerous examples throughout Latin America indicate that, often despite macroeconomic instability, high inflation, and strong top-down regulation, subnational actors have repeatedly achieved what their central counterparts preached: sound policymaking, better administration, better services, more participation, and sustained economic development. But what makes some governments change course and move toward innovation? What triggers experimentation and, eventually, turns ordinary practice into good practice? The book answers some of these questions. It goes beyond a mere documentation of good and best practice, which is increasingly provided through international networks and Internet sites. Instead, it seeks a better understanding of the origins and fates of such successes at the micro level. The case studies and analytical chapters seek to explain how good practice is born at the local level; where innovative ideas come from; how such ideas are introduced in a new context, successfully implemented, and propagated locally and beyond; and what donors can do to effectively assist processes of self-induced and bottom-up change.
This thesis focuses on the properties of self-organized nanostructures. Atomic and electronic properties of different systems have been investigated using electron diffraction, scanning tunneling microscopy, and photoelectron spectroscopy. The implementation of the STM technique (including design, construction, and tuning of the UHV experimental set-up) was carried out in the framework of the present work; this time-consuming work is reported in greater detail in the experimental part of the thesis. The scientific part begins with a study of quantum-size effects in the electronic structure of a two-dimensional Ag film on a supporting Ni(111) substrate. Distinct quantum well states in the sp-band of Ag were observed in photoelectron spectra. Analysis of thickness- and angle-dependent photoemission supplies novel information on the properties of the interface. For the first time, the Ni(111) relative band gap was indirectly probed in the ground state through the electronic structure of quantum well states in the adlayer. This is particularly important for Ni, where valence electrons are strongly correlated. Comparison of the experiment with calculations performed in the formalism of the extended phase accumulation model gives a substrate gap that is fully consistent with the one obtained by ab initio LDA calculations. It conflicts, however, with the band structure of Ni measured directly by photoemission. These results lend credit to the simplest view of photoemission from Ni, assigning earlier-observed contradictions between theory and experiment to electron correlation effects in the final state of photoemission. Further, nanosystems of lower dimensionality have been studied. Stepped surfaces W(331) and W(551) were used as one-dimensional model systems and as templates for self-organization of Au nanoclusters.
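The phase accumulation model mentioned above quantizes the film states through the round-trip condition 2 k(E) d + phi = 2 pi n. A hedged sketch for a free-electron-like band (the layer spacing, the energy-independent boundary phase, and the function name are illustrative assumptions, not the thesis's parameters):

```python
import numpy as np

def qws_energies(n_layers, a=0.236, phase=-0.8 * np.pi):
    """Quantum-well-state energies in an n_layers-thick film from the
    phase accumulation condition 2*k*d + phase = 2*pi*n, assuming a
    free-electron-like sp band; a is the layer spacing in nm and phase
    the total (here energy-independent) boundary phase."""
    hbar2_2m = 0.0381                      # hbar^2 / (2 m_e) in eV nm^2
    d = n_layers * a                       # film thickness
    n = np.arange(1, n_layers + 1)
    k = (2 * np.pi * n - phase) / (2 * d)  # allowed wave numbers
    k = k[k < np.pi / a]                   # keep states inside the zone
    return hbar2_2m * k ** 2               # energies above the band bottom (eV)
```

Matching such a ladder of discrete energies to measured photoemission peaks is what constrains the boundary phases and, through them, the substrate band gap.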
Photon-energy-dependent photoemission revealed a surface resonance never before observed on W(110), which is the base plane of the terrace microsurfaces. The dispersion E(k) of this state, measured on stepped W(331) and W(551) with angle-resolved photoelectron spectroscopy, is modified by a strong umklapp effect: it appears as two parabolas shifted symmetrically relative to the microsurface normal by half of the Brillouin zone of the step superlattice. The reported results are very important for understanding the electronic properties of low-dimensional nanostructures. It was also established that W(331) and W(551) can serve as templates for self-organization of metallic nanostructures. A combined study of the electronic and atomic properties of sub-monolayer amounts of gold deposited on these templates has shown that if the substrate is slightly pre-oxidized and the temperature is elevated, Au can alloy with the first monolayer of W. As a result, a nanostructure of uniform surface-alloy clusters is produced all over the steps. Such clusters feature a novel sp-band in the vicinity of the Fermi level, which appears split into constant energy levels due to effects of lateral quantization. The last and main part of this work is devoted to large-scale reconstructions on surfaces and to nanostructures self-assembled on top of them. The two-dimensional surface carbide W(110)/C-R(15x3) has been extensively investigated. Photoemission studies of quantum-size effects in the electronic structure of this reconstruction, combined with an investigation of its surface geometry, lead to an advanced structural model of the carbide overlayer. It was discovered that W(110)/C-R(15x3) can control the self-organization of adlayers into nanostructures with extremely different electronic and structural properties. Thus, it was established that at elevated temperature the R(15x3) superstructure controls the self-assembly of sub-monolayer amounts of Au into nm-wide nanostripes.
Based on the results of core-level photoemission, the R(15x3)-induced surface alloying that takes place between Au and W can be identified as the driving force of the self-organization. The observed stripes exhibit a characteristic one-dimensional electronic structure with laterally quantized d-bands. These are clearly very important for applications, since the dimensions of electronic devices have already entered the nm range, where quantum-size phenomena must undoubtedly be considered. Moreover, the formation of perfectly uniform molecular clusters of C60 was demonstrated and described in terms of the van der Waals formalism. This is the first experimental observation of two-dimensional fullerene nanoclusters with "magic numbers". Calculations of the cluster potentials using the static approach revealed characteristic minima in the interaction energy, achieved for 4 and 7 molecules per cluster. The obtained "magic numbers" and the corresponding cluster structures are fully consistent with the results of the STM measurements.
Self-assembly of polymeric building blocks is a powerful tool for the design of novel materials and structures that combine different properties and may respond to external stimuli. In the past decades, most studies focused on the self-assembly of amphiphilic diblock copolymers in solution. Dissolving these block copolymers in a solvent selective for one block mostly results in the formation of micelles. The micellar structure of diblock copolymers is inherently limited to a homogeneous core surrounded by a corona, which keeps the micelle in solution. Thus, for drug-delivery applications, such structures offer only a single domain (the hydrophobic inner core) for drug entrapment. Multicompartment micelles composed of a water-soluble shell and a segregated hydrophobic core, by contrast, are novel, interesting morphologies for applications in a variety of fields including medicine, pharmacy, and biotechnology. The separated, incompatible compartments of the hydrophobic core could enable the selective entrapment and release of various hydrophobic drugs, while the hydrophilic shell would permit the stabilization of these nanostructures in physiological media. However, so far the preparation and control of stable multicompartment micellar systems are in their first stages, and the number of morphological studies concerning such micelles is rather low; thus comparatively little is known about their exact inner structures. In the present study, we concentrate on four different approaches for the preparation of multicompartment micelles by self-assembly in aqueous media. Common to all approaches was that hydrocarbon and fluorocarbon blocks were selected for all employed copolymers, since such segments tend to be strongly incompatible and thus favor segregation into distinct domains. Our studies have shown that the self-assembly of the utilized copolymers in aqueous solution leads in three cases to the formation of multicompartment micelles.
As expected, the shape and size of the micelles depend on the molecular architecture and, to some extent, also on the preparation route. These novel structured colloids may serve as models as well as mimics for biological structures such as globular proteins, and may open interesting opportunities for nanotechnology applications.
Natural and human-induced environmental changes affect populations at different time scales. If they occur in a spatially heterogeneous way, they cause spatial variation in abundance. In this thesis I addressed three topics, all related to the question of how environmental changes influence population dynamics. In the first part, I analysed the effect of positive temporal autocorrelation in environmental noise on the extinction risk of a population, using a simple population model. The effect of autocorrelation depended on the magnitude of the effect that single catastrophic events of bad environmental conditions have on a population. If a population was threatened by extinction only when bad conditions occurred repeatedly, positive autocorrelation increased extinction risk. If a population could become extinct even if bad conditions occurred only once, positive autocorrelation decreased extinction risk. These opposing effects could be explained by two features of an autocorrelated time series. On the one hand, positive autocorrelation increased the probability of series of bad environmental conditions, implying a negative effect on populations. On the other hand, aggregation of bad years also implied longer periods with relatively good conditions; therefore, for a given time period, the overall probability of at least one extremely bad year occurring was reduced in autocorrelated noise, which can imply a positive effect on populations. The results could resolve a contradiction in the literature, where opposing effects of autocorrelated noise were found in very similar population models.
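The interplay described above can be reproduced with a few lines of simulation: a population whose log-abundance is driven by AR(1) ("red") noise with autocorrelation kappa, keeping the same marginal variance for every kappa so that only the temporal structure of the noise changes. All parameter values below are illustrative, not those of the thesis:

```python
import numpy as np

def extinction_risk(kappa, n_years=100, n_runs=2000, sigma=0.5,
                    log_n0=3.0, seed=0):
    """Fraction of replicate populations whose log-abundance random walk,
    driven by AR(1) environmental noise with autocorrelation kappa,
    falls below the quasi-extinction boundary log N = 0."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_runs):
        log_n, eps = log_n0, rng.normal(0.0, sigma)
        for _ in range(n_years):
            # AR(1) update with marginal variance held fixed across kappa
            eps = kappa * eps + np.sqrt(1.0 - kappa ** 2) * rng.normal(0.0, sigma)
            log_n += eps
            if log_n <= 0.0:           # quasi-extinction
                extinct += 1
                break
    return extinct / n_runs
```

Comparing the returned risk across kappa values, for thresholds that require one versus several consecutive bad years, reproduces the two opposing effects discussed above.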
In the second part, I compared two approaches commonly used for predicting effects of climate change on the future abundance and distribution of species: a "space for time approach", where predictions are based on the geographic pattern of current abundance in relation to climate, and a "population modelling approach", which is based on correlations between demographic parameters and the inter-annual variation of climate. In this case study, I compared the two approaches for predicting the effect of a shift in mean precipitation on a population of the sociable weaver Philetairus socius, a common colonially living passerine bird of the semiarid savannahs of southern Africa. In the space for time approach, I compared abundance and population structure of the sociable weaver in two areas with highly different mean annual precipitation. The analysis showed no difference between the two populations. This result, as well as the wide distribution range of the species, would lead to the prediction that the species does not respond sensitively to a slight shift in mean precipitation. In contrast, the population modelling approach, based on a correlation between reproductive success and rainfall, predicted a sensitive response in most model types. The inconsistency of the predictions was confirmed in a cross-validation between the two approaches. I concluded that the inconsistency arose because the two approaches reflect different time scales. On a short time scale, the population may respond sensitively to rainfall. However, on a long time scale, or in a regional comparison, the response may be compensated or buffered by a variety of mechanisms. These may include behavioural or life-history adaptations, shifts in the interactions with other species, or differences in the physical environment.
The study implies that understanding how such mechanisms work, and at what time scale they would follow climate change, is a crucial precondition for predicting the ecological consequences of climate change. In the third part of the thesis, I tested why colony sizes of the sociable weaver are highly variable. The high variation of colony sizes is surprising, as studies on coloniality often assume that an optimal colony size exists at which individual bird fitness is maximized. Following this assumption, the pattern of bird dispersal should keep colony sizes near an optimum. However, by analysing data on reproductive success and survival, I showed that for the sociable weaver fitness in relation to colony size did not follow an optimum curve. Instead, positive and negative effects of living in large colonies overlaid each other in such a way that fitness was generally close to one and density dependence was low. Using a population model that included an evolutionary optimisation of dispersal, I showed that this specific shape of the fitness function can lead to a dispersal strategy under which the variation of colony sizes is maintained.
Variation in nitrogen deposition and available soil nitrogen in a forest–grassland ecotone in Canada
(2004)
Regional variation in nitrogen (N) deposition increases plant productivity and decreases species diversity, but landscape- or local-scale influences on N deposition are less well known. Using ion-exchange resin, we measured variation in N deposition and soil N availability within Elk Island National Park, in the ecotone between grassland and boreal forest in western Canada. The park receives regionally high amounts of atmospheric N deposition (22 kg ha⁻¹ yr⁻¹). N deposition was on average higher on clay-rich luvisols than on brunisols, and areas burned 1-15 years previously received more atmospheric N than unburned sites. We suggest that the effects of previous fires and soil type on deposition rate act through differences in canopy structure. The magnitude of these effects varied with the presence of ungulate grazers (bison, moose, elk) and vegetation type (forest, shrubland, grassland). Available soil N (ammonium and nitrate) was higher in burned than in unburned sites in the absence of grazing, suggesting an effect of deposition. On grazed sites, differences between fire treatments were small, presumably because the removal of biomass by grazers reduced the effect of fire. Aspen invades native grassland in this region, and our results suggest that fire without grazing might reinforce the expansion of forest into grassland facilitated by N deposition.
This thesis is concerned with the solution of the blind source separation (BSS) problem. The BSS problem occurs frequently in various scientific and technical applications. In essence, it consists in separating meaningful underlying components out of a mixture of a multitude of superimposed signals. In the recent research literature there are two related approaches to the BSS problem: the first is known as Independent Component Analysis (ICA), where the goal is to transform the data such that the components become as independent as possible; the second is based on the notion of diagonality of certain characteristic matrices derived from the data, where the goal is to transform the matrices such that they become as diagonal as possible. In this thesis we study the latter method of approximate joint diagonalization (AJD) to achieve a solution of the BSS problem. After an introduction to the general setting, the thesis provides an overview of particular choices for the set of target matrices that can be used for BSS by joint diagonalization. As the main contribution of the thesis, new algorithms for approximate joint diagonalization of several matrices with non-orthogonal transformations are developed. These newly developed algorithms are tested on synthetic benchmark datasets and compared to previous diagonalization algorithms. Applications of the BSS methods to biomedical signal processing are discussed and exemplified with real-life data sets of multi-channel biomagnetic recordings.
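For the special case of exactly two target matrices, joint diagonalization can be solved in closed form: whiten with respect to one matrix, then rotate to diagonalize the other. This is the classical two-matrix precursor of the multi-matrix AJD algorithms developed in the thesis; the function name and the choice of lagged covariances as characteristic matrices are illustrative:

```python
import numpy as np

def separate_two_matrix(X, lag=1):
    """Blind source separation by exact joint diagonalization of two
    characteristic matrices: the zero-lag covariance C0 and a
    symmetrized lagged covariance C1 of the mixed signals X
    (channels x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    T = X.shape[1]
    C0 = X @ X.T / T
    C1 = X[:, :-lag] @ X[:, lag:].T / (T - lag)
    C1 = 0.5 * (C1 + C1.T)                 # symmetrize the lagged covariance
    # whiten with respect to C0 ...
    d, E = np.linalg.eigh(C0)
    white = E @ np.diag(d ** -0.5) @ E.T
    # ... then an orthogonal rotation diagonalizes the whitened C1
    _, U = np.linalg.eigh(white @ C1 @ white.T)
    W = U.T @ white                        # demixing matrix
    return W @ X                           # estimated sources
```

With more than two matrices no exact simultaneous diagonalizer exists in general, which is why the approximate, non-orthogonal algorithms of the thesis are needed.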
Amphiphilic molecules contain a hydrophilic headgroup and a hydrophobic tail. The headgroup is polar or ionic and favours water; the tail is typically an aliphatic chain that cannot be accommodated in a polar environment. This molecular asymmetry leads to a spontaneous adsorption of amphiphiles at air/water or oil/water interfaces, changing the surface tension and the surface rheology. Amphiphiles are important tools for deliberately modifying the interfacial properties of liquid interfaces, and they enable phenomena such as foams, which cannot form in a pure liquid. In this thesis we investigate the static and dynamic properties of adsorption layers of soluble amphiphiles at the air/water interface, the so-called Gibbs monolayers. The classical way of investigating these systems is a thermodynamic analysis of the equilibrium surface tension as a function of the bulk composition within the framework of Gibbs theory. However, thermodynamics does not provide any structural information, and several recent publications challenge even fundamental textbook concepts. Experimental investigation faces difficulties imposed by the low surface coverage and by the presence of dissolved amphiphiles in the adjacent bulk phase. In this thesis we used a suite of techniques with the sensitivity to detect less than a monolayer of molecules at the air/water interface. Some of these techniques are extremely complex, such as infrared-visible sum frequency generation (IR-VIS SFG) spectroscopy or second harmonic generation (SHG). Others are traditional techniques, such as ellipsometry, employed in new ways and pushed to new limits. Each technique selectively probes different parts of the interface, and their combination provides a profound picture of the interfacial architecture. The first part of the thesis is dedicated to the distribution of ions at interfaces.
Adsorption layers of ionic amphiphiles serve as model systems that produce a defined surface charge, which is compensated by the counterions. A complex zoo of interactions results in a defined distribution of ions at the interface; its experimental determination, however, is a major scientific challenge. We could demonstrate that a combination of linear and nonlinear techniques gives direct insight into the prevailing ion distribution. Our investigations reveal specific ion effects which cannot be described by classical Poisson-Boltzmann mean-field theories. Adsorption layer and bulk phase are in thermodynamic equilibrium, but it is important to stress that there is a constant molecular exchange between adsorbed and dissolved species. This exchange process is a key element for understanding some of the thermodynamic properties. An excellent way to study Gibbs monolayers is to follow the relaxation from a non-equilibrium to an equilibrium state. Upon compression, amphiphiles must leave the adsorption layer and dissolve in the adjacent bulk phase; upon expansion, amphiphiles must adsorb at the interface to restore the equilibrium coverage. Obviously, the frequency of the expansion and compression cycles must match the molecular exchange processes: at too low a frequency the equilibrium is maintained at all times, while at too high a frequency the system behaves as a monolayer of insoluble surfactants. In this thesis we describe a unique variant of an oscillating bubble technique that precisely measures the real and imaginary parts of the complex dilational modulus E in a frequency range up to 500 Hz. The extension by about two decades in the time domain compared with the conventional oscillating-drop method is a tremendous achievement. The imaginary part of the complex dilational modulus E is a consequence of a dissipative process, which is interpreted as an intrinsic surface dilational viscosity.
The IR-VIS SFG spectra of the interfacial water provide a molecular interpretation of the underlying dissipative process.
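The frequency window described above (equilibrium maintained at low frequency, insoluble-monolayer behaviour at high frequency) is captured by the classical diffusion-controlled exchange model of Lucassen and van den Tempel. The sketch below is that textbook model, given only as a point of reference; whether the thesis fits its data with this particular expression is not stated in the abstract:

```python
import numpy as np

def dilational_modulus(omega, E0, omega_D):
    """Lucassen-van den Tempel complex dilational modulus E(omega)
    for diffusion-limited surfactant exchange.
    E0:      high-frequency (insoluble-monolayer) limit
    omega_D: characteristic diffusion-exchange frequency (rad/s)"""
    zeta = np.sqrt(omega_D / (2.0 * omega))
    return E0 * (1.0 + zeta + 1j * zeta) / (1.0 + 2.0 * zeta + 2.0 * zeta ** 2)
```

In the limit omega >> omega_D the modulus approaches the purely elastic E0; for omega << omega_D it vanishes, since bulk exchange restores the equilibrium coverage at all times.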
The superoxide radical can react with, and damage, almost all components of cells. Medical research has implicated the radical in cancer, heart attacks and neural degeneration. A sensitive superoxide assay is therefore important for a better understanding of disease progression, but the typically low concentrations and the short lifetime of the radical pose considerable challenges. The aim of this work was, on the one hand, to develop two novel protein architectures on metal electrodes and to characterize their electrochemical response; on the other hand, these electrodes were to be employed for sensitive quantitative superoxide detection. In the first part of the work, a protein multilayer electrode was assembled from cytochrome c and the polyelectrolyte poly(aniline sulfonic acid) by the layer-by-layer technique. For two to 15 protein layers, a clear increase in electrode-active cytochrome c was demonstrated with each additional deposition step. The increase was linear and, at 15 layers, amounted to an increase in the redox-active protein amount of well over an order of magnitude. While the formal potential of the multilayer system was unchanged compared with the monolayer electrode, the electron-transfer kinetics were found to depend on the number of protein layers. With increasing scan rate, a reversible loss of contact to the outer layers occurred. The linear increase in electroactive protein with the number of deposition steps differs markedly from the protein/polyelectrolyte multilayer electrodes described in the literature, for which no further increase in electroactive protein is observed beyond about 6-8 layers and the increase in contactable protein molecules is limited to a factor of two to five.
These differences between the newly introduced system and previous multilayer assemblies are explained by an electron-transfer mechanism described here for the first time for such systems: electron transport between the electrode surface and the protein molecules in the layers proceeds via protein-protein electron exchange. This mechanism relies on the fast self-exchange of cytochrome c molecules and on a residual rotational flexibility of the protein within the multilayer system. Reduction of the protein by the superoxide radical and subsequent reoxidation by the electrode were demonstrated. In an amperometric setup, the electrochemical signal elicited by superoxide radicals was measured as a function of the number of protein layers. The maximum response to the radical was obtained with six-layer electrodes, whose sensitivity was improved by a factor of 14, i.e. more than an order of magnitude, compared with the literature value for the monolayer electrode. An electrode with six layers of cytochrome c and poly(aniline sulfonic acid) could thus be established as a novel superoxide sensor with a 14-fold sensitivity improvement over the previously used system. The second part of this work describes the selection, production and characterization of mutants of the protein Cu,Zn superoxide dismutase for the electrochemical quantification of superoxide radicals. Monomeric mutants of the human dimeric enzyme were designed which, through amino-acid substitutions, carried one or two additional cysteine residues by which they were intended to chemisorb directly onto the gold electrode surface. Six such mutants could be obtained in active form in sufficient quantity and purity.
Binding of the superoxide dismutase mutants to gold surfaces was demonstrated by surface plasmon resonance and impedance spectroscopy. All mutants showed quasi-reversible electron transfer between SOD and electrode. By examining copper-free SOD mutants as well as the wild type, it was shown that the mutants were chemisorbed on the electrode via the introduced cysteine residues and that electron transfer occurred between the electrode and the copper in the active site of the SOD. Superoxide dismutase catalyses the decomposition of superoxide both by oxidation and by reduction of the radical, so both half-reactions are of analytical interest. Cyclic voltammetry demonstrated both the oxidation and the reduction of the radical by the immobilized superoxide dismutase mutants, and in amperometric setups both half-reactions could be used for the analytical quantification of superoxide radicals. In the positive potential window, the sensitivity was improved by a factor of about 10 compared with the cytochrome c monolayer electrode.
Electrets are materials capable of storing oriented dipoles or an electric surplus charge for long periods of time. The term "electret" was coined by Oliver Heaviside in analogy to the well-known word "magnet". Initially regarded as a mere scientific curiosity, electrets became increasingly important for applications during the second half of the 20th century. The most famous example is the electret condenser microphone, developed in 1962 by Sessler and West. Today, these devices are produced in annual quantities of more than 1 billion and have become indispensable in modern communications technology. Even though space-charge electrets are widely used in transducer applications, relatively little was known about the microscopic mechanisms of charge storage. It was generally accepted that the surplus charges are stored in some form of physical or chemical traps. However, trap depths of less than 2 eV, obtained via thermally stimulated discharge experiments, conflicted with the observed lifetimes (extrapolations of experimental data yielded more than 100000 years). Using a combination of photostimulated discharge spectroscopy and simultaneous depth-profiling of the space-charge density, the present work shows for the first time that at least part of the space charge in, e.g., polytetrafluoroethylene, polypropylene and polyethylene terephthalate is stored in traps with depths of up to 6 eV, indicating major local structural changes. Based on this information, more efficient charge-storing materials could be developed in the future. The new experimental results could only be obtained after several techniques for characterizing the electrical and electromechanical properties of electrets had been enhanced with in situ capability. For instance, real-time information on space-charge depth profiles was obtained by subjecting a polymer film to short laser-induced heat pulses.
The high data acquisition speed of this technique also allowed the three-dimensional mapping of polarization and space-charge distributions. A highly active field of research is the development of piezoelectric sensor films from electret polymer foams. These materials store charges on the inner surfaces of the voids after having been subjected to a corona discharge, and exhibit piezoelectric properties far superior to those of traditional ferroelectric polymers. By means of dielectric resonance spectroscopy, polypropylene foams (presently the most widely used ferroelectret) were studied with respect to their thermal and UV stability. Their limited thermal stability renders them unsuitable for applications above 50 °C. Using a solvent-based foaming technique, we found an alternative material based on amorphous Teflon® AF, which exhibits a stable piezoelectric coefficient of 600 pC/N at temperatures up to 120 °C.
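The tension between shallow measured trap depths and extremely long storage times noted above follows from the exponential (Arrhenius) dependence of the thermal release time on trap depth. The sketch below is a standard estimate, not a calculation from this work; the attempt-to-escape frequency nu0 = 1e13 s⁻¹ is an assumed textbook value:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def detrapping_time(depth_eV, T=300.0, nu0=1e13):
    """Arrhenius estimate of the mean thermal release time (s) of a
    carrier from a trap of the given depth: tau = exp(E / kT) / nu0."""
    return math.exp(depth_eV / (K_B * T)) / nu0
```

At room temperature a 1 eV trap empties within hours, whereas a 6 eV trap is stable on geological time scales, which is why the deep traps reported here resolve the lifetime puzzle.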
The interactions between peptides and lipids are of fundamental importance in the functioning of numerous membrane-mediated biochemical processes, including antimicrobial peptide action, hormone-receptor interactions, drug bioavailability across the blood-brain barrier and viral fusion. Alteration of peptide structure can be a cause of many diseases. Biological membranes are complex systems, so simplified models are introduced in order to understand processes occurring in nature. Lipid monolayers at the air/water interface are suitable model systems to mimic biological membranes, since many parameters can be easily controlled. In the present work lipid monolayers were used as a model membrane, and their interactions with two different peptides, B18 and the Amyloid beta (1-40) peptide, were investigated. B18 is a synthetic peptide that binds to lipid membranes and induces membrane fusion. It was demonstrated that it adopts different structures in aqueous solution and in the membrane interior: it is unstructured in solution and forms an alpha-helix at the air/water interface or in the membrane-bound state. The peptide has an affinity for negatively charged lipids and can even fold into a beta-sheet structure in the vicinity of charged membranes at high peptide-to-lipid ratio. It was elucidated that in the absence of electrostatic interactions B18 does not influence the lipid structure, whereas it partially fluidizes negatively charged lipids. Understanding the mechanism of the peptide's action in a model system may help to develop new types of antimicrobial peptides and can shed light on the general mechanisms of peptide/membrane binding. The other peptide studied, the Amyloid beta (1-40) peptide, is the major component of the amyloid plaques found in the brains of patients with Alzheimer's disease. Normally the peptide is soluble and not toxic.
During aging, or as a result of the disease, it aggregates and shows a pronounced neurotoxicity. The peptide aggregation involves a conformational transition from a random coil or alpha-helix to beta-sheets. Recently it was demonstrated that the membrane can play a crucial role in peptide aggregation and, moreover, that the peptide can cause changes in cell membranes that lead to neuron death. In the present studies the structure of the membrane-bound Amyloid beta peptide was elucidated. It was found that the peptide adopts a beta-sheet structure at the air/water interface or when adsorbed on lipid monolayers, while it can form an alpha-helical structure in the presence of negatively charged vesicles. The difference between the monolayer system and the bulk system with vesicles is the peptide-to-lipid ratio: the peptide adopts the helical structure at low peptide-to-lipid ratio and folds into beta-sheets at high ratio. Apparently, Abeta peptide accumulation in the brain is concentration driven. An increasing concentration changes the lipid-to-peptide ratio, which induces beta-sheet formation. The negatively charged lipids can act as seeds in plaque formation: the peptide accumulates on the membrane and, when the peptide-to-lipid ratio increases, forms toxic beta-sheet-containing aggregates.
We present an application of imprecise probability theory to the quantification of uncertainty in the integrated assessment of climate change. Our work is motivated by the fact that uncertainty about climate change is pervasive and therefore requires a thorough treatment in the integrated assessment process. Classical probability theory faces some severe difficulties in this respect, since it cannot capture very poor states of information in a satisfactory manner. A more general framework is provided by imprecise probability theory, which offers a similarly firm evidential and behavioural foundation while allowing one to capture more diverse states of information. An imprecise probability describes the information in terms of lower and upper bounds on probability. For the purpose of our imprecise probability analysis, we construct a diffusion ocean energy balance climate model that parameterises the global mean temperature response to secular trends in the radiative forcing in terms of climate sensitivity and effective vertical ocean heat diffusivity. We compare the model behaviour to the 20th century temperature record in order to derive a likelihood function for these two parameters and the forcing strength of anthropogenic sulphate aerosols. Results show a strong positive correlation between climate sensitivity and ocean heat diffusivity, and between climate sensitivity and the absolute strength of the sulphate forcing. We identify two suitable imprecise probability classes for an efficient representation of the uncertainty about the climate model parameters and provide an algorithm to construct a belief function for the prior parameter uncertainty from a set of probability constraints that can be deduced from the literature or observational data.
For the purpose of updating the prior with the likelihood function, we establish a methodological framework that allows us to perform the updating procedure efficiently for two different updating rules: Dempster's rule of conditioning and the Generalised Bayes' rule. Dempster's rule yields a posterior belief function in good qualitative agreement with previous studies that tried to constrain climate sensitivity and sulphate aerosol cooling. In contrast, we are not able to produce meaningful imprecise posterior probability bounds from the application of the Generalised Bayes' rule. We can attribute this result mainly to our choice of representing the prior uncertainty by a belief function. We project the Dempster-updated belief function for the climate model parameters onto estimates of future global mean temperature change under several emissions scenarios for the 21st century, and several long-term stabilisation policies. Within the limitations of our analysis we find that it requires a stringent stabilisation level of around 450 ppm carbon dioxide equivalent concentration to obtain a non-negligible lower probability of limiting the warming to 2 degrees Celsius. We discuss several frameworks of decision-making under ambiguity and show that they can lead to a variety of, possibly imprecise, climate policy recommendations. We find, however, that poor states of information do not necessarily impede a useful policy advice. We conclude that imprecise probabilities constitute indeed a promising candidate for the adequate treatment of uncertainty in the integrated assessment of climate change. We have constructed prior belief functions that allow much weaker assumptions on the prior state of information than a prior probability would require and, nevertheless, can be propagated through the entire assessment process. As a caveat, the updating issue needs further investigation.
Belief functions are a sensible choice for representing prior uncertainty only if more restrictive updating rules than the Generalised Bayes' rule are available.
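The lower/upper probability bounds used throughout this analysis can be illustrated with a toy belief function. The sketch below implements Dempster's rule of combination and the belief/plausibility bounds on a hypothetical three-bin frame for climate sensitivity; the bins and mass values are invented for illustration, and the thesis itself works with Dempster's rule of conditioning in a far richer setting:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset: mass} dictionaries over a common finite frame."""
    raw, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + a * b
        else:
            conflict += a * b          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    k = 1.0 - conflict                 # renormalisation constant
    return {A: v / k for A, v in raw.items()}

def belief(m, A):
    """Lower probability (belief) of event A: mass of all subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def plausibility(m, A):
    """Upper probability (plausibility) of A: mass of all sets meeting A."""
    return sum(v for B, v in m.items() if B & A)
```

A vacuous prior (all mass on the whole frame) leaves any evidence unchanged under combination, which is how a very poor prior state of information is represented.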
Max Weber
(2005)
The website contains selected works of Max Weber in full text. The Potsdam internet edition "PIA" follows the old editions of the 1920s (the "Marianne editions"), on which most Weber scholarship to date has also been based. The project of making Weber's works accessible to electronic processing initially arose from the need for new indexes. We work in educational science, a field for which the indexes available so far are quite inadequate. This has now been remedied: in future, Weber scholars of all disciplines can compile their own indexes. All of the following texts can be downloaded and processed further with suitable software, for keyword and quotation searches but also, of course, for more demanding content analyses, linguistic studies and other projects. The selection of works included here follows no systematic principle. We wanted to make a start and restricted ourselves to those texts that were at hand in old editions, since the more recent editions are protected by copyright. Important works are missing: the writings on the stock exchange, "Wirtschaft und Gesellschaft", the study on Confucianism, the sociology of music, the writings on the Russian Revolution, and others.
The lecture sketches the history of Romance studies in Germany, gives a brief overview of the state of French philology as of 2003, and summarizes the challenges facing the discipline in the Franco-German cultural and political context. It then makes three proposals for changing school and university teaching on the basis of a broad understanding of culture: 1. the introduction of a new, generally compulsory school subject "European studies" ("Connaissances de l'Europe"), to be taught throughout Europe in addition to, or in cooperation with, the subject of history; 2. the systematic supplementation of traditional literary and linguistic studies with courses on Franco-German cultural relations; 3. the supplementation of the traditional Romance-studies curriculum with seminars in cultural management.
"Alexander von Humboldt im Netz" is a project of the Institut für Romanistik at the Universität Potsdam under the academic direction of Prof. Dr. Ottmar Ette. The website aims to present and pool the worldwide activities relating to the great explorer and scholar, to familiarize more people with Alexander von Humboldt's thought, and to give an overview of institutions, events, conferences, exhibitions, projects, libraries and much more.
We study a natural Dirac operator on a Lagrangian submanifold of a Kähler manifold. We first show that its square coincides with the Hodge-de Rham Laplacian provided the complex structure identifies the Spin structures of the tangent and normal bundles of the submanifold. We then give extrinsic estimates for the eigenvalues of that operator and discuss some examples.
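The first result can be stated as a formula (with $d$ and $\delta$ denoting the exterior derivative and codifferential on the submanifold):

```latex
D^{2} \;=\; \Delta_{\mathrm{Hodge}} \;=\; d\,\delta + \delta\, d ,
```

valid precisely when the complex structure identifies the Spin structures of the tangent and normal bundles.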
This work addressed two topics: the mechanical properties of polyelectrolyte hollow capsules and the adhesion of such capsules. The mechanical properties were investigated with the AFM colloidal-probe technique. For small deformations, the force-deformation curves show the linear behaviour predicted by shell theory, and the quadratic dependence of the spring constant on the shell thickness was confirmed. For PAH/PSS an elastic modulus of 0.25 GPa is found. Together with the facts that the deformation curves are independent of speed and show practically no hysteresis, and that the capsules can be deformed plastically, this indicates that the system is in a glassy state. As expected, pH had a strong influence on the polyelectrolyte multilayers. While no morphological changes could be detected between pH 2 and 11.5, the radius increased by up to 50 % at pH 12. This change in radius was reversible and was accompanied by a visible softening of the capsules; a decrease of the elastic modulus by at least three orders of magnitude was confirmed by force-deformation measurements. These curves show strong hysteresis: the system is no longer in a glassy state but has become viscous to rubber-like. Measurements on capsules treated with glutaraldehyde showed that the treatment alters the pH-dependent behaviour, which can be attributed to cross-linking of the PAH by the glutaraldehyde. At a high degree of cross-linking, the capsules show no change in mechanical behaviour at pH 12; weakly cross-linked capsules still soften significantly at pH 12, but their radius no longer changes.
Multilayer capsules whose stability rests not on electrostatic interactions but on hydrogen bonds were also investigated. These capsules showed a markedly higher stiffness, with elastic moduli of up to 1 GPa. This system, too, shows a linear force-deformation behaviour for small deformations, and the spring constant again depends quadratically on the thickness. The capsules dissolve practically immediately at pH 6.5; near this pH, the decrease of the spring constant could be followed. In addition, the adhesion behaviour of PAH/PSS capsules on PEI-coated glass was investigated. The adhesion areas were largely circular and could be evaluated quantitatively. The adhesion radius increases with the capsule radius and decreases with the thickness. The behaviour could be described with two models, one for large and one for small deformations. The large-deformation model yields adhesion energies an order of magnitude lower than the small-deformation model, which, with values around -0.2 mJ/m², lies in a plausible range. Buckling was found to occur at a thickness-to-deformation ratio of about one; this point also marked the transition from the large- to the small-deformation regime.
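The small-deformation behaviour described above (force linear in deformation, spring constant quadratic in the shell thickness) is what classical shell theory predicts for a point-loaded thin spherical shell. The sketch below uses the Reissner stiffness formula as an illustration; the exact prefactor and the Poisson ratio nu = 0.5 are textbook assumptions, not values taken from this work:

```python
import math

def shell_stiffness(E, h, R, nu=0.5):
    """Reissner small-deformation stiffness k = F/delta (N/m) of a thin
    spherical shell: Young's modulus E (Pa), thickness h (m), radius R (m)."""
    return 4.0 * E * h ** 2 / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))
```

The quadratic thickness dependence reported above follows directly: doubling h quadruples k, while k scales linearly with E and inversely with R.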
Adhesion of biological cells to their environment is mediated by two-dimensional clusters of specific adhesion molecules which are assembled in the plasma membrane of the cells. Due to the activity of the cells or to external influences, these adhesion sites are usually subject to physical forces. In recent years, the influence of such forces on the stability of cellular adhesion clusters has been increasingly investigated. In particular, experimental methods originally designed for the investigation of single-bond rupture under force have been applied to the rupture of adhesion clusters. The transition from single to multiple bonds, however, is not trivial and requires theoretical modelling. Rupture of biological adhesion bonds is a thermally activated, stochastic process. In this work, a stochastic model for the rupture and rebinding dynamics of clusters of parallel adhesion molecules under force is presented. The influence of (i) a constant force, as may be assumed for cellular adhesion clusters, and (ii) a linearly increasing force, as commonly used in experiments, is investigated. Special attention is paid to the force-mediated cooperativity of parallel adhesion bonds. Finally, the influence of a finite distance between receptors and ligands on the binding dynamics is investigated; the distance can be bridged by polymeric linker molecules which tether the ligands to a substrate.
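A minimal version of such a stochastic cluster model can be sketched with Bell-type rates and a Gillespie simulation. Everything below (equal load sharing, the specific rate expressions, the parameter values) is an illustrative assumption in the spirit of the model described, not the thesis's actual formulation:

```python
import math
import random

def cluster_lifetime(N=10, f=2.0, gamma=1.0, k0=1.0, rng=None):
    """Gillespie simulation of a cluster of N parallel bonds under a
    constant total force f (in units of the single-bond force scale).
    The i closed bonds share the load equally; each ruptures with the
    Bell rate k0 * exp(f / i), and each open bond rebinds with rate
    gamma * k0. Returns the time until all bonds are open."""
    rng = rng or random.Random()
    i, t = N, 0.0
    while i > 0:
        r_off = i * k0 * math.exp(f / i)   # total rupture rate
        r_on = (N - i) * gamma * k0        # total rebinding rate
        total = r_off + r_on
        t += rng.expovariate(total)        # waiting time to next event
        if rng.random() < r_off / total:
            i -= 1                         # one bond ruptures
        else:
            i += 1                         # one open bond rebinds
    return t
```

Because the constant total force f is shared by the i bonds still closed, each rupture raises the load on the survivors; this produces the force-mediated cooperativity (rupture cascades) discussed above.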
The authors address researchers and practitioners interested in sustainable regional development, regional inequalities and, in particular, rural areas. The study compiles current statistical material on sustainability resources and deficits for the nine settlement-structural district types of Germany defined by federal spatial planning. The analysis centres on four rural region types that differ in their development potentials and problem situations. Using available data, they are compared with NUTS-3 and NUTS-2 regions of the European Union, showing how German rural areas are positioned in a European context. The results of the analysis are illustrated with tables and maps of the spatial distribution of resources. Sustainability balances are summarized for the four rural district types in Germany and compared with European rural area types. Finally, considerations for spatially specific paths of regional development are put forward for discussion for each rural district type analysed. The authors are long-standing members of the research group "Umweltsoziologie" at the Faculty of Economics and Social Sciences of the Universität Potsdam.
Selenium, first mentioned in writing in 1817, was long regarded as merely toxic and even procarcinogenic, until Schwarz and Foltz recognized it in 1957 as an essential trace element whose biological functions in mammals are mediated by selenoproteins. The family of glutathione peroxidases occupies an important position here. For these enzymes, concrete functions and the underlying molecular mechanisms beyond the hydroperoxide reduction they catalyse, and the antioxidative capacity associated with it, have so far been described only insufficiently. The function of the gastrointestinal glutathione peroxidase (GI-GPx) is defined as a barrier against hydroperoxide absorption in the gastrointestinal tract. According to recent findings, however, GI-GPx is also overexpressed in various tumours, which makes further, hitherto unknown functions of this enzyme likely. To infer possible new functions of GI-GPx, especially during carcinogenesis, its transcriptional regulation was examined here in more detail. Sequence analysis of the human GI-GPx promoter revealed two putative antioxidant response elements (ARE), which are recognition sequences of the transcription factor Nrf2. Most known Nrf2 target genes belong to the group of phase II enzymes and possess antioxidative and/or detoxifying properties. At the promoter level as well as at the mRNA and protein levels, GI-GPx expression could be induced by typical dietary Nrf2 activators such as sulforaphane or curcumin. A direct involvement of Nrf2 was demonstrated by cotransfection of Nrf2 itself or of Keap1, which retains Nrf2 in the cytoplasm. GI-GPx could thus be unambiguously identified as an Nrf2 target gene.
Whether GI-GPx can be classed among the anti-inflammatory and anticarcinogenic phase II enzymes remains to be examined. The phospholipid hydroperoxide glutathione peroxidase (PHGPx) occupies a special position within the glutathione peroxidase family owing to its broad substrate spectrum, its high lipophilicity and its ability to modify thiols. Using a PHGPx-overexpressing cell model, the influence on the cellular redox status and the resulting changes in the activity of redox-sensitive transcription factor systems and in the expression of atherosclerosis-relevant adhesion molecules were therefore investigated. NF-kB and Nrf2 were chosen as transcription factors. Binding of NF-kB to its responsive element in the DNA requires free thiols, whereas Nrf2 is released by thiol modification of Keap1 and translocates to the nucleus. Increased PHGPx activity resulted in an increased ratio of GSH to GSSG but in a reduced labelling of free protein thiols. PHGPx overexpression reduced IL-1-induced NF-kB activity, reflected in diminished NF-kB DNA binding and transactivation activity; the proliferation rate of the cells was also reduced. Expression of the NF-kB-regulated vascular cell adhesion molecule VCAM-1 was likewise clearly decreased. Conversely, PHGPx-overexpressing cells showed increased Nrf2 activity and increased expression of the Nrf2-dependent haem oxygenase-1; the latter can account for most of the observed effects. The results presented here make clear that modification of protein thiols can be regarded as an important determinant of the regulation of the expression and function of glutathione peroxidases.
Contrary to earlier assumptions that generally associated oxidative processes with pathological changes, moderate oxidative stress caused by transient thiol modification appears to have quite beneficial effects, since, as shown here, various interacting cytoprotective mechanisms are triggered. This makes clear that "antioxidative action" and "oxidative stress" can by no means be reduced to "good" or "bad" processes, but must be considered in the context of the (patho)physiological processes affected and the extent of the "disturbance" of the physiological redox equilibrium.
Diagnostics in training science, covering the core areas of training, competition and performance, is characterized by a strong practical orientation, pronounced structural complexity and manifold interactions between the subfields of sport science. In the past, these properties have meant that central questions, such as maximizing athletic performance, economical training design, effective talent selection and scouting, or model building, could not yet be fully resolved. In addition to existing linear approaches, this thesis employs methods from the field of neural networks. These nonlinear diagnostic procedures are particularly suited to the analysis of process sequences such as those found in training. The theoretical part first examines commonalities, dependencies and differences between training, competition and performance, and builds the bridge between training-science diagnostics and nonlinear methods via the concepts of interdisciplinarity and integrativity. Following the theory of neural networks, the basic models perceptron, multilayer perceptron and self-organizing maps are then explained theoretically. The empirical part focuses on the nonlinear analysis of personal requirement structures, states of athletic form, and the prediction of athletic talent, all in adolescent competitive swimmers. The nonlinear methods are evaluated both for their scientific explanatory power and in comparison with one another and with linear methods.
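Of the three network models named, the perceptron is the simplest; a minimal sketch of Rosenblatt's learning rule may help fix the idea. The toy data and hyperparameters below are illustrative only, not taken from the thesis:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=20):
    """Rosenblatt perceptron: weights change only on a misclassification."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # update rule: w <- w + lr * (target - prediction) * x
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

def predict(w, b, X):
    """Threshold activation: 1 if w.x + b > 0, else 0."""
    return (X @ w + b > 0).astype(int)

# Toy, linearly separable problem: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(predict(w, b, X))  # [0 0 0 1]
```

A single perceptron can only separate linearly; the multilayer perceptron and self-organizing maps used in the thesis extend this to the nonlinear structures found in training data.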
This paper focuses on mysteries written by the Afro-American women authors Barbara Neely and Valerie Wilson Wesley. Both authors place a black woman in the role of the detective - an innovative feature not only in the realm of female detective literature of the past two decades but also with regard to the current discourse about race and class in US-American society. This discourse is important because detective novels are considered popular literature and thus a mass product designed to favor commercial rather than literary claims. The focus is therefore placed on the development of the two protagonists, on their lives as detectives and as black women, in order to find out whether and how the genre influences the depiction of Afro-American experiences. It appears that the two detective series represent Afro-American culture in different ways, which confirms a heterogeneous development of this ethnic group. However, the protagonists' search for identity and their relationships to white people could be identified as a major unifying concern of Afro-American literature. With differing intensity, the authors Neely and Wesley provide the white or mainstream reader with insight into their culture and confront the reader's ignorance of black culture. In light of this, it is a great achievement that Neely and Wesley have reached not only a black audience but also a growing number of white readers.
Nitrogen is an essential macronutrient for plants, and nitrogen fertilizers are indispensable for modern agriculture. Unfortunately, we know too little about how plants regulate their use of soil nitrogen to maximize fertilizer-N use by crops and pastures. This project took a dual approach, involving forward and reverse genetics, to identify N-regulators in plants, which may prove useful in the future to improve nitrogen-use efficiency in agriculture. To identify nitrogen-regulated transcription factor (TF) genes in Arabidopsis that may control N-use efficiency, we developed a unique resource for qRT-PCR measurements on all Arabidopsis transcription factor genes. Using closely spaced, gene-specific primer pairs and SYBR® Green to monitor amplification of double-stranded DNA, transcript levels of 83% of all target genes could be measured in roots or shoots of young Arabidopsis wild-type plants. Only 4% of reactions produced non-specific PCR products, and 13% of TF transcripts were undetectable in these organs. Measurements of transcript abundance were quantitative over six orders of magnitude, with a detection limit equivalent to one transcript molecule in 1000 cells. Transcript levels for different TF genes ranged between 0.001 and 100 copies per cell. Real-time RT-PCR revealed 26 root-specific and 39 shoot-specific TF genes, most of which had not previously been identified as organ-specific. An enlarged and improved version of the TF qRT-PCR platform now contains primer pairs for 2256 Arabidopsis TF genes, representing 53 gene families and sub-families arrayed on six 384-well plates. Set-up of real-time PCR reactions is now fully robotized; one researcher is able to measure expression of all 2256 TF genes in a single biological sample in just one working day. The Arabidopsis qRT-PCR platform was successfully used to identify 37 TF genes that responded transcriptionally to N-deprivation or to nitrate per se.
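Quantification over several orders of magnitude with SYBR Green chemistry conventionally rests on a dilution-series standard curve relating threshold cycle (Ct) to template copy number. The following Python sketch shows that standard-curve arithmetic; the Ct values are hypothetical placeholders, not data from this work:

```python
import numpy as np

# Hypothetical dilution series: known copy numbers and measured Ct values.
# An ideal reaction gains one Ct per halving of template, i.e. a slope of
# about -3.32 per decade; here we use -3.3 for illustration.
copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
ct = np.array([33.2, 29.9, 26.6, 23.3, 20.0, 16.7])

# Linear fit: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct, 1)

# Amplification efficiency E = 10**(-1/slope) - 1 (E ≈ 1.0 means doubling)
efficiency = 10 ** (-1 / slope) - 1

def copies_from_ct(ct_sample):
    """Invert the standard curve to estimate absolute copy number."""
    return 10 ** ((ct_sample - intercept) / slope)
```

With such a curve, a sample Ct of 26.6 maps back to roughly 10³ copies, and spacing the standards one decade apart covers the six orders of magnitude of quantitative range reported above.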
Most of these genes had not been characterized previously. Further selection of TF genes, based on the responses of selected candidates to other macronutrients and abiotic stresses, allowed us to distinguish between TFs (i) regulated specifically by nitrogen (29 genes), (ii) regulated by macronutrients in general or by salt and osmotic stress (6 genes), and (iii) responding to all major macronutrients and to abiotic stresses. Most of the N-regulated TF genes were also regulated by carbon. Further characterization of sixteen selected TF genes revealed: (i) a lack of transcriptional response to organic nitrogen, (ii) two major types of induction kinetics by nitrate, and (iii) for the majority of the genes, specific responses to nitrate but not to downstream products of nitrate assimilation. All sixteen TF genes were cloned into binary vectors for constitutive and ethanol-inducible overexpression, and the first generation of transgenic plants was obtained for almost all of them. Some of the plants constitutively overexpressing TF genes under control of the 35S promoter revealed visible phenotypes in the T1 generation. Homozygous T-DNA knockout lines were also obtained for many of the candidate TF genes. So far, one knockout line has revealed a visible phenotype: retarded flowering time. A forward genetic approach using an Arabidopsis ATNRT2.1 promoter:Luciferase reporter line resulted in the identification of eleven EMS mutant reporter lines affected in the induction of ATNRT2.1 expression by nitrate. These lines could be divided into the following classes according to the expression of other genes involved in primary nitrogen and carbon metabolism: (i) lines affected exclusively in nitrate transport, (ii) lines affected in nitrate transport and acquisition, but also in glycolysis and the oxidative pentose phosphate pathway, and (iii) mutants moderately affected in nitrate transport, the oxidative pentose phosphate pathway and glycolysis, but not in primary nitrate assimilation.
Thus, several different N-regulatory genes may have been mutated across this set of lines. Map-based cloning to identify the affected genes has begun.
Engaging with the topic of change management requires grappling with a heterogeneous field of approaches and disciplinary perspectives. There is a lack of systematic empirical studies on the topic; in particular, studies that consider more than one "school" of change management are missing. Moreover, differences in situational demands are often insufficiently taken into account, both theoretically and empirically. It seems plausible that the failure of change processes is frequently caused by the stereotyped application of the generalized recommendations of common approaches. To address these deficits, this thesis set out to empirically test contingencies between situational demands and change management. The study is based on a conception that attributes project success to the ideal fit (contingency) between situational demand and change management, while taking associated process-related influencing factors into account. Success is defined, in the sense of sustainability, as effects of an economic, organizational and qualification-related nature. In three sub-studies, consultants and company participants were surveyed in writing and orally, on a project-by-project basis, about the initial company situation, change principles, effects and process-related influencing factors. The first sub-study comprises four case studies, in which a total of 18 project participants (consultants, company project leaders and project staff) were interviewed. The second sub-study comprises written and oral surveys of 31 consultants from different schools of change management. In the third sub-study, 47 company change managers were surveyed in writing.
The projects of the second and third sub-studies could each be divided into two statistically validated success groups, and the success groups did not differ systematically on characteristics of the initial company situation. The most important results are as follows. In traditional-bureaucratic organizational structures, a long-term and continuous, gradually adapting, pragmatic, solution-oriented approach is associated with project success, whereas in flexible structures a short-term, far-reaching and integrative-conceptual approach is. In traditional-hierarchical leadership structures, a weakly human-centred approach with little self-assessment and a standardized procedural method proves promising; in flexible leadership structures, a strongly human-centred, markedly self-assessing approach with an adapted procedural method does. Where a company's change expertise is extensive, a self-assessing approach proves promising; where it is limited, an approach with little self-assessment does. In the face of economic demands, a long-term and continuous, fast, rolling-planned approach that is far-reaching and integrative-conceptual, with little use of external assessment, is associated with positive effects. For technological demands, a long-term and continuous, slow and far-reaching approach with little external assessment is promising. For sociocultural demands, a slow, self-assessing, gradually adapting and pragmatically focused approach proves successful.
In the face of political-legal demands, a linearly planned, goal-focused, expert-consulting approach with little process orientation is associated with success. For overall reorganizations as intra-organizational demands, a linearly planned, far-reaching approach combining expert and process consulting proves promising. For intra-organizational demands arising from leadership changes, a short-term, temporary, integrative-conceptual approach with an adapted procedural method is associated with success. With regard to process-related influencing factors, situation-specific consultant behaviour, supportive leadership behaviour, high acceptance of the consultant, comprehensive involvement of employees, active participation and commitment of employees, anchoring of the project in the organization, and a high perceived benefit of the change prove to be the most important supporting factors. Resistance from the workforce, problematic leadership behaviour during the change process, missing or inadequate resources provided alongside day-to-day business, obstructive organizational (non-project-related) developments, and fear and uncertainty among the workforce are the most important inhibiting factors.
The present work was carried out within the multidisciplinary German-Russian joint project "Laptev Sea 2000". The soil-science and microbiological investigations presented here aimed to characterize the microbial community of a permafrost soil in the Siberian Lena Delta, with particular attention to the methanogenic archaea. Sampling took place in August 2001 in the central Lena Delta, on Samoylov Island. The delta lies in the zone of continuous permafrost, meaning that only a shallow seasonal active layer thaws during the summer months. The soil profile investigated was located in the centre of a low-centre polygon representative of the landscape. At the time of sampling, the thaw depth of the soil was 45 cm, and the water table stood 18 cm below the ground surface, so that all deeper horizons were characterized by anaerobic conditions. Analysis of the soil parameters revealed, among other things, carbon and nitrogen concentrations decreasing with depth, as well as decreasing temperature and root density. To determine how the soil properties changing with depth affect the microorganisms, the microbial populations at the various soil depths were described with respect to their abundance, activity and composition using fluorescence in situ hybridization. To characterize the physiological profile of these communities with respect to the carbon compounds they can metabolize, BIOLOG microtitre plates were used under conditions adapted to those in situ. The soil parameters changing along the profile, above all the decreasing substrate supply, the low temperature and the anaerobic conditions in the lower soil layers, led to a change in the microbial population within the soil profile.
Thus, from top to bottom, the total number of microorganisms detected decreased from 23.0 × 10⁸ to 1.2 × 10⁸ cells g⁻¹. At the same time, the proportion of active cells fell from 59% to 33%, meaning that about 35 times more active cells g⁻¹ were found at 0-5 cm than at 40-45 cm. Using specific rRNA probes, a decrease in diversity with increasing soil depth was additionally demonstrated. The lower activity of the population in the lower horizons, as well as the differences in composition, affected the degradation of organic matter: the substrates offered on the BIOLOG microtitre plates were degraded more slowly and less completely at greater depth. In the upper 5 cm in particular, some of the polymers and carbohydrates offered were metabolized markedly better than in the rest of the profile. The fact that these substrates were also metabolized markedly worse under anaerobic test conditions can be interpreted to mean that the constantly anaerobic conditions in the lower horizons hinder the occurrence of the species that metabolize them. The much higher cell numbers and activities in the upper, aerobic soil sections, and the resulting faster carbon turnover, also lead to a better substrate supply for the methanogenic archaea in the macroscopically aerobic horizons. This increased substrate availability explains why most methanogenic archaea were found at 0-5 cm, even though this zone lay above the water-saturated part of the soil at the time of sampling. Despite the aerobic conditions, the 5-10 cm zone offers the combination of substrate supply and anaerobic niches best suited to the methanogenic archaea. In addition, summer temperatures at these depths are somewhat higher than in the deeper horizons, which in turn has a positive effect on activity.
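The roughly 35-fold difference in active cells per gram follows directly from the reported totals and active-cell fractions; a quick arithmetic check:

```python
# Active cells per gram at the top and bottom of the active layer,
# computed from the reported totals and active-cell fractions.
top_total, top_active_frac = 23.0e8, 0.59       # 0-5 cm: 23.0 × 10⁸ cells/g, 59% active
bottom_total, bottom_active_frac = 1.2e8, 0.33  # 40-45 cm: 1.2 × 10⁸ cells/g, 33% active

top_active = top_total * top_active_frac          # ≈ 13.6 × 10⁸ active cells/g
bottom_active = bottom_total * bottom_active_frac # ≈ 0.40 × 10⁸ active cells/g

ratio = top_active / bottom_active
print(round(ratio))  # ≈ 34, i.e. the roughly 35-fold difference reported
```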
Taking together the findings on the abundance, activity, composition and performance of the total microbial population, and of the methanogenic population in particular, it becomes clear that, from an ecological point of view, the upper 15-20 cm of the investigated soil profile constitute the zone most relevant to carbon turnover. The interplay of key soil parameters such as soil temperature, water table, nutrient supply and rooting means that, in the upper 15-20 cm of the investigated tundra soil, a substantially larger and more diverse microbial community exists, which ensures faster and more comprehensive carbon turnover in this part of the active layer.
In conventional microarray experiments for gene expression analysis, fluorescently or radioactively labelled cDNA or RNA is hybridized with immobilized probes. For a well-detectable and analysable result, however, at least 15-20 µg of hybridization target are required per array. To this end, either 15-20 µg of RNA must be transcribed directly into labelled cDNA by reverse transcription, or, if less starting material is available, the RNA must be amplified (standard Affymetrix protocols; Klur et al. 2004). This often entails time-consuming and costly sample preparation, and the result is not always reproducible. Although several protocols now attempt to solve this problem (Zhang et al. 2001, Iscove et al. 2002, McClintick et al. 2003, Stirewalt et al. 2004), an optimal, easily handled and reproducible method is still lacking, which is why a further approach was sought in this work. The present work describes two simple methods for detecting genes from small amounts of RNA: first, on-chip RT-PCR with cDNA as template, and second, the same method as a one-step reaction with RNA as template. Both methods are based on the principle of PCR on primers immobilized on a chip surface. This mode of exponential amplification is reproducible and sensitive. In experiments to establish the on-chip PCR system, various coupling methods were used to immobilize the primers. Affinity coupling via biotin-streptavidin proved suitable. The on-chip reaction on covalently bound primers was demonstrated for amino-modified primers on epoxy surfaces and for EDC coupling on silanized surfaces. For the latter method, the on-chip PCR was optimized such that primer spotting concentrations of 5-10 µM are already sufficient.
The use of fluorescently labelled primers during the PCR allows immediate readout after synthesis, without additional detection steps. In this work, the simultaneous detection of two genes with the presented method was also demonstrated. The method can be extended into a multiplex analysis in order to detect several genes simultaneously in a single reaction. The results of experiments with templates from different cell types indicate that on-chip RT-PCR offers a further excellent method for the detection of weakly expressed genes.
In this thesis, we give two constructions of Riemannian metrics on Seiberg-Witten moduli spaces. Both constructions are naturally induced from the L2-metric on the configuration space. The construction of the so-called quotient L2-metric is very similar to the construction of an L2-metric on Yang-Mills moduli spaces given by Groisser and Parker. To construct a Riemannian metric on the total space of the Seiberg-Witten bundle in a similar way, we define the reduced gauge group as a subgroup of the gauge group. We show that the quotient of the premoduli space by the reduced gauge group is isomorphic, as a U(1)-bundle, to the quotient of the premoduli space by the based gauge group. The total space of this new representation of the Seiberg-Witten bundle carries a natural quotient L2-metric, and the bundle projection is a Riemannian submersion with respect to these metrics. We compute explicit formulae for the sectional curvature of the moduli space in terms of Green operators of the elliptic complex associated with a monopole. Further, we construct a Riemannian metric on the cobordism between moduli spaces for different perturbations. The second construction of a Riemannian metric on the moduli space uses a canonical global gauge fixing, which represents the total space of the Seiberg-Witten bundle as a finite-dimensional submanifold of the configuration space. We consider the Seiberg-Witten moduli space on a simply connected Kähler surface. We show that the moduli space (when nonempty) is a complex projective space if the perturbation does not admit reducible monopoles, and that the moduli space consists of a single point otherwise. The Seiberg-Witten bundle can then be identified with the Hopf fibration. On the complex projective plane with a special Spin-C structure, our Riemannian metrics on the moduli space are Fubini-Study metrics. Correspondingly, the metrics on the total space of the Seiberg-Witten bundle are Berger metrics.
We show that the diameter of the moduli space shrinks to 0 when the perturbation approaches the wall of reducible perturbations. Finally, we show that the quotient L2-metric on the Seiberg-Witten moduli space on a Kähler surface is a Kähler metric.
Diagenetic studies of carbonate rocks long focused on photozoan carbonate assemblages deposited in tropical climates, and the results of these investigations were taken as models for the diagenetic evolution of many fossil carbonates. Only in recent years has the importance of heterozoan carbonates, generally formed outside the tropics or in deeper waters, been recognized. Diagenetic studies focusing on this kind of rock are still scarce, but they indicate that the diagenetic evolution of these rocks might be a better model for many fossil carbonate settings ("calcite-sea" carbonates) than the photozoan model used before. This study deals with the determination of the diagenetic pathways and environments in such shallow-water heterozoan carbonate assemblages. Special emphasis is placed on the identification of early, near-seafloor diagenetic processes and on the evaluation of the amount of constructive diagenesis, in the form of cementation, in this diagenetic environment. The Central Mediterranean (the Maltese Islands and Sicily) was chosen as the study area. Here, two sections were logged in Oligo-Miocene shallow-water carbonates consisting of different kinds of heterozoan assemblages. The study area is very suitable for the investigation of constructive early diagenetic processes, as the rocks were never deeply buried, and burial-diagenetic pressure solution and cementation could be ruled out as the cause of lithification. Nevertheless, the carbonate rocks are well lithified and form steep cliffs, implying cementation and lithification in another, shallower diagenetic environment. To determine the diagenetic pathways and environments, detailed transmitted-light and cathodoluminescence petrography was carried out on thin sections. Furthermore, the stable isotope (δ¹⁸O and δ¹³C) composition of the bulk rock, single biota and single cement phases was determined, as well as the major and trace element composition of the single cement phases.
Petrographically, three (Sicily) to four (Maltese Islands) cementation phases, two phases of fabric-selective and one of non-fabric-selective dissolution, one phase of neomorphism, and one of chemical compaction could be distinguished. The stable isotope measurements of the single cement phases pointed to cement precipitation from marine, marine-derived and meteoric waters. The trace element analysis indicated precipitation under reducing conditions, (A) in an open system with low rock-water interaction on the Maltese Islands and (B) in a closed system with high rock-water interaction on Sicily. For the closed-system case, aragonite could be inferred as the cement source, because its chemical composition was preserved in the newly formed cements. By integrating these results, diagenetic pathways and environments for the investigated locations were established, and the cement source(s) in the different environments were determined. The diagenetic evolution started in the marine environment with the precipitation of fibrous/fibrous-bladed and epitaxial cement I. These cements formed as high-Mg calcite (HMC) directly out of marine waters. The paleoenvironmentally shallowest part of the section on the Maltese Islands was also exposed to meteoric diagenetic fluids. This meteoric influence led to the dissolution of aragonitic and HMC skeletons, which sourced the cementation by low-Mg calcite (LMC) epitaxial cement II in this part of the Maltese section. On entering the burial-marine environment, the main part of the dissolution, cementation and neomorphism began to take place. The elevated CO2 content in this environment, caused by the decay of organic matter, led to the dissolution of aragonitic skeletons, which sourced the cementation by LMC epitaxial cement II and by bladed and blocky cements. The earlier precipitated HMC cement phases were either partly dissolved (epitaxial cement I) or neomorphosed to LMC (fibrous/fibrous-bladed and epitaxial cement I).
In the burial environment, weak chemical compaction took place without sourcing significant amounts of cementation. In a last phase, the rocks entered the meteoric realm through uplift, which caused non-fabric-selective dissolution. This study shows that early diagenetic processes, taking place at or just below the sediment-water interface, are very important for the mineralogical stabilization of heterozoan carbonate strata. The main amount of constructive diagenesis, in the form of cementation, takes place in this environment, sourced by dissolution of aragonitic and, to a lesser degree, HMC skeletons. The results of this study imply that the primary amount of aragonitic skeletons in heterozoan carbonate sediments must be carefully assessed, as they are the main early diagenetic cement source. In fossil heterozoan carbonate rocks, aragonitic skeletons might be the cement source even when no relict structures, such as micritic envelopes or biomolds, are preserved. In general, the diagenetic evolution of heterozoan carbonate rocks is a good model for the diagenesis of "calcite-sea" time carbonate rocks.
Stochastic information, understood as "information gained by the application of stochastic methods", is proposed as a tool in the assessment of changes in climate. This thesis aims to demonstrate that stochastic information can improve the consideration and reduction of uncertainty in the assessment of changes in climate. The thesis consists of three parts. In part one, an indicator is developed that allows the determination of the proximity to a critical threshold. In part two, the tolerable windows approach (TWA) is extended to a probabilistic TWA. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted within the TWA. The thermohaline circulation (THC) is a circulation system in the North Atlantic that may break down in a saddle-node bifurcation under the influence of climate change. Due to uncertainty in ocean models, it is currently very difficult to determine the distance of the THC from the bifurcation point. We propose a new indicator of the system's proximity to the bifurcation point that treats the THC as a stochastic system and uses the information contained in the fluctuations of the circulation around its mean state. As the system is moved closer to the bifurcation point, the power spectrum of the overturning becomes "redder", i.e. more energy is contained in the low frequencies. Since these spectral changes are a generic property of the saddle-node bifurcation, the method is not limited to the THC but could also be applied to other systems, e.g. transitions in ecosystems. In part two, a probabilistic extension of the tolerable windows approach (TWA) is developed. In the TWA, the aim is to determine the complete set of emission strategies that are compatible with so-called guardrails. Guardrails are limits to impacts of climate change or to climate change itself.
The TWA thus determines the "maneuvering space" humanity has if certain impacts of climate change are to be avoided. Due to uncertainty, it is not possible to definitely exclude the impacts of climate change considered; there will always be a certain probability of violating a guardrail. The TWA is therefore extended to a probabilistic TWA that can handle "probabilistic uncertainty", i.e. uncertainty that can be expressed as a probability distribution or that arises through natural variability. As a first application, temperature guardrails are imposed, and the dependence of emission reduction strategies on probability distributions for climate sensitivity is investigated. The analysis suggests that it will be difficult to observe a temperature guardrail of 2°C with a high probability of actually meeting the target. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted. A simple hydrological model is presented, along with a downscaling scheme that allows the reconstruction of the spatio-temporal natural variability of temperature and precipitation. These are used to determine a probabilistic climate impact response function (CIRF), a function that allows the assessment of changes in the probability of certain flood events under a changed climate. The assessment of changes in flooding probability is conducted for 83 major river basins. Not all floods can be considered: events that happen very fast or affect only a very small area are excluded, but large-scale flooding due to strong, longer-lasting precipitation events is captured. Finally, the probabilistic CIRFs obtained are used to determine emission corridors, where the guardrail is a limit to the fraction of the world population affected by a predefined shift in the probability of the 50-year flood event. This latter analysis has two main results.
First, the uncertainty about regional changes in climate is still very high; second, even small amounts of further climate change may lead to large changes in flooding probability in some river systems.
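The spectral "reddening" indicator from part one can be illustrated with a minimal simulation: near a saddle-node bifurcation, the linearized fluctuation dynamics behave like an Ornstein-Uhlenbeck process whose restoring rate shrinks toward zero, which shifts spectral power toward low frequencies. The following Python sketch (parameters are illustrative, not taken from the thesis or from any ocean model) demonstrates the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_series(lam, n=20000, dt=0.1, sigma=1.0):
    """Euler-Maruyama simulation of dx = -lam*x*dt + sigma*dW: the
    linearized fluctuations around the stable state, with restoring
    rate lam decreasing as the bifurcation is approached."""
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] * (1.0 - lam * dt) + noise[i]
    return x

def low_freq_fraction(x, dt=0.1, cutoff=0.01):
    """Fraction of total spectral power below `cutoff`: a simple
    'reddening' indicator that grows as the bifurcation nears."""
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return power[freqs < cutoff].sum() / power.sum()

far = low_freq_fraction(ou_series(lam=1.0))    # far from the bifurcation
near = low_freq_fraction(ou_series(lam=0.05))  # close to the bifurcation
print(near > far)  # the spectrum is "redder" near the bifurcation
```

The OU spectrum is S(f) ∝ 1/(lam² + (2πf)²), so as lam → 0 the corner frequency moves toward zero and the low-frequency fraction rises; this generic behaviour is what makes the indicator transferable beyond the THC.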
The multidrug and toxic compound extrusion (MATE) family includes hundreds of functionally uncharacterised proteins from bacteria and all eukaryotic kingdoms except the animal kingdom, which function as drug/toxin::Na⁺ or H⁺ antiporters. In Arabidopsis thaliana, the MATE family comprises 56 members, one of which is NIC2 (Novel Ion Carrier 2). Using heterologous expression systems, including Escherichia coli and Saccharomyces cerevisiae, and the homologous expression system of Arabidopsis thaliana, the functional characterisation of NIC2 was performed. It was demonstrated that NIC2 confers resistance of E. coli towards chemically diverse compounds such as tetraethylammonium chloride (TEACl), tetramethylammonium chloride (TMACl) and a toxic analogue of indole-3-acetic acid, 5-fluoro-indole-acetic acid (F-IAA). Therefore, NIC2 may be able to transport a broad range of drugs and toxic compounds. In wild-type yeast, the expression of NIC2 increased tolerance towards lithium and sodium, but not towards potassium and calcium. In A. thaliana, the overexpression of NIC2 led to strong phenotypic changes. Under normal growth conditions, overexpression caused an extremely bushy phenotype with no apical dominance but an enhanced number of lateral flowering shoots. The numbers of rosette leaves and of flowers with accompanying siliques were also much higher than in wild-type plants, and senescence occurred earlier in the transgenic plants. In contrast, RNA interference (RNAi) used to silence NIC2 expression induced early flower stalk development and flowering compared with wild-type plants. In addition, the main flower stalks were not able to grow vertically, but instead had a strong tendency to bend towards the ground.
While NIC2 RNAi seedlings produced many lateral roots growing out from the primary root and the root-shoot junction, NIC2 overexpression seedlings displayed longer primary roots that were characterised by a 2 to 4 h delay in the gravitropic response. In addition, these lines exhibited an enhanced resistance to exogenously applied auxins, i.e. indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA), when compared with wild-type roots. Based on these results, it is suggested that the NIC2 overexpression and NIC2 RNAi phenotypes were due to decreased and increased auxin levels, respectively. The ProNIC2:GUS fusion gene revealed that NIC2 is expressed in the stele of the elongation zone, in the lateral root cap, in new lateral root primordia, and in pericycle cells of the root system. In the vascular tissue of rosette leaves and inflorescence stems, expression was observed in the xylem parenchyma cells, while in siliques it was found not only in the vascular tissue but also in the dehiscence and abscission zones. The organ- and tissue-specific expression sites of NIC2 correlate with the sites of auxin action in mature Arabidopsis plants. Further experiments using ProNIC2:GUS indicated that NIC2 is an auxin-inducible gene. Additionally, during the gravitropic response, when an endogenous auxin gradient forms across the root tip, the GUS activity pattern of the ProNIC2:GUS fusion gene changed markedly at the upper side of the root tip, while that at the lower side stayed unchanged. Finally, at the subcellular level the NIC2-GFP fusion protein localised to the peroxisomes of Nicotiana tabacum BY2 protoplasts. Considering these experimental results, it is proposed that NIC2 functions as an efflux transporter that takes part in auxin homeostasis in plant tissues, probably by removing auxin conjugates from the cytoplasm into peroxisomes.
The protection of species is one major focus of conservation biology. The basis for any management concept is knowledge of the species' autecology. In my thesis, I studied the life-history traits and population dynamics of the endangered Lesser Spotted Woodpecker (Picoides minor) in Central Europe. I combine a range of approaches: empirical investigation of a Lesser Spotted Woodpecker population in the Taunus low mountain range in Germany, analysis of the empirical data, and development of an individual-based stochastic model simulating the population dynamics. In the field studies I collected basic demographic data on reproductive success and mortality; breeding biology and behaviour were also investigated in detail. My results showed a significant decrease of reproductive success with later timing of breeding, caused by a deterioration in food supply. Moreover, mate fidelity was beneficial, since pairs composed of individuals that had bred together the previous year started egg laying earlier and obtained a higher reproductive success. Both sexes were involved in parental care, but the care was only shared equally during incubation and the early nestling stage. In the late nestling stage, parental-care strategies differed between the sexes: females considerably decreased their feeding rate with decreasing brood size and even completely deserted small broods, whereas males fed their nestlings irrespective of brood size and compensated for the females' absence. The organisation of parental care in the Lesser Spotted Woodpecker is discussed as providing females with the opportunity to mate with two males at separate nests, and indeed polyandry was confirmed. To investigate the influence of the observed flexibility in the social mating system on population persistence, a stochastic individual-based model simulating the population dynamics of the Lesser Spotted Woodpecker was developed, based on the empirical results.
Pre-breeding survival rates could not be obtained empirically, however, and I therefore present in this thesis a pattern-oriented modelling approach that estimates them by comparing simulation results with empirical patterns of population structure and reproductive success at the population level. I estimated pre-breeding survival for two Lesser Spotted Woodpecker populations at different latitudes to test the reliability of the results. Finally, I used the same simulation model to investigate the effect of flexibility in the mating system on the persistence of the population. With increasing rate of polyandry in the population, persistence increased, and even low rates of polyandry had a strong influence. Even when assuming only a low polyandry rate and costs of polyandry in terms of higher mortality and lower reproductive success for the secondary male, the positive effect of polyandry on population persistence remained strong. This thesis substantially increases the knowledge of the autecology of an endangered woodpecker species. Beyond its relevance for the species, I could demonstrate that flexibility in mating systems generally acts as a buffer mechanism that reduces the impact of environmental and demographic noise.
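The individual-based model is described only verbally above; a heavily simplified annual-cycle sketch conveys the idea (all parameter values hypothetical; territories, the mating system and polyandry are omitted, and sexes are ignored):

```python
import random

def simulate_population(n0, years, adult_survival, juvenile_survival,
                        fledglings_per_pair, rng):
    """Minimal annual cycle: each adult survives with probability
    adult_survival, surviving adults form pairs, and each fledgling recruits
    with probability juvenile_survival (the pre-breeding survival rate that
    the pattern-oriented approach would estimate). Returns final size."""
    n = n0
    for _ in range(years):
        adults = sum(1 for _ in range(n) if rng.random() < adult_survival)
        fledglings = (adults // 2) * fledglings_per_pair
        recruits = sum(1 for _ in range(fledglings)
                       if rng.random() < juvenile_survival)
        n = adults + recruits
        if n == 0:
            break
    return n

def persistence_probability(runs, **kwargs):
    """Share of replicate runs in which the population is still extant."""
    rng = random.Random(42)
    return sum(simulate_population(rng=rng, **kwargs) > 0
               for _ in range(runs)) / runs

p = persistence_probability(runs=200, n0=40, years=50, adult_survival=0.6,
                            juvenile_survival=0.3, fledglings_per_pair=5)
```

Pattern-oriented estimation would then consist of varying juvenile_survival until the simulated population structure and reproductive success match the empirical patterns.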
The dissertation presents a new approach to solving the task of functional diagnosis of digital systems. A new method for error detection is proposed, based on logical complement (Logische Ergänzung) and on the use of Berger codes and the 1-out-of-3 code. The new logical-complement error-detection method permits a high degree of optimisation of the chip area required to implement the constructed error-detection circuits. In addition, one of the important problems solved in this dissertation is the synthesis of totally self-checking circuits.
The origin and symmetry of the observed global magnetic fields in galaxies are not fully understood. We aim to clarify the question of the magnetic-field origin and investigate the global action of the magneto-rotational instability (MRI) in galactic disks with the help of 3D global magneto-hydrodynamical (MHD) simulations. The calculations were done with the time-stepping ZEUS-3D code using massive parallelization. The alpha-Omega dynamo is known to be one of the most efficient mechanisms to reproduce the observed global galactic fields; the presence of strong turbulence is a prerequisite for the alpha-Omega dynamo generation of regular magnetic fields. The observed magnitude and spatial distribution of turbulence in galaxies present unsolved problems to theoreticians. The MRI is known to be a fast and powerful mechanism to generate MHD turbulence and to amplify magnetic fields. We find that the critical wavelength increases as the magnetic field grows during the simulation, transporting energy from critical to larger scales. The final structure, if not disrupted by supernova explosions, consists of "thin layers" with a thickness of about 100 pc. An important outcome of all simulations is the magnitude of the horizontal components of the Reynolds and Maxwell stresses: the MRI-driven turbulence is magnetically dominated, its magnetic energy exceeding the kinetic energy by a factor of 4. The Reynolds stress is small, less than 1% of the Maxwell stress, so the angular momentum transport is completely dominated by the magnetic-field fluctuations. The volume-averaged pitch angle is always negative, with a value of about -30°. The non-saturated MRI regime lasts long enough to fill the time between galactic encounters, independently of the strength and geometry of the initial field. Therefore, we may claim that the observed pitch angles can be due to MRI action in the gaseous galactic disks.
The MRI is also shown to be a very fast instability, with an e-folding time proportional to the rotation period. Steep rotation curves imply a stronger growth of the magnetic energy due to the MRI. The global e-folding time ranges from 44 Myr to 100 Myr depending on the rotation profile; the MRI can therefore explain the existence of rather large magnetic fields in very young galaxies. We have also reproduced the observed rms velocities of interstellar turbulence as observed in NGC 1058. The simulations show that an averaged velocity dispersion of about 5 km/s is typical for MRI-driven turbulence in galaxies, in agreement with observations. The dispersion increases away from the disk plane, whereas supernova-driven turbulence is found to be concentrated within the disk; in our simulations the velocity dispersion increases several-fold with height. Additional support for the dynamo alpha-effect in galaxies is the ability of the MRI to produce a mix of quadrupole and dipole symmetries from purely vertical seed fields, so it also solves the seed-field problem of galactic dynamo theory. The interaction of the magneto-rotational instability with random supernova explosions remains an open question; it would be desirable to run the simulations with supernova explosions included. They would disrupt the calm ring structure produced by the global MRI, maybe even to the level at which the MRI can no longer be held responsible for the turbulence.
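The quoted e-folding times can be checked with a back-of-the-envelope estimate: for a growth rate of order the angular velocity, tau ≈ 1/(q·Omega). The rotation speed, radius and growth factor below are illustrative assumptions, not values from the thesis:

```python
KPC_IN_KM = 3.0857e16   # kilometres per kiloparsec
MYR_IN_S = 3.1557e13    # seconds per megayear

def mri_efolding_time_myr(v_rot_kms, radius_kpc, growth_factor=0.75):
    """E-folding time tau = 1 / (growth_factor * Omega) in Myr, assuming the
    fastest MRI mode grows at growth_factor * Omega (an order-of-magnitude
    estimate; the exact factor depends on rotation profile and geometry)."""
    omega = v_rot_kms / (radius_kpc * KPC_IN_KM)  # angular velocity in rad/s
    return 1.0 / (growth_factor * omega) / MYR_IN_S

# Flat rotation curve with 200 km/s at a radius of 10 kpc (illustrative).
tau = mri_efolding_time_myr(200.0, 10.0)  # ~65 Myr, inside the 44-100 Myr range
```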
Mesoporous organosilica materials with amine functions : surface characteristics and chirality
(2005)
In this work mesoporous organosilica materials are synthesized via the silica sol-gel process. For this purpose a new class of precursors, which are also surfactants, is synthesized and self-assembled. This leads to a high surface-area functionality, which is analysed by copper(II) and water adsorption.
During this PhD project three technical platforms were either improved or newly established in order to identify interesting genes involved in SNF, validate their expression and functionally characterise them. An existing 5.6K cDNA array (Colebatch et al., 2004) was extended to produce the 9.6K LjNEST array, while a second array, the 11.6K LjKDRI array, was also produced. Furthermore, the protocol for array hybridisation was substantially improved (Ott et al., in press). After functional classification of all clones according to the MIPS database and annotation of their corresponding tentative consensus sequence (TIGR), these cDNA arrays were used by several international collaborators and by our group (Krusell et al., 2005; in press). To confirm results obtained from the cDNA array analysis, different sets of cDNA pools were generated that facilitate rapid qRT-PCR analysis of candidate gene expression. As stable transformation of Lotus japonicus takes several months, an Agrobacterium rhizogenes transformation system was established in the lab, and growth conditions for screening transformants for symbiotic phenotypes were improved. These platforms enable us to identify genes, validate their expression and functionally characterise them in a minimum of time. The resources that I helped to establish were used in collaboration with others to characterise in more detail several genes, such as the potassium transporter LjKup and the sulphate transporter LjSst1, that were transcriptionally induced in nodules compared with uninfected roots (Desbrosses et al., 2004; Krusell et al., 2005). Another gene that was studied in detail was LjAox1. It was identified in the cDNA array experiments, and detailed expression analysis revealed a strong and early induction during nodulation, with high expression in young nodules that declines with nodule age. LjAox1 is therefore an early nodulin.
Promoter:gus fusions revealed LjAox1 expression around the nodule endodermis. The physiological role of LjAox1 is currently being pursued via RNAi. Using RNA interference, the synthesis of all symbiotic leghemoglobins was silenced simultaneously in Lotus japonicus. As a result, growth of LbRNAi lines was severely inhibited compared with wild-type plants when the plants were grown under symbiotic conditions in the absence of mineral nitrogen. The nodules of these plants were arrested in growth 14 days post inoculation and lacked the characteristic pinkish colour. Growing these transgenic plants under conditions where reduced nitrogen is available to the plant led to normal plant growth and development. This demonstrates that leghemoglobins are not required for plant development per se, and proves for the first time that leghemoglobins are indispensable for symbiotic nitrogen fixation. The absence of leghemoglobins in LbRNAi nodules led to significant increases in free-oxygen concentrations throughout the nodules, a decrease in energy status as reflected by the ATP/ADP ratio, and an absence of the bacterial nitrogenase protein. The bacterial population within nodules of LbRNAi plants was slightly reduced. Alterations of plant nitrogen and carbon metabolism in LbRNAi nodules were reflected in changes in amino acid composition and starch deposition (Ott et al., 2005). These data provide strong evidence that nodule leghemoglobins function as oxygen transporters that facilitate high flux rates of oxygen to the sites of respiration at low free-oxygen concentrations within the infected cells.
Wetting and phase transitions play a very important role in our daily life. Molecularly thin films of long-chain alkanes at solid/vapour interfaces (e.g. C30H62 on silicon wafers) are very good model systems for studying the relation between wetting behaviour and (bulk) phase transitions. Immediately above the bulk melting temperature the alkanes wet the surface only partially (forming drops). In this temperature range the substrate surface is covered with a molecularly thin, ordered, solid-like alkane film ("surface freezing"). Thus, the alkane melt wets its own solid only partially, which is a quite rare phenomenon in nature. The thesis examines how the alkane melt wets its own solid surface above and below the bulk melting temperature, and the corresponding melting and solidification processes. Liquid alkane drops can be undercooled to a few degrees below the bulk melting temperature without immediate solidification. This undercooling behaviour is quite common and theoretically quite well understood. In some cases, slightly undercooled drops start to build two-dimensional solid terraces without bulk solidification. The terraces grow radially from the liquid drops on the substrate surface. They consist of a few molecular layers with a thickness that is a multiple of the all-trans length of the molecule. By analyzing the terrace growth process one finds that, both below and above the melting point, the entire substrate surface is covered with a thin film of mobile alkane molecules. The presence of this film explains how the growing solid terraces are fed: the alkane molecules flow through it from the undercooled drops to the periphery of the terraces. The study shows for the first time the coexistence of a molecularly thin film ("precursor") with a partially wetting bulk phase. The formation and growth of the terraces is observed only in a small temperature interval in which the 2D nucleation of terraces is more likely than bulk solidification.
The nucleation mechanisms for 2D solidification are also analyzed in this work. More surprising is the terrace behaviour above the bulk melting temperature. The terraces can be slightly overheated before they melt. Melting does not occur all over the surface as a single event; instead, small drops form at the terrace edge. These drops then move across the surface, "eating" the solid terraces on their way; they thereby grow in size, leaving behind paths from where the material was collected. Both the overheating and the droplet movement can be explained by the fact that the alkane melt wets its own solid only partially. For the first time, these results explicitly confirm the supposed connection between the absence of overheating in solids and "surface melting": solids usually start to melt from the surface, without an energetic barrier, at temperatures below the bulk melting point. Accordingly, the surface freezing of alkanes gives rise to an energetic barrier, which allows overheating.
The aim of this work was the development of new substances for gene therapy, which addresses the correction of hereditary diseases such as cystic fibrosis. In this approach, defective genes in the cell nucleus are replaced by normal, healthy DNA sequences. Introducing the genetic material into the cells (transfection) requires suitable transport systems or methods that allow the release there of the genes to be incorporated (gene expression, quantified as transfection efficiency). To this end, new polycation-DNA complexes (vectors) based on cationic polymers such as poly(ethylene imine) (PEI) were prepared, characterised and subsequently used in transfection experiments on various cell lines. Both the cationic starting polymer PEI and the graft copolymer PEI-g-PEO (with PEO side chains to increase biocompatibility) were modified with receptor ligands in order to achieve improved and specific transfection of selected cells. The ligands used were folic acid (transfection of HeLa cells), triiodo-L-thyronine (HepG2 cells) and the uronic acids of galactose, mannose and glucose as well as lactobionic acid (HeLa, HepG2 and 16HBE cells). PEI, the graft copolymers PEI-g-PEO and the ligand-functionalised copolymers were characterised with respect to their chemical composition and molecular parameters. Molar-mass analyses by size-exclusion chromatography showed that the syntheses yielded different polymer fractions of non-uniform chemical composition. The subsequent transfection experiments were carried out with a reporter DNA (luciferase) on the cell lines HepG2 (liver carcinoma cells), HeLa (cervical carcinoma cells) and 16HBE (airway epithelial cells). The T3 (triiodo-L-thyronine) vectors showed a maximum in transfection of HepG2 cells depending on the polycation/DNA complex ratio employed.
The hypothesis of receptor-mediated endocytosis was confirmed by corresponding T3-excess experiments and fluorescence-microscopy studies. For the folic acid vectors, in contrast, no receptor-mediated endocytosis could be observed. The vectors carrying a mannuronic acid ligand (Man) showed a constant, high transfer efficiency on all three cell lines (HepG2, HeLa, 16HBE); at all polymer/DNA ratios employed they were more efficient than the PEI reference vector. This transfection behaviour could be suppressed by blocking the sugar structure. Transfection experiments with an excess of free mannuronic acid and fluorescence-microscopy studies demonstrated receptor-mediated endocytosis of the Man vectors on the above cell lines. The other uronic acid conjugates showed no significant deviations in transfection behaviour compared with the PEI vector.
Confronted with the immense tasks and problems of transformation, Estonia's multi-party governments changed comparatively often: in 2002 the eighth government since 1992 was in office. A detailed study of government stability, using Estonia's seven governments to date as an example, therefore seems warranted, since despite the frequent changes of government the country is regarded as the most successful transformation country in Eastern Europe. Can government stability exist even when the governments themselves change very frequently? This is the central question of the present diploma thesis. It is assumed that government stability is composed of several mutually interacting variables. Figures on the average time a government stays in office have little explanatory power; rather, the actual background of each change of government must be examined.
In an experimental study the attempt was made to examine the effects of the Reciprocal Teaching method on measures of metacognition and to identify those features of the method that are necessary for the learning gains to occur. Reciprocal Teaching, originally developed by Palincsar and Brown (1984), is a very successful training program designed to improve students' reading comprehension skills by teaching them reading strategies. In the present study, the tasks and responsibilities assumed by 5th-grade elementary students (N = 55) participating in a 16-session reading-strategy training were varied systematically. The students who participated in the training program in one of three experimental conditions were compared with respect to knowledge and performance measures, both among each other and with control classmates who did not participate in the strategy training (N = 86). Detailed analyses of video-taped sessions provided additional information. The strategy training was most beneficial for the knowledge and performance measures most closely related to the content of the training program, namely knowledge about the specific reading strategies taught and the application of those strategies. No significant effects were observed for more distal measures (general strategy knowledge, reading comprehension). As for the features of the program, it could be shown that students in the two experimental conditions in which the students themselves were responsible for giving each other feedback on performance (with respect to both content and strategy application) and for guiding the correction of answers outperformed both the experimental condition in which the trainer was responsible for those tasks and the control group.
It is concluded that it is not merely the application of strategies, but the combination of strategy application with the concurrent teaching and learning of metacognitive acquisition procedures (analysis, monitoring, evaluation, and regulation) in an inter-individual way, as the precedent of these processes occurring intra-individually, that seems to be an efficient way of acquiring metacognitive knowledge and skills. It was also shown that strategy training does not necessarily have to include the precise kind of interaction that characterizes the Reciprocal Teaching method. Instead, the tasks of monitoring, evaluating, and regulating other children's learning processes - i.e., the tasks associated with the "teacher role" - are the ones that promote the acquisition of metacognitive knowledge and skills. Generally, any strategy-training program that not only provides children with plentiful opportunities for practice but also prompts them to engage in these kinds of metacognitive processes may help children to acquire metacognitive knowledge and skills.
At present, carbon sequestration in terrestrial ecosystems slows the growth rate of atmospheric CO2 concentrations, and thereby reduces the impact of anthropogenic fossil fuel emissions on the climate system. Changes in climate and land use affect terrestrial biosphere structure and functioning at present, and will likely impact on the terrestrial carbon balance during the coming decades - potentially providing a positive feedback to the climate system due to soil carbon releases under a warmer climate. Quantifying changes, and the associated uncertainties, in regional terrestrial carbon budgets resulting from these effects is relevant for the scientific understanding of the Earth system and for long-term climate mitigation strategies. A model describing the relevant processes that govern the terrestrial carbon cycle is a necessary tool to project regional carbon budgets into the future. This study (1) provides an extensive evaluation of the parameter-based uncertainty in model results of a leading terrestrial biosphere model, the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM), against a range of observations and under climate change, thereby complementing existing studies on other aspects of model uncertainty; (2) evaluates different hypotheses to explain the age-related decline in forest growth, both from theoretical and experimental evidence, and introduces the most promising hypothesis into the model; (3) demonstrates how forest statistics can be successfully integrated with process-based modelling to provide long-term constraints on regional-scale forest carbon budget estimates for a European forest case-study; and (4) elucidates the combined effects of land-use and climate changes on the present-day and future terrestrial carbon balance over Europe for four illustrative scenarios - implemented by four general circulation models - using a comprehensive description of different land-use types within the framework of LPJ-DGVM. 
This study presents a way to assess and reduce uncertainty in process-based terrestrial carbon estimates on a regional scale. The results of this study demonstrate that simulated present-day land-atmosphere carbon fluxes are relatively well constrained, despite considerable uncertainty in modelled net primary production. Process-based terrestrial modelling and forest statistics are successfully combined to improve model-based estimates of vegetation carbon stocks and their change over time. Application of the advanced model for 77 European provinces shows that model-based estimates of biomass development with stand age compare favourably with forest inventory-based estimates for different tree species. Driven by historic changes in climate, atmospheric CO2 concentration, forest area and wood demand between 1948 and 2000, the model predicts European-scale, present-day age structure of forests, ratio of biomass removals to increment, and vegetation carbon sequestration rates that are consistent with inventory-based estimates. Alternative scenarios of climate and land-use change in the 21st century suggest that carbon sequestration in the European terrestrial biosphere during the coming decades will likely be of a magnitude relevant to climate mitigation strategies. However, the uptake rates are small in comparison to the European emissions from fossil fuel combustion, and will likely decline towards the end of the century. Uncertainty in climate change projections is a key driver for uncertainty in simulated land-atmosphere carbon fluxes and needs to be accounted for in mitigation studies of the terrestrial biosphere.
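The soil-carbon feedback mentioned above can be caricatured with a one-pool model in which litter input is constant while decomposition accelerates with warming via a Q10 response (all parameter values hypothetical, not LPJ-DGVM's formulation):

```python
def soil_carbon_trajectory(c0, litter_input, k_base, q10, warming_per_year, years):
    """Annual update of a single soil-carbon pool: C += input - k(T) * C,
    where the decay rate k(T) = k_base * q10**(warming / 10) grows as the
    climate warms, so the pool can turn from a sink into a source."""
    c, trajectory = c0, []
    for year in range(years):
        k = k_base * q10 ** (warming_per_year * year / 10.0)
        c += litter_input - k * c
        trajectory.append(c)
    return trajectory

# Pool starts in equilibrium (input / k_base = 100) and then loses carbon
# as 0.03 K/yr of warming speeds up decomposition.
traj = soil_carbon_trajectory(c0=100.0, litter_input=5.0, k_base=0.05,
                              q10=2.0, warming_per_year=0.03, years=100)
```

The declining trajectory illustrates why warming-driven soil carbon release can act as a positive feedback on atmospheric CO2.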
Colleagues from literary studies and linguistics have followed the editors' invitation to write contributions in honour of Joachim Gessinger on his central field of work, the recent history of language, mentalities and science. The result is a multifaceted Festschrift that takes up aspects of the history of writing, of language policy and of university history as well as linguistic questions of language variation, and not least grants insights into the complex private life of the honouree. The Festschrift is presented in the form of a menu for the honouree's 60th birthday. After the entrée, the chapter "Lüttje Lage und Maultaschen" serves as plats du jour the contributions of Otto Ludwig (Von Kopf und Hand : zur Konstitution der neuzeitlichen Schreibpraxis in spätmittelalterlicher Zeit) and Isabel Zollna (Ohr und Hand : die Taquigrafía castellana o arte de escribir con tanta velocidad como se habla (1803) von Francisco de Paula Martí). The section "Bouletten" follows, with contributions by Angelika Ebrecht / Klaus Laermann (Wie kommt Farbe zur Sprache?), Wolfert von Rahden („Ächte Weimaraner“ : zur Genealogie eines Genealogen), Susanne Scharnowski („Die Studirten drücken jetzt einander todt, wenn ich so sagen darf“ : einige Anmerkungen zu Universitätsreform und Gelehrsamkeitskritik seit der Aufklärung), Hartmut Schmidt (Die Sprache des Regimes und die Sprache der Bürger : Carl Goerdeler und andere zum Leipziger Universitätsjubiläum 1934) and Jürgen Trabant (Welche Sprache für Europa?). In the chapter „Rüben und Kartoffeln“ the following authors do the honours: Elisabeth Berner („Im ersten Augenblick war es mir Deinetwegen leid“ : Theodor Fontane im Krisenjahr 1876), Manuela Böhm (Berliner Sprach-Querelen : ein Ausschnitt aus der Debatte über den style réfugié im 18. Jahrhundert), Peter Eisenberg (Jeder versteht jeden : wie Luther die Pfingstgeschichte schreibt), Christian Fischer (Variation und Korrelation im Mittelniederdeutschen : Möglichkeiten und Grenzen der Variablenlinguistik), Anja Voeste („Die Neger heben“? : die Sprachenfrage in Deutsch-Neuguinea (1884–1914)), Heide Wegener (Das Hühnerei vor der Hundehütte : von der Notwendigkeit historischen Wissens in der Grammatikographie des Deutschen) and Birgit Wolf („Woher kommt eigentlich ...?“ : Sprachberatung und Sprachgeschichte an der Universität Potsdam). Then comes the dessert: Liliane Weissberg (Die Unschuld des Namens und die ungeheure Unordnung der Welt), Roland Willemyns / Eline Vanhecke / Wim Vandenbussche (Politische Loyalität und Sprachwahl : eine Fallstudie aus dem Flandern des frühen 19. Jahrhunderts), Jürgen Erfurt (Zweisprachige Alphabetisierung im Räderwerk politischer und wissenschaftlicher Diskurse), Franz Januschek (Über Fritz und andere Auslaufmodelle : ein Beitrag zur Lingologie), Ulrich Schmitz (Grün bei Grimm) and Wolfert von Rahden (Immer wieder plötzlich am Ende des Sommers : zur Phänomenologie des Abschiedsrituals auf einem italienischen Landsitz in den achtziger Jahren) serve pralines and marshmallows, fruit and small treats.
Vitamin E : elucidation of the mechanism of side chain degradation and gene regulatory functions
(2005)
For more than 80 years vitamin E has been in the focus of scientific research. Most of the progress concerning its non-antioxidant functions, nevertheless, has arisen only from publications of the last decade. Most recently, the metabolic pathway of vitamin E has been almost completely elucidated. Vitamin E is metabolized by truncation of its side chain; the initial step, an omega-hydroxylation, is carried out by cytochromes P450 (CYPs). This was evidenced by the inhibition of alpha-tocopherol metabolism by ketoconazole, an inhibitor of CYP3A expression, whereas rifampicin, an inducer of CYP3A expression, increased the metabolism of alpha-tocopherol. Although the degradation pathway is identical for all tocopherols and tocotrienols, there is a marked difference in the amount of metabolites released from the individual vitamin E forms in cell culture as well as in experimental animals and in humans. Recent findings not only proposed a CYP3A4-mediated degradation of vitamin E but also suggested an induction of the metabolizing enzymes by vitamin E itself. In order to investigate how vitamin E is able to influence the expression of metabolizing enzymes such as CYP3A4, a pregnane X receptor (PXR)-based reporter gene assay was chosen. PXR is a nuclear receptor which regulates the transcription of genes, e.g. CYP3A4, by binding to specific DNA response elements. And indeed, as shown here, vitamin E is able to influence the expression of CYP3A via PXR in an in vitro reporter gene assay. Tocotrienols showed the highest activity, followed by delta- and alpha-tocopherol. An up-regulation of Cyp3a11 mRNA, the murine homolog of human CYP3A4, was also confirmed in an animal experiment. The PXR-mediated change in gene expression provided the first evidence of a direct transcriptional activity of vitamin E. PXR regulates the expression of genes involved in xenobiotic detoxification, including oxidation, conjugation, and transport.
CYP3A, for example, is involved in the oxidative metabolism of numerous currently used drugs. This raises the question of possible side effects of vitamin E, but the extent to which supranutritional doses of vitamin E modulate these pathways in humans has yet to be determined. Additionally, as there is growing evidence that vitamin E's essentiality is more likely to be based on gene regulation than on antioxidant functions, it appeared necessary to further investigate the ability of vitamin E to influence gene expression. Mice were divided into three groups with diets (i) deficient in alpha-tocopherol, (ii) adequate in alpha-tocopherol supply and (iii) containing a supranutritional dosage of alpha-tocopherol. After three months, half of each group was supplemented via a gastric tube with a supranutritional dosage of gamma-tocotrienol per day for 7 days. Livers were analyzed for vitamin E content, and liver RNA was prepared for hybridization using cDNA array and oligonucleotide array technology. A significant change in gene expression was observed for alpha-tocopherol but not for gamma-tocotrienol, and only with the oligonucleotide array, not with the cDNA array. The latter effect is most probably due to the limited number of genes represented on a cDNA array; the lacking gamma-tocotrienol effect is most likely caused by rapid degradation, which might prevent bioefficacy of gamma-tocotrienol. Alpha-tocopherol changed the expression of various genes. The most striking observation was an up-regulation of genes coding for proteins involved in synaptic transmitter release and calcium signal transduction: synapsin, synaptotagmin, synaptophysin, synaptobrevin, RAB3A, complexin 1, Snap25 and ionotropic glutamate receptors (alpha 2 and zeta 1) were shown to be up-regulated in the supranutritional group compared with the deficient group.
The up-regulation of synaptic genes shown in this work is not only supported by the fact that the affected genes cluster strongly in the process of vesicular transport of neurotransmitters, but was also confirmed by a recent publication. However, confirmation by real-time PCR in neuronal tissue such as brain is now required to explain the effect of vitamin E on neurological functionality. The change in expression of genes coding for synaptic proteins by vitamin E is of particular interest because the only human disease directly originating from an inadequate vitamin E status is ataxia with isolated vitamin E deficiency. Therefore, with the results of this work, an explanation for the neurological symptoms observed in vitamin E deficiency can be presented for the first time.
Interpretation of and reasoning with conditionals: probabilities, mental models, and causality
(2003)
In everyday conversation "if" is one of the most frequently used conjunctions. This dissertation investigates what meaning an everyday conditional transmits and what inferences it licenses. It is suggested that the nature of the relation between the two propositions in a conditional might play a major role for both questions. Thus, in the experiments reported here, conditional statements that describe a causal relationship (e.g., "If you touch that wire, you will receive an electric shock") were compared to arbitrary conditional statements in which there is no meaningful relation between the antecedent and the consequent proposition (e.g., "If Napoleon is dead, then Bristol is in England"). First, central assumptions from several approaches to the meaning of, and reasoning from, causal conditionals are integrated into a common model. In the model, the availability of exceptional situations that have the power to generate exceptions to the rule described in the conditional (e.g., the electricity is turned off) reduces the subjective conditional probability of the consequent given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). This conditional probability determines people's degree of belief in the conditional, which in turn affects their willingness to accept valid inferences (e.g., "Peter touches the wire, therefore he receives an electric shock") in a reasoning task. In addition to this indirect pathway, the model contains a direct pathway: cognitive availability of exceptional situations directly reduces the readiness to accept valid conclusions. The first experimental series tested the integrated model for conditional statements embedded in pseudo-natural cover stories that either established a causal relation between the antecedent and the consequent event (causal conditionals) or did not connect the propositions in a meaningful way (arbitrary conditionals).
The model was supported for the causal, but not for the arbitrary conditional statements. Furthermore, participants assigned lower degrees of belief to arbitrary than to causal conditionals. Is this effect due to the presence versus absence of a semantic link between antecedent and consequent in the conditionals? This question was one of the starting points for the second experimental series. Here, the credibility of the conditionals was manipulated by adding explicit frequency information about possible combinations of presence or absence of antecedent and consequent events to the problems (i.e., frequencies of cases of 1. true antecedent with true consequent, 2. true antecedent with false consequent, 3. false antecedent with true consequent, 4. false antecedent with false consequent). This paradigm allows testing different approaches to the meaning of conditionals (Experiment 4) as well as theories of conditional reasoning (Experiment 5) against each other. The results of Experiment 4 mainly supported the conditional probability approach to the meaning of conditionals (Edgington, 1995), according to which the degree of belief a listener has in a conditional statement equals the conditional probability that the consequent is true given the antecedent (e.g., the probability of receiving an electric shock when touching the wire). Participants again assigned lower degrees of belief to the arbitrary than to the causal conditionals, although the conditional probability of the consequent given the antecedent was held constant within every condition of explicit frequency information. This supports the hypothesis that the mere presence of a causal link enhances the believability of a conditional statement. In Experiment 5, participants solved conditional reasoning tasks based on problems that contained explicit frequency information about possible relevant cases. The data favored the probabilistic approach to conditional reasoning advanced by Oaksford, Chater, and Larkin (2000).
The two experimental series reported in this dissertation provide strong support for recent probabilistic theories: for the conditional probability approach to the meaning of conditionals by Edgington (1995) and the probabilistic approach to conditional reasoning by Oaksford et al. (2000). In the domain of conditional reasoning, there was additional support for the modified mental model approaches by Markovits and Barrouillet (2002) and Schroyens and Schaeken (2003). Probabilistic and mental model approaches could be reconciled within a dual-process model as suggested by Verschueren, Schaeken, and d'Ydewalle (2003).
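The frequency paradigm used in the second experimental series can be made concrete with a short sketch. The following code is an illustrative computation, not material from the dissertation: under the conditional probability account (Edgington, 1995), the degree of belief in "if A then C" is estimated as P(C | A) from the four explicit cell frequencies, and the false-antecedent cells play no role.

```python
def degree_of_belief(n_ac, n_anc, n_nac, n_nanc):
    """Degree of belief in 'if A then C' under the conditional
    probability account: P(C | A), estimated from the four
    explicit cell frequencies of the paradigm.

    n_ac   -- cases with antecedent true,  consequent true
    n_anc  -- cases with antecedent true,  consequent false
    n_nac  -- cases with antecedent false, consequent true
    n_nanc -- cases with antecedent false, consequent false
    """
    total_a = n_ac + n_anc
    if total_a == 0:
        raise ValueError("P(C|A) undefined: antecedent never true")
    return n_ac / total_a

# Hypothetical example: 18 shocks out of 20 wire touches -> belief 0.9.
print(degree_of_belief(18, 2, 5, 75))  # -> 0.9
```

Note that varying the false-antecedent cells (the third and fourth arguments) leaves the value unchanged, which is precisely what separates this account from a material-conditional reading, where those cells would count as confirming cases.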
A deeper understanding of the development and function of striated muscle requires a consideration of the proteins involved in the assembly of the myofibrils, the contractile organelles. The present work deals with myomesin, a protein of the sarcomeric M-band. First, the cDNA of human myomesin was completely cloned and sequenced, and the full size of the amino-terminal head domain was subsequently determined. It could be shown that myomesin binds to myosin in vitro via domains 1 and 12. The muscle-specific isoform of creatine kinase binds to domains 7 and 8. Stimulation and inhibition experiments demonstrate that myomesin is phosphorylated in vivo at serine 618 by protein kinase A and that this phosphorylation can be stimulated by activation of beta2-adrenergic receptors. In muscle tissue samples from patients suffering from hypertrophic cardiomyopathy, a genetic heart muscle disease, a reduction in the amount of phosphorylated myomesin could be demonstrated with a newly produced phosphorylation-dependent antibody. Possible causes are discussed. Myomesin forms dimers, as shown by yeast genetic and biochemical experiments. The dimerization of myomesin could play a central role in the incorporation of the myosin filaments into the nascent myofibril. Based on the data obtained, an improved model of the central M-band was constructed.
The subject of this work is the study of applications of the Galactic microlensing effect, in which the light of a distant star (the source) is bent, according to Einstein's theory of gravity, by the gravitational field of intervening compact mass objects (the lenses), creating multiple (though not resolvable) images of the source. Relative motion of source, observer, and lens leads to a variation of the deflection/magnification and thus to a time-dependent observable brightness change (lightcurve), a so-called microlensing event, lasting weeks to months. The focus lies on the modeling of binary-lens events, which provide a unique tool to fully characterize the lens-source system and to detect extra-solar planets around the lens star. Making use of the ability of genetic algorithms to efficiently explore large and intricate parameter spaces in the quest for the global best solution, a modeling software (Tango) for binary lenses is developed, presented, and applied to data sets from the PLANET microlensing campaign. For the event OGLE-2002-BLG-069 the second-ever lens mass measurement was achieved, leading to a scenario in which a G5III Bulge giant at 9.4 kpc is lensed by an M-dwarf binary with a total mass of M = 0.51 solar masses at a distance of 2.9 kpc. Furthermore, a method is presented to use the absence of planetary lightcurve signatures to constrain the abundance of extra-solar planets.
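The lightcurve of such an event can be illustrated, for the simplest single-lens case, with the standard point-source point-lens (Paczyński) magnification; binary-lens lightcurves such as those modeled by Tango are far more complex, so this is only a background sketch, and the parameter values below are illustrative rather than fit results from the dissertation.

```python
import math

def magnification(u):
    """Point-source, point-lens magnification for impact parameter u
    (in units of the Einstein radius): A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def lightcurve(t, t0, tE, u0):
    """Magnification at time t for an event peaking at t0, with
    Einstein-radius crossing time tE and minimum impact parameter u0;
    the lens-source separation follows u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = math.sqrt(u0 * u0 + ((t - t0) / tE) ** 2)
    return magnification(u)

# Illustrative event: at the peak (t = t0) with u0 = 0.1 the source is
# magnified by roughly a factor of 10; far from the peak the
# magnification tends to 1 (no detectable brightening).
for t in (-30.0, 0.0, 30.0):
    print(t, lightcurve(t, t0=0.0, tE=20.0, u0=0.1))
```

Fitting a binary lens adds further parameters (mass ratio, lens separation, source trajectory angle), which is why a global search strategy such as a genetic algorithm is attractive for exploring the parameter space.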