Despite general concern that the massive deposits of methane stored beneath permafrost, both on land and under the sea, could be released into the atmosphere as temperatures rise with global climate change, little is known about the methanogenic microorganisms in permafrost sediments, their role in methane emissions, and their phylogeny. The aim of this thesis was to increase knowledge of uncultivated methanogenic microorganisms in submarine and terrestrial permafrost deposits, their community composition, their role in methane emissions, and their phylogeny. It is assumed that methanogenic communities in warmer submarine permafrost may serve as a model to anticipate the response of methanogenic communities in colder terrestrial permafrost to rising temperatures. The compositions of methanogenic communities were examined in terrestrial and submarine permafrost sediment samples; the submarine permafrost studied here was 10°C warmer than the terrestrial permafrost. DNA was extracted from each of the samples and analyzed by molecular microbiological methods such as PCR-DGGE, RT-PCR, and cloning. Furthermore, these samples were used for in vitro experiments and fluorescence in situ hybridization (FISH). Analysis of the isotopic composition of CH4 in submarine permafrost suggested a relationship between methane content and in situ active methanogenesis. Active methanogenesis was further demonstrated by 13C-isotope measurements of methane in submarine permafrost sediment with a high TOC value and a high methane concentration. The molecular microbiological studies detected uncultivated lineages of Methanosarcina, Methanomicrobiales, Methanobacteriaceae, and the crenarchaeotal Group 1.3 and Marine Benthic Group in all submarine and terrestrial permafrost samples. Methanosarcina was the dominant archaeal group in all samples.
The archaeal community composition, in particular that of the methanogenic community, varied with temperature. Furthermore, the cell counts of methanogens in submarine permafrost were ten times higher than in terrestrial permafrost. In vitro experiments showed that methanogens adapt quickly and well to higher temperatures. If temperatures rise due to climate change, an increase in methanogenic activity can therefore be expected as long as organic material is available in sufficient quantity and quality.
Establishing healthy eating behavior in our children is the most important prerequisite for their physical, cognitive, and emotional development. Alongside genetic disposition and cultural factors, the influence of the respective caregivers is decisive. Parents steer their child's eating behavior both directly (through demands, prohibitions, and the like) and indirectly (by encouraging self-responsible decisions, for example). Studies of maternal feeding control have so far focused mainly on direct strategies and on age-homogeneous, socially advantaged groups. Because examining only isolated aspects of the relationship between feeding control and nutrition risks bias, the present work specified an overall model that maps the relationship between parental control and children's diet while accounting for social and weight-related factors. To this end, three surveys were conducted with a total of more than 900 mothers of children aged 1 to 10 years. Within these studies, the first German-language instrument for measuring parental control strategies in feeding situations (ISS) was developed. The analyses showed that strategies rarely examined to date, such as deliberate role modeling and allowing the child to make its own decisions, are frequently used by mothers. The analysis of the complex interplay of feeding control, children's diet, and social and weight-related factors further showed that, alongside stable factors such as maternal status and the child's age, a decisive share of the variation in children's diet is attributable to maternal control strategies.
The reported results demonstrate how important it is to consider healthy and problematic foods together, along with the factors influencing the relationship between control and diet, within a single model. In summary, control through rewarding with and for specific foods appears to be a particularly critical strategy for children's eating behavior and overweight risk. This is all the more significant because previous studies often subsumed this behavior under restrictive strategies; the separate analysis, however, indicated that it is primarily the rewarding components that matter. This shows that there are genuinely modifiable behaviors relevant to the development of a healthy childhood diet, which can be conveyed to parents in prevention programs or other institutions offering courses related to child welfare.
This study presents noble gas compositions (He, Ne, Ar, Kr, and Xe) of lavas from several Hawaiian volcanoes. Lavas from the Hawaii Scientific Drilling Project (HSDP) core, surface samples from Mauna Kea, Mauna Loa, Kilauea, Hualalai, Kohala, and Haleakala, as well as lavas from a deep well on the summit of Kilauea were investigated. Noble gases, especially helium, are used as tracers for mantle reservoirs, based on the assumption that high 3He/4He ratios (>8 RA) represent material from the deep and supposedly less degassed mantle, whereas lower ratios (~8 RA) are thought to represent the upper mantle. Shield-stage Mauna Kea, Kohala, and Kilauea lavas yielded MORB-like to moderately high 3He/4He ratios, while 3He/4He ratios in post-shield-stage Haleakala lavas are MORB-like. Few samples show 20Ne/22Ne and 21Ne/22Ne ratios different from the atmospheric values; however, the Mauna Kea and Kilauea lavas with excess mantle Ne agree well with the Loihi-Kilauea line in a neon three-isotope plot, whereas one Kohala sample plots on the MORB correlation line. The values in the 4He/40Ar* (40Ar* denotes radiogenic Ar) versus 4He diagram imply open-system fractionation of He from Ar, with a deficiency in 4He. Calculated 4He/40Ar*, 3He/22NeS (22NeS denotes solar Ne), and 4He/21Ne ratios for the sample suite are lower than the respective production and primordial ratios, supporting the observation of a fractionation of He from the heavier noble gases, with a depletion of He with respect to Ne and Ar. The depletion of He is interpreted to be partly due to solubility-controlled gas loss during magma ascent. However, the preferential He loss suggests that He is more incompatible than Ne and Ar during magmatic processes.
In a binary mixing model, the isotopic He and Ne patterns are best explained by a mixture of a MORB-like end-member with a plume-like or primordial end-member, with a fractionation in 3He/22Ne represented by a curvature parameter r of 15 (r = (3He/22Ne)MORB / (3He/22Ne)PLUME or PRIMORDIAL). Whether the high 3He/4He ratios in Hawaiian lavas are indicative of a primitive component within the Hawaiian plume or are rather a product of crystal-melt partitioning behavior during partial melting remains to be resolved.
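The binary mixing described above can be sketched numerically. The end-member isotope ratios below are illustrative assumptions, not values from this study; only the curvature parameter r = 15 is taken from the text:

```python
# Sketch of a two-end-member He-Ne mixing hyperbola. End-member
# ratios are illustrative assumptions; r = 15 follows the text.
HE_MORB, HE_PLUME = 8.0, 30.0    # 3He/4He in R/RA units (assumed)
NE_MORB, NE_PLUME = 0.06, 0.035  # 21Ne/22Ne (assumed)

def mixing_curve(f, r=15.0):
    """Return (21Ne/22Ne, 3He/4He) of a mixture in which the MORB
    end-member supplies the fraction f of the total 3He.

    r = (3He/22Ne)_MORB / (3He/22Ne)_plume controls the curvature:
    for r != 1 the mixing line in ratio-ratio space is a hyperbola.
    """
    he3_morb, he3_plume = f, 1.0 - f  # normalise total 3He to 1
    # 4He implied by each end-member's 3He/4He ratio
    he4 = he3_morb / HE_MORB + he3_plume / HE_PLUME
    # 22Ne implied by each 3He/22Ne; only the ratio r matters, so set
    # (3He/22Ne)_plume = 1 and (3He/22Ne)_MORB = r
    ne22_morb, ne22_plume = he3_morb / r, he3_plume
    ne21 = ne22_morb * NE_MORB + ne22_plume * NE_PLUME
    return ne21 / (ne22_morb + ne22_plume), 1.0 / he4

# With r = 15, a 50/50 split of the 3He budget still leaves the Ne
# isotopes dominated by the plume end-member
print(mixing_curve(0.5))
```

Because the 22Ne budget is weighted by 1/r, the neon signature of such a mixture is pulled toward the plume end-member even when the helium budget is shared equally, which is the geometric meaning of the curvature parameter.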
In the first section of the thesis, graphitic carbon nitride was synthesised for the first time by high-temperature condensation of dicyandiamide (DCDA), a simple molecular precursor, in a eutectic salt melt of lithium chloride and potassium chloride. The extent of condensation, namely next-to-complete conversion of all reactive end groups, was verified by elemental microanalysis and vibrational spectroscopy. TEM and SEM measurements gave detailed insight into the well-defined morphology of these organic crystals, which are based not on 0D or 1D constituents like known molecular or short-chain polymeric crystals but on the packing motif of extended 2D frameworks. The proposed crystal structure of this g-C3N4 species was derived in analogy to graphite by means of extensive powder XRD studies, indexing, and refinement. It is based on sheets of hexagonally arranged s-heptazine (C6N7) units that are held together by covalent bonds between C and N atoms. These sheets stack in a graphitic, staggered fashion adopting an AB motif, as corroborated by powder X-ray diffractometry and high-resolution transmission electron microscopy. This study was contrasted with one of the many popular, yet unsuccessful, approaches of the last 30 years of scientific literature: attempting the condensation of an extended carbon nitride species through synthesis in the bulk. The second section expands the repertoire of available salt melts, introducing the lithium bromide and potassium bromide eutectic as an excellent medium for obtaining a new phase of graphitic carbon nitride. The combination of SEM, TEM, PXRD, and electron diffraction reveals that the new graphitic carbon nitride phase stacks in an ABA' motif, forming unprecedentedly large crystals. This section takes up the notion of the preceding chapter that condensation in a eutectic salt melt is the key to obtaining a high degree of conversion, mainly through a solvatory effect.
At the close of this chapter, ionothermal synthesis is established as a powerful tool for overcoming the inherent kinetic problems of solid-state reactions, such as incomplete polymerisation and condensation in the bulk, especially when the temperature requirement of the reaction in question falls into the proverbial "no man's land" of classical solvents, i.e. above 250 to 300 °C. The following section puts to the test the claim that the crystalline carbon nitrides obtained from a salt melt are indeed graphitic. A typical property of graphite, namely the accessibility of its interplanar space for guest molecules, is transferred to the graphitic carbon nitride system. Metallic potassium and graphitic carbon nitride are converted to the potassium intercalation compound K(C6N8)3, designated according to its stoichiometry and proposed crystal structure. Reaction of the intercalate with aqueous solvents triggers the exfoliation of the graphitic carbon nitride material and, for the first time, gives access to single (or few-layer) carbon nitride sheets analogous to graphene, as seen in the formation of sheets, bundles, and scrolls of carbon nitride in TEM imaging. The exfoliated sheets form a stable, strongly fluorescent solution in aqueous media, which shows no sign in UV/Vis spectroscopy that the aromaticity of the individual sheets was degraded. The final section expands on the mechanism underlying the formation of graphitic carbon nitride by literally expanding the distance between the covalently linked heptazine units which constitute these materials. A close examination of all reaction mechanisms proposed to date, in the light of exhaustive DSC/MS experiments, highlights the possibility that the heptazine unit can be formed from smaller molecules, even if some of the designated leaving groups (such as ammonia) are substituted by an element, R, which later remains linked to the nascent heptazine.
Furthermore, it is suggested that the key functional groups in the process are the triazine (Tz) and carbonitrile (CN) groups. On the basis of these assumptions, molecular precursors are tailored that encompass all functional groups necessary to form a central heptazine unit of threefold, planar symmetry while retaining outward functionalities for self-propagated condensation in all three directions. Two model systems based on para-aryl (ArCNTz) and para-biphenyl (BiPhCNTz) precursors are devised via a facile synthetic procedure and then condensed in an ionothermal process to yield the heptazine-based frameworks HBF-1 and HBF-2. Owing to the structural motifs of their molecular precursors, individual sheets of HBF-1 and HBF-2 span cavities of 14.2 Å and 23.0 Å, respectively, which makes both materials attractive as potential organic zeolites. Crystallographic analysis confirms the formation of ABA'-layered, graphitic systems, and the extent of condensation is confirmed as next-to-perfect by elemental analysis and vibrational spectroscopy.
Parataxis and subordination: these two terms, sometimes treated as antithetical, are problematic precisely because of their extreme polysemy. From this ambiguity arises the object of study, asyndetic constructions, whose status is uncertain, lying between integration and independence. In this thesis, we propose to re-examine this old and well-documented phenomenon of Old French in the light of the challenges and advances of current research on the subject. To do so, we must first establish a definition of subordination. We then show that asyndetic constructions are indeed cases of subordination. Finally, this thesis establishes that the phenomenon constitutes, in Old French at least, a free syntactic variant. Its distribution and presence in the texts nevertheless declined very early, but the existence of parallel phenomena in Modern French, along with other evidence, allows us to hypothesize that this evolution reflects an oral/written alternation. This thesis thus shows that the problems, like the issues at stake, ultimately do not differ whatever the state of the language, and that parataxis is indeed a construction within the system of the language.
Pectic polysaccharides, a class of plant cell wall polymers, form one of the most complex networks known in nature. Despite their complex structure and their importance in plant biology, little is known about the molecular mechanisms of their biosynthesis, modification, and turnover, particularly their structure-function relationship. One way to gain insight into pectin metabolism is the identification of mutants with an altered pectin structure. Such mutants were obtained with a recently developed pectinase-based genetic screen. Arabidopsis thaliana seedlings grown in liquid medium containing pectinase solutions exhibit a characteristic phenotype: they are dwarfed and slightly chlorotic. However, when genetically different A. thaliana seed populations (random T-DNA insertional populations as well as EMS-mutagenized populations and natural variants) were subjected to this treatment, individuals were identified that exhibit a visible phenotype different from wild type or other ecotypes and may thus contain a different pectin structure (pec mutants). After confirming that the altered phenotype occurs only when the pectinase is present, the EMS mutants were subjected to a detailed cell wall analysis with particular emphasis on pectins. The suite of mutants identified in this study is a valuable resource for further analysis of how the pectin network is regulated, synthesized, and modified. Flanking sequences of some of the T-DNA lines have pointed toward several interesting genes, one of which is PEC100. This gene encodes a putative sugar transporter that, based on our data, is implicated in rhamnogalacturonan-I synthesis. The subcellular localization of PEC100 was studied by GFP fusion, and the protein was found to localize to the Golgi apparatus, the organelle where pectin biosynthesis occurs.
Arabidopsis ecotype C24 was identified as susceptible when grown with pectinases in liquid culture and had a different oligogalacturonide mass profile compared to ecotype Col-0. Pectic oligosaccharides have been postulated to be signal molecules involved in plant pathogen defense mechanisms. Indeed, C24 showed elevated accumulation of reactive oxygen species upon pectinase elicitation and an altered response to the pathogen Alternaria brassicicola in comparison to Col-0. Using a recombinant inbred line population, three major QTLs were identified as responsible for the susceptibility of C24 to pectinases. In a reverse genetic approach, members of the qua2 (putative pectin methyltransferase) family were tested as potential target genes affecting pectin methyl-esterification. The candidate list was determined by an in silico study of the expression and co-expression patterns of all 34 members of this family, yielding six candidate genes. For only one of the six analyzed genes was a difference in the oligogalacturonide mass profile observed in the corresponding knock-out lines, supporting the hypothesis that the methyl-esterification pattern of pectin is fine-tuned by members of this gene family. This study of pectic polysaccharides through forward and reverse genetic screens gave new insight into how pectin structure is regulated and modified, and how these modifications could influence pectin-mediated signalling and pathogenicity.
Dietary antioxidants are believed to play an important role in the prevention and treatment of a variety of diseases associated with oxidative stress. Although there is a wide range of dietary antioxidants, the bulk of the research to date has focused on the nutrient antioxidants vitamin C, vitamin E, and the carotenoids. Certain relatively uncommon antioxidants such as lipoic acid (LA) and phenolic compounds such as (-)-epicatechin (EC), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), and (-)-epigallocatechin gallate (EGCG) have not been extensively investigated, although they may exert greater antioxidant potency than the carotenoids and vitamins. Extracts from selected plants and plant byproducts may be rich sources of one or more such antioxidants and, owing to synergistic effects between them, may be more effective than any single antioxidant. Over the last decade, a number of epidemiological, animal, and in vitro studies have suggested a protective and therapeutic potency of these antioxidants in a broad range of diseases such as cancer, diabetes, atherosclerosis, cataract, and acute and chronic neurological disorders. Inflammation, the response of the host to infection or injury, plays a central role in the development of many chronic diseases. Several lines of evidence demonstrate that different types of cancer arise from sites of inflammation, suggesting that active oxygen species and some cytokines generated in inflamed tissues can damage DNA and ultimately lead to carcinogenesis. Diethylnitrosamine (DEN) is one of the most important environmental carcinogens; it is present in a variety of foods, alcoholic beverages, and tobacco smoke, and it can also be synthesized endogenously. In addition to the liver, it can induce carcinogenesis in other organs such as the kidney, trachea, lung, esophagus, forestomach, and nasal cavity.
Several epidemiological and laboratory studies indicate that nitroso compounds, including DEN, may induce hyperplasia and chronic inflammation, which is closely associated with the development of hepatocellular carcinoma. Despite increasing evidence for the potential of antioxidants to modulate the etiology of chronic diseases, little is known about their role in inflammation and the acute phase response (APR). Therefore, the aim of the present work was to study the protective effect of water and solvent extracts of eight plants and plant byproducts (green tea, artichoke, spinach, broccoli, onion, eggplant, and orange and potato peels) as well as eight antioxidant agents (EC, EGC, ECG, EGCG, ascorbic acid (AA), N-acetylcysteine (NAC), α-LA, and α-tocopherol (α-TOC)) against acute inflammation induced by interleukin-6 (IL-6) and hepatotoxicity induced by DEN in vitro. The negative acute phase proteins (APP) transthyretin (TTR) and retinol-binding protein (RBP), analyzed by ELISA, served as inflammatory biomarkers, whereas the neutral red assay was used to evaluate cytotoxicity. All experiments were performed in vitro using the human hepatocarcinoma cell line HepG2. Additionally, antioxidant activity was measured by TEAC and FRAP assays, and phenolic content was measured by the Folin-Ciocalteu method and characterized by HPLC. Moreover, the microheterogeneity of TTR was detected using an immunoprecipitation assay combined with SELDI-TOF MS. The results of the present study showed that HepG2 cells provide a simple, sensitive in vitro system for studying the regulation of the negative APPs TTR and RBP under normal and inflammatory conditions. IL-6, a potent proinflammatory cytokine, at a concentration of 25 ng/ml reduced TTR and RBP secretion by approximately 50-60% after 24 h of incubation.
With the exception of broccoli and the water extract of onion, which showed pro-inflammatory effects in this study, all plant extracts, at specific concentrations, were able to elevate TTR secretion under normal conditions and even under IL-6 treatment, where the effect was considerably weaker. Green tea, followed by artichoke and potato peel, exhibited the highest elevation in TTR concentration, reaching 1.1- and 2.5-fold of the control in the presence and absence of IL-6, respectively. In general, the plant extracts ranked by anti-inflammatory potency as follows: for water extracts, green tea > artichoke > potato peel > orange peel > spinach > eggplant peel; for solvent extracts, green tea > artichoke > potato peel > spinach > eggplant peel > onion > orange peel. The anti-inflammatory effects of the water extracts of green tea, artichoke, and orange peel were significantly higher than those of their corresponding solvent extracts, whereas the water extracts of eggplant peel, potato peel, and spinach showed lower effects than their solvent extracts. Among the pure antioxidants, α-LA, followed by EGCG and ECG, exhibited the highest elevation in TTR concentration. The relation between anti-inflammatory potential on the one hand and antioxidant activity and phenolic content on the other was generally weak for the investigated substances, suggesting the involvement of mechanisms other than antioxidant properties in the observed effects. TTR secreted by HepG2 cells has a molecular structure quite similar to the purified standard and serum TTR, containing all three main variants: native, S-cysteinylated, and S-glutathionylated TTR. Interestingly, a variant with a molecular mass of 13453.8 ± 8.3 Da was detected only in TTR secreted by HepG2 cells. Among all investigated antioxidants and plant extracts, six substances were able to elevate the preferable native TTR variant; their potency can be ordered as follows: α-LA > NAC > onion > AA > EGCG > green tea.
A weak correlation between the elevation of TTR and the shift to the native form was observed; a similarly weak correlation was found between antioxidant activity and the elevation of native TTR. Although DEN induced cell death in a concentration-dependent manner, it required considerably higher concentrations for its effects, especially after 24 h, which may be attributed to a lack of cytochrome P450 enzymes in HepG2 cells. At selected concentrations, some antioxidants and plant extracts significantly attenuated DEN cytotoxicity, in the following order: spinach > α-LA > artichoke > orange peel > eggplant peel > α-TOC > onion > AA. In contrast, the other substances, especially green tea, broccoli, potato peel, and ECG, enhanced DEN toxicity. In conclusion, this study demonstrated that selected antioxidants and plant extracts may attenuate the inflammatory process, not only through their antioxidant potency but also through other mechanisms that remain unclear. They may also play a vital role in stabilizing the tetrameric structure of TTR and thereby help prevent amyloid diseases. In this study, lipoic acid showed a unique protective function against both inflammation and hepatotoxicity. Despite the protective effects demonstrated by the investigated substances, attention should also be given to the pro-oxidant and potentially cytotoxic effects produced at higher concentrations.
The aim of the present work is to describe hydrophobic soil properties and their effects on surface runoff and erosion at different scales. The investigations were carried out on a recultivation site in the Welzow Süd lignite mining area (south-eastern Germany). The processes were studied at three scales, ranging from the plot scale (1 m²) through the slope scale (300 m²) to a small catchment (4 ha). The degree of soil hydrophobicity was determined both directly, via contact-angle measurements, and indirectly, via the persistence of water repellency. The soil reacted hydrophilically during the winter half-year, whereas it exhibited hydrophobic properties during the summer half-year. The results indicate that rising soil water contents, frequently cited in the literature as the cause of a switch in soil properties, do not trigger such a change on this site. Instead, hydrophilic soil conditions developed as a consequence of the thawing of frozen soil, and these in turn led to a rise in soil water content. Spatial differences appeared among the geomorphological units: rills and channels exhibited hydrophobic properties less frequently than the interrill areas and mounds. These spatial and temporal variabilities also affected surface runoff, which was analyzed as the runoff coefficient (ABW: the quotient of runoff and rainfall). On soils with hydrophobic properties the runoff coefficient (ABW = 0.8) is considerably higher than on soils with hydrophilic properties (ABW = 0.2), as found in winter or on other substrates (these values refer to the plot scale).
Considering the effects at different scales, the runoff coefficient decreases with increasing area (ABW = 0.8 at the plot scale, ABW = 0.5 at the slope scale, and ABW = 0.2 for the whole catchment), which is explained by the hydrophilically reacting rills and channels at the slope scale and the hydrophilic substrate in the catchment. To measure erosion, several methods, some newly developed, were employed to achieve high temporal and spatial resolution. In one newly developed method, sediment output is determined per event with a balance; combined with a tipping bucket, it allows simultaneous measurement of surface runoff. The apparatus was designed for areas with a predominantly coarse-sandy texture and only small amounts of clay and silt. In addition, two laser systems were used to measure the spatial distribution of erosion. In the first method, a point-measuring laser mounted in a fixed frame was moved across the surface, and height differences were determined at points on a fixed grid. By interpolation, areas of sediment loss could be distinguished from areas of accumulation. This method can also cover larger areas (here 16 m²), but the measurements show large errors in the transition zones between rills and interrill areas. In the second method, a single measurement captures one square metre completely at high spatial resolution. To construct a three-dimensional image, four scans must be taken, each from a different side. Erosion and accumulation can thus be determined very accurately, but the measurement is relatively laborious and covers only a small area. Sediment output was additionally recorded at the plot scale.
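The runoff coefficient (ABW) used throughout this summary is a simple quotient; a minimal helper makes the definition explicit. The event values in the example are invented for illustration, not measured data:

```python
def runoff_coefficient(runoff_mm, rainfall_mm):
    """ABW: quotient of surface runoff and rainfall for one event.

    Both arguments are event totals in mm; the result is
    dimensionless (0 = full infiltration, 1 = full runoff).
    """
    if rainfall_mm <= 0:
        raise ValueError("rainfall must be positive")
    return runoff_mm / rainfall_mm

# Invented event: 8 mm of runoff from 10 mm of rain reproduces the
# plot-scale value of 0.8 reported for hydrophobic summer conditions
print(runoff_coefficient(8.0, 10.0))
```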
Consistent with the soil properties, the measurements show large sediment outputs during the summer half-year and hardly any in winter. The results further demonstrate that rill erosion outweighs interrill erosion for rainfall events of high intensity (>25 mm/h in a ten-minute interval), whereas interrill erosion dominated during events of lower intensity (<20 mm/h in a ten-minute interval); at least 9 mm of rainfall at an intensity of at least 3.6 mm/h is required for erosion to occur at all. Based on the measured runoff and sediment outputs, regression equations were derived that allow both processes to be calculated for the study site. While the amount of surface runoff correlates strongly with the amount of rainfall (r² = 0.9), the calculation of the exported sediment is less accurate (r² = 0.7). In summary, the thesis describes the influence of hydrophobic soil properties at different scales and highlights their effects, which are especially important at the small scale.
Molecular photoswitches have attracted much attention lately, mostly because of their possible applications in nanotechnology and their role in biology. One of the most widely studied representatives of photochromic molecules is azobenzene (AB). With light, a static electric field, or tunneling electrons, this species can be "switched" from the flat and energetically more stable trans form into the compact cis form; the back reaction can be induced optically or thermally. Quantum chemical calculations, mostly based on density functional theory, on the AB molecule, AB derivatives, and related systems are presented. All calculations were done for isolated species, but with implications for recent experiments aiming at the switching of surface-mounted ABs. In some of these experiments, the switching process is assumed to be substrate-mediated: an electron or a hole attaches to the adsorbate, forming a short-lived anion or cation resonance. Therefore, cationic and anionic ABs were also calculated in this work, and the influence of external electric fields on the potential energy surfaces was studied as well. Further, systematic changes in activation energies and rates for the thermal cis-to-trans isomerization can be enforced through the type, number, and positioning of various substituent groups. The nature of the transition state for ground-state isomerization was investigated. Applying Eyring's transition state theory, trends in activation energies and rates were predicted and are, where a comparison was possible, in good agreement with experimental data. Thermal isomerization was also studied in solution, for which a polarizable continuum model was employed. The influence of substitution and environment likewise leaves its traces on the structural properties of the molecules and on the quantitative appearance of the calculated UV/Vis spectra.
Finally, an explicit treatment of a solid substrate was demonstrated for the conformational switching, by a scanning tunneling microscope, of a 1,5-cyclooctadiene (COD) molecule on a Si(001) surface, treated with a cluster model. We first studied the energetics and potential energy surfaces along the relevant switching coordinates by quantum chemical calculations, and then followed the switching dynamics using wave-packet methods. We show that, despite the simplicity of the model, our calculations support the switching of adsorbed COD by inelastic electron tunneling at low temperatures.
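The Eyring analysis mentioned above rests on the standard transition-state-theory rate expression; a minimal sketch follows, with an assumed, illustrative barrier height rather than any value computed in the thesis:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(dg_act_kj_mol, temp_k=298.15):
    """First-order rate constant from Eyring's equation,
    k = (kB*T/h) * exp(-dG_act / (R*T)), in 1/s, for a free energy
    of activation dG_act given in kJ/mol."""
    return (KB * temp_k / H) * math.exp(-dg_act_kj_mol * 1e3 / (R * temp_k))

# An assumed ~100 kJ/mol barrier gives a slow thermal back reaction
# at room temperature (half-life on the order of hours)
k = eyring_rate(100.0)
print(k, math.log(2) / k)  # rate constant and half-life in seconds
```

Substituent effects of the kind discussed in the text enter through the barrier height: lowering it by a few kJ/mol changes the rate by an order of magnitude, since the dependence is exponential.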
Motivations and research objectives: During the passage of rain water through a forest canopy, two main processes take place: first, the water is redistributed; second, its chemical properties change substantially. The rain water redistribution and the brief contact with plant surfaces result in a large variability of both throughfall and its chemical composition. Since throughfall and its chemistry influence a range of physical, chemical, and biological processes at or below the forest floor, understanding throughfall variability and predicting throughfall patterns can potentially improve the understanding of near-surface processes in forest ecosystems. This thesis comprises three main research objectives: first, to determine the variability of throughfall and its chemistry and to investigate some of the controlling factors; second, to explore throughfall spatial patterns; and finally, to assess the temporal persistence of throughfall and its chemical composition. Research sites and methods: The thesis is based on investigations in a tropical montane rain forest in Ecuador and lowland rain forest ecosystems in Brazil and Panama. The first two studies investigate both throughfall and throughfall chemistry following a deterministic approach. The third study investigates throughfall patterns with geostatistical methods and hence relies on a stochastic approach. Results and conclusions: Throughfall is highly variable. The variability of throughfall in tropical forests seems to exceed that of many temperate forests. These differences, however, do not solely reflect ecosystem-inherent characteristics; more likely, they also mirror management practices. Apart from biotic factors that influence throughfall variability, rainfall magnitude is an important control. Throughfall solute concentrations and solute deposition are even more variable than throughfall itself. 
In contrast to throughfall volumes, the variability of solute deposition shows no clear differences between tropical and temperate forests; hence, biodiversity is not a strong predictor of solute deposition heterogeneity. Many other factors control solute deposition patterns, for instance, solute concentration in rainfall and the antecedent dry period. The temporal variability of the latter factors partly accounts for the low temporal persistence of solute deposition. In contrast, measurements of throughfall volume are quite stable over time. Results from the Panamanian research site indicate that wet and dry areas outlast consecutive wet seasons. At this research site, throughfall exhibited only weak or pure nugget autocorrelation structures over the studied lag distances. A close look at the geostatistical tools at hand provided evidence that throughfall datasets, in particular those of large events, require robust variogram estimation if one wants to avoid outlier removal. This finding is important because all geostatistical throughfall studies published so far analyzed their data using the classical, non-robust variogram estimator.
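The contrast between the classical and a robust variogram estimator can be illustrated as follows (a generic 1-D sketch on synthetic data, using the Cressie-Hawkins estimator as one common robust choice; the thesis's actual data and estimator details are not reproduced here):

```python
import numpy as np

def empirical_variograms(x, z, lags, tol):
    """Classical (Matheron) and robust (Cressie-Hawkins) variogram
    estimates for 1-D sample locations x with values z."""
    gamma_classical, gamma_robust = [], []
    for h in lags:
        # collect all pair differences whose separation falls in the lag bin
        diffs = []
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                if abs(abs(x[i] - x[j]) - h) <= tol:
                    diffs.append(z[i] - z[j])
        d = np.asarray(diffs)
        n = len(d)
        # Matheron: half the mean squared increment (outlier-sensitive)
        gamma_classical.append(0.5 * np.mean(d ** 2))
        # Cressie-Hawkins: fourth power of the mean root-absolute increment,
        # with a bias correction; extreme pairs are strongly down-weighted
        gamma_robust.append(np.mean(np.sqrt(np.abs(d))) ** 4 /
                            (2 * (0.457 + 0.494 / n)))
    return np.array(gamma_classical), np.array(gamma_robust)

rng = np.random.default_rng(0)
x = np.arange(100.0)            # hypothetical collector positions
z = rng.normal(size=100)        # synthetic throughfall-like values
z[10] += 20.0                   # a single drip-point-style outlier
gc, gr = empirical_variograms(x, z, lags=[1, 2, 3], tol=0.5)
print(gc, gr)
```

On this synthetic example, the single outlier inflates the classical estimate severely while the robust estimate stays close to the underlying variance, which is the behavior motivating robust estimation for large-event throughfall data.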
Modern acquisition of seismic data on receiver networks worldwide produces an increasing amount of continuous wavefield recordings. Hence, in addition to manual data inspection, seismogram interpretation requires new processing utilities for event detection, signal classification, and data visualization. Various machine learning algorithms from the field of pattern recognition can be adapted to seismological problems, either by means of supervised learning using manually defined training data or by unsupervised clustering and visualization. The latter allows the recognition of wavefield patterns, such as short-term transients and long-term variations, with a minimum of domain knowledge. Besides classical earthquake seismology, investigations of temporal patterns in seismic data also concern novel approaches such as noise cross-correlation or ambient seismic vibration analysis in general, which have moved into focus within the last decade. In order to find records suitable for the respective approach, or simply for quality control, unsupervised preprocessing becomes important and valuable for large data sets. Machine learning techniques require the parametrization of the data using feature vectors; applied to seismic recordings, this means that wavefield properties have to be computed from the raw seismograms. For an unsupervised approach, all potential wavefield features have to be considered to reduce subjectivity to a minimum. Furthermore, automatic dimensionality reduction, i.e. feature selection, is required in order to decrease computational cost, enhance interpretability, and improve discriminative power. This study presents an unsupervised feature selection and learning approach for the discovery, imaging, and interpretation of significant temporal patterns in seismic single-station or network recordings. In particular, techniques permitting an intuitive, quickly interpretable, and concise overview of the available records are suggested. 
For this purpose, the data is parametrized by real-valued feature vectors for short time windows using standard seismic analysis tools as feature generation methods, such as frequency-wavenumber, polarization, and spectral analysis. The choice of the time window length depends on the expected durations of the patterns to be recognized or discriminated. We use Self-Organizing Maps (SOMs) for a data-driven feature selection, visualization, and clustering procedure, which is particularly suitable for high-dimensional data sets. Using synthetics composed of Rayleigh and Love waves and three different types of real-world data sets, we show the robustness and reliability of our unsupervised learning approach with respect to the effects of algorithm parameters and data set properties. Furthermore, we confirm the capability of the clustering and imaging techniques. For all data, we find improved discriminative power of our feature selection procedure compared to feature subsets manually selected from individual wavefield parametrization methods. In particular, enhanced performance is observed compared to the most favorable individual feature generation method, which is found to be the frequency spectrum. The method is applied to regional earthquake records of the European Broadband Network with the aim of defining suitable features for earthquake detection and seismic phase classification. For the latter, we find that a combination of spectral and polarization features favors S wave detection at a single receiver. However, SOM-based visualization of phase discrimination shows that clustering applied to the records of two stations allows only onset or P wave detection, respectively. In order to improve the discrimination of S waves on receiver networks, we recommend additionally considering the temporal context of feature vectors. 
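A minimal SOM training loop of the kind used for such clustering might be sketched as follows (a toy implementation on random data; the grid size, decay schedule, and 8-D "feature vectors" are illustrative assumptions, not the study's seismic features):

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small rectangular Self-Organizing Map on row-wise
    feature vectors; returns the learned prototype vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    n_units, dim = rows * cols, data.shape[1]
    weights = rng.normal(size=(n_units, dim))
    # 2-D grid coordinate of each unit, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for sample in rng.permutation(data):
            # best-matching unit: closest prototype in feature space
            bmu_idx = np.argmin(np.sum((weights - sample) ** 2, axis=1))
            # learning rate and neighborhood radius shrink over training
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # Gaussian neighborhood on the map grid pulls nearby units along,
            # which is what produces the topology-preserving ordering
            d2 = np.sum((coords - coords[bmu_idx]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (sample - weights)
            step += 1
    return weights

# toy data: two well-separated clusters of 8-D feature vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-3, 0.3, (50, 8)), rng.normal(3, 0.3, (50, 8))])
w = train_som(data)
```

After training, mapping each feature vector to its best-matching unit gives the cluster imaging described above: distinct wavefield patterns occupy distinct regions of the map.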
The application to continuous recordings of seismicity close to an active volcano (Mount Merapi, Java, Indonesia) shows that two typical volcano-seismic event types (VTB and Guguran) can be detected and distinguished by clustering. In contrast, so-called MP events cannot be discriminated. Comparable results are obtained for the selected features and recognition rates with respect to a previously implemented supervised classification system. Finally, we test the reliability of wavefield clustering to improve common ambient vibration analysis methods such as the estimation of dispersion curves and horizontal-to-vertical spectral ratios. It is found that, in general, the identified short- and long-term patterns have no significant impact on those estimates. However, for individual sites, effects of local sources can be identified. Leaving out the corresponding clusters reduces the uncertainties or improves the estimation of dispersion curves.
Modern anthropogenic forcing of atmospheric chemistry poses the question of how the Earth System will respond as thousands of gigatons of greenhouse gas are rapidly added to the atmosphere. A similar, albeit nonanthropogenic, situation occurred during the early Paleogene, when a catastrophic release of carbon to the atmosphere triggered an abrupt increase in global temperatures. The best documented of these events is the Paleocene-Eocene Thermal Maximum (PETM, ~55 Ma), when the magnitude of carbon addition to the oceans and atmosphere was similar to that expected for the future. This event initiated global warming, changes in hydrological cycles, biotic extinctions, and migrations. A recently proposed hypothesis concerning changes in marine ecosystems suggests that this global warming strongly influenced the shallow-water biosphere, triggering extinctions and turnover in the Larger Foraminifera (LF) community and the demise of corals. The successions of the Adriatic Carbonate Platform (SW Slovenia) represent an ideal location to test the hypothesis of a possible causal link between the PETM and the evolution of shallow-water organisms, because they record continuous sedimentation from the Late Paleocene to the Early Eocene and are characterized by a rich biota, especially LF, fundamental for detailed biostratigraphic studies. In order to reconstruct the paleoenvironmental conditions during deposition, I focused on sedimentological analysis and a paleoecological study of the benthic assemblages. During the Late Paleocene-earliest Eocene, sedimentation occurred on a shallow-water carbonate ramp system characterized by enhanced nutrient levels. LF represent the common constituent of the benthic assemblages that thrived in this setting throughout the Late Paleocene to the Early Eocene. 
With detailed biostratigraphic and chemostratigraphic analyses documenting the most complete record available to date for the PETM event in a shallow-water marine environment, I correlated chemostratigraphically for the first time the evolution of LF with the δ¹³C curves. This correlation demonstrated that no major turnover in the LF communities occurred synchronously with the PETM; thus the evolution of LF was mainly controlled by endogenous biotic forces. The study of Late Thanetian metre-scale microbialite-coral mounds, which developed in the middle part of the ramp, documented the first Cenozoic occurrence of microbially-cemented mounds. The development of these mounds, with temporary dominance of microbial communities over corals, suggests environmentally triggered “phase shifts” related to frequent fluctuations of nutrient/turbidity levels during recurrent wet phases which preceded the extreme greenhouse conditions of the PETM. The paleoecological study of the coral community in the microbialite-coral mounds, the study of corals from an Early Eocene platform in SW France, and a critical, extensive literature review of Late Paleocene – Early Eocene coral occurrences from the Tethys, Atlantic, and Caribbean realms suggested that these coral types, even if not forming extensive reefs, are common in the biofacies as small isolated colonies, piles of rubble, or small patch reefs. These corals might have developed ‘alternative’ life strategies to cope with harsh conditions (high or fluctuating nutrients and turbidity, extreme temperatures, perturbation of the aragonite saturation state) during the greenhouse times of the early Paleogene, representing a good fossil analogue to modern corals thriving close to their thresholds for survival. These results demonstrate the complexity of the biological responses to extreme conditions, not only in terms of temperature but also of nutrient supply, physical disturbance, and their temporal variability and oscillating character.
Insect photoreceptor cells are epithelial cells with a characteristic, highly polar morphology and organization. The molecular components of the phototransduction cascade are located in the rhabdomere, a seam of densely packed microvilli along the photoreceptor cell. As early as the 1970s it was described that the microvilli along a photoreceptor cell are oriented differently, or, in other words, that the rhabdomeres are twisted along the longitudinal axis of the cell. In the photoreceptor cells R1-R6 of dipteran flies (Calliphora, Drosophila), the microvilli in the distal and proximal regions of a rhabdomere are arranged roughly at right angles to each other. This phenomenon is referred to in the literature as rhabdomere twisting and reduces the sensitivity to polarized light. For the Drosophila eye it has been shown that this structural asymmetry of the photoreceptor cells is accompanied by a molecular asymmetry in the distribution of phosphotyrosine-containing proteins at the stalk membrane (a non-microvillar region of the apical plasma membrane). It has also been shown that immunocytochemical labelling with anti-phosphotyrosine (anti-PY) can be used as a light-microscopic marker for rhabdomere twisting. So far, mainly the physiological significance of rhabdomere twisting has been investigated; little is known about its developmental and cell-biological basis. The aim of the present work was to clarify the identity of the phosphotyrosine-containing proteins at the stalk membrane and to analyze their functional significance for the development of rhabdomere twisting. In addition, the influence of the inner photoreceptor cells R7 and R8 on the twisting of the rhabdomeres of R1-R6 was to be investigated. 
For the two protein kinases Rolled (ERK) and Basket (JNK), mitogen-activated protein kinases (MAPK), I was able to show that their activated (i.e., phosphorylated) forms (pERK and pJNK) exhibit an asymmetric distribution at the stalk membrane comparable to the anti-PY labelling. Furthermore, this asymmetric distribution of pERK and pJNK, like that of PY, was only established shortly before eclosion of the flies (at about 90% of pupal development). Preincubation experiments with anti-PY abolished the labelling with anti-pERK and anti-pJNK. These results indicate that pERK and pJNK belong to the proteins recognized by anti-PY at the stalk membrane. Since ERK and JNK are kinases, it is plausible that they could be involved in the development of rhabdomere twisting. This hypothesis was tested by analyzing hypermorphic (rlSem) and hypomorphic (rl1/rl10a) Rolled mutants. In the rlSem mutant, with increased protein kinase activity, the asymmetric positioning of pERK at the stalk membrane and the microvillar tilting occurred at an earlier point of pupal development. In the adult eye, the anti-PY labelling in the distal region of the photoreceptor cells was more intense and the tilt angle was increased. In the rl1/rl10a mutant, with reduced kinase activity, the anti-PY labelling and the tilt angle in the proximal region of the photoreceptor cells were reduced. The protein kinase ERK thus influences both the temporal establishment of rhabdomere twisting and its extent in the adult animal. The rhabdomere twisting and the change in the anti-PY labelling pattern occur rather abruptly in photoreceptor cells R1-R6 at half the length of the ommatidium, where the rhabdomere of R7 ends and that of R8 begins. This raised the question of whether the rhabdomere twisting of R1-R6 is influenced by the photoreceptor cell R7 and/or R8. To address this question, mutants lacking the R7 or the R8 photoreceptors, or both R7 and R8, were analyzed. 
The most important result of these investigations was that, in the absence of R8, the rhabdomere twisting of R1-R6 follows no discernible rules. R8 is thus a prerequisite for the establishment of rhabdomere twisting in R1-R6. Based on this and further results, the following model was developed: in the third larval stage, R8 recruits the photoreceptor cell pairs R2/R5, R3/R4, and R1/R6. In this process, R1-R6 are "polarized" by their contact with R8. Finally, R7 is recruited by R8, which fixes the polarity of R1-R6. The execution of the microvillar tilting according to the established polarity takes place in the late pupal phase. The protein kinase ERK is involved in this final morphogenetic process.
Although the basic structure of biological membranes is provided by the lipid bilayer, most of their specific functions are carried out by membrane proteins (MPs) such as channels, ion pumps, and receptors. Additionally, it is known that mutations in MPs are directly or indirectly involved in many diseases. Thus, structure determination of MPs is of major interest not only in structural biology but also in pharmacology, especially for drug development. Advances in the structural biology of MPs have been strongly supported by the success of three leading techniques: X-ray crystallography, electron microscopy, and solution NMR spectroscopy. However, X-ray crystallography and electron microscopy require highly diffracting 3D or 2D crystals, respectively. Today, structure determination of non-crystalline solid protein preparations has become possible through the rapid progress of solid-state MAS NMR methodology for biological systems. Castellani et al. solved and refined the first structure of a microcrystalline protein using only solid-state MAS NMR spectroscopy. This successful application opens up perspectives for accessing systems that are difficult to crystallize or that form large heterogeneous complexes and insoluble aggregates, for example ligands bound to an MP receptor, protein fibrils, and heterogeneous protein aggregates. Solid-state MAS NMR spectroscopy is in principle well suited to study MPs at atomic resolution. In this thesis, different types of MP preparations were tested for their suitability to be studied by solid-state MAS NMR. Proteoliposomes, poorly diffracting 2D crystals, and a PEG precipitate of the outer membrane protein G (OmpG) were prepared as a model system for large MPs. Results from this work, combined with data found in the literature, show that highly diffracting crystalline material is not a prerequisite for the structural analysis of MPs by solid-state MAS NMR. 
Instead, it is possible to use non-diffracting 3D crystals, MP precipitates, poorly diffracting 2D crystals, and proteoliposomes. For the latter two types of preparation, the MP is reconstituted into a lipid bilayer, which allows structural investigation in a quasi-native environment. In addition, to prepare an MP sample for solid-state MAS NMR it is possible to use screening methods that are well established for the 3D and 2D crystallization of MPs. Hopefully, these findings will open a fourth route for the structural investigation of MPs. The prerequisite for structural studies by NMR in general, and the most time-consuming step, is always the assignment of resonances to specific nuclei within the protein. Over the last few years, an ever-increasing number of assignments from solid-state MAS NMR of uniformly carbon- and nitrogen-labelled samples has been reported, mostly for small proteins of up to around 150 amino acids in length. However, the complexity of the spectra increases with the molecular weight of the protein, so the conventional assignment strategies developed for small proteins do not yield a sufficiently high degree of assignment for the large MP OmpG (281 amino acids). Therefore, a new assignment strategy to find starting points for large MPs was devised. The assignment procedure is based on a sample with [2,3-13C, 15N]-labelled Tyr and Phe and uniformly labelled alanine and glycine. This labelling pattern reduces the spectral overlap as well as the number of assignment possibilities. In order to extend the assignment, four other specifically labelled OmpG samples were used. The assignment procedure starts with the identification of the spin systems of each labelled amino acid using 2D 13C-13C and 3D NCACX correlation experiments. In a second step, 2D and 3D NCOCX-type experiments are used for the sequential assignment of the observed resonances to specific nuclei in the OmpG amino acid sequence. 
Additionally, it was shown in this work that biosynthetically site-directed labelled samples, which are normally used to observe long-range correlations, were helpful to confirm the assignment. Another approach to finding assignment starting points in large protein systems is the use of spectroscopic filtering techniques. A filtering block that selects methyl resonances was used to find further assignment starting points for OmpG. Combining all these techniques, it was possible to assign nearly 50% of the observed signals to the OmpG sequence. Using this information, a prediction of the secondary structure elements of OmpG was possible. Most of the calculated motifs were in good agreement with the crystal structures of OmpG. The approaches presented here should be applicable to a wide variety of MPs and MP complexes and should thus open a new avenue for the structural biology of MPs.
The present study shows the steady growth in the dimension and significance of the state's duties of protection as an independent function of fundamental rights. With every advance and development of the modern world, new areas that require statutory regulation continually emerge in society. The state's task is therefore clear: it must implement the principles laid down in the constitution through legislation and continuously revise and improve that legislation. The state is thus called upon to protect individuals both repressively and preventively. The dissertation examines the problem of state duties of protection within the framework of the fundamental rights of the Georgian constitution of 24 August 1995, in comparison with the human rights and fundamental freedoms of the European Convention on Human Rights. The work addresses a fundamental-rights problem that proves particularly important in situations of legal and political upheaval such as the one Georgia, as a successor state of the collapsed Soviet Union, is undergoing. On the way to the dogmatic development of a fundamental-rights duty of protection, the European Convention on Human Rights (ECHR) is used as a kind of guiding model. This is explained by the nature of the ECHR, which presents itself as a kind of constitution for Europe and has been in force in Georgia since 1999. The work also refers to the German doctrine of duties of protection. This reflects the debate that has been conducted in Germany for some 30 years and is still not concluded, but from which remarkable and controversial results can be drawn. 
The work shows that the Georgian constitution provides numerous starting points for state duties of protection, of both a general and a specific nature, which have already been taken up in various ways, above all in the case law of the Georgian Constitutional Court, in part with recourse to statements of the European Convention on Human Rights (ECHR) and of the European Court of Human Rights (ECtHR). Illuminating the area of fundamental-rights duties of protection under the Georgian constitution is important for the relatively young rule of law of a post-Soviet state, in order to initiate an urgently needed debate.
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there is great interest in covering these applications with diode laser emission. Despite modern band gap engineering, not all wavelengths can be accessed with diode laser radiation. Especially in the visible spectral range between 480 nm and 630 nm, no diode laser emission is available yet. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation with extraordinary spatial, temporal, and spectral quality is required to pump frequency conversion. Broad area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They belong to the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized. However, these diode laser structures demand additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to that of a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow stripe emitters realized in close proximity. Due to the overlap of the fields of neighboring emitters or the presence of leaky waves, a strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, in the free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization. 
This leads to power fluctuations, spectral instabilities, and poor beam quality. Thus, it was necessary to study the emission properties of these stripe-arrays to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation with high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose, a comprehensive analysis of the stripe-array devices was carried out here. The physical effects that are the origin of the emission characteristics were investigated theoretically and experimentally. In this context, numerical models could be verified and extended. A good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on mathematical simulations and experimental work, it was possible to design novel external cavities that select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz, and a very good beam quality with M² < 1.5. This is a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion. Furthermore, a substantial improvement over existing concepts was made. In the next step, newly available periodically poled crystals were used for second harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion. 
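In the undepleted-pump approximation, single-pass SHG output grows quadratically with pump power, P_2ω = η_norm · P_ω²; the powers quoted above (1 W pump, >140 mW at 488 nm) would then correspond to a normalized conversion efficiency on the order of 0.14 W⁻¹. A back-of-the-envelope sketch of this scaling (not the thesis's analysis, and only approximate at such high conversion, where pump depletion matters):

```python
def shg_power(p_pump_w, eta_norm_per_w):
    """Second-harmonic output power in the undepleted-pump approximation:
    P_2w = eta_norm * P_w**2, valid for small conversion efficiency."""
    return eta_norm_per_w * p_pump_w ** 2

# Assumed value inferred from the powers quoted in the abstract:
# 0.14 W of blue light from 1 W of pump implies eta_norm ~ 0.14 1/W.
ETA_NORM = 0.14
for p in (0.25, 0.5, 1.0):
    print(f"P_pump = {p:.2f} W -> P_2w = {1e3 * shg_power(p, ETA_NORM):.1f} mW")
```

The quadratic scaling is why the brightness and spectral purity of the pump matter so much: halving the usable pump power quarters the blue output.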
This was feasible because of the improvements made to the infrared stripe-array diode lasers through the development of new physical concepts.
Foreland-basin systems are excellent archives to decipher the feedbacks between surface and tectonic processes in orogens. The sedimentary architecture of a foreland-basin system reflects the balance between tectonic subsidence causing long-term accommodation space and sediment influx corresponding to the efficiency of erosion and mass-redistribution processes. In order to explore the effects of climatic and tectonic forcing in such a system, I investigated the Oligo-Miocene foreland-basin sediments of the southern Alborz mountains, an intracontinental orogen in northern Iran, related to the Arabia-Eurasia continental collision. This work includes absolute dating methods such as ⁴⁰Ar/³⁹Ar and zircon (U-Th)/He thermochronology, magnetostratigraphy, sedimentological analysis, sandstone and conglomerate provenance study, carbon and oxygen isotope analysis, and clay mineralogy study. Results show a systematic correlation between coarsening-upward cycles and sediment accumulation rates in the basin on 10⁵ to 10⁶ yr time scales. During thrust loading phases, the coarse-grained fraction supplied by the uplifting range is stored in the proximal part of the basin (sedimentary facies retrogradation), while fine-grained sediments are deposited in distal sectors. Variations in sediment provenance during these phases of enhanced tectonic activity give evidence for erosional unroofing phases and/or drainage-reorganization events. In addition, enhanced tectonic activity promoted the growth of topography and associated orographic barrier effects, as demonstrated by sedimentologic indicators and the analysis of stable C and O isotopes from calcareous paleosols and lacustrine/palustrine samples. Extensive progradation of coarse-grained deposits occurs during phases of decreased subsidence, when the coarse-grained fraction supplied by the uplifting range cannot be completely stored in the proximal part of the basin. 
In this environment, a reduction in basin subsidence is associated with laterally stacked fluvial channel deposits and is related to intra-foreland uplift, as documented by growth strata, tectonic tilting, and sediment reworking. An increase in sediment accumulation rate associated with the progradation of vertically stacked coarse-grained fluvial channels also occurs. Paleosol O-isotope data show that this increase is related to wetter climatic phases, suggesting that surface processes become more efficient and exhumation rates increase, giving rise to a positive feedback. Furthermore, isotopic and sedimentologic data show that starting from 10-9 Ma, the climate became less arid, with an increase in the seasonality of precipitation. Because important changes were also recorded in the Mediterranean Sea and Asia at that time, the evidence for climatic variability observed in the Alborz mountains most likely reflects changes in Northern Hemisphere atmospheric circulation patterns. This study has additional implications for the evolution of the Alborz mountains and the Arabia-Eurasia continental collision zone. At the orogenic scale, the locus of deformation did not move steadily southward, but has stepped forward and backward since Oligocene time. In particular, from ~17.5 to 6.2 Ma the orogen grew by a combination of frontal accretion and wedge-internal deformation on time scales of ca. 0.7 to 2 m.y. Moreover, the provenance data suggest that prior to 10-9 Ma the shortening direction changed from NW-SE to NNE-SSW, in agreement with structural data. On the scale of the entire collision zone, the evolution of the studied basins and adjacent mountain ranges suggests a new geodynamic model for the evolution of the Arabia-Eurasia continental collision zone. Numerous sedimentary basins in the Alborz mountains and in other locations of the Arabia-Eurasia collision zone record a change from a tensional (transtensional) to a compressional (transpressional) tectonic setting by ~36 Ma. 
I interpret this to reflect the onset of subduction of the stretched Arabian continental lithosphere beneath central Iran, leading to moderate plate coupling and lower- and upper-plate deformation (soft continental collision). The increase in deformation rates in the southern Alborz mountains from ~17.5 Ma suggests that significant upper-plate deformation must have started by the early Miocene, most likely in response to an increase in the degree of plate coupling. I suggest that this was related to the subduction of thicker Arabian continental lithosphere and the consequent onset of hard continental collision. This model reconciles the apparent lag time of 15-20 m.y. between the late Eocene to early Oligocene age of the initial Arabia-Eurasia continental collision and the onset of widespread deformation across the collision zone to the north in early to late Miocene time.
In normal everyday viewing, we perform large eye movements (saccades) and miniature or fixational eye movements. Most of our visual perception occurs while we are fixating; however, our eyes are perpetually in motion. Properties of these fixational eye movements, which are partly controlled by the brainstem, change depending on the task and the visual conditions. Currently, fixational eye movements are poorly understood because they serve two seemingly contradictory functions: gaze stabilization and counteraction of retinal fatigue. In this dissertation, we investigate the spatial and temporal properties of eye-position time series acquired with high spatial and temporal resolution from participants staring at a tiny fixation dot or at a completely dark screen (with the instruction to fixate a remembered stimulus). First, we propose an improved algorithm to separate the slow phases (named drift) and fast phases (named microsaccades) of these movements, which are considered to play different roles in perception. On the basis of this identification, we investigate and compare the temporal scaling properties of the complete time series and of time series from which the microsaccades have been removed. For the time series obtained during fixation on a stimulus, we show that they deviate from Brownian motion: on short time scales, eye movements are governed by persistent behavior, and on longer time scales by anti-persistent behavior. The crossover point between these two regimes is unchanged by the removal of microsaccades but differs between the horizontal and vertical components of the eyes. Further analyses target the properties of the microsaccades, e.g., their rate and amplitude distributions, and we investigate whether microsaccades are triggered dynamically, as a result of earlier events in the drift, or completely at random.
The results obtained using a simple box-count measure contradict the hypothesis of a purely random generation of microsaccades (Poisson process). Second, we set up a model for the slow part of the fixational eye movements. The model is based on a delayed random walk approach within the velocity-related equation, which allows us to use the data to determine control-loop durations; these durations turn out to differ between the vertical and horizontal components of the eye movements. The model is also motivated by the known physiological representation of saccade generation; the difference between horizontal and vertical components concurs with the spatially separated representation of the saccade-generating regions. Furthermore, the control-loop durations in the model suggest an external feedback loop for the horizontal but not for the vertical component, which is consistent with the fact that an internal feedback loop has been identified neurophysiologically only for the vertical component. Finally, we confirmed the scaling properties of the model by semi-analytical calculations. In conclusion, we were able to identify several properties of the different parts of fixational eye movements and propose a model approach that is in accordance with the described neurophysiology and the known limitations of fixational eye movement control.
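The delayed-random-walk idea behind this model can be illustrated numerically. The sketch below is not the dissertation's actual model; it is a minimal one-dimensional walk whose increments receive negative feedback from the position a fixed number of steps (the assumed control-loop duration) in the past, with arbitrary illustrative parameter values. The mean squared displacement (MSD) of such a walk grows super-diffusively (persistent) on short time scales and saturates (anti-persistent) on long ones, mirroring the crossover described above.

```python
import numpy as np

def delayed_random_walk(n_steps, delay, gain=0.1, noise=1.0, seed=0):
    """1-D delayed random walk: each increment combines white noise with
    negative feedback proportional to the position `delay` steps in the past."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        feedback = -gain * x[t - 1 - delay] if t - 1 - delay >= 0 else 0.0
        x[t] = x[t - 1] + feedback + noise * rng.standard_normal()
    return x

def msd(x, max_lag):
    """Mean squared displacement as a function of the time lag (1 .. max_lag-1)."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag)])

x = delayed_random_walk(20000, delay=10)
m = msd(x, 200)
# Short lags: MSD grows faster than one step's variance would suggest
# (persistence); long lags: MSD saturates near twice the stationary
# variance instead of growing linearly (anti-persistence).
```

The crossover lag is set by the feedback delay, which is why fitting such a model to data can be used to read off control-loop durations, separately for the horizontal and vertical components.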
The aim of this thesis is to achieve a deep understanding of the working mechanism of polymer-based solar cells and to improve device performance. Two types of polymer-based solar cells are studied here: all-polymer solar cells comprising macromolecular donors and acceptors based on poly(p-phenylene vinylene) (PPV), and hybrid cells comprising a PPV copolymer in combination with a novel small-molecule electron acceptor. To understand the interplay between morphology and photovoltaic properties in all-polymer devices, I compared the photocurrent characteristics and excited-state properties of bilayer and blend devices with different nano-morphologies, which were fine-tuned by using solvents with different boiling points. The main conclusion from these complementary measurements was that the performance-limiting step is the field-dependent generation of free charge carriers, while bimolecular recombination and charge extraction do not compromise device performance. These findings imply that the proper design of the donor-acceptor heterojunction is of major importance for achieving high photovoltaic efficiencies. Regarding polymer-small-molecule hybrid solar cells, I combined the hole-transporting polymer M3EH-PPV with a novel Vinazene-based electron acceptor. This molecule can be deposited either from solution or by thermal evaporation, allowing a large variety of layer architectures to be realized. I then demonstrated that the layer architecture has a large influence on the photovoltaic properties. Solar cells with very high fill factors of up to 57 % and an open-circuit voltage of 1 V could be achieved by realizing a sharp and well-defined donor-acceptor heterojunction. In the past, fill factors exceeding 50 % had only been observed for polymers in combination with soluble fullerene derivatives or nanocrystalline inorganic semiconductors as the electron-accepting component.
The finding that proper processing of polymer-Vinazene devices leads to similarly high values is a major step towards the design of efficient polymer-based solar cells.
Trying to do two things at once often decreases the performance of one or both tasks compared with performing each task by itself. The present thesis addresses why and under which conditions these dual-task costs emerge and, moreover, whether there are cases in which people are able to process two cognitive tasks at the same time without costs. Four experiments examine the influence of stimulus-response (S-R) compatibility, S-R modality pairings, interindividual differences, and practice on the ability to process two tasks in parallel. The results show that parallel processing is possible. Nevertheless, dual-task costs emerge when the personal processing strategy is serial, when the two tasks have not been practiced together, when the S-R compatibility of both tasks is low (e.g., when a left target has to be responded to with a right key press and, in the other task, an auditorily presented "A" has to be responded to by saying "B"), and when the modality pairings of both tasks are non-standard (i.e., visual-spatial stimuli are responded to vocally, whereas auditory-verbal stimuli are responded to manually). The results are explained in terms of executive-based (S-R compatibility) and content-based (S-R modality pairings) crosstalk between tasks. Finally, an alternative information-processing account with respect to the central stage of response selection (i.e., the translation of the stimulus into the response) is presented.
There are many factors that make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, broad lexical familiarity, a large vocabulary, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to greater interest in the L2's information structure, i.e., the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language offers the means to assign focus to an element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure and the main accent in a sentence is placed on an unimportant word, the result may be inappropriate information transfer within the discourse, and misunderstandings. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across languages, which can also differ in the way they express focus.
In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to make full use of its information-structural benefits. Knowledge of the information-structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means of marking focus in the learners' native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the learners' appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners' ability to communicate effectively in the L2. The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions.
Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words that are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. The test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural, unmanipulated sentences and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners' memory. Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent has been found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3).
Therefore, it is expected that prosodically marked words are recognized better than unmarked words, and that listeners can exploit accent structure for accurate word recognition better in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance between focus conditions is expected in L1 listening (narrow focus > broad focus). The results of Experiment 1 show that words were recognized better in native than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. The results of the experiments with spliced materials suggest that the surrounding prosodic sentence contour, rather than the local prosodic realization of the word, made listeners remember a target word. Prosody indeed seems to direct listeners' attention to the focus of the sentence (see Cutler, 1976). Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage of items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed, in an L2 English production study, a trend towards an advantage of items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length has shown an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003). Therefore, word length was included in the current experiment as a secondary factor and without hypotheses. The results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition.
Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first and novel information later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention. Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners' attention to certain information in an utterance, and the cleft structure as a focus-marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times; on the other, cleft constructions have been found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. The results of Experiment 2 show that the cleft structure is an effective device for marking focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to their reliance on 'safe' subject-prominent SVO structures.
The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when the focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference is interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by clefts did not help the German learners in native or in nonnative word recall. This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964) and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993). Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which triggers interpretations of context. Therefore, lexical focus marking is not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of the particles is expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual usage is expected, resulting in better recall of focused words. The results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles did not reduce word processing times in German L1, English L2, or English L1.
The presence of focus particles is likely to instantiate a complex discourse model that leads the listener to await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 and English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the particle. In word recall, the facilitative effect of focus particles emerged only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of focus-identifying means to achieve a better representation of novel words in memory. In Experiments 2 and 3, the evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This makes accent placement extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient speech processing.
There is evidence that additional context facilitates the processing of complex syntactic structures, whereas a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede word recall, particularly in the L2. Altogether, it seems that focus-marking devices and context can combine to form an advantageous alliance: a substantial gain in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context offering more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for listeners. A methodological consideration concerns the order in which the language conditions are presented to listeners, i.e., L1-L2 or L2-L1. The findings suggest that presentation order could introduce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits, and also the need, of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis support the view that an understanding of information structure can help the learner to discover and categorise the forms and meanings of the L2.
Information structure thus emerges as a valuable resource to advance proficiency in a second language.
The seismicity of the Dead Sea fault zone (DSFZ) during the last two millennia is characterized by a number of damaging and partly devastating earthquakes. These events pose a considerable seismic hazard and seismic risk to Syria, Lebanon, Palestine, Jordan, and Israel. The occurrence rates of large earthquakes along the DSFZ show indications of temporal changes in the long-term view. The aim of this thesis is to find out whether, and how, the occurrence rates of large earthquakes (Mw ≥ 6) in different parts of the DSFZ are time-dependent. The results are applied to probabilistic seismic hazard assessments (PSHA) in the DSFZ and neighboring areas. To this end, four time-dependent statistical models (distributions), namely the Weibull, Gamma, Lognormal, and Brownian Passage Time (BPT) distributions, are applied alongside the exponential distribution (Poisson process) as the classical time-independent model. In order to determine whether the earthquake occurrence rate follows a unimodal or a multimodal form, a nonparametric bootstrap test of multimodality is performed. A modified method of weighted maximum likelihood estimation (MLE) is applied to estimate the parameters of the models. For the multimodal cases, an expectation-maximization (EM) method is used in addition to the MLE method. The best model is selected by two methods: the Bayesian Information Criterion (BIC) and a modified Kolmogorov-Smirnov goodness-of-fit test. Finally, the confidence intervals of the estimated parameters of the candidate models are calculated using bootstrap confidence sets. In this thesis, earthquakes with Mw ≥ 6 along the DSFZ, within a zone about 20 km wide and 29.5° ≤ latitude ≤ 37°, constitute the dataset. The completeness of this dataset is established back to 300 A.D. The DSFZ is divided into three subzones: the southern, the central, and the northern subzone.
The central and the northern subzones have been investigated, but not the southern subzone, because of the lack of sufficient data. The results for the central part of the DSFZ show that the earthquake occurrence rate does not significantly follow a multimodal form. There is also no considerable difference between the time-dependent and time-independent models. Since the time-independent model is easier to interpret, the earthquake occurrence rate in this subzone has been estimated under the exponential (Poisson process) assumption and is considered time-independent, with a rate of 9.72 × 10⁻³ events/year. The northern part of the DSFZ is a special case: the last earthquake there occurred in 1872 (about 137 years ago), whereas the mean recurrence time of Mw ≥ 6 events in this area is about 51 years. Moreover, about 96 percent of the observed inter-event times (the time between two successive earthquakes) in the dataset for this subzone are smaller than 137 years. It is therefore a zone with an overdue earthquake. The results for this subzone verify that the earthquake occurrence rate is strongly time-dependent, especially shortly after an earthquake. A bimodal Weibull-Weibull model has been selected as the best fit for this subzone. The earthquake occurrence rate corresponding to the selected model is a smooth function of time and reveals two clusters within the time after an earthquake. The first cluster begins right after an earthquake, lasts about 80 years, and is explicitly time-dependent. The occurrence rate in this cluster is considerably lower right after an earthquake, increases strongly during the following ten years to a maximum of about 0.024 events/year, and then decreases over the next 70 years to a minimum of about 0.0145 events/year.
The second cluster begins 80 years after an earthquake and lasts until the next earthquake occurs. The occurrence rate corresponding to this cluster increases so slowly that it can be considered almost constant at about 0.015 events/year. The results are applied to calculate the time-dependent PSHA in the northern part of the DSFZ and neighboring areas.
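The core model-fitting and model-selection step described above can be sketched in a few lines. The following is an illustration only: it uses synthetic inter-event times instead of the DSFZ catalogue, plain (unweighted) maximum likelihood instead of the thesis's modified weighted MLE, and only the exponential and (unimodal) Weibull candidates; the Weibull shape is found by bisection on the monotone MLE score equation.

```python
import numpy as np

def exp_loglik(t):
    """MLE log-likelihood of the exponential (Poisson-process) model."""
    lam = 1.0 / t.mean()
    return len(t) * np.log(lam) - lam * t.sum()

def weibull_mle(t, k_lo=0.05, k_hi=20.0, iters=80):
    """Fit a Weibull by MLE: bisection on the score equation for the
    shape k, then the closed-form scale; returns (shape, scale, loglik)."""
    logt = np.log(t)
    def score(k):  # strictly increasing in k; root is the MLE shape
        tk = t ** k
        return (tk * logt).sum() / tk.sum() - 1.0 / k - logt.mean()
    lo, hi = k_lo, k_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if score(mid) > 0 else (mid, hi)
    k = 0.5 * (lo + hi)
    lam = (t ** k).mean() ** (1.0 / k)
    ll = (len(t) * (np.log(k) - k * np.log(lam))
          + (k - 1) * logt.sum() - ((t / lam) ** k).sum())
    return k, lam, ll

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion; lower is better."""
    return n_params * np.log(n_obs) - 2.0 * loglik

# Synthetic quasi-periodic inter-event times (shape 2, scale 50 years):
rng = np.random.default_rng(1)
t = 50.0 * rng.weibull(2.0, 2000)
k, lam, ll_w = weibull_mle(t)
# For such data, BIC prefers the Weibull over the exponential model,
# i.e. a time-dependent recurrence description.
```

A Weibull shape k > 1 corresponds to a hazard rate that grows with the time since the last event, which is the qualitative behavior the thesis reports for the northern subzone.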
This work presents mathematical and computational approaches covering various aspects of metabolic network modelling, especially with regard to the limited availability of detailed kinetic knowledge of reaction rates. It is shown that precise mathematical formulations of problems are needed i) to find appropriate and, if possible, efficient algorithms to solve them, and ii) to determine the quality of the approximate solutions found. Furthermore, some means are introduced to gain insights into dynamic properties of metabolic networks, either directly from the network structure or by additionally incorporating steady-state information. Finally, an approach to identify key reactions in a metabolic network is introduced, which helps to develop simple yet useful kinetic models. The rise of novel techniques renders genome sequencing increasingly fast and cheap. In the near future, this will make it possible to analyze biological networks not only for species but also for individuals. Hence, automatic reconstruction of metabolic networks offers itself as a means for evaluating this huge amount of experimental data. A mathematical formulation as an optimization problem is presented, taking into account existing knowledge and experimental data as well as the probabilistic predictions of various bioinformatical methods. The reconstructed networks are optimized for having large connected components of high accuracy, hence avoiding fragmentation into small isolated subnetworks. The usefulness of this formalism is exemplified by the reconstruction of the sucrose biosynthesis pathway in Chlamydomonas reinhardtii. The problem is shown to be computationally demanding and therefore necessitates efficient approximation algorithms. Next, the problem of minimal nutrient requirements for genome-scale metabolic networks is analyzed.
Given a metabolic network and a set of target metabolites, the inverse scope problem has as its objective to determine a minimal set of metabolites that have to be provided in order to produce the target metabolites. These target metabolites might stem from experimental measurements, and therefore are known to be produced by the metabolic network under study, or they are given as the desired end products of a biotechnological application. The inverse scope problem is shown to be computationally hard to solve. However, I conjecture that the complexity strongly depends on the number of directed cycles within the metabolic network, which might guide the development of efficient approximation algorithms. Assuming mass-action kinetics, chemical reaction network theory (CRNT) allows conclusions about multistability to be drawn directly from the structure of metabolic networks. Although CRNT is originally based on mass-action kinetics, it is shown how further reaction schemes can be incorporated by emulating molecular enzyme mechanisms. CRNT is used to compare several models of the Calvin cycle, which differ in size and level of abstraction. Definite results are obtained for small models, but the available set of theorems and algorithms provided by CRNT cannot be applied to larger models, owing to the computational limitations of the currently available implementations. Given the stoichiometry of a metabolic network together with steady-state fluxes and concentrations, structural kinetic modelling makes it possible to analyze the dynamic behavior of the metabolic network even if the explicit rate equations are not known. In particular, this sampling approach is used to study the stabilizing effects of allosteric regulation in a model of human erythrocytes. Furthermore, the reactions of that model can be ranked according to their impact on the stability of the steady state.
The most important reactions in that respect are identified as hexokinase, phosphofructokinase, and pyruvate kinase, which are known to be highly regulated and almost irreversible. Kinetic modelling approaches using standard rate equations are then compared and evaluated against reference models for erythrocytes and hepatocytes. These simplified kinetic models can acceptably simulate the temporal behavior for small changes around a given steady state, but fail to capture important characteristics for larger changes. The aforementioned approach of ranking reactions according to their influence on stability is used to identify a small number of key reactions. These reactions are modelled in detail, including knowledge about allosteric regulation, while all other reactions are still described by simplified rate equations. The resulting so-called hybrid models can capture the characteristics of the reference models significantly better than the simplified models alone. Such hybrid models might serve as a good starting point for kinetic modelling of genome-scale metabolic networks, as they provide reasonable results in the absence of experimental data on, for instance, allosteric regulation, for the vast majority of enzymatic reactions.
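The structural-kinetic-modelling idea, i.e., probing the stability of a steady state by sampling normalized elasticities instead of specifying explicit rate laws, can be sketched on a toy network. The example below is a hypothetical two-metabolite linear chain, not the erythrocyte model of the thesis; the stoichiometry, fluxes, concentrations, and the [0, 1] sampling interval for substrate elasticities are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy chain:  -> A -> B ->
N = np.array([[ 1, -1,  0],    # stoichiometry: rows = metabolites A, B
              [ 0,  1, -1]])   #                cols = reactions v0, v1, v2
v = np.array([1.0, 1.0, 1.0])  # steady-state fluxes (a chain: all equal)
S = np.array([2.0, 1.0])       # steady-state concentrations

# dependence pattern: reaction j is elastic w.r.t. metabolite i (1) or not (0)
dep = np.array([[0, 0],   # v0: constant input
                [1, 0],   # v1 depends on its substrate A
                [0, 1]])  # v2 depends on its substrate B

def sample_jacobian(rng):
    """One structural-kinetic-modelling sample of the Jacobian:
    J = Lambda @ theta, with Lambda_ij = N_ij * v_j / S_i fixed by the
    steady state and theta (normalized substrate elasticities) drawn
    uniformly from [0, 1] wherever the dependence pattern allows."""
    Lam = N * v[None, :] / S[:, None]
    theta = dep * rng.uniform(0.0, 1.0, size=dep.shape)
    return Lam @ theta

# Fraction of sampled Jacobians whose eigenvalues all have negative real
# part, i.e. fraction of locally stable parameterizations:
rng = np.random.default_rng(0)
stable = sum(np.all(np.linalg.eigvals(sample_jacobian(rng)).real < 0)
             for _ in range(1000))
```

For this simple chain every sample is stable, as expected; in larger networks the stable fraction drops below one, and re-sampling while perturbing one reaction's elasticities at a time is one way to rank reactions by their impact on stability, in the spirit of the approach described above.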
Classical semiconductor physics has continuously improved electronic components such as diodes, light-emitting diodes, solar cells, and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric ones, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Owing to their rather simple device structure, mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize than small-molecule organic devices. Usually, functions such as charge injection and transport are handled by the same material, which thus needs to be highly optimized. The present work contributes to expanding the knowledge of the physical mechanisms determining device performance by analyzing the role of charge injection and transport in the efficiency of blue- and white-emitting devices based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the understanding of how charges drift through the polymer layer to finally reach neutral emissive trap states, and it thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts upon variation of driving voltage and temperature as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately bound species as well as to determine trapped charge densities.
This enables direct comparison with numerical simulations based on the known properties of charge injection, transport, and recombination. The method of charge extraction by linearly increasing voltage (CELIV) is investigated in some detail, correcting errors in the published approach and highlighting the role of the non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of the charge mobility more accurately. Finally, it is shown that neglecting charge recombination has led to experimental results being misinterpreted as a time-dependent mobility relaxation.
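For orientation, the standard CELIV evaluation that this work corrects estimates the mobility from the time at which the extraction-current transient peaks during the voltage ramp. The sketch below implements the widely used textbook expression (the Juška-type estimate, not the corrected expressions derived in the thesis); the parameter values in the usage line are illustrative only.

```python
def celiv_mobility(d, A, t_max, dj, j0):
    """Textbook CELIV mobility estimate:
        mu = 2 d^2 / (3 A t_max^2 (1 + 0.36 dj/j0))
    d     -- film thickness (m)
    A     -- voltage ramp rate of the triangular pulse (V/s)
    t_max -- time of the extraction-current maximum (s)
    dj    -- extraction-peak height above the capacitive step (A/m^2)
    j0    -- capacitive displacement-current step (A/m^2)
    The (1 + 0.36*dj/j0) term is the common correction for the
    redistribution of the electric field during extraction."""
    return 2.0 * d**2 / (3.0 * A * t_max**2 * (1.0 + 0.36 * dj / j0))

# Illustrative numbers: 100 nm film, 10 kV/s ramp, peak at 10 microseconds
mu = celiv_mobility(d=1e-7, A=1e4, t_max=1e-5, dj=0.5, j0=1.0)
```

A later extraction peak (larger `t_max`) implies a lower mobility; it is deviations from this idealized relation under realistic experimental conditions that the improved method addresses.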
Investigation of the recycling of Kaede-fused corticotropin-releasing factor type 1 receptors
(2009)
Activated G-protein-coupled receptors (GPCRs) are rapidly desensitized, internalized, and subsequently either degraded in lysosomes or recycled to the plasma membrane (PM). Besides recycled receptors, newly synthesized receptors also contribute to the resensitization of the cells. The overlap of these two processes complicates the study of receptor recycling. The aim of this work was to develop a technique, based on the photoconvertible fluorescent protein Kaede, that makes it possible to separate recycling from de novo synthesis and to follow the recycling of GPCRs microscopically in real time. The vasopressin 1a receptor V1aR (a recycling receptor), the vasopressin 2 receptor V2R (a degraded receptor), and the corticotropin-releasing factor receptor type 1 (CRF1R) served as model proteins; for the latter, it was to be determined whether it is transported back to the PM after stimulation. Since Kaede is fused to the GPCRs as a fluorescent protein, it was first verified whether it alters the properties of the receptors and whether it is generally suitable for trafficking studies. The previously published tetramerization of Kaede could potentially prevent or complicate its use. Fluorescence correlation spectroscopy showed that Kaede does not tetramerize when fused to a membrane protein. Moreover, in vitro and cell-culture experiments demonstrated that the native and the photoconverted forms of Kaede are equally stable. In addition, Kaede-fused GPCRs showed the same properties as CFP-fused or unfused receptors in colocalization studies as well as in agonist-binding and receptor-activation experiments; only the expression of the Kaede-fused receptors was lower. In parallel, based on the published Kaede structure, an attempt was made to abolish tetramerization of the protein by exchanging interacting amino acids.
However, the introduced mutations caused misfolding of the protein and thus loss of fluorescence. Since it had already been shown that Kaede-fused membrane proteins do not tetramerize and that Kaede does not alter the properties of the fused proteins, a monomerized Kaede was not required for studying receptor recycling. In the second part of the work, the previously unknown recycling behavior of the CRF1R was investigated using Kaede fusion proteins and microscopic assays. To this end, the Kaede-fused receptors were expressed in eukaryotic cells and internalized with agonists. The internalized receptors were selectively photoconverted in endosomes by UV irradiation, and the trafficking of the photoconverted form was then followed. Signals were detected in the PM for both the CRF1R and the V1aR, but not for the V2R, showing that the CRF1R is a recycling receptor. The control receptors behaved as expected in this experiment: the V1aR was transported back to the PM, the V2R was not. These results were confirmed by biochemical and flow-cytometric experiments. Internalization of the CRF1R is clathrin-mediated and proceeds in the presence of β-arrestin. Depending on the stability of the β-arrestin interaction, two receptor classes are distinguished: class A receptors interact transiently with β-arrestin and can recycle, whereas class B receptors form a stable interaction with β-arrestin and are degraded after internalization. Microscopic analyses showed recruitment of β-arrestin to the PM and a transient β-arrestin interaction for the activated CRF1R and V1aR (class A receptors), whereas a stable β-arrestin interaction was observed for the V2R (class B receptor).
These data support the results of the Kaede-based recycling assay and show that the CRF1R is a recycling receptor. It was further investigated whether the CRF1R belongs to the fast- or slow-recycling receptors. Fast-recycling receptors are transported back to the PM directly from early endosomes, whereas slow-recycling receptors pass through the trans-Golgi network (TGN) or recycling endosomes. Rab11 was used as a marker for the TGN and recycling endosomes. Colocalization studies showed that the CRF1R can be assigned to the slow-recycling receptors. In summary, this work demonstrates that Kaede can be used as a fusion partner for membrane proteins to study their trafficking in real time. This establishes, for the first time, a microscopic method that allows recycling receptors to be distinguished from newly synthesized ones. Using this method, it was possible to show that the CRF1R is a recycling receptor.
Visual communication is an efficient way to describe dynamic phenomena. Perceiving information objects precisely and enabling fast access to structured, relevant information requires analysis and presentation methods that are consistent and designed according to the formal minimal principle. Owing to their static system structure and the lack of conceptual optimization, geographic information systems can model dynamic spatial phenomena, i.e. information about space and time, only to a limited extent. The research in this work therefore focuses on three interdisciplinary approaches. The first approach is near-real-time data acquisition managed in a time-oriented manner in geodatabases. The second approach considers analysis and simulation methods that analyze and predict dynamic behavior. The third approach designs visualization methods that depict dynamic processes in particular. The symbolization of the processes adapts, as needed, to the different development phases depending on the course of the process and on the interaction between databases and simulation models. Dynamic aspects can thus be developed and visualized promptly with modular tools, building on proven functions from GIScience. The analysis, overlay, and data-management functions are intended to serve as an alternative to static-map methods in terms of usage and evaluation potential. Important for the temporal component is the linking of new technologies, e.g. simulation and animation, based on a structured temporal database combined with statistical methods. Methodologically, model approaches and visualization techniques are developed and then transferred to the transport domain.
Traffic-dynamic phenomena that cannot be presented coherently and comprehensively are separated into modules within a service-oriented architecture so that they can be presented visually at different levels in space and time. Past developments and future forecasts are modeled and visually analyzed with various computational methods. Coupling a microsimulation (representation of individual vehicles) with a network-controlled macrosimulation (representation of an entire road network) enables scale-independent simulation and visualization of mobility behavior without time-consuming evaluation-model computations. In the future, the visual analysis of spatio-temporal change will be an efficient means of supporting planning decisions by making information available across domains, clearly structured, and purpose-oriented. The added value of visual geo-analyses integrated modularly in one system is the flexible evaluation of measurement data by temporal and spatial attributes.
Vitamin E is still regarded as the most important lipophilic antioxidant in biological membranes. In recent years, however, the focus of vitamin E research has shifted to its non-antioxidative functions. Particular interest centers on α-tocopherol, the most abundant form of vitamin E in mammalian tissues, and its role in the regulation of gene expression. The aim of this dissertation was to investigate the gene-regulatory functions of α-tocopherol and to identify α-tocopherol-sensitive genes in vivo. To this end, mice were fed different amounts of α-tocopherol. Analysis of hepatic gene expression with DNA microarrays identified 387 α-tocopherol-sensitive genes. Functional cluster analyses of the differentially expressed genes revealed an influence of α-tocopherol on cellular transport processes. In particular, genes involved in vesicular transport were mostly upregulated by α-tocopherol. Increased expression of syntaxin 1C, vesicle-associated membrane protein 1, N-ethylmaleimide-sensitive factor, and syntaxin-binding protein 1 was confirmed by real-time PCR. A functional influence of α-tocopherol on vesicular transport was demonstrated with the in vitro β-hexosaminidase assay in the secretory mast cell line RBL-2H3: incubation of the cells with α-tocopherol resulted in a concentration-dependent increase in PMA/ionomycin-stimulated secretion of β-hexosaminidase. Increased expression of selected genes involved in degranulation was not observed, making a direct gene-regulatory effect of α-tocopherol appear unlikely.
Since increased secretion was also found with β-tocopherol but not with Trolox, a hydrophilic vitamin E analogue, it was hypothesized that α-tocopherol might influence degranulation through its localization in the membrane. Incubation of the cells with α-tocopherol resulted in an altered distribution of the ganglioside GM1, a lipid-raft marker. These membrane microdomains are thought to act as platforms for signal transduction, so a possible influence of vitamin E on the recruitment/translocation of signaling proteins into membrane microdomains could explain the observed effects. A role of α-tocopherol in vesicular transport could affect not only its own absorption and transport but might also explain the neuronal dysfunctions that occur in severe vitamin E deficiency. In the second part of the work, the α-tocopherol transfer protein (Ttpa) knockout mouse was used as a genetic model of vitamin E deficiency to analyze the effect of Ttpa on gene expression and on the tissue distribution of α-tocopherol. Ttpa is a cytosolic protein responsible for the selective retention of α-tocopherol in the liver. Ttpa deficiency resulted in very low α-tocopherol concentrations in plasma and extrahepatic tissues. Analysis of α-tocopherol levels in the brain pointed to a role of Ttpa in α-tocopherol uptake into the brain.
Dispersal behavior plays an important role in the geographical distribution and population structure of any given species. An individual's fitness, reproductive and competitive ability, and dispersal behavior can all depend on its age. Age-dependent as well as density-dependent dispersal patterns are common in many bird species. In this thesis, I first present age-dependent breeding ability and natal site fidelity in white storks (Ciconia ciconia), migratory birds breeding in large parts of Europe. I predicted that both the proportion of breeding birds and natal site fidelity increase with age. Since the 1970s, following a steep population decline, a recovery of the white stork population has been observed in many regions of Europe. The increasing density of the white stork population in Eastern Germany, especially after 1983, made it possible to examine density- as well as age-dependent breeding dispersal patterns. I therefore second examine whether young birds show breeding dispersal more often and over longer distances than old birds, and whether the frequency of dispersal events increases with population density, especially in young storks. Third, I present age- and density-dependent preferences in dispersal direction in the study population, asking whether and how the main spring migration direction interacts with the dispersal directions of white storks at different ages and under different population densities. The proportion of breeding individuals increased over the first 22 years of life and then decreased, suggesting senescent decay in aging storks. Young storks were more faithful to their natal sites than old storks, probably because of their innate migratory direction and distance. Young storks dispersed more frequently than old storks in general, but not over longer distances.
The proportion of dispersing individuals increased significantly with increasing population density, indicating density-dependent dispersal behavior in white storks. Moreover, a significant interaction between the age of dispersing birds and year (1980–2006) suggests that, over time, older birds dispersed farther from their previous nest sites owing to increased competition. Both young and old storks dispersed along their spring migration direction, but their directional preferences differed: young storks tended to settle before reaching their previous nest sites (leading to south-eastward dispersal), while old birds tended to keep migrating along the migration direction after reaching their previous nest sites (leading to north-westward dispersal). The cues triggering dispersal events may thus be age-dependent. Changes in dispersal direction over time were also observed: the directional preference became obscured during the second half of the observation period (1993–2006). Increased competition may therefore affect dispersal behavior in storks. I discuss the potential role of age in the observed age-dependent dispersal behavior and of competition in the density-dependent dispersal behavior. This PhD thesis contributes significantly to the understanding of the population structure and geographical distribution of white storks; moreover, the age- and density (competition)-dependent dispersal behavior presented here helps clarify the mechanisms underpinning dispersal in bird species.