Semantic representation, obligatory activation, and verbal production of arithmetic facts
(2006)
The present thesis addresses the representation and processing of arithmetic facts. This domain of semantic knowledge is particularly well suited as a research topic because not only its individual components but also the relations between these components can be defined exceptionally well. Cognitive models can therefore be developed with a degree of precision that is hardly attainable in other domains. Most current models agree in describing the representation of arithmetic facts as an associative, network-like structure in declarative memory. Despite this basic consensus, a number of questions remain open. The studies presented here address such open questions in three areas: (1) the neuroanatomical correlates, (2) neighborhood-consistency effects in verbal production, and (3) the automatic activation of arithmetic facts. In a combined fMRI and behavioral study, for example, we examined the neurofunctional correlates of the acquisition of arithmetic facts in adults. The starting point for this study was the triple-code model of Dehaene and Cohen, as it is the only model that also makes claims about the neuroanatomical correlates of numerical abilities. The triple-code model assumes that retrieving arithmetic facts requires a "perisylvian" region of the left hemisphere, including the basal ganglia and the angular gyrus (Dehaene & Cohen, 1995; Dehaene & Cohen, 1997; Dehaene, Piazza, Pinel, & Cohen, 2003). In the present study, healthy adults intensively practiced complex multiplication problems for about a week, so that answering them became increasingly automatized.
Solving these trained problems should therefore, in contrast to comparable untrained problems, rely increasingly on fact retrieval rather than on the application of procedures and strategies. Untrained problems, by contrast, should place higher demands on executive functions, including working memory. After training, participants answered trained problems considerably faster and more accurately than untrained ones, as expected. They were additionally examined with magnetic resonance imaging. First, it was confirmed that solving multiplication problems is generally supported by a predominantly left-hemispheric network of frontal and parietal areas. The most important result, however, is a shift of brain activation from more frontal activation patterns to more parietal activation, and, within the parietal lobe, from the intraparietal sulcus to the angular gyrus, for trained compared with untrained problems. This again underscores the central role of working memory and planning for complex, untrained calculation. In terms of the triple-code model, the shift within the parietal lobe may indicate a transition from quantity-based calculation (intraparietal sulcus) to automatized fact retrieval (left angular gyrus). Are there neighborhood-consistency effects in the verbal production of arithmetic facts similar to those described in language processing? Such effects are predicted by the current "triangle model" of multiplication-fact representation by Verguts & Fias (2004). According to this model, correct answers should be produced more easily when they share digits with as many semantically close incorrect answers as possible.
Conversely, incorrect answers might then also be produced with greater probability when they share a digit with the correct answer. According to the triangle model, even the classic problem-size effect in simple multiplication (Zbrodoff & Logan, 2004) would be attributable to the consistency of the correct answer with semantically neighboring incorrect answers. In a reanalysis of error data from healthy participants (Campbell, 1997) and from a patient (Domahs, Bartha, & Delazer, 2003), evidence was indeed found for decade-consistency effects in simple multiplication. The participants and the patient produced incorrect answers that shared the decade digit of the correct result significantly more often than otherwise comparable errors. This supports the assumption that the decade and unit digits of two-digit numbers have separate representations, both in multiplication (Verguts & Fias, 2004) and in numerical processing more generally (Nuerk, Weger, & Willmes, 2001; Nuerk & Willmes, 2005). In addition, a regression analysis of the error counts provided the first empirical evidence for the hypothesis that the classic problem-size effect in multiplication-fact retrieval can be traced back to decade-consistency effects: although problem size entered the model as the first predictor, this variable was discarded as soon as a measure of the neighborhood consistency of the correct answer was added to the model. Finally, a further study investigated the automatic activation of multiplication facts in healthy participants using a number-identification task (Galfano, Rusconi, & Umiltà, 2003; LeFevre, Bisanz, & Mrkonjic, 1988; Thibodeau, LeFevre, & Bisanz, 1996).
This study addressed, for the first time, how the automatic activation of the actual multiplication results (Thibodeau et al., 1996) relates to the activation of neighboring incorrect answers (Galfano et al., 2003). Furthermore, presentation at different SOAs was used to clarify the time course of these activations. Overall, the results can be taken as evidence for the existence and the automatic, obligatory activation of a network of arithmetic facts in healthy, educated adults, in which the correct products are more strongly associated with the operands than neighboring products (operand errors). Products of small problems produce stronger interference than products of large problems, and operand errors of large problems produce stronger interference than operand errors of small problems. This activation pattern fits the predictions of Siegler's distribution-of-associations model (Lemaire & Siegler, 1995; Siegler, 1988), in which small problems have a narrowly peaked distribution of associations around the correct result, whereas large problems have a broad distribution. The present work should thus have shed some light on hitherto largely neglected aspects of the representation and retrieval of arithmetic facts: the neural correlates of their acquisition, the consequences of their embedding in the base-10 place-value system, and the specific effects of their associative semantic representation on their automatic activation.

References
Campbell, J. I. (1997). On the relation between skilled performance of simple division and multiplication. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1140-1159.
Dehaene, S. & Cohen, L. (1995). Towards an anatomical and functional model of number processing. Mathematical Cognition, 1, 83-120.
Dehaene, S. & Cohen, L. (1997). Cerebral pathways for calculation: double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex, 33, 219-250.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.
Domahs, F., Bartha, L., & Delazer, M. (2003). Rehabilitation of arithmetic abilities: Different intervention strategies for multiplication. Brain and Language, 87, 165-166.
Galfano, G., Rusconi, E., & Umiltà, C. (2003). Automatic activation of multiplication facts: evidence from the nodes adjacent to the product. Quarterly Journal of Experimental Psychology A, 56, 31-61.
LeFevre, J. A., Bisanz, J., & Mrkonjic, L. (1988). Cognitive arithmetic: evidence for obligatory activation of arithmetic facts. Memory and Cognition, 16, 45-53.
Lemaire, P. & Siegler, R. S. (1995). Four aspects of strategic change: contributions to children's learning of multiplication. Journal of Experimental Psychology: General, 124, 83-97.
Nuerk, H. C., Weger, U., & Willmes, K. (2001). Decade breaks in the mental number line? Putting the tens and units back in different bins. Cognition, 82, B25-B33.
Nuerk, H. C. & Willmes, K. (2005). On the magnitude representations of two-digit numbers. Psychology Science, 47, 52-72.
Siegler, R. S. (1988). Strategy choice procedures and the development of multiplication skill. Journal of Experimental Psychology: General, 117, 258-275.
Thibodeau, M. H., LeFevre, J. A., & Bisanz, J. (1996). The extension of the interference effect to multiplication. Canadian Journal of Experimental Psychology, 50, 393-396.
Verguts, T. & Fias, W. (2004). Neighborhood effects in mental arithmetic. Psychology Science.
Zbrodoff, N. J. & Logan, G. D. (2004). What everyone finds: The problem-size effect. In J. I. D. Campbell (Ed.), Handbook of Mathematical Cognition (pp. 331-345). New York, NY: Psychology Press.
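Siegler's distribution-of-associations account invoked in the abstract above can be illustrated with a small sketch. The candidate answer sets and the "peakedness" values below are purely illustrative assumptions, not data or parameters from the thesis:

```python
import math

def association_strengths(answers, correct, peakedness):
    """Toy distribution of associations: each candidate answer gets a
    weight that falls off with its distance from the correct result;
    'peakedness' controls how sharply the weight falls off."""
    raw = {a: math.exp(-peakedness * abs(a - correct)) for a in answers}
    total = sum(raw.values())
    return {a: w / total for a, w in raw.items()}  # normalize to 1

# Candidates around 3 x 4 = 12 (a "small" problem, peaked distribution)
# and around 7 x 8 = 56 (a "large" problem, flat distribution).
small = association_strengths([8, 9, 12, 15, 16], correct=12, peakedness=1.0)
large = association_strengths([48, 49, 56, 63, 64], correct=56, peakedness=0.2)

# The correct answer dominates the small problem's distribution,
# but competes with neighboring operand errors for the large one —
# one way a problem-size effect can arise from retrieval alone.
print(round(small[12], 2))
print(round(large[56], 2))
```

In this toy picture, operand errors of large problems carry relatively more associative weight than those of small problems, matching the interference pattern reported in the abstract.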
The multidrug and toxic compound extrusion (MATE) family includes hundreds of functionally uncharacterised proteins from bacteria and all eukaryotic kingdoms except the animal kingdom, which function as drug/toxin::Na+ or H+ antiporters. In Arabidopsis thaliana the MATE family comprises 56 members, one of which is NIC2 (Novel Ion Carrier 2). NIC2 was functionally characterised using the heterologous expression systems Escherichia coli and Saccharomyces cerevisiae and the homologous expression system Arabidopsis thaliana. NIC2 was shown to confer resistance in E. coli to chemically diverse compounds such as tetraethylammonium chloride (TEACl), tetramethylammonium chloride (TMACl) and a toxic analogue of indole-3-acetic acid, 5-fluoro-indole-acetic acid (F-IAA). NIC2 may therefore be able to transport a broad range of drugs and toxic compounds. In wild-type yeast, the expression of NIC2 increased tolerance towards lithium and sodium, but not towards potassium and calcium. In A. thaliana, the overexpression of NIC2 led to strong phenotypic changes. Under normal growth conditions, overexpression caused an extremely bushy phenotype with no apical dominance but an enhanced number of lateral flowering shoots. The numbers of rosette leaves and of flowers with accompanying siliques were also much higher than in wild-type plants, and senescence occurred earlier in the transgenic plants. In contrast, RNA interference (RNAi) used to silence NIC2 expression induced early flower stalk development and flowering compared with wild-type plants. In addition, the main flower stalks were not able to grow vertically, but instead had a strong tendency to bend towards the ground.
While NIC2 RNAi seedlings produced many lateral roots growing out from the primary root and the root-shoot junction, NIC2 overexpression seedlings displayed longer primary roots that were characterised by a 2 to 4 h delay in the gravitropic response. In addition, these lines exhibited an enhanced resistance to exogenously applied auxins, i.e. indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA), when compared with wild-type roots. Based on these results, it is suggested that the NIC2 overexpression and NIC2 RNAi phenotypes were due to decreased or increased levels of auxin, respectively. The ProNIC2:GUS fusion gene revealed that NIC2 is expressed in the stele of the elongation zone, in the lateral root cap, in new lateral root primordia, and in pericycle cells of the root system. In the vascular tissue of rosette leaves and inflorescence stems, expression was observed in the xylem parenchyma cells, while in siliques it was found in the vascular tissue as well as in the dehiscence and abscission zones. The organ- and tissue-specific expression sites of NIC2 correlate with the sites of auxin action in mature Arabidopsis plants. Further experiments using ProNIC2:GUS indicated that NIC2 is an auxin-inducible gene. Additionally, during the gravitropic response, when an endogenous auxin gradient forms across the root tip, the GUS activity pattern of the ProNIC2:GUS fusion gene changed markedly at the upper side of the root tip, while that at the lower side stayed unchanged. Finally, at the subcellular level the NIC2-GFP fusion protein localised to the peroxisomes of Nicotiana tabacum BY2 protoplasts. Considering these experimental results, it is proposed that NIC2 functions as an efflux transporter that takes part in auxin homeostasis in plant tissues, probably by removing auxin conjugates from the cytoplasm into peroxisomes.
Molecular photoswitches have attracted much attention lately, mostly because of their possible applications in nanotechnology and their role in biology. One of the most widely studied representatives of photochromic molecules is azobenzene (AB). With light, a static electric field, or tunneling electrons, this species can be "switched" from the flat and energetically more stable trans form into the compact cis form. The back reaction can be induced optically or thermally. Quantum chemical calculations, mostly based on density functional theory, on the AB molecule, AB derivatives, and related systems are presented. All calculations were done for isolated species, however, with implications for recent experiments aiming at the switching of surface-mounted ABs. In some of these experiments, the switching process is assumed to be substrate mediated, by attaching an electron or a hole to the adsorbate and forming short-lived anion or cation resonances. Therefore, cationic and anionic ABs were also calculated in this work. The influence of external electric fields on the potential energy surfaces was also studied. Further, through the type, number, and positioning of various substituent groups, systematic changes in the activation energies and rates of the thermal cis-to-trans isomerization can be enforced. The nature of the transition state for ground-state isomerization was investigated. Applying Eyring's transition state theory, trends in activation energies and rates were predicted and are, where a comparison was possible, in good agreement with experimental data. Further, thermal isomerization was studied in solution, for which a polarizable continuum model was employed. The influence of substitution and environment also leaves its traces on the structural properties of the molecules and on the quantitative appearance of the calculated UV/Vis spectra.
Finally, an explicit treatment of a solid substrate was demonstrated for the conformational switching, by a scanning tunneling microscope, of a 1,5-cyclooctadiene (COD) molecule on a Si(001) surface, treated with a cluster model. First, energetics and potential energy surfaces along the relevant switching coordinates were studied by quantum chemical calculations, followed by the switching dynamics using wave packet methods. We show that, despite the simplicity of the model, our calculations support the switching of adsorbed COD by inelastic electron tunneling at low temperatures.
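The Eyring analysis mentioned above reduces to a one-line rate expression, k = (k_B T / h) exp(-ΔG‡ / RT). A minimal sketch with placeholder activation free energies (not values from the thesis):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(delta_g_kjmol, temperature=298.15):
    """Eyring equation: k = (k_B*T/h) * exp(-dG‡ / (R*T)), in 1/s."""
    return (K_B * temperature / H) * math.exp(
        -delta_g_kjmol * 1e3 / (R * temperature))

# Lowering the barrier by ~5.7 kJ/mol (about RT*ln(10) at room
# temperature, e.g. via a substituent) speeds the thermal cis-to-trans
# reaction up by roughly one order of magnitude.
k_high = eyring_rate(100.0)   # placeholder barrier, kJ/mol
k_low = eyring_rate(94.3)
print(k_low / k_high)
```

This exponential dependence is why modest substituent-induced changes in the barrier translate into large changes in the thermal isomerization rate.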
Seismology, like many scientific fields, e.g., music information retrieval and speech signal processing, is experiencing exponential growth in the amount of data acquired by modern seismological networks. In this thesis, I take advantage of the opportunities offered by "big data" and by the methods developed in the areas of music information retrieval and machine learning to better predict the ground motion generated by earthquakes and to study the properties of the surface layers of the Earth. To better predict seismic ground motions, I propose two approaches based on unsupervised deep learning methods: an autoencoder network and Generative Adversarial Networks. The autoencoder technique explores a massive amount of ground motion data, evaluates the required parameters, and generates synthetic ground motion data in the Fourier amplitude spectra (FAS) domain. This method is tested on two synthetic datasets and one real dataset. The application to the real dataset shows that the substantial information contained within the FAS data can be encoded into a four- to five-dimensional manifold. Consequently, only a few independent parameters are required for efficient ground motion prediction. I also propose a method based on Conditional Generative Adversarial Networks (CGAN) for simulating ground motion records in the time-frequency and time domains. The CGAN generates time-frequency representations conditioned on magnitude, distance, and the shear wave velocity in the top 30 m (VS30). After generating the amplitude of the time-frequency representation with the CGAN model, instead of following conventional methods that pair the amplitude spectra with a random phase spectrum, the phase of the time-frequency representation is recovered by minimizing the misfit between the observed and reconstructed spectrograms. In the second part of this dissertation, I propose two methods for the monitoring and characterization of near-surface materials and for site effect analyses.
I implement an autocorrelation function and an interferometry method to monitor the velocity changes of near-surface materials resulting from the Kumamoto earthquake sequence (Japan, 2016). The seismic velocity changes observed during the strong shaking are due to the non-linear response of the near-surface materials. The results show that the velocity changes lasted for about two months after the Kumamoto mainshock. Furthermore, I used the velocity changes to evaluate the in-situ strain-stress relationship. I also propose a method for assessing the site proxy VS30 using non-invasive analysis. In the proposed method, a dispersion curve of surface waves is inverted to estimate the shear wave velocity of the subsurface. The method is based on Dix-like linear operators, which relate the shear wave velocity to the phase velocity. The proposed method is fast, efficient, and stable. All of the methods presented in this work can be used for processing "big data" in seismology and for the analysis of weak and strong ground motion data, to predict ground shaking, and to analyze site responses by considering potential time dependencies and nonlinearities.
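Velocity-change monitoring of the kind described above is often carried out with the standard "stretching" technique (named here as a generic illustration, not necessarily the exact algorithm of the thesis): the perturbed correlation function is compared with stretched copies of a reference trace, and the stretch factor that maximizes the correlation coefficient estimates the relative velocity change, dv/v ≈ -ε. A minimal sketch on a synthetic waveform, where the signal shape, grid, and true stretch are illustrative assumptions:

```python
import math

def waveform(t):
    # synthetic "coda": a decaying sum of two sinusoids
    return math.exp(-0.5 * t) * (math.sin(40 * t) + 0.5 * math.sin(65 * t))

def sample(f, n=2000, dt=0.002):
    return [f(i * dt) for i in range(n)]

def stretch(series, eps):
    """Resample the series on a time axis stretched by (1 + eps) using
    linear interpolation; a medium slow-down appears as such a stretch."""
    out, n = [], len(series)
    for i in range(n):
        x = i * (1 + eps)          # stretched sample position
        j = int(x)
        if j + 1 >= n:
            out.append(series[-1])  # pad the tail with the last sample
        else:
            frac = x - j
            out.append(series[j] * (1 - frac) + series[j + 1] * frac)
    return out

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

reference = sample(waveform)
perturbed = stretch(reference, 0.004)   # "true" stretch of 0.4 %

# grid search over candidate stretch factors
candidates = [i * 0.0005 for i in range(17)]
best = max(candidates,
           key=lambda e: correlation(stretch(reference, e), perturbed))
print(best)  # recovers a value near 0.004
```

In practice the same grid search is run on successive noisy correlation functions, which is how a two-month relaxation of dv/v after a mainshock can be tracked.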
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other is stabilized by different nanoparticles modified with BSA. A comprehensive study of all synthesis stages as well as of the resulting capsules was carried out, and a plausible mechanism of capsule formation was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment causes cross-linking of the BSA molecules via the formation of intermolecular disulfide bonds. In this thesis, experimental evidence of the ultrasonically induced cross-linking of BSA in the shells of protein-based microcapsules is presented. The concept proposed many years ago by Suslick and co-workers is thereby confirmed by experimental evidence for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed, based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and with shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. Examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surfaces in the capsule shell are not cross-linked by intermolecular disulfide bonds; instead, a Pickering emulsion forms.
The surface modification of composite microcapsules was successfully demonstrated both through pre-modification of the main components and through post-modification of the surface of the finished composite microcapsules. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
August Boeckh (1785–1867) owned an extensive private book collection of impressive breadth. It reflects Boeckh's concept of philology, which encompassed all areas of life, and the marginalia he left in his books provide a readily traceable insight into the philologist's scholarly working process.
Building on the reconstructed Boeckh library, Julia Doborosky examines the dispute between Boeckh and his critic Gottfried Hermann over the shape of the philological discipline, Boeckh's own scholarly work, and his interaction within a scholarly-institutional network. On the basis of these three pillars, she shows the different modes in which Boeckh developed, expounded, and applied his concept of philology, and how his book collection is ever-present in this as a tangible testimony to a history of ideas and disciplines in the humanities.
The enzymes of the sulfotransferase (SULT) gene superfamily conjugate nucleophilic groups of small endogenous compounds and xenobiotics with the negatively charged sulfo group. This increases the polarity of these compounds, prevents their passive permeation across cell membranes, and thereby facilitates their excretion. In certain chemical compounds, however, the sulfo group is a good leaving group. Carbenium or nitrenium ions resulting from its cleavage can react with DNA or other cellular nucleophiles. In mutagenicity test systems, numerous compounds, including food constituents and environmental contaminants, have been activated to mutagens by SULTs. These studies revealed a pronounced substrate specificity even among orthologous SULT forms of different species, as well as interspecies differences in SULT tissue distribution. The target tissues of SULT-mediated carcinogenesis may therefore differ between humans and rodents. To allow the involvement of human SULTs in the bioactivation of xenobiotics to be studied in an animal model, transgenic mouse lines were generated for the cluster of the human SULT1A1 and SULT1A2 genes and for human SULT1B1. Large genomic constructs were used to produce the transgenic lines, containing the SULT genes as well as their potential regulatory sequences, in order to achieve a tissue distribution of protein expression corresponding to the human situation. Three transgenic lines were established for hSULT1A1/hSULT1A2 and three for hSULT1B1. Expression of the human proteins was demonstrated in all lines, and five of the six lines could be bred to homozygosity for the transgenes. In the molecular characterization of the transgenic lines, the chromosomal integration site of the constructs was determined and the copy number per genome was examined.
With the exception of one hSULT1A1/hSULT1A2 transgenic line, in which copies of the construct were integrated into two different chromosomes, all lines had only one transgene integration site. Analysis of the transgene copy number showed that the mouse lines carried between one and about 20 copies of the transgene construct per genome. Protein-biochemical characterization showed that the transgenic lines express the human proteins with a tissue distribution largely corresponding to that in humans. The intensity of the expression detected by immunoblot correlated with the transgene copy number. The cellular and subcellular distribution of transgene expression was examined in liver, kidney, lung, pancreas, small intestine, and colon for one of the hSULT1A1/hSULT1A2 transgenic lines, and in colon for one of the hSULT1B1 transgenic lines. It likewise agreed with the distribution of the corresponding SULT forms in humans. Since the tissue distribution of SULT expression in the generated transgenic lines was comparable to that in humans, making them a suitable model system for studying human SULT-mediated metabolic activation, one of the hSULT1A1/hSULT1A2 transgenic lines was used for two first toxicological studies. The mice were administered chemical compounds for which hSULT1A1/hSULT1A2-mediated bioactivation to mutagens had been demonstrated in vitro. In both studies, the tissue distribution of the resulting DNA adducts was determined as the endpoint of a tissue-specific genotoxic effect. In the first study, 90 mg/kg body weight of 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine, a heterocyclic aromatic amine formed in fried meat, was administered orally to transgenic and wild-type mice.
Eight hours after administration, the transgenic mice showed significantly higher adduct levels than the wild-type mice in liver, lung, kidney, spleen, and colon. In the liver of the transgenic mice, the adduct level was 17-fold higher than in the liver of the wild-type mice. The liver was the organ with the highest DNA adduct level in the transgenic animals but the lowest in the wild-type animals. In the second study (a pilot study with a small number of animals), transgenic and wild-type mice received 19 mg/kg body weight of the polycyclic aromatic hydrocarbon 1-hydroxymethylpyrene, a metabolite of the food and environmental contaminant 1-methylpyrene, intraperitoneally. After 30 minutes, adduct levels up to 25-fold higher than in the wild-type mice were detected in the transgenic mice in liver, kidney, lung, and jejunum. Thus, using a transgenic mouse line generated in this work, it was shown for the first time that expression of human SULT1A1/SULT1A2 indeed affects both the extent and the target tissues of DNA adduct formation in vivo.
Complex emulsions are dispersions of kinetically stabilized multiphasic emulsion droplets composed of two or more immiscible liquids that provide a novel material platform for the generation of active and dynamic soft materials. In recent years, the intrinsic reconfigurable morphological behavior of complex emulsions, which can be attributed to the unique force equilibrium between the interfacial tensions acting at the various interfaces, has become of fundamental and applied interest. In particular, biphasic Janus droplets have been investigated as structural templates for the generation of anisotropic precision objects and dynamic optical elements, or as transducers and signal amplifiers in chemo- and bio-sensing applications. In the present thesis, switchable internal morphological responses of complex droplets triggered by stimuli-induced alterations of the balance of interfacial tensions have been explored as a universal building block for the design of multiresponsive, active, and adaptive liquid colloidal systems. A series of underlying principles and mechanisms that influence the equilibrium of interfacial tensions have been uncovered, which allowed the targeted design of emulsion bodies that can alter their shape, bind to and roll on surfaces, or change their geometrical shape in response to chemical stimuli. Consequently, combining the unique triggerable behavior of Janus droplets with designer surfactants, such as a stimuli-responsive photosurfactant (AzoTAB), resulted, for instance, in shape-changing soft colloids that exhibited jellyfish-inspired buoyant motion, holding great promise for the design of biologically inspired active material architectures and transformable soft robotics.
In situ observations of spherical Janus emulsion droplets using a customized side-view microscopic imaging setup with accompanying pendant drop measurements disclosed the sensitivity regime of the unique chemical-morphological coupling inside complex emulsions and enabled the recording of calibration curves for the extraction of critical parameters of surfactant effectiveness. The resulting "responsive drop" method permitted a convenient and cost-efficient quantification and comparison of the critical micelle concentrations (CMCs) and effectiveness of various cationic, anionic, and nonionic surfactants. Moreover, the method allowed insightful characterization of stimuli-responsive surfactants and monitoring of the impact of inorganic salts on the CMC and surfactant effectiveness of ionic and nonionic surfactants. Droplet functionalization with synthetic crown ether surfactants yielded a synthetically minimal material platform capable of autonomous and reversible adaptation to its chemical environment through different supramolecular host-guest recognition events. Addition of metal or ammonium salts resulted in the uptake of the resulting hydrophobic complexes into the hydrocarbon hemisphere, whereas addition of hydrophilic ammonium compounds such as amino acids or polypeptides resulted in supramolecular assemblies at the hydrocarbon-water interface of the droplets. The multiresponsive material platform enabled interfacial complexation and
thus triggered responses of the droplets to a variety of chemical triggers including metal ions, ammonium compounds, amino acids, antibodies, carbohydrates as well as amino-functionalized solid surfaces.
In the final chapter, the first documented optical logic gates and combinatorial logic circuits based on complex emulsions are presented. More specifically, the unique reconfigurable and multiresponsive properties of complex emulsions were exploited to realize droplet-based logic gates of varying complexity using different stimuli-responsive surfactants in combination with diverse readout methods. In summary, different designs for multiresponsive, active, and adaptive liquid colloidal systems were presented and investigated, enabling the design of novel transformative chemo-intelligent soft material platforms.
Mechanosensation is a fundamental biological process that provides the basis for sensing touch and pain as well as for hearing and proprioception. A special class of ion-channel proteins known as mechanosensitive proteins converts mechanical stimuli into electrochemical signals to mediate this process. Mechanosensitive proteins undergo conformational changes in response to mechanical force, which eventually leads to the opening of the proteins' ion channel. Mammalian mechanosensitive proteins remained a long-sought mystery until 2010, when a family of two proteins, Piezo1 and Piezo2, was identified as mechanosensors [1]. The cryo-EM structures of the Piezo1 and Piezo2 proteins were resolved in recent years and reveal a propeller-shaped homotrimer with 114 transmembrane helices [2, 3, 4, 5]. The protein structures are curved and have been suggested to deform the surrounding membrane into a nano-dome, which mechanically responds to membrane tension resulting from external forces [2]. In this thesis, the conformations of membrane-embedded Piezo1 and Piezo2 proteins and their tension-induced conformational changes are investigated using molecular dynamics simulations. Our coarse-grained molecular dynamics simulations show that the Piezo proteins induce curvature in the surrounding membrane and form a stable protein-membrane nano-dome in the tensionless membrane. These membrane-embedded Piezo proteins, however, adopt substantially less curved conformations in our simulations compared to the cryo-EM structures solved in detergent micelles, which agrees with recent experimental investigations of the overall Piezo nano-dome shape in membrane vesicles [6, 7, 8]. At high membrane tension, the Piezo proteins attain nearly planar conformations in our simulations.
Our systematic investigation of Piezo proteins under different membrane tensions indicates a half-maximal conformational response at membrane tension values rather close to the experimentally suggested values of Piezo activation [9, 10]. In addition, our simulations indicate a widening of the Piezo1 ion channel at high membrane tension, which agrees with the channel widening observed in recent nearly flattened cryo-EM structures of Piezo1 in small membrane vesicles [11]. In contrast, the Piezo2 ion channel does not respond to membrane tension in our simulations. These different responses of the Piezo1 and Piezo2 ion channels in our simulations are in line with patch-clamp experiments, in which Piezo1, but not Piezo2, was shown to be activated by membrane tension alone [12].
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
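The balance between a fixed GPU point budget and on-demand data management that such systems maintain can be illustrated with a greedy, importance-ordered traversal of a spatial hierarchy. The sketch below is a hypothetical minimal model (the `Node` structure and the scalar importance measure are assumptions for illustration, not the thesis' implementation):

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Node:
    points: int                   # number of points stored in this node
    importance: float             # e.g. projected screen-space size
    children: list = field(default_factory=list)

def select_nodes(root, point_budget):
    """Greedy coarse-to-fine traversal: repeatedly refine the currently
    most important node until the GPU point budget is exhausted."""
    selected, used = [], 0
    heap = [(-root.importance, id(root), root)]   # id() breaks ties
    while heap:
        _, _, node = heapq.heappop(heap)
        if used + node.points > point_budget:
            continue              # node does not fit; skip its refinement
        selected.append(node)
        used += node.points
        for child in node.children:
            heapq.heappush(heap, (-child.importance, id(child), child))
    return selected, used
```

In a real renderer, the importance would typically be recomputed per frame from the camera position, so that coarse levels of detail are kept for distant regions while nearby regions are refined first.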
How are trust, consumer attitudes, and behavior related with respect to Fairtrade?
This is the fundamental question addressed in this work. Lea Dirkwinkel analyzes the question using the example of the Fairtrade label, which stands as a symbol for the product certification system of Fairtrade International and is the best-known example of the Fairtrade movement.
The research question is motivated, on the one hand, by the fact that consumers cannot assess the quality of Fairtrade goods and, on the other hand, by the so-called attitude-behavior gap. The attitude-behavior gap describes the cognitive dissonance between positive ethical attitudes and purchase intentions on the one side and actual purchasing behavior on the other, and it contradicts traditional attitude-behavior models, which hold that attitudes determine human behavior. In marketing theory, both of these aspects establish the relevance of trust for the consumption of Fairtrade products, but also of other sustainable goods.
The analysis is based on an online survey and combines conjoint analysis with structural equation modeling. This innovative methodological approach yielded results relevant to both marketing research and practice. On the one hand, the important role of trust for Fairtrade consumption is confirmed; on the other hand, the work explains how trust in Fairtrade takes effect. Trust in the Fairtrade label is the starting point for trust relationships between Fairtrade and consumers and is transferred to the certified products.
The resulting recommendations focus on measures that strengthen trust in Fairtrade labels, e.g., by reducing the number of different labels or by communicating the independence of certification organizations more strongly.
This research addresses the articulation between the anthropological and sociological dimensions of the philosophical anthropology of Helmuth Plessner (1892-1985). It proceeds along three axes. I endeavor (1) to offer a synthesis of Plessner's philosophical anthropology in order (2) to reconstruct the conditions of possibility of the social at the human stage of the organic. The third axis (3), finally, corresponds to the analysis of the structural limits of the social on the basis of its two constitutive dimensions: the individual (ontogenetic, behavioral, and interpersonal limits) and the collective (cultural, intra- and intercultural limits).
Hybridoma technology for the production of monoclonal antibodies enabled a major step in the development of immunoassays for biochemical research and clinical diagnostics. The production of antibodies against low-molecular-weight analytes, haptens, which are typical targets in food and environmental analysis, has also become increasingly important in recent years. In the course of the hybridoma procedure, thousands of antibody-secreting and non-secreting cells are generated. The selection of the few antigen-selective hybridoma cells is one of the most challenging steps in antibody production. Previous selection methods, such as limiting-dilution cloning combined with enzyme-linked immunosorbent assays (ELISAs), do not guarantee monoclonality and allow only a few cell clones to be screened. High-throughput selection methods such as fluorescence-activated cell sorting (FACS), by contrast, enable a very high sample throughput, and single-cell deposition guarantees monoclonality. However, the cell labeling required for this is often harmful to the cells or laborious to generate. Moreover, no labeling method has been known so far that makes it possible to analyze hapten-selective hybridoma cells by flow cytometry and to perform a FACS selection.
For this reason, two cell labeling methods were developed in this work that were intended to make this possible. The membrane-bound antibodies of hybridoma cells were to be immunofluorescence-labeled either directly or indirectly and thereby made accessible for flow cytometry and FACS selection. Direct labeling was performed with a hapten-fluorophore conjugate. For the first time, it allowed the fraction of hapten-selective hybridoma cells in a hybridoma cell line to be assessed. This was demonstrated for two hapten-selective hybridoma cell lines producing antibodies against the hormone 17β-estradiol and the cardenolide digoxigenin. Flow cytometry and ELISAs yielded comparable results. Cells that could be labeled hapten-selectively also secreted hapten-selective antibodies. Furthermore, direct labeling could be used to test two mycotoxin-selective hybridoma cell lines, producing antibodies against aflatoxin and zearalenone, for monoclonality; this is not possible by ELISA. However, the labeling method was suitable only for fixed hybridoma cells. Labeling of living cells could be demonstrated neither by flow cytometry nor by confocal laser scanning microscopy.
This succeeded only with a newly developed indirect immunofluorescence labeling. Here, the cells were first incubated with a hapten-peroxidase conjugate, followed by a fluorophore-labeled anti-HRP antibody conjugate. This was demonstrated for two analytes, the hormone estrone and the antiepileptic drug carbamazepine. The indirect labeling was successfully used to sort carbamazepine-selective hybridoma cells from a fusion batch for monoclonal antibody production. This is the first cell labeling method that enables high-throughput selection of living hybridoma cells from a fusion batch. It is not harmful to the cells and can additionally be used for the selection of hapten-selective plasma cells.
In this work, a sensor system based on thermoresponsive materials is developed using a modular approach. By synthesizing three different key monomers containing either a carboxyl, alkene, or alkyne end group connected by a spacer to the methacrylic polymerizable unit, a flexible copolymerization strategy was set up with oligo(ethylene glycol) methacrylates. This makes it possible to tune the lower critical solution temperature (LCST) of the polymers in aqueous media. The molar masses are variable thanks to an excursion into polymerization in ionic liquids, stretching molar masses from 25 to over 1000 kDa. The systems shown to be effective in aqueous solution could be immobilized on surfaces by copolymerizing photo-crosslinkable units. The immobilized systems were formulated to give different layer thicknesses, swelling ratios, and mesh sizes depending on the demands of the coupling reaction.
The coupling of detector units or model molecules is approached via reactions from the click-chemistry pool, and these reactions are also evaluated for their efficiency under those aspects. The coupling reactions are followed by surface plasmon resonance spectroscopy (SPR) to judge their efficiency. With these tools at hand, Salmonella saccharides could be selectively detected by SPR. Influenza viruses were detected in solution by turbidimetry as well as by a copolymerized solvatochromic dye, which tracks binding via the changes in the polymers' fluorescence caused by the binding event. This effect could also be achieved by utilizing the thermoresponsive behavior. Another demonstrator consists of the detection system bound to a quartz surface, thus allowing virus detection on a solid carrier.
The experiments show the great potential of combining the concepts of thermoresponsive materials and click chemistry to develop technically simple sensors for large biomolecules and viruses.
Rubisco catalyses the first step of CO2 assimilation into plant biomass. Despite its crucial role, it is notorious for its low catalytic rate and its tendency to fix O2 instead of CO2, giving rise to a toxic product that needs to be recycled in a process known as photorespiration. Since almost all our food supply relies on Rubisco, even small improvements in its specificity for CO2 could lead to an improvement of photosynthesis and, ultimately, crop yield. In this work, we attempted to improve photosynthesis by decreasing photorespiration with an artificial CO2-concentrating mechanism (CCM) based on a fusion between Rubisco and a carbonic anhydrase (CA).
A preliminary set of plants contained fusions between one of two CAs, bCA1 and CAH3, and the N- or C-terminus of RbcL, connected by a small flexible linker of 5 amino acids. Subsequently, further fusion proteins were created between the RbcL C-terminus and bCA1/CAH3 with linkers of 14, 23, 32, and 41 amino acids. The transplastomic tobacco plants carrying fusions with bCA1 were able to grow autotrophically even with the shortest linkers, albeit at a low rate, and accumulated very low levels of the fusion protein. On the other hand, plants carrying fusions with CAH3 were autotrophic only with the longer linkers. The longest linker permitted nearly wild-type-like growth of the plants carrying fusions with CAH3 and increased the levels of fusion protein, but also of smaller degradation products.
The fusion of catalytically inactive CAs to RbcL did not cause a different phenotype from the fusions with catalytically active CAs, suggesting that the selected CAs were not active in the fusion with RbcL or that their activity did not have an effect on CO2 assimilation. However, the fusions did not abolish RbcL catalytic activity, as shown by autotrophic growth, gas exchange, and in vitro activity measurements. Furthermore, the Rubisco carboxylation rate and specificity for CO2 were not altered in some of the fusion proteins, suggesting that despite the defect in RbcL folding or assembly caused by the fusions, the addition of 60-150 amino acids to RbcL does not affect its catalytic properties. Instead, most growth defects of the plants carrying RbcL-CA fusions are related to their reduced Rubisco content, likely caused by impaired RbcL folding or assembly. Finally, we found that fusions with the RbcL C-terminus were better tolerated than those with the N-terminus, and that increasing the length of the linker relieved the growth impairment imposed by the fusion to RbcL. Together, the results of this work constitute relevant findings for future Rubisco engineering.
In recent years, open source software (OSS) has become increasingly widespread and popular and has established itself in various application domains. The processes that have evolved in the context of OSS development (OSSD) exhibit partly similar properties and structures across the various OSS development projects, and the entities involved, such as artifacts, roles, or software tools, are largely comparable as well. This motivates the idea of developing a generalizable model that abstracts the common development processes in the OSS context into a transferable model. The discipline of software engineering (SE) has also recognized that the OSSD approach differs considerably from classical (proprietary) SE models in various respects and that these methods therefore require scientific consideration in their own right. Although individual aspects of OSS development have been analyzed and theories about the underlying development methods formulated in various publications, there is as yet no comprehensive description of the typical processes of the OSSD methodology that is based on an empirical study of existing OSS development projects. Since this is a prerequisite for further scientific engagement with OSSD processes, this thesis derives a descriptive model of OSSD processes on the basis of comparative case studies and describes it formally using UML modeling elements. The model generalizes the identified processes, process entities, and software infrastructures of the examined OSSD projects.
It is based on a purpose-built metamodel that identifies the entities to be analyzed and describes the modeling views and elements used for the UML-based description of the development processes. In a further step, the identified model is analyzed in more depth to reveal implications and optimization potential. These include, for example, the insufficient plannability and schedulability of processes, or the observed tendency of OSSD actors to pursue different activities with varying intensity according to subjectively perceived incentives, which leads to the neglect of some processes. Subsequently, optimization goals addressing these shortcomings are presented, and an optimization approach for improving the OSSD model is described. This approach comprises extending the identified roles, introducing new processes or extending already identified ones, and modifying or extending the artifacts of the generalized OSS development model. The presented model extensions serve above all to increase quality assurance and to compensate for neglected processes, in order to improve both software quality and process quality in the OSSD context. Furthermore, software functionalities are described that extend the identified existing software infrastructure and are intended to enable more holistic software support of the OSSD processes. Finally, various application scenarios for the methods of the OSS development model, including in commercial SE, are identified, and an implementation approach based on the OSS GENESIS is presented that can be used to implement and support the OSSD model.
This thesis contains a statistical analysis of the entirety of public enterprises in Germany and their economic situation. A database of about 9,000 public enterprises with almost 500 attributes was available for this study, corresponding essentially to the items of the annual financial statements and various identification attributes (such as company location, economic sector, and legal form). The analysis covers the period from 1998 to 2006. The extremely extensive data basis, the annual financial statement statistics of public enterprises, is a great temptation for a statistician. Methods of descriptive statistics and of financial statement analysis with balance sheet ratios were applied. Especially over the last twenty years, the development of the public enterprise sector has been shaped by processes of change and accompanied by discussions about its performance. The dynamics of the public enterprise sector are evident above all in the diversity of its fields of activity and organizational forms. Therefore, this work first attempted to take stock of the public enterprise sector. A further goal was to describe the economic situation of public enterprises in the last decade, with their performance placed in the foreground. Measuring the performance of public enterprises only via business efficiency is certainly one-sided and insufficient. However, it was easier to operationalize than economic or social efficiency: business efficiency criteria can be readily derived from the annual financial statements. This also makes a comparison with private enterprises possible, within certain limits. The description of the economic situation of public enterprises was structured as an analysis of its individual components (asset, financial, and earnings position).
Overall, the analysis of these components underscores the close interconnection between public enterprises and public budgets. This study is intended to extend research in the field of data-driven statistics, which in academia has often been neglected in recent years in comparison to model-driven statistics.
Filaments are omnipresent features in the solar chromosphere, one of the atmospheric layers of the Sun, which is located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere to the chromosphere, and even to the outermost atmospheric layer, the corona. They are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CMEs), releasing plasma into space, which can also hit the Earth. A special type of filament is the polar crown filament, which forms at the interface between the unipolar field of the poles and flux of opposite magnetic polarity that was transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly with polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which can be approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial-resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca IIK, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope’s alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca IIK data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
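The Zernike-based background correction described above amounts to a linear least-squares fit of low-order modes to the on-disk intensity, followed by subtraction of the fitted background. A minimal numpy sketch, assuming only the first four Zernike modes (piston, two tilts, defocus) rather than the full basis used for ChroTel:

```python
import numpy as np

def zernike_basis(n_pix):
    """First four Zernike polynomials evaluated on the unit disk,
    plus a boolean mask of on-disk pixels."""
    y, x = np.mgrid[-1:1:n_pix * 1j, -1:1:n_pix * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    basis = np.stack([
        np.ones_like(rho),        # Z0: piston
        rho * np.cos(theta),      # Z1: tilt in x
        rho * np.sin(theta),      # Z2: tilt in y
        2 * rho**2 - 1,           # Z3: defocus
    ])
    return basis, mask

def remove_background(image, basis, mask):
    """Least-squares fit of the Zernike modes to the on-disk intensity
    and subtraction of the fitted large-scale background."""
    A = basis[:, mask].T                           # (n_pixels, n_modes)
    coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
    background = np.tensordot(coeffs, basis, axes=1)
    flat = np.where(mask, image - background, 0.0)  # flat residual image
    return flat, coeffs
```

In the actual pipeline, the fitted coefficients can additionally be tracked over time, which is how the temporal evolution linked to the telescope's alt-azimuth mount becomes visible.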
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and their properties can lead to a better understanding of the solar cycle. We use full-disk data of the ChroTel at Observatorio del Teide, Tenerife, Spain, which were taken in three different chromospheric absorption lines (Hα 6563 Å, Ca IIK 3933 Å, and He I 10830 Å), and we create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phase of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e. number and location of filaments, area, and tilt angle for both the maximum and declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a
promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations. ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image-processing tools produces a large number of false-positive detections, whose manual removal takes too much time. Reliable automatic object detection and extraction allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept for a machine-learning application that allows a unified extraction of, for example, filaments from ChroTel data.
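The morphological baseline that such a machine-learning application is meant to improve upon reduces, in its simplest form, to thresholding dark pixels and keeping only sufficiently large connected components. A toy illustration in pure numpy (the threshold and minimum area below are hypothetical values, not those of the ChroTel pipeline):

```python
from collections import deque
import numpy as np

def extract_filaments(halpha, threshold, min_area=5):
    """Segment dark (candidate filament) pixels below `threshold` and
    keep 4-connected components with at least `min_area` pixels."""
    dark = halpha < threshold
    labels = np.zeros(dark.shape, dtype=int)
    regions, next_label = [], 0
    for seed in zip(*np.nonzero(dark)):
        if labels[seed]:
            continue                      # already part of a region
        next_label += 1
        labels[seed] = next_label
        queue, pixels = deque([seed]), [seed]
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < dark.shape[0] and 0 <= nc < dark.shape[1]
                        and dark[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
                    pixels.append((nr, nc))
        if len(pixels) >= min_area:       # area filter vs. noise specks
            regions.append(pixels)
    return regions
```

The area filter is exactly where the false positives mentioned above come from: small dark specks pass or fail it depending on a fixed cutoff, whereas a learned detector can use shape and context instead.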
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity. For the Sun, other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and compare it to other established indicators. We discuss the newly created imaging Hα excess with a view to possible applications in the modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions are between solar maximum and minimum. Furthermore, we investigate whether the active-region coverage fraction or the changing emission strength in the active regions dominates the time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image-processing techniques to extract the positive and negative imaging Hα excess, for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well-established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of solar activity over the course of the cycle. The positive Hα excess is closely correlated with the chromospheric Mg II index. The negative Hα excess, created from dark features like filaments and sunspots, on the other hand, is introduced as a tracer of solar activity for the first time.
We investigated the mean intensity distribution of active regions at solar minimum and maximum and found that the shapes of both distributions are very similar, but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of the Hα excess and the Hα excess of bright features are strongly correlated, which will influence the modelling of stellar and exoplanet atmospheres.
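In simplified form, an imaging Hα excess of the kind studied here can be expressed as the summed relative contrast with respect to a quiet-Sun reference, split into a bright (plage) and a dark (filament, sunspot) contribution. A schematic sketch, assuming the on-disk median as the quiet-Sun level (a deliberate simplification, not the published definition):

```python
import numpy as np

def halpha_excess(filtergram, disk_mask):
    """Split the deviation from a quiet-Sun reference (here: the median
    on-disk intensity) into a positive excess from bright features and a
    negative excess from dark absorption features."""
    on_disk = filtergram[disk_mask]
    quiet_sun = np.median(on_disk)
    contrast = (on_disk - quiet_sun) / quiet_sun   # relative contrast
    positive = contrast[contrast > 0].sum()        # plage contribution
    negative = -contrast[contrast < 0].sum()       # filament/sunspot part
    return positive, negative
```

Tracking these two numbers over a solar cycle yields exactly the kind of one-dimensional activity time series that can be compared to the sunspot number, the F10.7 cm flux, or the Mg II index.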
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca IIK, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments' footpoints. Bright points are a well-studied phenomenon in the photosphere at low latitudes, but they had not yet been studied in the quiet network close to the poles. We examine the size, area, and eccentricity of bright points and find that their morphology is very similar to that of their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their number increases while the filament decays, which indicates that they impact the equilibrium of the cool plasma contained in filaments.
This book deals with a phenomenon that can be described as a constant element in the history of Christianity: new revelations. Despite the canonization of the Bible and the critical gaze of ecclesiastical orthodoxy, there have always been, and still are, people who claim that God the Father, Christ, the Holy Spirit, or other beings (Mary, angels, the deceased) have revealed themselves to them. Scholars of religious studies have largely ignored the topic so far. They have left the domain of Christianity to the theologians and have at most dealt with free-floating esotericism. Theologians, for their part, tend to combat new revelations apologetically. The present study therefore makes an important contribution to the study of the topic from the perspective of religious studies. In the first part of the book, the term "new revelation" is considered from various religious-studies perspectives. First, it is examined what Christian theology understands by "revelation". Then the various terms circulating for the field of extra- and post-biblical revelations are analyzed (new revelation, private revelation, channeling, spiritism, prophecy, and many more). Subsequently, the arguments advanced by adherents of new revelations and by church apologists to assert or contest the legitimacy of new revelations are presented. A survey of religious history shows that new revelations are not so new after all: the claim to have received special revelations can be traced in every epoch of Christianity. After some exponents of the prophetic charisma are introduced as intellectual-historical precursors and kindred spirits of the modern new revelations, these finally become the focus themselves. The disparate field of recipients of new revelations of the 19th and 20th centuries is presented, ordered in a typology, by means of exemplary figures. To break the citation circle that has evidently become established in the discourse, lesser-known new revelators are also introduced. In a kind of deep drilling, these religious-philosophical, semantic, historical, and systematic approaches are exemplified in the second part on the Mexican new revelation "Das Buch des Wahren Lebens" (The Book of True Life). The analysis is not limited to an isolated object, however, but places it in a comparative context: central topoi of the "Book of True Life" (Christology, the doctrine of reincarnation, criticism of the church, and many more) are presented in a synopsis with other new revelations on the one hand and mirrored against orthodox theology on the other. This demonstrates a double difference: the proximity/distance to similar phenomena and the proximity/distance to church Christianity.
In international tort law, frictions repeatedly arise when the applicable law does not correspond to the law of the place of the harmful conduct. In such constellations, the decisive standard of conduct establishing liability is difficult to foresee for the tortfeasor, who will as a rule orient himself towards the law of the place of conduct. The European legislator has therefore created, with Art. 17 Rome II Regulation, a provision that generally mandates that the safety and conduct rules of the place of conduct be "taken into account" irrespective of the applicable law. This "taking into account" of rules foreign to the applicable law is a foreign body in the traditional methodological framework of continental private international law. Against this background, Yannick Diehl examines possibilities for developing a sound dogmatic foundation for this legal construct, which has so far remained largely diffuse.
Services that operate over the Internet are under constant threat of being exposed to fraudulent use. Maintaining good user experience for legitimate users often requires classifying entities as malicious or legitimate in order to initiate countermeasures. For example, an inbound email spam filter decides between spam and non-spam. It can base its decision both on the content of each email and on features that summarize prior emails received from the sending server. In general, discriminative classification methods learn to distinguish positive from negative entities. Each decision for a label may be based on features of the entity and of related entities. When the labels of related entities have strong interdependencies---as can be assumed, e.g., for emails delivered by the same user---classification decisions should not be made independently, and the dependencies should be modeled in the decision function. This thesis addresses the formulation of discriminative classification problems that are tailored to the specific demands of the following three Internet security applications. Theoretical and algorithmic solutions are devised to protect an email service against flooding of user inboxes, to mitigate abusive usage of outbound email servers, and to protect web servers against distributed denial-of-service attacks.
In the application of filtering an inbound email stream for unsolicited emails, utilizing features that go beyond each individual email's content can be valuable. Information about each sending mail server can be aggregated over time and may help in identifying unwanted emails. However, while this information will be available to the deployed email filter, some parts of the training data that are compiled by third-party providers may not contain it. The missing features have to be estimated at training time in order to learn a classification model. In this thesis, an algorithm is derived that learns a decision function which integrates over a distribution of values for each missing entry. The distribution of missing values is a free parameter that is optimized to learn an optimal decision function.
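For a linear decision function, integrating over a distribution of values for a missing entry reduces to evaluating the function at the expected feature vector. A minimal sketch of this idea (the weights, feature values, and missing-value distribution are all invented for illustration; this is not the thesis' actual learning algorithm):

```python
# Hypothetical linear classifier over three features (weights invented).
W = [0.8, -1.2, 0.5]

def decide(x_obs, missing_idx, values, probs):
    """Score an entity while integrating over a distribution of the missing
    feature: for a linear score sum(w_i * x_i), the expectation over the
    missing entry reduces to plugging in its expected value E[z]."""
    x = list(x_obs)
    x[missing_idx] = sum(v * p for v, p in zip(values, probs))  # E[z]
    return sum(wi * xi for wi, xi in zip(W, x))

# Feature 1 is unobserved; assume it takes value 0 or 1 with equal probability.
score = decide([1.0, None, 2.0], 1, [0.0, 1.0], [0.5, 0.5])
```

In the thesis the distribution over missing values is itself optimized during training; here it is simply fixed to illustrate the integration step.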
The outbound stream of emails of an email service provider can be separated by the customer IDs that request delivery. All emails sent by the same ID in the same period of time are related, both in content and in label. Hijacked customer accounts may send batches of unsolicited emails to other email providers, which in turn might blacklist the sender's email servers after detecting incoming spam emails. The risk of being blocked from further delivery depends on the rate of outgoing unwanted emails and on the duration of high spam sending rates. An optimization problem is developed that minimizes the expected cost for the email provider by learning a decision function that assigns a limit on the sending rate to customers based on each customer's email stream.
Identifying attacking IPs during HTTP-level DDoS attacks makes it possible to block those IPs from further accessing the web servers. DDoS attacks are usually carried out by infected clients that are members of the same botnet and show similar traffic patterns. HTTP-level attacks aim at exhausting one or more resources of the web server infrastructure, such as CPU time. If the joint set of attackers cannot increase resource usage close to the maximum capacity, no effect will be experienced by legitimate users of hosted web sites. However, if the additional load raises the computational burden towards the critical range, user experience will degrade until the service may be unavailable altogether. As the loss of missing one attacker depends on the block decisions for other attackers---if most other attackers are detected, not blocking one client will likely not be harmful---a structured output model has to be learned. In this thesis, an algorithm is developed that learns a structured prediction decoder that searches the space of label assignments, guided by a policy.
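The coupled nature of the block decisions can be illustrated with a toy greedy decoder whose score for each client depends on the labels already assigned, standing in for the learned policy described above; all suspicion scores and the coupling penalty are invented:

```python
# Toy greedy decoder over joint block (1) / allow (0) labels. The marginal
# gain of blocking one more client shrinks as more suspected botnet members
# are already blocked, mimicking the coupled loss described in the text.

def greedy_decode(suspicion, coupling=0.1, threshold=0.5):
    """Assign labels client by client, scoring each choice in the context of
    the partial assignment built so far (a stand-in for a learned policy)."""
    labels = []
    for s in suspicion:
        already_blocked = sum(labels)
        gain_of_blocking = s - coupling * already_blocked  # diminishing return
        labels.append(1 if gain_of_blocking > threshold else 0)
    return labels

labels = greedy_decode([0.9, 0.8, 0.7, 0.2])
```

Once most high-suspicion clients are blocked, the residual gain for blocking borderline clients falls below the threshold, so they are left alone, which is the interdependence a structured output model captures.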
Each model is evaluated on real-world data and is compared to reference methods. The results show that modeling each classification problem according to the specific demands of the task improves performance over solutions that do not consider the constraints inherent to an application.
The aim of this dissertation was to determine whether sustainability consciousness influences the consumption of luxury goods and whether various moderators affect this relationship. Sustainability consciousness was represented, based on the consciousness-for-sustainable-consumption model developed by Balderjahn et al. (2013), by ecological, social and economic sustainability, supplemented by consciousness of animal welfare and of local production. The pursuit of social recognition and prestige, materialism, hedonism and tradition consciousness served as moderators. To uncover possible relationships between the various dimensions of sustainability and luxury consumption, a predictor analysis was conducted. Moderator analyses additionally revealed whether the various moderators influenced the individual relationships. The study showed that environmental consciousness, consciousness of frugal consumption, consciousness of debt-free consumption (as part of economic sustainability) and animal welfare consciousness each influence luxury consumption. In addition, a total of seven effects of the various moderator variables on the different relationships between the sustainability dimensions and luxury consumption were uncovered.
Ground-based astronomy is set to employ next-generation telescopes with apertures larger than 25 m in diameter before this decade is out. Such giant telescopes observe their targets through a larger patch of turbulent atmosphere, demanding that most of the instruments behind them must also grow larger to make full use of the collected stellar flux. This linear scaling in size greatly complicates the design of astronomical instrumentation, inflating their cost quadratically. Adaptive optics (AO) is one approach to circumvent this scaling law, but it can only be done to an extent before the cost of the corrective system itself overwhelms that of the instrument or even that of the telescope. One promising technique for miniaturizing the instruments and thus driving down their cost is to replace some, or all, of the free space bulk optics in the optical train with integrated photonic components.
Photonic devices, however, do their work primarily in single-mode waveguides, and the atmospherically distorted starlight must first be efficiently coupled into them if they are to outperform their bulk optic counterparts. This can be achieved in two ways: AO systems can again help control the angular size and motion of seeing disks to the point where they couple efficiently into astrophotonic components, but this is only feasible for the brightest of objects and over limited fields of view. Alternatively, tapered fiber devices known as photonic lanterns — with their ability to convert multimode into single-mode optical fields — can be used to feed speckle patterns into single-mode integrated optics. Photonic lanterns must, nonetheless, conserve the number of degrees of freedom, and the number of output waveguides quickly grows out of control for uncorrected large telescopes. An AO-assisted photonic lantern fed by a partially corrected wavefront presents a compromise that can have a manageable size if the trade-off between the two methods is chosen carefully. This requires end-to-end simulations that take into account all the subsystems upstream of the astrophotonic instrument, i.e., the atmospheric layers, the telescope, the AO system, and the photonic lantern, before a decision can be made on sizing the multiplexed integrated instrument.
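A lossless photonic lantern needs one single-mode output per guided spatial mode of its multimode input, which is why the output count explodes for uncorrected large telescopes. For a step-index fiber this count can be estimated from the V-number; the sketch below uses the common large-V approximation of roughly V²/4 spatial modes per polarization, with hypothetical fiber parameters:

```python
import math

def lantern_outputs(core_radius_um, numerical_aperture, wavelength_um):
    """Estimate the guided spatial modes of a step-index multimode fiber.

    V = 2*pi*a*NA/lambda; for large V, the number of spatial modes per
    polarization is approximately V**2 / 4 (standard textbook estimate).
    """
    v = 2.0 * math.pi * core_radius_um * numerical_aperture / wavelength_um
    return v, int(v * v / 4.0)

# Hypothetical 50-um-core (a = 25 um), NA = 0.1 fiber observed at 1.55 um.
v, n_outputs = lantern_outputs(25.0, 0.1, 1.55)
```

With these example values the lantern would already need on the order of 25 single-mode outputs, illustrating how quickly partial AO correction (which shrinks the effective mode content) pays off in instrument size.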
The numerical models that simulate atmospheric turbulence and AO correction are presented in this work. The physics and models for optical fibers, arrays of waveguides, and photonic lanterns are also provided. The models are on their own useful in understanding the behavior of the individual subsystems involved and are also used together to compute the optimum sizing of photonic lanterns for feeding astrophotonic instruments. Additionally, since photonic lanterns are a relatively new concept, two novel applications are discussed for them later in this thesis: the use of mode-selective photonic lanterns (MSPLs) to reduce the multiplicity of multiplexed integrated instruments and the combination of photonic lanterns with discrete beam combiners (DBCs) to retrieve the modal content in an optical waveguide.
Recent large earthquakes have highlighted the need to improve and develop robust and rapid procedures that properly calculate the magnitude of an earthquake within a short time after its occurrence. The most famous example is the 26 December 2004 Sumatra earthquake, for which the standard procedures adopted at that time by many agencies failed to provide accurate magnitude estimates of this exceptional event in time to launch early enough warnings and an appropriate response. Being related to the radiated seismic energy ES, the energy magnitude ME is a good estimator of the high-frequency content radiated by the source into the seismic waves. However, a procedure to rapidly determine ME (that is to say, within 15 minutes after the earthquake occurrence) was required. Here, a procedure is presented that rapidly provides the energy magnitude ME for shallow earthquakes by analyzing teleseismic P-waves in the distance range 20°–98°. To account for the energy loss experienced by the seismic waves from the source to the receivers, spectral amplitude decay functions obtained from numerical simulations of Green's functions based on the average global model AK135Q are used. The proposed method has been tested using a large global dataset (~1000 earthquakes), and the obtained rapid ME estimates have been compared to other magnitude scales from different agencies. Special emphasis is given to the comparison with the moment magnitude MW, since the latter is very popular and extensively used in common seismological practice. However, it is shown that MW alone provides only limited information about the seismic source properties, and that disaster management organizations would benefit from a combined use of MW and ME in the prompt evaluation of an earthquake's tsunami and shaking potential.
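For context, the energy magnitude is conventionally tied to the radiated seismic energy by the IASPEI-style relation ME = (2/3)·(log10 ES − 4.4), with ES in joules; a minimal sketch (the example energy value is invented, and the thesis' own implementation may differ in detail):

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude from radiated seismic energy ES (in joules),
    using the standard relation ME = (2/3) * (log10(ES) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

me = energy_magnitude(1e15)  # hypothetical ES of 10**15 J
```

The rapid procedure described above amounts to estimating ES from attenuation-corrected teleseismic P-wave spectra and then applying this conversion.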
In addition, since the proposed approach for ME is intended to work without knowledge of the fault plane geometry (often available only hours after an earthquake's occurrence), the suitability of this method is discussed by grouping the analyzed earthquakes according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). No clear trend is found in the rapid ME estimates across the different fault plane solution groups. This is not the case for the ME routinely determined by the U.S. Geological Survey, which uses specific radiation pattern corrections. Further studies are needed to verify the effect of such corrections on ME estimates. Finally, exploiting the redundancy of the information provided by the analyzed dataset, the components of variance in the single-station ME estimates are investigated. The largest component of variance is due to the intra-station (record-to-record) error, although the inter-station (station-to-station) error is not negligible and amounts to several magnitude units for some stations. Moreover, it is shown that the intra-station component of error is not random but depends on the travel path from a source area to a given station. Consequently, empirical corrections may be used to account for the heterogeneities of the real Earth that are not considered in the theoretical calculations of the spectral amplitude decay functions used to correct the recorded data for propagation effects.
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling of the operator and the lattice size we analyze the tunnel effect between different wells, providing sharp asymptotics of the low-lying spectrum. Our proof, inspired by work of B. Helffer, M. Klein and F. Nier in continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
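For reference, in the continuous setting that the discrete construction mirrors, the semiclassical Witten Laplacian on 0-forms takes the standard form (quoted from the general literature on Witten Laplacians, not from this thesis):

```latex
d_{f,h} := e^{-f/h}\,(h\,\mathrm{d})\,e^{f/h}, \qquad
\Delta^{(0)}_{f,h} = d_{f,h}^{*}\, d_{f,h}
                   = -h^{2}\Delta + |\nabla f|^{2} - h\,\Delta f ,
```

whose low-lying eigenvalues are exponentially small in the semiclassical parameter h, with one such eigenvalue associated with each local minimum of f; the tunnel-effect asymptotics of the discrete analogue refine exactly this exponentially small spectrum.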
Variations in the distribution of mass within an orogen may lead to transient sediment storage, which in turn might affect the state of stress and the level of fault activity. Distinguishing between the different forcing mechanisms causing variations in sediment flux and tectonic activity is therefore one of the most challenging tasks in understanding the spatiotemporal evolution of active mountain belts.
The Himalayan mountain belt is one of the most significant Cenozoic collisional mountain belts, formed by the collision between the northward-bound Indian Plate and the Eurasian Plate during the last 55–50 Ma. Ongoing convergence of these two tectonic plates is accommodated by faulting and folding within the arc-shaped Himalayan orogen and by the continued lateral and vertical growth of the Tibetan Plateau and the mountain belts adjacent to the plateau, as well as regions farther north. Growth of the Himalayan orogen is manifested by the development of successive south-vergent thrust systems. These thrust systems divide the orogen into different morphotectonic domains. From north to south, these thrusts are the Main Central Thrust (MCT), the Main Boundary Thrust (MBT) and the Main Frontal Thrust (MFT). The growing topography interacts with moisture-bearing monsoonal winds, which results in pronounced gradients in rainfall, weathering, erosion and sediment transport toward the foreland and beyond. However, a fraction of this sediment is trapped and transiently stored within the intermontane valleys, or 'duns', within the lower-elevation foothills of the range. An improved understanding of the spatiotemporal evolution of these sediment archives could provide a unique opportunity to decipher the triggers of variations in sediment production, delivery and storage in an actively deforming mountain belt, and support efforts to test linkages between sediment volumes in intermontane basins and changes in the shallow crustal stress field. As sediment redistribution in mountain belts on timescales of 10²–10⁴ years can affect cultural characteristics and infrastructure in the intermontane valleys and may even impact the seismotectonics of a mountain belt, there is a heightened interest in understanding sediment-routing processes and the causal relationships between tectonism, climate and topography.
My investigation focuses on this intersection between tectonic processes and superposed climatic and sedimentary processes in the Himalayan orogenic wedge. The study area is the intermontane Kangra Basin in the northwestern Sub-Himalaya, because the characteristics of the different Himalayan morphotectonic provinces are well developed there, the area is part of a region strongly influenced by monsoonal forcing, and the numerous fluvial terraces provide excellent strain markers with which to assess deformation processes within the Himalayan orogenic wedge. In addition, being located in front of the Dhauladhar Range, the region is characterized by pronounced gradients in past and present-day erosion and sediment processes associated with repeatedly changing climatic conditions. In light of these conditions, I analysed climate-driven late Pleistocene-Holocene sediment cycles in this tectonically active region, which may be responsible for triggering the tectonic re-organization within the Himalayan orogenic wedge that has led to out-of-sequence thrusting, at least since the early Holocene.
The Kangra Basin is bounded by the MBT and the Sub-Himalayan Jwalamukhi Thrust (JMT) in the north and south, respectively, and transiently stores sediments derived from the Dhauladhar Range. The basin contains ~200-m-thick conglomerates reflecting two distinct aggradation phases; following aggradation, several fluvial terraces were sculpted into these fan deposits. 10Be CRN surface exposure dating of these terrace levels provides an age of 53.4±3.2 ka for the highest-preserved terrace (AF1); subsequently, this surface was incised until ~15 ka, when the second fan (AF2) began to form. AF2 fan aggradation was superseded by episodic Holocene incision, creating at least four terrace levels. We find a correlation between variations in sediment transport and δ18O records from regions affected by the Indian Summer Monsoon (ISM). During strengthened ISM phases and post-LGM glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux, whereas periods of a weakened ISM coupled with lower sediment supply coincided with renewed re-incision.
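Under the simplest assumptions (constant production rate, zero erosion, no inherited nuclides), a cosmogenic 10Be exposure age inverts the buildup equation N(t) = (P/λ)(1 − e^(−λt)). The sketch below uses the 10Be half-life of ~1.387 Myr; the nuclide concentration and local production rate are invented for illustration and are not the thesis' measured values:

```python
import math

BE10_HALF_LIFE_YR = 1.387e6                 # 10Be half-life in years
DECAY = math.log(2.0) / BE10_HALF_LIFE_YR   # decay constant lambda (1/yr)

def exposure_age(n_atoms_per_g, prod_rate_atoms_per_g_yr):
    """Invert N = (P/lam) * (1 - exp(-lam*t)) for the exposure age t,
    assuming zero erosion and no inherited nuclide inventory."""
    ratio = DECAY * n_atoms_per_g / prod_rate_atoms_per_g_yr
    return -math.log(1.0 - ratio) / DECAY

# Hypothetical sample: 5.3e5 atoms/g at a local production rate of 10 atoms/g/yr.
t_yr = exposure_age(5.3e5, 10.0)
```

With these invented numbers the age comes out in the ~50 ka range, i.e. the same order of magnitude as the AF1 terrace age quoted above; real determinations additionally correct for erosion, shielding and scaling of the production rate.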
However, the evolution of fluvial terraces along Sub-Himalayan streams in the Kangra sector is also forced by tectonic processes. Back-tilted, folded terraces clearly document tectonic activity of the JMT. Offset of one of the terrace levels indicates a shortening rate of 5.6±0.8 to 7.5±1.0 mm/a over the last ~10 ka. Importantly, my study reveals that late Pleistocene/Holocene out-of-sequence thrusting accommodates 40-60% of the total 14±2 mm/a shortening partitioned throughout the Sub-Himalaya. Moreover, the fact that the JMT records shortening at a lower rate over longer timescales hints at out-of-sequence activity within the Sub-Himalaya. Re-activation of the JMT could be related to changes in the tectonic stress field caused by large-scale sediment removal from the basin. I speculate that the deformation processes of the Sub-Himalaya behave according to the predictions of the critical wedge model and assume the following: while >200 m of sediment aggradation would trigger foreland-ward propagation of the deformation front, re-incision and removal of most of the stored sediments (nearly 80-85% of the optimum basin fill) would again create a sub-critical condition of the wedge taper and trigger the retreat of the deformation front.
While tectonism is responsible for the longer-term processes of erosion associated with steepening hillslopes, sediment cycles in this environment are mainly the result of climatic forcing. My new 10Be cosmogenic nuclide exposure dates and a synopsis of previous studies show that the late Pleistocene to Holocene alluvial fills and fluvial terraces studied here record periodic fluctuations of sediment supply and transport capacity on timescales of 10³–10⁵ years. To further evaluate the potential influence of climate change on these fluctuations, I compared the timing of aggradation and incision phases recorded within remnant alluvial fans and terraces with continental climate archives, such as speleothems, in neighboring regions affected by monsoonal precipitation. Together with previously published OSL ages constraining the timing of aggradation, I find a correlation between variations in sediment transport and oxygen-isotope records from regions affected by the Indian Summer Monsoon (ISM). Accordingly, during periods of increased monsoon intensity (transitions from dry and cold to wet and warm periods, MIS 4 to MIS 3 and MIS 2 to MIS 1; MIS = marine isotope stage) and post-Last Glacial Maximum glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux. Conversely, periods of weakened monsoon intensity or lower sediment supply coincide with re-incision of the existing basin fill.
Finally, my study entails part of a low-temperature thermochronology study to assess the youngest exhumation history of the Dhauladhar Range. Zircon (U-Th)/He (ZHe) ages and existing low-temperature data sets (ZHe, apatite fission track (AFT)) across this range, together with 3D thermokinematic modeling (PECUBE), constrain the exhumation and activity of the range-bounding Main Boundary Thrust (MBT) since at least mid-Miocene time. The modeling results indicate mean slip rates on the MBT fault ramp of ~2-3 mm/a since its activation. This has led to the growth of the >5-km-high frontal Dhauladhar Range and to continuous deep-seated exhumation and erosion. The results also provide interesting constraints on deformation patterns and their variation along strike. They point towards the absence of the time-transient 'mid-crustal ramp' in the basal decollement and of duplexing of the Lesser Himalayan sequence, unlike the nearby regions or even the central Nepal domain. A fraction of convergence (~10-15%) is accommodated along the deep-seated MBT ramp, most likely merging into the MHT. This finding is crucial for a rigorous assessment of the overall level of tectonic activity in the Himalayan morphotectonic provinces, as it contradicts recently published geodetic shortening estimates, in which it has been proposed that the total Himalayan shortening in the NW Himalaya is accommodated within the Sub-Himalaya, with no tectonic activity assigned to the MBT.
Arbuscular mycorrhizal (AM) symbiosis has a positive influence on plant P nutrition and growth, but little is known about the molecular mechanisms of the symbiosis' adaptation to different phosphate conditions. The recently described induction of several pri-miR399 transcripts in mycorrhizal shoots and the subsequent accumulation of mature miR399 in mycorrhizal roots indicate that local PHO2 expression must be controlled during symbiosis, presumably in order to sustain AM symbiosis development in spite of locally increased Pi concentrations. A reverse genetic approach used in this study demonstrated that PHO2, and thus the PHR1-miR399-PHO2 signaling pathway, is involved in certain stages of progressive root colonization. In addition, a transcriptomic approach using a split-root system provided a comprehensive insight into the systemic transcriptional changes in mycorrhizal roots and shoots of M. truncatula in response to high phosphate conditions. With regard to the transcriptional responses of the root system, the results indicate that, although colonization is drastically reduced, AM symbiosis is still functional at high Pi concentrations and might still be beneficial to the plant. Additionally, the data suggest that a specific root-borne mycorrhizal signal systemically induces protein synthesis, amino acid metabolism and photosynthesis at low Pi conditions, and that this induction is abolished at high Pi conditions. MiRNAs, such as miR399, are involved in long-distance signaling and are therefore potential systemic signals involved in AM symbiosis. A deep-sequencing approach identified 243 novel miRNAs in the root tissue of M. truncatula. Read-count analysis, qRT-PCR measurements and in situ hybridizations clearly indicated a regulation of miR5229a/b, miR5204, miR160f*, miR160c, miR169 and miR169d*/l*/m*/e.2* during arbuscular mycorrhizal symbiosis. Moreover, miR5204* represses a GRAS TF, which is specifically transcribed in mycorrhizal roots.
Since miR5204* is induced by high Pi, it might represent a further Pi-status-mediating signal besides miR399. This study provides additional evidence that MtNsp2, a key regulator of symbiosis signaling, is regulated, and presumably spatially restricted, by miR171h cleavage. In summary, the repression of mycorrhizal root colonization at high phosphate status is most likely due to a repression of the phosphate starvation responses and the loss of beneficial responses in mycorrhizal shoots. These findings provide a new basis for investigating the regulatory network leading to cellular reprogramming during the interaction between plants, arbuscular mycorrhizal fungi and different phosphate conditions.
Measuring the metabolite profile of plants can be a strong phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbation of the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. In order to allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is performed to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette in different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably.
In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different metabolite pool size pattern across single leaves of one Arabidopsis rosette compared to plants grown under normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
Nanostructured materials are materials having structural features on the scale of nanometers, i.e. 10⁻⁹ m. These structural features can enhance the intrinsic properties of the materials or induce additional properties that are useful for present-day as well as future technologies. One way to synthesize nanostructured materials is by templating techniques. The templating process involves the use of a certain "mould" or "scaffold" to generate the structure. The mould, called the template, can be a single molecule, an assembly of molecules, or a larger object with a structure of its own. The product material can be obtained by filling the space around the template with a "precursor", transforming the precursor into the desired material, and then removing the template. The precursor can be any chemical moiety that is easily transformed into the desired material. Alternatively, the desired material is processed into very tiny bricks, or "nano building blocks" (NBBs), and the product is obtained by arranging the NBBs with the help of a scaffold. We synthesized porous metal oxide spheres, namely TiO2-M2O3 (titanium dioxide–M-oxide, with M = aluminium, gallium or indium) and cerium oxide–zirconium oxide solid solutions. We used porous polymeric beads, of the kind used for chromatographic purposes, as templates. For the synthesis of TiO2-M2O3, metal alkoxides served as precursors. The pores of the beads were filled with the precursor, which was then reacted with water to transform it into an amorphous oxide network. The network was crystallized and the template removed by heat treatment at high temperatures. Porous spheres of CexZr1-xO2 were obtained in a similar way: we first synthesized CexZr1-xO2 nanoparticles and then used them in the templating process.
Additionally, using the same nanoparticles, we synthesized a nanoporous powder via a self-assembly process between a block-copolymer scaffold and the nanoparticles. The morphological and physico-chemical properties of these materials were studied systematically using various analytical techniques. The TiO2-M2O3 materials were tested for the photocatalytic degradation of 2-chlorophenol, a poisonous pollutant, while the CexZr1-xO2 spheres were tested in the methanol steam reforming reaction to generate hydrogen, a fuel for future-generation power sources such as fuel cells. All the materials showed good catalytic performance.
The Central Andes host large reserves of base and precious metals; in 2017, the region accounted for an important share of worldwide mining activity. Three principal types of deposits have been identified and studied: 1) porphyry-type deposits extending from central Chile and Argentina to Bolivia and northern Peru, 2) iron oxide-copper-gold (IOCG) deposits extending from central Peru to central Chile, and 3) epithermal tin polymetallic deposits extending from southern Peru to northern Argentina, which make up a large part of the deposits of the Bolivian Tin Belt (BTB). Deposits in the BTB can be divided into two major types: (1) tin-tungsten-zinc pluton-related polymetallic deposits, and (2) tin-silver-lead-zinc epithermal polymetallic vein deposits.
Mina Pirquitas is a tin-silver-lead-zinc epithermal polymetallic vein deposit located in north-west Argentina that used to be one of the country's most important tin-silver producing mines. It has been interpreted to be part of the BTB, and it shares similar mineral associations with the southern pluton-related BTB epithermal deposits. Two major mineralization events, related to three pulses of magmatic fluids mixed with meteoric water, have been identified. The first event can be divided into two stages: 1) stage I-1, with quartz, pyrite and cassiterite precipitating from fluids at 233-370 °C and salinities between 0 and 7.5 wt%, corresponding to a first pulse of fluids, and 2) stage I-2, with sphalerite and tin-silver-lead-antimony sulfosalts precipitating from fluids at 213-274 °C with salinities up to 10.6 wt%, corresponding to a new pulse of magmatic fluids into the hydrothermal system. Mineralization event II deposited the richest silver ores at Pirquitas. Event II fluid temperatures and salinities range between 190 and 252 °C and between 0.9 and 4.3 wt%, respectively, corresponding to the waning supply of magmatic fluids. Noble gas isotopic compositions and concentrations in ore-hosted fluid inclusions demonstrate a significant contribution of magmatic fluids to the Pirquitas mineralization, although no intrusive rocks are exposed in the mine area.
Lead and sulfur isotopic measurements on ore minerals show that Pirquitas shares a similar signature with the southern pluton-related polymetallic deposits of the BTB. Furthermore, most of the sulfur isotopic values of sulfide and sulfosalt minerals from Pirquitas fall within the field for sulfur derived from igneous rocks, suggesting that the main contribution of sulfur to the hydrothermal system at Pirquitas is likely magma-derived. The precise age of the deposit is still unknown, but a wolframite dating result of 2.9 ± 9.1 Ma and local structural observations suggest that the late mineralization event is younger than 12 Ma.
Following the extinction of dinosaurs, the great adaptive radiation of mammals occurred, giving rise to an astonishing ecological and phenotypic diversity of mammalian species. Even closely related species often inhabit vastly different habitats, where they encounter diverse environmental challenges and are exposed to different evolutionary pressures. As a response, mammals evolved various adaptive phenotypes over time, such as morphological, physiological and behavioural ones. Mammalian genomes vary in their content and structure and this variation represents the molecular mechanism for the long-term evolution of phenotypic variation. However, understanding this molecular basis of adaptive phenotypic variation is usually not straightforward.
The recent development of sequencing technologies and bioinformatics tools has enabled better insight into mammalian genomes. Through these advances, it became clear that mammalian genomes differ more, both within and between species, as a consequence of structural variation than of single-nucleotide differences. The structural variant types investigated in this thesis (deletion, duplication, inversion and insertion) represent changes in the structure of the genome, affecting the size, copy number, orientation and content of DNA sequences. Unlike short variants, structural variants can span multiple genes. They can alter gene dosage and cause notable differences in gene expression and, subsequently, in phenotype. Thus, they can have a more dramatic effect on the fitness (reproductive success) of individuals, the local adaptation of populations, and speciation.
In this thesis, I investigated and evaluated the potential functional effect of structural variants on the genomes of mustelid species. To detect genomic regions associated with phenotypic variation, I assembled the first reference genome of the tayra (Eira barbara), relying on linked-read sequencing technology to achieve the high level of genome completeness required for reliable structural variant discovery. I then set up a bioinformatics pipeline to conduct a comparative genomic analysis and explore variation between mustelid species living in different environments. I found numerous genes associated with species-specific phenotypes, related to diet, body condition and reproduction among others, to be impacted by structural variants.
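The key operation behind flagging genes "impacted by structural variants" is an interval-overlap test between variant coordinates and gene annotations. A minimal sketch of that idea, with invented coordinates and gene names purely for illustration (real pipelines work on VCF and annotation files):

```python
# Minimal sketch: flag structural variants (SVs) that span annotated genes.
# All coordinates and names below are hypothetical illustration data.

def overlaps(a_start, a_end, b_start, b_end):
    """True if two half-open genomic intervals [start, end) overlap."""
    return a_start < b_end and b_start < a_end

def genes_hit_by_sv(sv, genes):
    """Return the genes on the SV's chromosome that the SV overlaps."""
    return [g for g in genes
            if g["chrom"] == sv["chrom"]
            and overlaps(sv["start"], sv["end"], g["start"], g["end"])]

genes = [
    {"name": "geneA", "chrom": "chr1", "start": 1_000, "end": 5_000},
    {"name": "geneB", "chrom": "chr1", "start": 8_000, "end": 12_000},
]
# A deletion spanning the gap between the two genes touches both of them:
deletion = {"type": "DEL", "chrom": "chr1", "start": 4_000, "end": 9_000}

print([g["name"] for g in genes_hit_by_sv(deletion, genes)])  # ['geneA', 'geneB']
```

A single deletion overlapping two genes illustrates why SVs can have larger phenotypic effects than single-nucleotide variants: one event changes the dosage of several genes at once.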
Furthermore, I investigated the effects of artificial selection on structural variants in mice selected for high fertility, increased body mass and high endurance. Through selective breeding of each mouse line, the desired phenotypes have spread within these populations, while maintaining structural variants specific to each line. In comparison to the control line, the litter size has doubled in the fertility lines, individuals in the high body mass lines have become considerably larger, and mice selected for treadmill performance covered substantially more distance. Structural variants were found in higher numbers in these trait-selected lines than in the control line when compared to the mouse reference genome. Moreover, we have found twice as many structural variants spanning protein-coding genes (specific to each line) in trait-selected lines. Several of these variants affect genes associated with selected phenotypic traits. These results imply that structural variation does indeed contribute to the evolution of the selected phenotypes and is heritable.
Finally, I suggest a set of critical metrics of genomic data that should be considered for a stringent structural variation analysis as comparative genomic studies strongly rely on the contiguity and completeness of genome assemblies. Because most of the available data used to represent reference genomes of mammalian species is generated using short-read sequencing technologies, we may have incomplete knowledge of genomic features. Therefore, a cautious structural variation analysis is required to minimize the effect of technical constraints.
The impact of structural variants on the adaptive evolution of mammalian genomes is slowly gaining more attention, but structural variation is still incorporated in only a small number of population studies. In my thesis, I advocate the inclusion of structural variants in studies of genomic diversity for a more comprehensive insight into genomic variation within and between species, and its effect on adaptive evolution.
L'exil comme patrie
(2017)
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved that render the expression of each gene dependent on the expression levels of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism is the regulation of mRNA stability, and all organisms have evolved diverse and powerful mechanisms to achieve this. To better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques make it possible to obtain quantitative data on a global scale by parallel analysis of the decay patterns of many mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on exponential decay. However, this does not account for the complexity of the responsible mechanisms, and as a consequence the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation which provides a general framework for mRNA expression and allows specific degradation mechanisms to be described. We have attempted to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete, and most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region.
In addition, it is often coupled to the level of maturity and the polysome size of an mRNA. Both for S. cerevisiae and for E. coli, our descriptions lead to a considerable improvement in the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much-debated topic from a theoretical perspective. The improved understanding of mRNA degradation can readily be applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or the application of stress. The role of mRNA decay can also be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of the different mechanisms. Once more quantitative data become available, our description allows the role of each mechanism to be elaborated by devising a suitable model.
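The contrast between a constant and an age-dependent degradation rate can be made concrete with a small numerical sketch. The Weibull-type rate used here (and all parameter values) is an assumption chosen for illustration, not the specific model of the thesis: with a rate that grows with molecular age, w(a) = k·alpha·a^(alpha-1), the cumulative hazard is k·t^alpha, so alpha = 1 recovers the exponential case while alpha > 1 yields the non-exponential decay curves discussed above.

```python
# Illustrative sketch (invented parameters, not fitted data): survival of an
# mRNA cohort under a constant degradation rate versus an age-dependent rate.
import math

def survival_constant_rate(t, k):
    """Fraction of mRNAs still intact at time t under constant rate k: exp(-k t)."""
    return math.exp(-k * t)

def survival_aging(t, k, alpha):
    """Fraction intact at time t for rate w(a) = k*alpha*a**(alpha-1),
    i.e. S(t) = exp(-k * t**alpha). alpha = 1 is the exponential case;
    alpha > 1 means older molecules are degraded faster."""
    return math.exp(-k * t ** alpha)

# The aging curve starts flatter, then drops below the exponential curve:
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, round(survival_constant_rate(t, 0.5), 3), round(survival_aging(t, 0.5, 2.0), 3))
```

Fitting a single exponential to the aging curve would misestimate the half-life, which is the kind of interpretation bias the more general theory is designed to remove.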
Paleomagnetic dating of climatic events in Late Quaternary sediments of Lake Baikal (Siberia)
(2004)
Lake Baikal provides an excellent climatic archive for Central Eurasia, as global climatic variations are continuously recorded in its sediments. We performed continuous rock magnetic and paleomagnetic analyses on hemipelagic sequences retrieved from four underwater highs, reaching back 300 ka. The rock magnetic study, combined with TEM, XRD, XRF and geochemical analyses, showed that magnetite of detrital origin dominates the magnetic signal in glacial sediments, whereas interglacial sediments are affected by early diagenesis. HIRM roughly quantifies the hematite and goethite contributions and remains the best proxy for estimating the detrital input to Lake Baikal. Relative paleointensity records of the Earth's magnetic field show a reproducible pattern, which allows for correlation with well-dated reference curves and thus provides an alternative age model for Lake Baikal sediments. Using the paleomagnetic age model, we observed that cooling in the Lake Baikal region and cooling of the sea surface water in the North Atlantic, as recorded in planktonic foraminifera δ18O, are coeval. Benthic δ18O curves, on the other hand, mainly record the global ice volume change, which occurs later than the sea surface temperature change. This proves that a dating bias results from an age model based on the correlation of Lake Baikal sedimentary records with benthic δ18O curves. The compilation of paleomagnetic curves provides a new relative paleointensity curve, "Baikal 200". With a laser-assisted grain size analysis of the detrital input, three facies types reflecting different sedimentary dynamics can be distinguished. (1) Glacial periods are characterised by a high clay content, mostly due to wind activity, and by the occurrence of a coarse fraction (sand) transported over the ice by local winds. This fraction gives evidence for aridity in the hinterland.
(2) At glacial/interglacial transitions, the quantity of silt increases as the moisture increases, reflecting increased sedimentary dynamics. Wind transport and snow trapping are the dominant processes bringing silt to a hemipelagic site. (3) During the climatic optimum of the Eemian, silt size and quantity are minimal due to blanketing of the detrital sources by the vegetation cover.
In an experimental study, the attempt was made to examine the effects of the Reciprocal Teaching method on measures of metacognition and to identify the features of this method that are necessary for the learning gains to occur. Reciprocal Teaching, originally developed by Palincsar and Brown (1984), is a very successful training program designed to improve students' reading comprehension by teaching them reading strategies. In the present study, the tasks and responsibilities assumed by 5th-grade elementary students (N = 55) participating in a 16-session reading strategy training were varied systematically. The students who participated in the training program in one of the three experimental conditions were compared with respect to knowledge and performance measures, both with each other and with control classmates who did not participate in the strategy training (N = 86). Detailed analyses of videotaped sessions provided additional information. The strategy training was most beneficial for measures of knowledge and performance closely related to the content of the training program, namely knowledge about the specific reading strategies taught in training and the application of those strategies. No significant effects were observed for more distal measures (general strategy knowledge, reading comprehension). As for the features of the program, students in the two experimental conditions in which the students were responsible for giving each other feedback on performance (with respect to both content and strategy application) and for guiding the correction of answers outperformed both the experimental condition in which the trainer was responsible for those tasks and the control group.
It is concluded that it is not merely the application of strategies, but the combination of strategy application with the concurrent teaching and learning of metacognitive acquisition procedures (analysis, monitoring, evaluation, and regulation) in an inter-individual way, as the precedent of these processes occurring intra-individually, that seems to be an efficient way of acquiring metacognitive knowledge and skills. It was also shown that strategy training does not necessarily have to include the precise kind of interaction that characterizes the Reciprocal Teaching method. Instead, the tasks of monitoring, evaluating, and regulating other children's learning processes - i.e., tasks associated with the "teacher role" - are the ones that promote the acquisition of metacognitive knowledge and skills. Generally, any strategy training program that not only provides children with plentiful opportunities for practice, but also prompts them to engage in these kinds of metacognitive processes, may help children acquire metacognitive knowledge and skills.
In this thesis we mainly generalize two theorems from Mackaay-Picken and Picken (2002, 2004). In the first paper, Mackaay and Picken show that there is a bijective correspondence between Deligne 2-classes $\xi \in \check{H}^2(M,\mathcal{D}^2)$ and holonomy maps from the second thin-homotopy group $\pi_2^2(M)$ to $U(1)$. In the second, a generalization of this theorem to manifolds with boundaries is given: Picken shows that there is a bijection between Deligne 2-cocycles and a certain variant of 2-dimensional topological quantum field theories. In this thesis we show that these two theorems hold in every dimension. We consider first the holonomy case, and by using simplicial methods we prove that the group of smooth Deligne $d$-classes is isomorphic to the group of smooth holonomy maps from the $d^{th}$ thin-homotopy group $\pi_d^d(M)$ to $U(1)$, provided $M$ is $(d-1)$-connected. We contrast this with a result of Gajer (1999). Gajer showed that Deligne $d$-classes can be reconstructed from a different class of holonomy maps, which include not only holonomies along spheres but also holonomies along general $d$-manifolds in $M$. This approach does not require the manifold $M$ to be $(d-1)$-connected. We show that in the case of flat Deligne $d$-classes, our result differs from Gajer's if $M$ is not $(d-1)$-connected but only $(d-2)$-connected. Stiefel manifolds have this property, and if one applies our theorem to them and compares the result with that of Gajer's theorem, it turns out that our theorem reconstructs too many Deligne classes. This means that our reconstruction theorem cannot do without the extra assumption on the manifold $M$: our reconstruction needs less information about the holonomy of $d$-manifolds in $M$, at the price of assuming $M$ to be $(d-1)$-connected.
We go on to show that the second theorem can also be generalized: by introducing the concept of a Picken-type topological quantum field theory in arbitrary dimensions, we show that every Deligne $d$-cocycle induces such a $d$-dimensional field theory with two special properties, namely thin-invariance and smoothness. We show that any $d$-dimensional topological quantum field theory with these two properties gives rise to a Deligne $d$-cocycle, and we verify that this construction is both surjective and injective, that is, the two groups are isomorphic.
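The first generalization can be condensed into a single statement, using the notation of the abstract; writing $\mathrm{Hom}^{\infty}$ for the group of smooth holonomy maps is an abbreviation assumed here, not taken from the thesis:

```latex
% Reconstruction theorem in the holonomy case: for a smooth manifold M
% that is (d-1)-connected, there is a group isomorphism
\check{H}^d(M, \mathcal{D}^d) \;\cong\; \mathrm{Hom}^{\infty}\bigl(\pi_d^d(M),\, U(1)\bigr),
% where \pi_d^d(M) denotes the d-th thin-homotopy group of M.
```

For $d = 2$ this recovers the Mackaay-Picken correspondence between Deligne 2-classes and $U(1)$-valued holonomy maps.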
Synthetic transcription factors, like natural transcription factors, consist of a DNA-binding domain that specifically attaches to the binding-site sequence upstream of the target gene, and an activation domain that recruits the transcription machinery so that the target gene is expressed. The difference from natural transcription factors is that both the DNA-binding domain and the activation domain can be foreign to the host, allowing artificial metabolic pathways in the host to be induced, mostly chemically. The optogenetic synthetic transcription factors developed here go one step further. The DNA-binding domain is no longer coupled to the activation domain but to the blue-light photoreceptor CRY2, while the activation domain was fused to its interaction partner CIB1. Under blue-light irradiation, CRY2 and CIB1 dimerize, and with them the two domains, so that a functional transcription factor is formed. This system was genomically integrated into Saccharomyces cerevisiae. The constructed system was verified using the reporter yEGFP, which could be detected by flow cytometry. It could be shown that yEGFP expression can be modulated by emitting blue-light pulses of different lengths and by varying the DNA-binding domain, the activation domain, or the number of binding sites to which the DNA-binding domain attaches. To make the system attractive for industrial applications, it was scaled up from deep-well to photobioreactor scale. Moreover, the blue-light system proved functional both in the laboratory strain YPH500 and in the industrially widely used yeast strain CEN.PK. Furthermore, an industrially relevant protein could also be expressed using the verified system.
Finally, in this work the established blue-light system was successfully combined with a red-light system, which had not been described before.
Explaining change in flood hazard in the Mekong River: the hypothesis of nonstationary variance
(2013)
Traditional organizations are strongly pushed by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation, an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified along with their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking are developed. The impact of the platform economy is demonstrated using the example of market entry by Bigtech. Role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Research-based learning and the digital transformation are two of the most important influences on the development of higher-education didactics in the German-speaking world. While research-based learning, as a normative theory, describes what ought to be done, digital tools, old and new, determine what can be done in many areas.
In the present work, a process model is proposed that attempts to systematize research-based learning with respect to interactive, group-based processes. Based on the developed model, a software prototype was implemented that can accompany the entire research process. Group formation, feedback and reflection processes, and peer assessment are supported with educational technologies. The developments were employed in a qualitative experiment to gain systemic knowledge about the possibilities and limits of digital support for research-based learning.
Since the golden era of antibiotics, natural products have been of ever-growing interest to both basic research and the applied sciences, as they are the main source of new bioactive compounds delivering lead structures for new pharmaceuticals with potent antibiotic, anti-inflammatory or anti-cancer activities. Alongside the technological advances in high-throughput genome sequencing and the better understanding of the general organization of the modular biosynthetic assembly lines of secondary metabolites, there has also been a shift from wet-lab screening of active cell extracts towards algorithm-based in silico screening for new natural product biosynthesis gene clusters (BGCs). Although the increasing availability of full genome sequences has revealed that non-ribosomal peptide synthetases (NRPS), polyketide synthases (PKS) and ribosomally synthesized and post-translationally modified peptides (RiPPs) can be found in all three kingdoms of life, certain phyla such as actinobacteria and cyanobacteria show a very high density of these secondary metabolite BGCs.
The facultative symbiotic, N2-fixing model organism N. punctiforme PCC73102 is a terrestrial type IV cyanobacterium that not only dedicates a very large fraction of its genome to secondary metabolite production but is also amenable to genetic modification. AntiSMASH analysis of the genome showed that sixteen potential secondary metabolite BGCs are encoded in N. punctiforme, but until now only two compounds had been assigned to their respective BGCs, leaving the remaining fourteen orphan. This makes the organism a perfect subject for the establishment of a novel combinatorial genomic mining approach for the detection of new natural products.
In the course of this study, a combinatorial approach of genomic mining, independent monitoring techniques and alteration of cultivation conditions led to new insights into cyanobacterial natural product biosynthesis and ultimately to the description of a novel compound produced by N. punctiforme. With the generation and investigation of a reporter strain library consisting of CFP-producing transcriptional reporter mutants for every predicted secondary metabolite BGC of N. punctiforme, it could be shown that natural product expression is in fact not silent for all those BGCs for which no compound can be detected. Instead, several distinct expression patterns could be described, highlighting that secondary metabolite production is under tight regulation and that only a minor fraction of these BGCs is in fact silent under standard laboratory conditions. Furthermore, increasing light intensity and carbon dioxide availability and cultivating N. punctiforme to very high cell densities had a tremendous impact on the overall metabolic activity of the organism. Investigation of extracts from high-density cultivated cells ultimately led to the detection of a so far undescribed set of microviridins with unusual extended peptide sequences, named Microviridin N3 – N9. Both cultivation of the transcriptional reporter mutants and RT-qPCR-based detection of secondary metabolite BGC transcription levels revealed that in fact 50% of N. punctiforme's natural product BGCs are upregulated under high cell density conditions. In contrast to this very broad response, co-cultivation of N. punctiforme in chemical or physical contact with an N-deprived host plant (Blasia pusilla) led to a very specific upregulation of two natural product BGCs, namely RIPP3 and RIPP4. Although this response could be confirmed by various independent monitoring techniques and considerable analytical effort was spent, no compound could be assigned to either of these BGCs.
This study is the first in-depth systematic investigation of a cyanobacterial secondary metabolome by a combinatorial approach of genome mining and independent monitoring techniques, and it can serve as a new strategic approach to gain further insight into natural product synthesis in various organisms. Although there are single well-described examples of secondary metabolites, such as the cell differentiation factor PatS in Anabaena sp. strain PCC 7120, the level and extent of regulation observed in this study is unprecedented, and understanding these mechanisms might be the key to streamlining natural product discovery. However, the results of this study also highlight that induction of secondary metabolite BGCs is not the real challenge. Instead, the new insights point towards analytical issues as a severe hurdle, and finding reliable strategies to overcome these problems may well drive natural product discovery.
In the scientific literature, in business practice and in public debate, the importance of employees for company success is again increasingly emphasized and discussed. Companies that purposefully employ suitable management strategies for dealing with their employees are characterized as more successful in terms of their economic value creation. Especially in the field of human resources management, first evidence can be found that makes it possible to causally attribute the economic success of companies to individual HR management strategies. The aim of our own studies in the IT and software industry was to examine company success on the basis of economic performance measures and the subjective experience of employees in small and medium-sized software companies, with a particular focus on human resources management and corporate culture.
The author examines the cultural identity development of Oromo-Americans in Minnesota, an ethnic group originally located within the national borders of Ethiopia. Earlier studies on language and cultural identity have shown that the degree of ethnic orientation of minorities commonly decreases from generation to generation. Yet oppression and a visible minority status were identified as factors delaying the process of de-ethnicization. Given that Oromos fled persecution in Ethiopia and are confronted with the ramifications of a visible minority status in the U.S., it can be expected that they have retained strong ties to their ethnic culture. This study, however, came to a more complex and theory-building result.
We live in an aging society. The change in demographic structures poses a number of challenges, including an increase in age-associated diseases. Delirium, dementia, and depression are considered to be of particular interest in the field of aging and mental health. A common theory regarding healthy aging and mental health is that the highest satisfaction and best performance are achieved when a person's abilities match the demands of their environment. In this context, the person's environment includes both the physical and the social environment. Based on this assumption, this dissertation focuses on the investigation of non-pharmacological interventions that modify environmental factors in order to facilitate the prevention and treatment of mental disorders in older patients and their caregivers. The first part of this dissertation consists of two publications and deals with the prevention of postoperative delirium in elderly patients. The PAWEL study investigated the use of a multimodal, non-pharmacological intervention in the routine care of patients aged 70 years or older undergoing elective surgery. The intervention included an interdepartmental delirium prevention team, daily use of seven manualized "best practice" procedures, structured staff training on delirium, and the adaptation of the hospital environment to the patients' needs. The second part of the dissertation used a meta-analysis to investigate whether technology-based interventions are a suitable form of support for informal caregivers of people with dementia. Subgroup analyses were conducted to examine the effect of different types of technology on caregiver burden and depressive symptoms. The following main results were found: The PAWEL study showed that the use of a multimodal, non-pharmacological intervention resulted in a significantly lower incidence rate of postoperative delirium and fewer days with delirium in the intervention group compared to the control group.
However, this difference could not be observed in the group of patients undergoing elective cardiac surgery. The results of the meta-analysis showed that technology-based interventions offer a promising alternative to traditional “face-to-face” services. Significant effect sizes could be found in relation to both the burden and the depressive symptoms of caregiving relatives. These results provide further important information on the significant impact of non-pharmacological interventions that modify environmental factors on mental health, and support the consideration of such interventions in the prevention and treatment of mental disorders in both older patients and their caregivers.
Durch Zeit und Raum
(2014)
In today's information society, employees are monitored ever more frequently, more precisely and thus more intensively. Modern technology offers employers qualitatively and quantitatively ever better control mechanisms. The various surveillance methods are frequently applied covertly, which severely impairs the employee's personality rights and often takes place in a legal grey area.
Workplace surveillance and data protection have been at the center of legal debate in Germany, particularly since the GDPR came into force. In contrast to German law, the discussion of workplace surveillance and employee data protection in Turkey has not yet found its deserved place, despite the provision of Art. 419 of the Turkish Code of Obligations (tOR) and the adoption of the Turkish Data Protection Act in 2016.
The author takes a comparative legal position on the central disputes surrounding covert workplace surveillance in Germany and Turkey. Her aim is, on the one hand, to strengthen awareness of personality rights and data protection in the workplace in both countries and, on the other, to determine whether the German approach can offer the Turkish legislature alternative solutions. The investigation focuses on the covert surveillance methods most common in practice: covert video surveillance, surveillance by private detectives, location tracking via GPS receivers, and e-mail monitoring.
With the rise of electronic integration between organizations, the need for a precise specification of interaction behavior increases. Information systems, replacing interaction previously carried out by humans via phone, faxes and emails, require a precise specification for handling all possible situations. Such interaction behavior is described in process choreographies. Choreographies enumerate the roles involved, the allowed interactions, the message contents and the behavioral dependencies between interactions. Choreographies serve as interaction contract and are the starting point for adapting existing business processes and systems or for implementing new software components. As a thorough analysis and comparison of choreography modeling languages is missing in the literature, this thesis introduces a requirements framework for choreography languages and uses it for comparing current choreography languages. Language proposals for overcoming the limitations are given for choreography modeling on the conceptual and on the technical level. Using an interconnection modeling style, behavioral dependencies are defined on a per-role basis and different roles are interconnected using message flow. This thesis reveals a number of modeling "anti-patterns" for interconnection modeling, motivating further investigations on choreography languages following the interaction modeling style. Here, interactions are seen as atomic building blocks and the behavioral dependencies between them are defined globally. Two novel language proposals are put forward for this modeling style which have already influenced industrial standardization initiatives. While avoiding many of the pitfalls of interconnection modeling, new anomalies can arise in interaction models. A choreography might not be realizable, i.e. there does not exist a set of interacting roles that collectively realize the specified behavior. This thesis investigates different dimensions of realizability.
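One classic source of unrealizability in interaction models can be illustrated with a small sketch. In a sequence of interactions, a role can only enforce "my interaction comes next" if it took part in the preceding interaction; otherwise the specified ordering has no local enforcer. The checker and the buyer/seller/shipper example below are illustrative assumptions, not taken from the thesis:

```python
# Illustrative sketch (not the thesis's algorithm): detect steps in an
# interaction-model sequence whose sender did not participate in the
# preceding interaction and therefore cannot locally enforce the ordering.

def unenforceable_steps(sequence):
    """sequence: list of (sender, receiver, message) tuples in the
    globally specified order. Returns the interactions whose sender
    took no part in the previous interaction."""
    problems = []
    for prev, curr in zip(sequence, sequence[1:]):
        sender = curr[0]
        if sender not in (prev[0], prev[1]):
            problems.append(curr)  # sender cannot know when to start
    return problems

# Hypothetical choreography: the shipper must act only after the seller's
# confirmation, but the shipper never observes that confirmation.
choreo = [
    ("buyer", "seller", "order"),
    ("seller", "buyer", "confirm"),
    ("shipper", "buyer", "delivery"),  # shipper absent from previous step
]
print(unenforceable_steps(choreo))
```

A choreography language or tool would either reject such a sequence or require an additional synchronization message that makes the shipper a participant of the preceding step.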
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. These examined isometric muscle actions in healthy humans from a basic physiological perspective (oxygen and blood supply) as well as possibilities of distinguishing them. It includes a novel approach to measuring a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood by adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). The latter is mainly of diagnostic interest, as it underlies the measurement of the maximal voluntary isometric contraction (MVIC), a gold standard in which a person pulls on or pushes against an insurmountable resistance. This pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was pursued in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system and designed to measure the Adaptive Force (AF) of the elbow extensor muscles. The AF quantifies the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. First, it was asked whether these parameters can be reliably assessed with the new device (publication 4). Subsequently, the main research question was investigated: Is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both parts of the research program addressed the sub-question of how the results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: under load, SvO2 and rHb always settled into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In type II, by contrast, their progression over time was partly inverse. The inverse behavior probably depends on the level of deoxygenation, since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the observed homeostatic steady states seem to conflict with the concept of mechanically compressed capillaries and, consequently, with a restricted blood flow. The anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow could be maintained. HIMA and PIMA did not differ regarding oxygenation or allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight-holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). Altered neuromuscular control might offer a speculative explanation for this result. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation, irrespective of the performed task (HIMA, PIMA or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. With the new device mentioned above, different force capacities were assessed in four trials on each of two days. Among the AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, the maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by a high test-retest reliability. Despite substantial correlations between force variables, AFisomax differed significantly from MVIC and AFmax, which was identical with AFeccmax in almost all cases. Moreover, AFisomax revealed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another possibility for quantifying AFisomax: a handheld device applied in combination with a manual muscle test. This assessment offers a more practical approach for clinical purposes.
Cellulose derived polymers
(2019)
Plastics such as polyethylene, polypropylene, and polyethylene terephthalate are part of our everyday lives in the form of packaging, household goods, electrical insulation, etc. These polymers are non-degradable and create many environmental problems and public health concerns. Additionally, they are produced from finite fossil resources. Given the continuous depletion of these limited resources, it is important to turn to renewable feedstocks and, ideally, to polymers that biodegrade. Although many bio-based polymers are known, such as polylactic acid, polybutylene succinate adipate or polybutylene succinate, none has yet shown the promise of replacing conventional polymers like polyethylene, polypropylene and polyethylene terephthalate. Cellulose is one of the most abundant renewable resources produced in nature. It can be transformed into various small molecules, such as sugars, furans, and levoglucosenone. The aim of this research is to use these cellulose-derived molecules for the synthesis of polymers.
Acid-treated cellulose was subjected to thermal pyrolysis to obtain levoglucosenone, which was reduced to levoglucosenol. Levoglucosenol was polymerized, for the first time, by ring-opening metathesis polymerization (ROMP), yielding high-molar-mass polymers of up to ~150 kg/mol. Poly(levoglucosenol) is thermally stable up to ~220 ℃, amorphous, and exhibits a relatively high glass transition temperature of ~100 ℃. It can be converted into a transparent film resembling common plastic and was found to degrade in a moist acidic environment. Poly(levoglucosenol) may therefore find use as an alternative to conventional plastics such as polystyrene.
Levoglucosenol was also converted into levoglucosenyl methyl ether, which was polymerized by cationic ring-opening metathesis polymerization (CROP). Polymers were obtained with molar masses up to ~36 kg/mol. These polymers are thermally stable up to ~220 ℃ and are semi-crystalline thermoplastics, having a glass transition temperature of ~35 ℃ and melting transition of 70-100 ℃. Additionally, the polymers underwent cross-linking, hydrogenation and thiol-ene click chemistry.
Cleft exhaustivity
(2020)
This dissertation presents a series of experimental studies which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity in which, following Pollard & Yasavul (2016), the exhaustive inference is derived from an interaction with another layer of meaning, namely the existence presupposition encoded in clefts.
Introduction: Carbohydrate (CHO) and fat are the main substrates fuelling prolonged endurance exercise, with oxidation patterns regulated by several factors such as the intensity, duration and mode of the activity, dietary intake pattern, muscle glycogen concentrations, gender and training status. Exercising at intensities where fat oxidation rates are high has been shown to induce metabolic benefits in recreational and health-oriented sportsmen. The exercise intensity eliciting peak fat oxidation rates (Fatpeak) is therefore of particular interest when aiming to prescribe exercise for the purpose of fat oxidation and related metabolic effects. Although running and walking are feasible and popular among the target population, no reliable protocols are available to assess Fatpeak or the velocity at which it occurs (VPFO) during treadmill ergometry. Moreover, it remains unclear to date how pre-exercise CHO availability modulates the oxidative regulation of substrates when exercise is conducted at the velocity (VIAT) corresponding to the individual anaerobic threshold (IAT). The IAT is a metabolic marker representing the upper limit at which constant-load endurance exercise can be sustained, and it is commonly used to guide athletic training and in performance diagnostics. The research objectives of the current thesis were therefore 1) to assess the reliability and day-to-day variability of VPFO and Fatpeak during treadmill ergometry running; and 2) to assess the impact of a high-CHO (HC) vs. a low-CHO (LC) diet (where on the LC day a combination of a low-CHO diet and a glycogen-depleting exercise was implemented) on the oxidative regulation of CHO and fat while exercising at VIAT. Methods: Research objective 1: Sixteen recreational athletes (f=7, m=9; 25 ± 3 y; 1.76 ± 0.09 m; 68.3 ± 13.7 kg; 23.1 ± 2.9 kg/m²) performed 2 different running protocols on 3 different days, with standardized nutrition the day before testing.
On day 1, peak oxygen uptake (VO2peak) and the velocities at the aerobic threshold (VLT) and at a respiratory exchange ratio (RER) of 1.00 (VRER) were assessed. On days 2 and 3, subjects ran an identical submaximal incremental test (Fat-peak test) composed of a 10 min warm-up (70% VLT) followed by 5 stages of 6 min with equal increments (stage 1 = VLT, stage 5 = VRER). Breath-by-breath gas exchange data were measured continuously and used to determine fat oxidation rates. A third-order polynomial function was used to identify VPFO and subsequently Fatpeak. The reproducibility and variability of the variables were verified with the intraclass correlation coefficient (ICC), Pearson's correlation coefficient, the coefficient of variation (CV) and the mean differences (bias) ± 95% limits of agreement (LoA). Research objective 2: Sixteen recreational runners (m=8, f=8; 28 ± 3 y; 1.76 ± 0.09 m; 72 ± 13 kg; 23 ± 2 kg/m²) performed 3 different running protocols, each allocated to a different day. On day 1, a maximal stepwise incremental test was implemented to assess the IAT and VIAT. On days 2 and 3, participants ran a constant-pace bout (30 min) at VIAT, combined with randomly assigned HC (7 g/kg/d) or LC (3 g/kg/d) diets for the 24 h before testing. Breath-by-breath gas exchange data were measured continuously and used to determine substrate oxidation. Dietary data and differences in substrate oxidation were analyzed with a paired t-test. A two-way ANOVA tested the diet × gender interaction (α = 0.05). Results: Research objective 1: ICC, Pearson's correlation and CV for VPFO and Fatpeak were 0.98, 0.97, 5.0%; and 0.90, 0.81, 7.0%, respectively. Bias ± 95% LoA was -0.3 ± 0.9 km/h for VPFO and -2 ± 8% of VO2peak for Fatpeak. Research objective 2: Overall, the IAT and VIAT were 2.74 ± 0.39 mmol/l and 11.1 ± 1.4 km/h, respectively. CHO oxidation was 3.45 ± 0.08 and 2.90 ± 0.07 g/min during the HC and LC bouts, respectively (P < 0.05).
Likewise, fat oxidation was 0.13 ± 0.03 and 0.36 ± 0.03 g/min (P < 0.05). Females had 14% (P < 0.05) and 12% (P > 0.05) greater fat oxidation than males during the HC and LC bouts, respectively. Conclusions: Research objective 1: Relative and absolute reliability indicators for VPFO and Fatpeak were found to be excellent. The observed LoA may now serve as a basis for future training prescriptions, although fat oxidation rates during prolonged exercise bouts at this intensity still need to be investigated. Research objective 2: Twenty-four hours of high CHO consumption results in concurrently higher CHO oxidation rates and overall utilization, whereas maintaining a low systemic CHO availability significantly increases the contribution of fat to the overall energy metabolism. The observed gender differences underline the necessity of individualized dietary planning before exercising at intensities associated with performance exercise. Ultimately, future research should establish how these findings can be extrapolated to training and competitive situations and thereby provide trainers and nutritionists with improved data from which to derive training prescriptions.
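The VPFO identification step of the Fat-peak protocol (fitting a third-order polynomial to the stage-wise fat oxidation rates and locating its maximum) can be sketched as follows. The velocities and oxidation rates below are invented for illustration and are not data from the study:

```python
# Minimal sketch of VPFO estimation: fit a third-order polynomial to fat
# oxidation rates measured at the five incremental stages, then take the
# velocity at the fitted curve's maximum. Data points are hypothetical.
import numpy as np

velocity = np.array([7.0, 8.0, 9.0, 10.0, 11.0])     # km/h, stages 1-5
fat_ox   = np.array([0.30, 0.42, 0.48, 0.44, 0.25])  # g/min, illustrative

coeffs = np.polyfit(velocity, fat_ox, 3)             # cubic least-squares fit
fine_v = np.linspace(velocity.min(), velocity.max(), 1000)
vpfo = fine_v[np.argmax(np.polyval(coeffs, fine_v))] # velocity at peak fat oxidation
print(round(vpfo, 2))                                # ~9.2 km/h for these data
```

Fatpeak would then be expressed by relating VPFO to the intensity scale of the graded test (e.g., as a percentage of VO2peak).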
The present work deals with variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010; Wurmbrand, 2001), whereas linearisation appears much more liberal in older stages of German (Demske, 2008; Maché and Abraham, 2011; Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing, variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German which aims at finding out when the correlation between infinitive type and word order emerged and further examines their possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors and that it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.