Age of acquisition (AOA) is a psycholinguistic variable that significantly influences behavioural measures (response times and accuracy rates) in tasks that require lexical and semantic processing. Unlike the origin of semantic typicality (TYP), which is assumed to lie at the semantic level, the origin of AOA effects is a matter of debate: different theories propose that they arise either at the semantic level or at the link between semantics and phonology (the lemma level).
The dissertation aims to investigate the influence of AOA, and its interdependence with the semantic variable TYP, on semantic processing in particular, in order to pinpoint the origin of AOA effects. To this end, three studies were conducted that examined the variables AOA and TYP in semantic processing tasks (category verification and animacy decision), drawing on behavioural and, in part, electrophysiological (ERP) data from different populations (healthy young and elderly participants as well as semantically impaired individuals with aphasia, IWA).
The behavioural and electrophysiological data of the three studies provide evidence for distinct processing levels of the variables AOA and TYP. The data further support previous assumptions of a semantic origin for TYP, but call the same into question for AOA. Instead, the findings support an origin of AOA effects at the transition between the word form (phonology) and the semantic level, one that can be captured at the behavioural but not at the electrophysiological level.
Software-as-a-Service (SaaS) offers several advantages to both service providers and users. Service providers benefit from a reduced Total Cost of Ownership (TCO), better scalability, and better resource utilization, while users can access the service anywhere and anytime and minimize upfront investment by following the pay-as-you-go model. Despite these benefits, users still have concerns about the security and privacy of their data. By the nature of SaaS and the Cloud in general, data and computation are beyond the users' control, and data security therefore becomes a vital factor in this new paradigm. Furthermore, in multi-tenant SaaS applications, tenants become more concerned about the confidentiality of their data, since several tenants are co-located on a shared infrastructure.
To address those concerns, we begin protecting the data at the provisioning stage by controlling how tenants are placed in the infrastructure. We present SecPlace, a resource allocation algorithm designed to minimize the risk posed by co-resident tenants. It enables the SaaS provider to control the resource (i.e., database instance) allocation process while taking the security of tenants into account as a requirement.
Owing to the design principles of the multi-tenancy model, tenants share resources to some degree at both the application and infrastructure levels, so strong security isolation is required. We therefore develop SignedQuery, a technique that prevents one tenant from accessing another's data. Each tenant's request is cryptographically signed; the server then verifies the signature, recognizes the requesting tenant, and thereby ensures that the data being accessed belongs to the legitimate tenant.
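The thesis's exact SignedQuery construction is not reproduced here; as an illustration of the general request-signing idea, a minimal sketch using an HMAC over the tenant identity and query (function names and key handling are hypothetical, not SecureDB/SignedQuery internals):

```python
import hashlib
import hmac


def sign_request(tenant_key: bytes, tenant_id: str, query: str) -> str:
    """Client side: sign the tenant's query with the tenant's secret key."""
    payload = f"{tenant_id}|{query}".encode()
    return hmac.new(tenant_key, payload, hashlib.sha256).hexdigest()


def verify_request(tenant_key: bytes, tenant_id: str, query: str, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign_request(tenant_key, tenant_id, query)
    return hmac.compare_digest(expected, signature)


key = b"tenant-42-secret"  # illustrative per-tenant secret
sig = sign_request(key, "tenant-42", "SELECT * FROM employees")
assert verify_request(key, "tenant-42", "SELECT * FROM employees", sig)
# The same signature presented under another tenant's identity fails:
assert not verify_request(key, "tenant-7", "SELECT * FROM employees", sig)
```

The constant-time comparison matters: a naive `==` on signatures can leak information through timing differences.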
Finally, data confidentiality remains a critical concern, because data in the Cloud resides outside the users' premises and hence beyond their control. Cryptography is increasingly proposed as a potential approach to this challenge. We therefore present SecureDB, a system designed to run SQL-based applications over an encrypted database. SecureDB captures the schema design and analyzes it to understand the internal structure of the data (i.e., the relationships between the tables and their attributes). Moreover, we determine for each attribute an appropriate partially homomorphic encryption scheme, so that computation remains possible even while the data is encrypted.
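The abstract does not name the schemes SecureDB assigns per attribute; for attributes that must support summation, the classic partially homomorphic candidate is Paillier, where multiplying ciphertexts adds plaintexts. A toy sketch with insecure demo primes (for illustration only, not a production implementation):

```python
import math
import random


def keygen(p=293, q=433):
    """Paillier keypair from two (tiny, insecure demo) primes, with g = n + 1."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)
    return (n,), (lam, mu, n)


def encrypt(pub, m):
    """c = (1 + n)^m * r^n mod n^2 for a random r coprime to n."""
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2


def decrypt(priv, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n


pub, priv = keygen()
c1, c2 = encrypt(pub, 15), encrypt(pub, 27)
# Homomorphic property: ciphertext product decrypts to plaintext sum.
assert decrypt(priv, c1 * c2 % (pub[0] ** 2)) == 15 + 27
```

This additive property is what lets an encrypted-database server compute a `SUM` aggregate without ever seeing the plaintext values.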
To evaluate our work, we conduct extensive experiments with different settings. The main use case in our work is a popular open-source HRM application called OrangeHRM. The results show that our multi-layered approach is practical, provides enhanced security and isolation among tenants, and has moderate complexity in terms of processing encrypted data.
For centuries, women's bodies have served as battlefields. Yet it was only 20 years ago that the issue of sexual violence in armed conflicts reached the international agenda. The author examines the contribution of the United Nations to the prevention and repression of sexual violence in war. The aim was to take comprehensive stock of the selected avenues for protecting women from sexual violence in conflict in the areas of protection, prevention, and prosecution. This is done by evaluating the case law of the ICTY, ICTR, SCSL, and the ICC, as well as the implementation of the UN Action against Sexual Violence in Conflict, the work of the human rights bodies, and the work of the African organizations. Combating sexual violence in war remains a long road. But where appropriate norms were once lacking, solid foundations have now been laid in all three areas.
Effectiveness of early interventions for the prevention of reading and spelling difficulties
(2016)
The present study deals with the promotion of reading and writing skills in the initial phase of literacy acquisition. The aim of the investigation is to test and evaluate early, diagnosis-led interventions for the prevention of reading and spelling difficulties. In contrast to many studies in this field, all measures are carried out under real school conditions within initial reading and writing instruction by the class teachers themselves, supported and accompanied by the author. Formative and process diagnostics as well as elements of diagnosis-led support are derived from theory and the current state of research and combined into an intervention set. The effectiveness of the evidence-based measures is tested through parallel-group comparisons.
A total of 25 school classes with 560 first-graders took part in the empirical study, divided into an experimental and a control group. The entry assessment at the start of school measured prerequisites for literacy acquisition, and the evaluation assessment at the end of the first grade tested developmentally appropriate written-language skills at the word level. Internal and external influencing factors were also recorded, and their effects were taken into account in the statistical analysis. All data collections were carried out in both the experimental and control groups, while the evidence-based treatments took place only in the experimental group.
With significant results, the analysis confirms the close relationship between phonological awareness at the beginning of literacy acquisition and reading and spelling ability at the end of the first grade, as well as between family literacy and reading fluency. Prior written-language knowledge shows a trend towards significance regarding its positive effect on basic reading fluency. Print script as the initial script shows a highly significant positive effect on basic reading fluency.
The results suggest that preschool preliterate skills outweigh support measures under real school conditions in their effect on reading and spelling ability at the end of the first grade. The positive effect of an unjoined initial script on reading acquisition underlines the importance of the choice of initial script: in early literacy acquisition, print script should be used for reading and writing.
In the context of an aging population and a shift of the medical paradigm in health care towards individualized medicine, nanostructured lanthanide-doped sodium yttrium fluoride (NaYF4) represents an exciting class of upconversion nanomaterials (UCNM) suitable for advancing developments in biomedicine and biodetection. Although lanthanide-doped NaYF4 is among the most studied of the various fluoride-based upconversion (UC) phosphors, many open questions remain concerning the interplay of the population routes of the sensitizer and activator electronic states involved in different upconversion luminescence photophysics, as well as the role of phonon coupling. This work aims at a detailed understanding of the upconversion mechanism in nanoscale NaYF4-based materials co-doped with several lanthanides, from Yb3+ and Er3+ as the "standard" type of upconversion nanoparticles (UCNP) up to advanced UCNP with Gd3+ and Nd3+. In particular, the impact of the crystal lattice structure, and of the resulting lattice phonons, on the upconversion luminescence was investigated in detail based on different mixtures of cubic and hexagonal nanoscale NaYF4 crystals. Three synthesis methods were employed, chosen according to the respective central spectroscopic question: NaYF4-based upconversion nanoparticles doped with several combinations of lanthanides (Yb3+, Er3+, Gd3+ and Nd3+) were synthesized successfully using a hydrothermal method under mild conditions as well as a co-precipitation and a high-temperature co-precipitation technique. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), Raman spectroscopy, and inductively coupled plasma optical emission spectrometry (ICP-OES), and the results are discussed in detail in relation to the spectroscopic findings.
A variable spectroscopic setup was developed for multi-parameter upconversion luminescence studies at temperatures from 4 K to 328 K. In particular, the study of the thermal behavior of the upconversion luminescence, together with time-resolved area-normalized emission spectra, was a prerequisite for a detailed understanding of intramolecular deactivation processes, of structural changes upon annealing or with varying Gd3+ concentration, and of the role of phonon coupling for the upconversion efficiency. It subsequently became possible to synthesize UCNP with tailored upconversion luminescence properties. Finally, the potential of UCNP for life science applications is discussed in the context of current needs and improvements of nanomaterial-based optical sensors, for which the "standard" UCNP design was adapted to the special conditions of the biological matrix, aiming at better biocompatibility through a lower impact on biological tissue and a higher penetrability for the excitation light. A first step in this direction was the use of Nd3+ ions as a new sensitizer in tridoped NaYF4-based UCNP, whose achieved absolute and relative temperature sensitivities are comparable to other types of local temperature sensors in the literature.
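The relative temperature sensitivity mentioned above is, for Boltzmann-coupled emitting levels such as the 2H11/2/4S3/2 pair of Er3+, conventionally derived from the luminescence intensity ratio R = C·exp(−ΔE/kBT), giving Sr = ΔE/(kB·T²). A short sketch (the ΔE value is an assumed, literature-typical figure, not a result from this thesis):

```python
import math

K_B = 0.695  # Boltzmann constant in cm^-1 per K


def intensity_ratio(delta_e_cm: float, temp_k: float, c: float = 1.0) -> float:
    """Intensity ratio R of two thermally coupled levels separated by delta_e_cm."""
    return c * math.exp(-delta_e_cm / (K_B * temp_k))


def relative_sensitivity(delta_e_cm: float, temp_k: float) -> float:
    """S_r = |dR/dT| / R = delta_e / (k_B * T^2), in K^-1."""
    return delta_e_cm / (K_B * temp_k ** 2)


# Assumed gap ~700 cm^-1 for the Er3+ 2H11/2 / 4S3/2 pair:
s_r = relative_sensitivity(700.0, 300.0)
print(f"relative sensitivity at 300 K: {100 * s_r:.2f} %/K")
```

Around room temperature this yields a sensitivity on the order of one percent per kelvin, which is why this Er3+ level pair is a common choice for luminescent nanothermometry.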
The global carbon cycle is closely linked to Earth's climate. In the context of continuously unchecked anthropogenic CO₂ emissions, the importance of natural CO₂ fixation and carbon storage is increasing. An important biogenic mechanism of natural atmospheric CO₂ drawdown is photosynthetic carbon fixation in plants and the subsequent long-term deposition of plant detritus in sediments.
The main objective of this thesis is to identify factors that control the mobilization and transport of plant organic matter (pOM) through rivers towards sedimentation basins. I investigated this aspect in the eastern Nepalese Arun Valley. The trans-Himalayan Arun River is characterized by a strong elevation gradient (205–8848 m asl) that is accompanied by strong changes in ecology and climate, ranging from wet tropical conditions in the Himalayan foreland to high alpine tundra on the Tibetan Plateau. The Arun is therefore an excellent natural laboratory for investigating the effect of vegetation cover, climate, and topography on plant organic matter mobilization and export in tributaries along the gradient.
Based on hydrogen isotope measurements of plant waxes sampled along the Arun River and its tributaries, I first developed a model that allows for an indirect quantification of the pOM contributed to the mainstem by the Arun's tributaries. To determine the role of climatic and topographic parameters of the sampled tributary catchments, I looked for significant statistical relations between the amount of tributary pOM export and tributary characteristics (e.g. catchment size, plant cover, annual precipitation or runoff, topographic measures). On the one hand, I demonstrated that pOM sourced from the Arun is not uniformly derived from its entire catchment area. On the other, I showed that dense vegetation is a necessary, but not sufficient, criterion for high tributary pOM export. Instead, I identified erosion, rainfall, and runoff as key factors controlling pOM sourcing in the Arun Valley. This finding is supported by terrestrial cosmogenic nuclide concentrations measured on river sands along the Arun and its tributaries in order to quantify catchment-wide denudation rates. The highest denudation rates corresponded well with maximum pOM mobilization and export, further suggesting a link between erosion and pOM sourcing.
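The quantification model itself is not reproduced in this abstract, but the underlying idea of estimating a tributary's fractional contribution from an isotope mass balance can be sketched as a two-endmember mixing calculation (all δD values below are made-up illustrative numbers, not data from the thesis):

```python
def mixing_fraction(delta_downstream: float, delta_upstream: float,
                    delta_tributary: float) -> float:
    """
    Two-endmember mixing: delta_down = f * delta_trib + (1 - f) * delta_up.
    Solves for f, the fraction of material contributed by the tributary.
    """
    return (delta_downstream - delta_upstream) / (delta_tributary - delta_upstream)


# Illustrative plant-wax dD values (permil vs. VSMOW):
f = mixing_fraction(delta_downstream=-150.0,
                    delta_upstream=-170.0,
                    delta_tributary=-110.0)
print(f"tributary contribution: {100 * f:.0f} %")  # 33 %
```

Because δD of precipitation varies systematically with elevation, the two endmembers can be isotopically distinct enough for such a balance to resolve downstream mixing.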
The second part of this thesis focuses on the applicability of stable isotope records, such as plant wax n-alkanes in sediment archives, as qualitative and quantitative proxies for the variability of past Indian Summer Monsoon (ISM) strength. First, I determined how ISM strength affects the hydrogen and oxygen stable isotopic composition (reported as δD and δ18O values vs. Vienna Standard Mean Ocean Water) of precipitation in the Arun Valley, and whether this amount effect (Dansgaard, 1964) is strong enough to be recorded in potential paleo-ISM isotope proxies. Second, I investigated whether potential isotope records across the Arun catchment reflect ISM-strength-dependent precipitation δD values only, or whether the ISM isotope signal is overprinted by winter precipitation or glacial melt. Furthermore, I tested whether δD values of plant waxes in fluvial deposits reflect the δD values of environmental waters in the respective catchments.
I showed that surface water δD values in the Arun Valley and precipitation δD values from south of the Himalaya changed similarly during two consecutive years (2011 & 2012) with distinct ISM rainfall amounts (~20% less in 2012). To evaluate the effect of other water sources (Winter-Westerly precipitation, glacial melt) and evapotranspiration in the Arun Valley, I analysed satellite remote sensing data of rainfall distribution (TRMM 3B42V7), snow cover (MODIS MOD10C1), glacial coverage (GLIMS database, Global Land Ice Measurements from Space), and evapotranspiration (MODIS MOD16A2). In addition to the predominant ISM signal across the entire catchment, stable isotope analysis of surface waters indicated a considerable amount of glacial melt derived from high-altitude tributaries and the Tibetan Plateau. Remotely sensed snow cover data revealed that the upper portion of the Arun also receives considerable winter precipitation, but the effect of snow melt on the Arun Valley hydrology could not be evaluated, as it takes place in early summer, several months prior to our sampling campaigns. However, I infer that plant wax records and other potential stable isotope proxy archives below the snowline are well suited for qualitative, and potentially quantitative, reconstructions of past changes in ISM strength.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of computer science students pursuing a bachelor degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to reduce the dropout rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final school score, which leads to a high percentage of students studying subjects they did not choose. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. According to statistics over previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), each with the opposite gender majority.
The second step of the investigation took place at the department of computer science at the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between students of these fields, who usually take some joint courses, matters for deciding which changes to the teaching methods are necessary to address these different students. It was a two-stage study using two questionnaires; the main one is based on the Index of Learning Styles Questionnaire of B. A. Soloman and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their own first questionnaire. Our analysis shows differences in learning style preferences between male and female students of the different study fields, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
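The Index of Learning Styles scores each of its four dimensions from 11 forced-choice (a/b) items, yielding an odd score between −11 and +11 per dimension. A minimal scoring sketch, assuming the common layout in which item i belongs to dimension i mod 4 (this mapping is my simplification of the questionnaire, not the thesis's analysis code):

```python
DIMENSIONS = ["active/reflective", "sensing/intuitive",
              "visual/verbal", "sequential/global"]


def score_ils(answers):
    """
    answers: list of 44 'a'/'b' responses to the ILS questionnaire.
    Each dimension is scored as (#a - #b) over its 11 items, so every
    score is an odd value between -11 and +11.
    """
    assert len(answers) == 44
    scores = {}
    for d, name in enumerate(DIMENSIONS):
        items = answers[d::4]  # items d, d+4, d+8, ... (11 items)
        scores[name] = items.count("a") - items.count("b")
    return scores


demo = ["a", "b", "a", "a"] * 11  # hypothetical response sheet
print(score_ils(demo))
```

A score near zero indicates a balanced preference on that dimension; scores towards ±11 indicate a strong preference for one pole.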
The third investigation looks closely into the difficulties, issues, apprehensions, and expectations of freshman computer science students. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to think about dropping out, changing the field of study, or changing the university. The research followed the same sample of students (with business informatics students being the majority) through more than three semesters. Difficulties and issues during the study were documented, as well as the students' attitudes, apprehensions, and expectations. Some professors' and lecturers' opinions and solutions to some students' problems were also documented. Many participants had apprehensions and difficulties, especially towards informatics subjects. Some business informatics participants began to think about changing the university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (either the programme they started with or the new one they changed to) without leaving the higher education system.
The consistently good sensory quality of baked goods, which consumers value highly, is largely determined by the content of endogenous cereal enzymes. Since the emergence of breeding-related enzyme deficits, the use of technical enzymes to guarantee this required quality has been an established practice in the baking industry. Under food law, technical enzymes are not regarded as an ingredient, since they are, in theory, converted during the baking process and no longer exert any technological effect in the final product. Especially in baked products, it must therefore be verified that the technical enzymes used are no longer present as an ingredient and thus escape a potential declaration obligation. To ensure cost-effectiveness, the quantitative use of technical enzymes in the baking industry must be controlled in order to achieve optimal effects and save costs. The aim of this work was therefore to develop an analytical method that enables the simultaneous detection of different technical enzymes and their quantification at trace levels, even in baked products.
To assess the effect of the technical enzymes Fungamyl (Novozymes), Amylase TXL (ASA Spezialenzyme GmbH), and Lipase FE-01 (ASA Spezialenzyme GmbH), baking trials were carried out, showing that Fungamyl and Amylase TXL contributed to improved bread quality (volume yield, moisture content, sensory properties). The addition of Lipase FE-01 led to increased formation of free fatty acids and had a negative effect on the sensory quality of the bread. This previously undescribed effect could be traced back to the use of a special oil as a baking ingredient, consisting exclusively of saturated fatty acids. This confirms the importance of selecting a suitable fat when adding technical lipase to the baking process.
To identify the enzymes contained in Fungamyl and Lipase FE-01, SDS-PAGE followed by in-gel digestion was applied to enable the analysis of the proteolytically cleaved proteins by MALDI-TOF-MS. It was shown that Fungamyl contains a mixture of 9.8 % alpha-amylase (Aspergillus oryzae) and 5.2 % endo-1,4-xylanase (Thermomyces lanuginosus). Lipase FE-01 consists of the lipase (Thermomyces lanuginosus), and Amylase TXL was identified as alpha-amylase (Aspergillus oryzae).
For the analysis of the technical enzymes in baked goods, LC-MS/MS was chosen for its robustness and sensitivity. The development of such a method for detecting specific peptides enabled the qualitative detection of the three enzymes alpha-amylase (Aspergillus oryzae), endo-1,4-xylanase (Thermomyces lanuginosus), and lipase (Thermomyces lanuginosus). In addition, quantitative determination in self-prepared reference materials (wheat flour, toast bread, and sponge biscuit) was achieved by linear calibration with synthetically produced peptides, including a protein internal standard and isotope-labelled peptide standards. Within a measurement time of less than 20 minutes, alpha-amylase can be quantified from a concentration of 2.58 mg/kg (flour, biscuit) or 7.61 mg/kg (bread). At the same time, endo-1,4-xylanase can be quantified from 7.75 mg/kg (bread), 3.64 mg/kg (biscuit), or 15.60 mg/kg (flour), and lipase from 1.26 mg/kg (flour, biscuit) or 2.68 mg/kg (bread). The method was statistically tested in a validation according to generally accepted guidelines and yielded very robust and reproducible quantitative values with recovery rates between 50 % and 122 %. The primary aim of this work, the development of a quantitative multi-parameter method for the detection of technical enzymes in baked goods, was thus successfully achieved.
This thesis presents new SAR methods and their application to tectonically active systems and related surface deformation. In three publications, two case studies are presented:
(1) The coseismic deformation related to the Nura earthquake (5 October 2008, magnitude Mw 6.6) at the eastern termination of the intramontane Alai valley, located between the southern Tien Shan and the northern Pamir. The coseismic surface displacements are analysed using SAR (Synthetic Aperture Radar) data. The results show clear gradients in the vertical and horizontal directions along a complex pattern of surface ruptures and active faults. To integrate and interpret these observations in the context of regional active tectonics, the SAR data analysis is complemented with seismological data and geological field observations. The main moment release of the Nura earthquake appears to be on the Pamir Frontal thrust, while the main surface displacements and surface rupture occurred in the footwall and along the NE–SW-striking Irkeshtam fault. With InSAR data from ascending and descending satellite tracks, along with pixel offset measurements, the Nura earthquake source is modelled as a segmented rupture. One fault segment corresponds to high-angle brittle faulting at the Pamir Frontal thrust, and two further segments show moderate-angle and low-friction thrusting at the Irkeshtam fault. The integrated analysis of the coseismic deformation argues for rupture segmentation and strain partitioning associated with the earthquake, which possibly activated an orogenic wedge in the easternmost segment of the Pamir-Alai collision zone. Furthermore, the style of the segmentation may be associated with the presence of Paleogene evaporites.
(2) The second focus is on slope instabilities and consequent landslides in the area of the prominent topographic transition between the Fergana basin and the high-relief Alai range. The Alai range constitutes an active orogenic wedge of the Pamir–Tien Shan collision zone that is described as a progressively northward-propagating fold-and-thrust belt. The interferometric analysis of ALOS/PALSAR radar data covers a period of four years (2007-2010) and uses the Small Baseline Subset (SBAS) time-series technique to assess surface deformation with millimetre-level accuracy for surface change. 118 interferograms are analyzed to observe spatially continuous movements with downslope velocities of up to 71 mm/yr. The obtained rates indicate slow movement of deep-seated landslides during the observation period. We correlated these movements with precipitation and seismic records; the results suggest that the deformation peaks correlate with rainfall in the three preceding months and with one earthquake event. In a next step, to understand the spatial pattern of landslide processes, the tectonic, morphologic, and lithologic settings are combined with the patterns of surface deformation. We demonstrate that the lithological and tectonic structural patterns are the main factors controlling landslide occurrence and surface deformation magnitudes, and that active contractional deformation at the front of the orogenic wedge is the main mechanism sustaining relief. Some of the slower but continuously moving slope instabilities are directly related to tectonically active faults and to unconsolidated young Quaternary syn-orogenic sedimentary sequences. The slow-moving landslides observed by InSAR represent active deep-seated gravitational slope deformation phenomena, observed here for the first time in the Tien Shan mountains.
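The SBAS time-series inversion itself is beyond a short example, but the basic step underlying all of these measurements, converting unwrapped interferometric phase to line-of-sight (LOS) displacement, is compact. For ALOS/PALSAR the L-band wavelength is about 23.6 cm; the phase values below are illustrative and the sign convention varies between processing chains:

```python
import math

WAVELENGTH_M = 0.236  # ALOS/PALSAR L-band wavelength, approx.


def los_displacement(unwrapped_phase_rad: float) -> float:
    """Convert unwrapped interferometric phase to LOS displacement in metres.

    d = lambda * phi / (4 * pi): the factor 4*pi (not 2*pi) accounts for the
    two-way travel path of the radar signal. Here positive phase is taken to
    mean motion away from the satellite.
    """
    return WAVELENGTH_M * unwrapped_phase_rad / (4.0 * math.pi)


# One full fringe (2*pi of phase) corresponds to half a wavelength of LOS motion:
d = los_displacement(2.0 * math.pi)
print(f"{1000 * d:.1f} mm of LOS displacement per fringe")  # 118.0 mm
```

The long L-band wavelength is one reason PALSAR works well over vegetated slopes: each fringe spans more motion, so coherence survives larger displacements than at C- or X-band.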
Our approach offers a new combination of InSAR techniques and tectonic aspects to localize and understand enhanced slope instabilities in tectonically active mountain fronts in the Kyrgyz Tien Shan.
Molecular characterization of CP75, a novel centrosomal protein in Dictyostelium discoideum
(2016)
The centrosome is a nucleus-associated organelle that is not enclosed by a membrane. It plays an important role in many microtubule-dependent processes such as organelle positioning, cell polarity, and the organization of the mitotic spindle. The Dictyostelium centrosome consists of a three-layered core structure surrounded by a corona that contains microtubule-nucleating complexes. Centrosome duplication in Dictyostelium takes place at the onset of mitosis. In prophase, the layered core structure enlarges and the corona dissolves. The two outer layers of the core structure then separate and form the two spindle poles in metaphase, which mature into two complete centrosomes in telophase. The protein CP75, identified in a proteome analysis, localizes to the centrosome in a mitosis-phase-dependent manner: it dissociates from the core structure in prometaphase and reappears at the spindle poles in telophase. This behaviour correlates with that of the middle layer of the core structure during mitosis, indicating that CP75 could be a component of this layer. FRAP experiments on the interphase centrosome show that GFP-CP75 is not mobile there, suggesting that the protein could play an important role in maintaining the centrosomal core structure. Both the C- and the N-terminal domain of CP75 contain centrosomal targeting domains. As GFP fusion proteins (GFP-CP75-N and -C), both fragments localize to the centrosome in interphase. While GFP-CP75-C remains at the centrosome during mitosis, GFP-CP75-N disappears in metaphase and only returns in late telophase. GFP-CP75-C and GFP-CP75O/E colocalize with F-actin at the cell cortex, but show no interaction with actin in the BioID assay. The N-terminal domain of CP75 contains a potential Plk1 phosphorylation sequence.
Overexpression of the non-phosphorylatable point mutant (GFP-CP75-Plk-S143A) produces various phenotypes such as elongated or supernumerary centrosomes, enlarged nuclei, and an accumulation of detyrosinated microtubules. Similar phenotypes were also observed with GFP-CP75-N and CP75 RNAi. The detyrosinated-microtubule phenotype provides the first evidence that post-translational modification of tubulins takes place in Dictyostelium. In addition, CP75 RNAi cells showed defects in the organization of the mitotic spindle. Using the BioID method, three potential interaction partners of CP75 were identified; these three proteins, CP39, CP91, and Cep192, are also components of the centrosome.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This thesis addresses the following two computer security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated using popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet, and the nodes create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers that have to store them. Malware, software that serves a malicious purpose, affects web servers, client computers via active content, and client computers through executable files. Without the help of malware detection systems, it would be easy for malware creators to collect sensitive information or to infiltrate computers.
The detection of threats, such as email spam messages, phishing messages, or malware, is an adversarial and therefore intrinsically difficult problem. Threats vary greatly and evolve over time; detecting them with manually designed rules is therefore difficult and requires a constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer security problems under investigation: email campaign detection and malware detection.
The first part of this thesis addresses email-campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible to security engineers and administrators, and they can be applied as a detection mechanism extremely efficiently. In this setting, a prediction model is provided with exemplary messages from an email spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and trusts enough to use for blacklisting further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces that can be solved using standard cutting-plane methods. To this end, we develop an appropriate loss function and derive a decoder for the resulting optimization problem.
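The core idea of generalizing a set of campaign messages into one expression can be illustrated with a toy sketch. The position-wise alignment heuristic and the sample messages below are invented for illustration and are far simpler than the structured-prediction method developed in the thesis:

```python
import re

def infer_template_regex(messages):
    """Toy template induction: tokens shared by all messages stay
    literal, varying positions become a wildcard (a crude stand-in
    for a learned generalization step)."""
    token_lists = [m.split() for m in messages]
    assert len({len(t) for t in token_lists}) == 1, "toy version needs equal-length messages"
    parts = []
    for column in zip(*token_lists):
        parts.append(re.escape(column[0]) if len(set(column)) == 1 else r"\S+")
    return r"\s+".join(parts)

campaign = [
    "Buy cheap pills now at shop123.example",
    "Buy cheap pills now at shop987.example",
    "Buy cheap pills now at shop555.example",
]
pattern = infer_template_regex(campaign)
# the inferred expression matches further instantiations of the template
assert re.fullmatch(pattern, "Buy cheap pills now at shop042.example")
```

An expression like this could then be deployed as a server-side blacklist rule against the remainder of the campaign.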
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static features, dynamic features, or both. In fully dynamic analysis, the software or script is executed in a sandbox environment and observed for malicious behavior. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier overcomes the problem of obfuscated malicious code, but increases the computational cost of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features, including token n-grams, orthogonal sparse bigrams, subroutine hashings, and syntax-tree features, and study the robustness of detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
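A minimal sketch of the cascade logic, with invented stand-ins for both stages (the real system uses learned classifiers over n-gram and syntax-tree features, not a token list, and real deobfuscation is far more involved than one base64 layer):

```python
import base64

SUSPICIOUS_TOKENS = {"eval", "unescape", "fromCharCode", "document.write"}

def static_score(code):
    # crude static feature: fraction of suspicious tokens present
    # (stand-in for the n-gram / syntax-tree features of a real detector)
    return sum(tok in code for tok in SUSPICIOUS_TOKENS) / len(SUSPICIOUS_TOKENS)

def deobfuscate(code):
    # toy deobfuscation: decode one layer of a base64 payload inside atob()
    if 'atob("' in code:
        payload = code.split('atob("')[1].split('")')[0]
        return base64.b64decode(payload).decode()
    return code

def classify(code, lo=0.25, hi=0.5):
    score = static_score(code)
    if score >= hi:
        return "malicious"   # confident verdict from the cheap static pass
    if score < lo:
        return "benign"
    # uncertain band: pay the deobfuscation cost only here, then rescore
    return "malicious" if static_score(deobfuscate(code)) >= hi else "benign"

payload = base64.b64encode(b"document.write(unescape(x))").decode()
obfuscated = f'eval(atob("{payload}"))'
```

The point of the cascade is that the expensive deobfuscation step is only triggered for files whose static score falls in the uncertain band, keeping the average cost per file close to that of static analysis alone.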
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to a spam campaign leads to models with a high true-positive rate at a very low false-positive rate, an order of magnitude lower than that of a commercial content-based filter. Our system, REx-SVM, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 100,000 JavaScript files. From our case study we conclude that the system can quickly and accurately process large data collections at a low false-positive rate.
It is commonly recognized that soil moisture exhibits spatial heterogeneities occurring in a wide range of scales. These heterogeneities are caused by different factors ranging from soil structure at the plot scale to land use at the landscape scale. There is an urgent need for efficient approaches to deal with soil moisture heterogeneity at large scales, where management decisions are usually made. The aim of this dissertation was to test innovative approaches for making efficient use of standard soil hydrological data in order to assess seepage rates and main controls on observed hydrological behavior, including the role of soil heterogeneities.
As a first step, the applicability of a simplified Buckingham-Darcy method to estimate deep seepage fluxes from point information of soil moisture dynamics was assessed. This was done in a numerical experiment considering a broad range of soil textures and textural heterogeneities. The method performed well for most soil texture classes. However, in pure sand, where seepage fluxes were dominated by heterogeneous flow fields, it turned out not to be applicable, because it simply neglects the effect of water flow heterogeneity. In this study a need for new efficient approaches to handle heterogeneities in one-dimensional water flux models was identified.
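As an illustration of the kind of point-scale flux estimate involved, here is a minimal sketch assuming a van Genuchten-Mualem conductivity function and a unit hydraulic gradient (a common simplification below the root zone). The parameterization and all numbers are illustrative, not those used in the study:

```python
def mualem_k(theta, theta_r, theta_s, k_s, n):
    """Unsaturated hydraulic conductivity K(theta) after the
    van Genuchten-Mualem model (illustrative parameter values)."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)  # effective saturation
    return k_s * se**0.5 * (1.0 - (1.0 - se**(1.0 / m))**m) ** 2

# Unit-gradient simplification: the pressure-head gradient is neglected,
# so the downward seepage flux is approximated as q ~= K(theta).
theta = 0.25                                                  # measured water content (-)
q = mualem_k(theta, theta_r=0.05, theta_s=0.40, k_s=50.0, n=1.6)  # flux in cm/d
```

One measured moisture value per time step thus yields one seepage flux estimate, which is exactly why heterogeneous flow fields (where a single point value is unrepresentative) break the method.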
As a further step, an approach to turn the problem of soil moisture heterogeneity into a solution was presented: Principal component analysis was applied to make use of the variability among soil moisture time series for analyzing apparently complex soil hydrological systems. It can be used for identifying the main controls on the hydrological behavior, quantifying their relevance, and describing their particular effects by functional averaged time series. The approach was first tested with soil moisture time series simulated for different texture classes in homogeneous and heterogeneous model domains. Afterwards, it was applied to 57 moisture time series measured in a multifactorial long-term field experiment in Northeast Germany.
The dimensionality of both data sets was rather low, because more than 85 % of the total moisture variance could already be explained by the hydrological input signal and by signal transformation with soil depth. The perspective of signal transformation, i.e. analyzing how hydrological input signals (e.g., rainfall, snow melt) propagate through the vadose zone, turned out to be a valuable supplement to the common mass flux considerations. Neither different textures nor spatial heterogeneities affected the general kind of signal transformation, showing that complex spatial structures do not necessarily evoke a complex hydrological behavior. In case of the field-measured data, another 3.6 % of the total variance was unambiguously explained by different cropping systems. Additionally, it was shown that different soil tillage practices did not affect the soil moisture dynamics at all.
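The variance-decomposition step can be sketched with synthetic data. The signal shape, depth damping and noise level below are invented for illustration; the study worked with simulated and field-measured series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for monitoring data: one shared hydrological input
# signal, slightly damped per series (mimicking depth), plus sensor noise.
t = np.linspace(0.0, 20.0, 200)
input_signal = np.sin(t) + 0.3 * np.sin(3.1 * t)
series = np.stack([
    (1.0 - 0.02 * k) * input_signal + 0.05 * rng.standard_normal(t.size)
    for k in range(30)
])                                    # 30 soil moisture time series

# PCA via SVD of the centered data matrix (series x time)
centered = series - series.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"variance explained by PC1: {explained[0]:.1%}")
```

Because all series share one input signal, the first component captures almost all of the variance, which is the synthetic analogue of the low dimensionality reported above.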
The presented approach does not require a priori assumptions about the nature of physical processes, and it is not restricted to specific scales. Thus, it opens various possibilities to incorporate the key information from monitoring data sets into the modeling exercise and thereby reduce model uncertainties.
Dynamics of mantle plumes
(2016)
Mantle plumes are a link between different scales in the Earth’s mantle: They are an important part of large-scale mantle convection, transporting material and heat from the core-mantle boundary to the surface, but also affect processes on a smaller scale, such as melt generation and transport and surface magmatism. When they reach the base of the lithosphere, they cause massive magmatism associated with the generation of large igneous provinces, and they can be related to mass extinction events (Wignall, 2001) and continental breakup (White and McKenzie, 1989).
Thus, mantle plumes have been the subject of many previous numerical modelling studies (e.g. Farnetani and Richards, 1995; d’Acremont et al., 2003; Lin and van Keken, 2005; Sobolev et al., 2011; Ballmer et al., 2013). However, complex mechanisms, such as the development and implications of chemical heterogeneities in plumes, their interaction with mid-ocean ridges and global mantle flow, and melt ascent from the source region to the surface are still not very well understood; and disagreements between observations and the predictions of classical plume models have led to a challenge of the plume concept in general (Czamanske et al., 1998; Anderson, 2000; Foulger, 2011). Hence, there is a need for more sophisticated models that can explain the underlying physics, assess which properties and processes are important, explain how they cause the observations visible at the Earth’s surface and provide a link between the different scales.
In this work, integrated plume models are developed that investigate the effect of dense recycled oceanic crust on the development of mantle plumes, plume–ridge interaction under the influence of global mantle flow and melting and melt migration in form of two-phase flow.
The presented analysis of these models leads to a new, updated picture of mantle plumes: Models considering a realistic depth-dependent density of recycled oceanic crust and peridotitic mantle material show that plumes with excess temperatures of up to 300 K can transport up to 15% of recycled oceanic crust through the whole mantle. However, due to the high density of recycled crust, plumes can only advance to the base of the lithosphere directly if they have high excess temperatures and high plume volumes and the lowermost mantle is subadiabatic, or if plumes rise from the top or edges of thermo-chemical piles. They might only cause minor surface uplift, and instead of the classical head–tail structure, these low-buoyancy plumes are predicted to be broad features in the lower mantle with much less pronounced plume heads. They can form a variety of shapes and regimes, including primary plumes directly advancing to the base of the lithosphere, stagnating plumes, secondary plumes rising from the core–mantle boundary or from a pool of eclogitic material in the upper mantle, and failing plumes. In the upper mantle, plumes are tilted and deflected by global mantle flow, and the shape, size and stability of the melting region is influenced by the distance from nearby plate boundaries, the speed of the overlying plate and the movement of the plume tail arriving from the lower mantle. Furthermore, the structure of the lithosphere controls where hot material is accumulated and melt is generated. In addition to melting in the plume tail at the plume arrival position, hot plume material flows upwards towards opening rifts, towards mid-ocean ridges and towards other regions of thinner lithosphere, where it produces additional melt due to decompression. This leads to the generation of either broad ridges of thickened magmatic crust or the separation into multiple thinner lines of seamount chains at the surface.
Once melt is generated within the plume, it influences the plume's dynamics, lowering the viscosity and density, and as the melt rises its volume increases by up to 20% due to decompression. Melt tends to accumulate at the top of the plume head, forming diapirs and initiating small-scale convection when the plume reaches the base of the lithosphere. Together with the unstable, high-density material produced by freezing of melt, this provides an efficient mechanism for thinning the lithosphere above plume heads.
In summary, this thesis shows that mantle plumes are more complex than previously considered, and linking the scales and coupling the physics of different processes occurring in mantle plumes can provide insights into how mantle plumes are influenced by chemical heterogeneities, interact with the lithosphere and global mantle flow, and are affected by melting and melt migration. Including these complexities in geodynamic models shows that plumes can also have broad plume tails, might produce only negligible surface uplift, can generate one or several volcanic island chains in interaction with a mid–ocean ridge, and can magmatically thin the lithosphere.
Since the early 1990s, the convergence of development and security has been regarded by parts of the scholarly community as a defining feature of an increasingly self-interest-driven German development policy after the end of the East-West conflict. The starting point of this study was skepticism towards this diagnosis of a shift in German development policy away from moral justifications and towards national interest politics since the early 1990s. This skepticism is grounded in the assumption that previous critiques of a possible securitization of development policy overemphasize the role of self-interested motives as an explanatory factor, while paying too little attention to ideational structures and their possible change as a constitutive factor in political processes. The research question is accordingly: Can German development policy, in light of the linkage of development and security, be interpreted as increasingly interest-driven, and has a fundamental policy change thus taken place?
Theoretically, the study builds on and extends constructivist research on development and security. The theoretical position is derived from constructivist approaches in International Relations theory, foregrounding those approaches that carry out the constructivist turn not only ontologically but also epistemologically and that pay particular attention to the role of language. Empirically, the linkage of development and security in German state development policy is examined through interpretations of this linkage in agenda setting and policy formulation. The period of analysis covers the term of the SPD politician Heidemarie Wieczorek-Zeul as Federal Minister for Economic Cooperation and Development, 1998-2009. The data corpus for agenda setting and policy formulation comprises more than 50 speeches by members of the Federal Government as well as selected official policy documents containing relevant passages. The exemplary examination of institutionalization in light of the linkages of development and security draws on further primary and secondary sources.
The empirical analysis shows that different interpretations of the linkage of development and security can be traced in German state development policy over the period 1998-2009. Particularly remarkable is the diffuse variety of constructions of the concept of security. The empirical investigation also reveals that in some cases considerable differences exist between the linkages of development and security at the inter-ministerial level on the one hand and at the level of development policy on the other. The exemplary discussion of milestones of institutionalized development policy likewise confirms these variances, which could be made visible through the nuanced analysis of linguistic constructions. Based on the empirical finding of the variance and variability of the justification patterns for the linkages of development and security, it is now possible to draw conclusions with regard to the research question: Is German development policy, in light of the linkage of development and security, increasingly driven by self-interest?
In Wieczorek-Zeul's early years in office, normative aspects such as justice and peace play an important role in connection with the genesis of the policy field of peace and security. Policy formulation is shaped above all by the challenges of globalization, which form the starting point for the "global structural policy" coined by Wieczorek-Zeul. A self-interest orientation in the realist sense appears present only where "our" interest in securing prosperity is concerned. Development-policy peacebuilding and crisis prevention serve to reduce the economic costs of wars and contribute to preventing prosperity-threatening migration. The notion of security invoked foregrounds the human security of populations in developing and transition countries. After 9/11, the linguistic constructions shift away from "our prosperity" and "peace worldwide" towards "our security". Articulated self-interest with reference to security gains dominance over moral justifications. This development can be traced above all in the inter-ministerial interpretations of the nexus between development and security. Despite this inter-ministerial shift, however, the linkage of development and security at the level of the lead ministry for German development policy, the Federal Ministry for Economic Cooperation and Development (BMZ), can still be interpreted as predominantly obligation-oriented.
Only with the Grand Coalition from 2005 onwards can a more comprehensive reinterpretation of the linkage of development and security be assumed: prosperity and security in the world are now equally articulated as being in "our" interest, and can be regarded as on a par with the international obligation to secure peace.
In sum, read in light of the theoretical interpretation, these empirical results yield a more nuanced picture than previous research, with its mostly one-sided focus on an increasing interest orientation, has assumed. Ideational references were always present as a shaping factor in German development policy, but they changed over time. The theoretical contribution and policy relevance of the study lie on several levels. First, the differentiated examination and interpretation of German development policy in light of the linkages of development and security enriches research on the securitization of development policy and develops its theoretical premises further. Second, the study contributes to research on German development policy: this research, often focused on implementation and practice, is enriched by the theoretical engagement with the interpretation of German development policy. This contribution arises specifically from combining theoretical considerations from security studies, the constructivist strand of International Relations (IR) theory, and conceptual considerations from policy research.
Changing the perspective sometimes offers completely new insights into an already well-known phenomenon. Exercise behavior, defined as planned, structured and repeated bodily movements with the intention to maintain or increase physical fitness (Caspersen, Powell, & Christenson, 1985), is such a well-known phenomenon, one that has been in the scientific focus for many decades (Dishman & O'Connor, 2005). Within these decades, a perspective that assumes rational and controlled evaluations as the basis of decision making was predominantly used to understand why some people engage in physical activity and others do not (Ekkekakis & Zenko, 2015).
Dual-process theories (Ekkekakis & Zenko, 2015; Payne & Gawronski, 2010) provide another perspective, one that is not exclusively based on rational reasoning. These theories distinguish two processes that guide behavior "depending on whether they operate automatically or in a controlled fashion" (Gawronski & Creighton, 2012, p. 282). Following this line of thought, exercise behavior is not solely influenced by thoughtful deliberations (e.g. concluding that exercising is healthy) but also by spontaneous affective reactions (e.g. disliking being sweaty while exercising). The theoretical frameworks of dual-process models are not new in psychology (Chaiken & Trope, 1999) and have already been used to explain numerous behaviors (e.g. Hofmann, Friese, & Wiers, 2008; Huijding, de Jong, Wiers, & Verkooijen, 2005). However, they have only rarely been used to explain exercise behavior (e.g. Bluemke, Brand, Schweizer, & Kahlert, 2010; Conroy, Hyde, Doerksen, & Ribeiro, 2010; Hyde, Doerksen, Ribeiro, & Conroy, 2010). The assumption of two dissimilar behavior-influencing processes differs fundamentally from previous theories and thus from the research conducted in exercise psychology over the last decades, which mainly concentrated on predictors within the controlled processes and addressed the identified predictors in exercise interventions (Ekkekakis & Zenko, 2015; Hagger, Chatzisarantis, & Biddle, 2002).
Predictors arising from the described automatic processes, for example automatic evaluations of exercising (AEE), were neglected in exercise psychology for many years. Until now, only a few researchers have investigated the influence of AEE on exercise behavior (Bluemke et al., 2010; Brand & Schweizer, 2015; Markland, Hall, Duncan, & Simatovic, 2015). Marginally more researchers have focused on the impact of AEE on physical activity behavior (Calitri, Lowe, Eves, & Bennett, 2009; Conroy et al., 2010; Hyde et al., 2010; Hyde, Elavsky, Doerksen, & Conroy, 2012). The extant studies mainly focused on the quality of AEE and the associated quantity of exercise (exercising much or little; Bluemke et al., 2010; Calitri et al., 2009; Conroy et al., 2010; Hyde et al., 2012). In sum, there is still a considerable lack of empirical knowledge when applying dual-process theories to exercise behavior, even though these theories have proven successful in explaining behavior in many other health-relevant domains such as eating, drinking or smoking (e.g. Hofmann et al., 2008).
The main goal of the present dissertation was to collect empirical evidence for the influence of AEE on exercise behavior and to expand the so far exclusively correlational studies with experimentally controlled studies. This is meant to encourage the ongoing debate on a paradigm shift from controlled and deliberative accounts of exercise behavior towards approaches that consider automatic and affective influences (Ekkekakis & Zenko, 2015). All three publications are embedded in dual-process theorizing (Gawronski & Bodenhausen, 2006, 2014; Strack & Deutsch, 2004). These theories offer a framework that can integrate the established controlled variables of exercise behavior explanation and additionally consider automatic factors such as AEE.
Taken together, the empirical findings suggest that AEE play an important and diverse role in exercise behavior. They represent exercise setting preferences, are a cause of short-term exercise decisions, and are decisive for long-term exercise adherence. Adding to the few studies already present in this field, the influence of (positive) AEE on exercise behavior was confirmed in all three publications. Even though the available set of studies needs to be extended in prospective studies, first steps towards a more complete picture have been taken. Closing with the beginning of the synopsis: I think the time is right for a change of perspective! This means carefully extending the present theories, which explain exercise behavior through controlled evaluations. Dual-process theories including controlled and automatic evaluations could provide such a basis for future research endeavors in exercise psychology.
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of an appropriate model depends on the outcome and the research objectives: linear models for continuous outcomes, logistic models for binary outcomes, and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and for all cases (all members of the cohort who develop the event of interest during follow-up).
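The sampling scheme itself can be sketched in a few lines. The record layout, case rate and subcohort fraction below are invented for illustration:

```python
import random

def case_cohort_sample(cohort, subcohort_fraction, seed=0):
    """Case-cohort design: covariate data are assembled only for a
    random subcohort plus every case in the full cohort."""
    rng = random.Random(seed)
    n = max(1, round(subcohort_fraction * len(cohort)))
    subcohort = rng.sample(cohort, n)
    cases = [subject for subject in cohort if subject["case"]]
    # union, deduplicated by id (a case may also fall into the subcohort)
    by_id = {subject["id"]: subject for subject in subcohort + cases}
    return list(by_id.values())

# toy cohort of 1000 subjects with a 5% event rate
cohort = [{"id": i, "case": i % 20 == 0} for i in range(1000)]
sample = case_cohort_sample(cohort, subcohort_fraction=0.1)
```

Only the subjects in `sample` would need expensive covariate ascertainment, which is where the cost saving of the design comes from.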
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
Logistic regression is sufficient when the binary outcome is available for all subjects over a fixed time interval. In practice, however, observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, an individual may drop out of the study before the end of data collection, or may not have experienced the event of interest by the end of the study. Such observations are called censored. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model (Cox, 1972), which focuses on the hazard function, is widely used in survival analysis and can effectively handle censored data. The Cox model assumes
λ(t | X) = λ0(t) exp(β^T X),
where λ0(t) is an unspecified baseline hazard at time t, X is the vector of covariates, and β is a p-dimensional vector of coefficients.
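For reference, the maximum partial likelihood estimator (MPLE) discussed below maximizes Cox's partial likelihood, stated here in its standard form, where δ_i indicates an observed event and R(t_i) is the risk set at time t_i:

```latex
L(\beta) \;=\; \prod_{i \,:\, \delta_i = 1}
\frac{\exp(\beta^\top X_i)}{\sum_{j \in R(t_i)} \exp(\beta^\top X_j)}
```

The baseline hazard λ0(t) cancels out of each factor, which is why β can be estimated without specifying it.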
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated. Efficiencies are then computed for neighborhoods of the exponential models. It turns out that for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, an extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for the coefficient function β(·) is described. Based on this estimator, we formulate a new procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that under the null hypothesis the distribution of the properly standardized quadratic form of this d-dimensional vector tends to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test, defined only for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative, and gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs does not differ from that of other structures, their semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
Since the 1960s, Germany has been host to a large Turkish immigrant community. While migrant communities often shift to the majority language over the course of time, Turkish is a very vital minority language in Germany, and bilingualism in this community is an obvious fact that has been subject to several studies. The main focus is usually on German, the second language (L2) of these speakers (e.g. Hinnenkamp 2000, Keim 2001, Auer 2003, Cindark & Aslan 2004, Kern & Selting 2006, Selting 2009, Kern 2013). Research on the Turkish spoken by Turkish bilinguals has also attracted attention, although to a lesser extent, mainly in the framework of so-called heritage language research (cf. Polinsky 2011). Bilingual Turkish has been investigated from the perspective of code-switching and code-mixing (e.g. Kallmeyer & Keim 2003, Keim 2003, 2004, Keim & Cindark 2003, Hinnenkamp 2003, 2005, 2008, Dirim & Auer 2004), and with respect to changes in the morphological, syntactic and orthographic systems (e.g. Rehbein & Karakoç 2004, Schroeder 2007). Attention to changes in the prosodic system of bilingual Turkish, on the other hand, has been exceptional so far (Queen 2001, 2006).
With the present dissertation, I provide a study of contact-induced linguistic changes at the prosodic level in the Turkish heritage language of adult early German-Turkish bilinguals. It describes structural changes in the L1 Turkish intonation of yes/no questions in a representative sample of bilingual Turkish speakers. All speakers share a similar sociolinguistic background: all acquired Turkish as their first language from their families and the majority language German as an early L2, at the latest in kindergarten by the age of 3.
A study of changes in bilingual varieties requires a prior cross-linguistic comparison of the two languages involved in the contact situation, in order to distinguish contact-induced language change from language-internal development.
While German is one of the best-investigated languages with respect to its prosodic system, research on Turkish intonational phonology is not as far advanced. For this reason, the analysis of bilingual Turkish, as elicited for the present dissertation, is preceded by an experimental study on monolingual Turkish: an additional experiment with 11 monolingual university students of non-linguistic subjects was conducted at Ege University in Izmir in 2013. On these grounds, the present dissertation also contributes new insights into Turkish intonational phonology and typology. The results of the contrastive analysis of German and Turkish show that the prosodic systems of the two languages differ in their use of prosodic cues for marking information structure (IS) and sentence type. Whereas German distinguishes explicit prosodic categories for focus and givenness, Turkish uses only one prosodic cue to mark IS. Furthermore, it is shown that Turkish, in contrast to German, does not use a prosodic correlate to mark yes/no questions, but a morphological question marker.
In a further step, to elicit Turkish yes/no questions in a bilingual context that differ in their information structure, the methodology of Xu (1999) for eliciting in-situ focus on different constituents was adapted in the experimental study. A data set of 400 Turkish yes/no questions from 20 bilingual Turkish speakers was compiled at the Zentrum für Allgemeine Sprachwissenschaft (ZAS) in Berlin and at the University of Potsdam in 2013. The prosodic structure of the yes/no questions was analyzed phonologically and phonetically with respect to changes in the f0 contour under IS modifications and the use of prosodic cues to indicate sentence type.
The results of the analyses contribute surprising observations to the research on bilingual prosody. Studies on bilingual language change and language acquisition have repeatedly shown that prosodic features considered marked, by virtue of their rarer and implicational use across and within languages, cause difficulties in language contact and second language acquisition. In particular, they are not expected to pass from one language to another through language contact. However, this structurally motivated expectation about language development is refuted by the results of the present study. Functionally loaded prosody, such as the cues indicating IS, is transferred from the German L2 to the Turkish L1 of German-Turkish bilingual speakers. This striking observation provides the basis for an approach to language change centered on functional motivation. Based on Matras' (2007, 2010) assumption of functionality in language change, Paradis' (1993, 2004, 2008) approach of Language Activation and the Subsystem Theory, and the Theory of Language as a Dynamic System (Herdina & Jessner 2002), it is shown that prosodic features absent in one of the languages of a bilingual speech community are transferred from the other language when they contribute to the contextualization of a pragmatic concept that is not expressed by other linguistic means in the target language. In this respect, language interaction rests on language activation and inhibition mechanisms dealing with differences in implicit pragmatic knowledge between bilinguals and monolinguals. The motivator for this process of language change is the contextualization of the message itself, not the surface structure of the respective feature. It is shown that structural considerations may influence language change, but that bilingual language change neither depends on structural restrictions nor is caused by structure alone.
The conclusions drawn on the basis of these empirical findings can contribute in particular to a better understanding of the processes of bilingual language development, as the study combines methodologies and theoretical aspects of different linguistic subfields.
Protective effect of 6-shogaol, ellagic acid and myrrh on the intestinal epithelial barrier
(2016)
Many bioactive plant constituents and plant metabolites have anti-inflammatory properties. These hold great promise for use in the phytotherapy and prevention of inflammatory bowel disease (IBD). Intestinal barrier dysfunction is a typical characteristic of IBD patients, who consequently suffer from acute diarrhea.
In this work, the plant components 6-shogaol, ellagic acid and myrrh are investigated in the intestinal colon epithelial cell models HT-29/B6 and Caco-2 for their potential to strengthen the intestinal barrier and to prevent barrier dysfunction. The analyses focus mainly on paracellular barrier function and the regulation of the protein family decisive for it, the claudins of the tight junctions (TJs).
Barrier function was determined by measuring the transepithelial resistance (TER) and by flux measurements in the Ussing chamber. For this purpose, HT-29/B6 and Caco-2 monolayers were treated for 24 or 48 h with the plant components (6-shogaol, ellagic acid, myrrh), the proinflammatory cytokine TNF-α, or a combination of both substances. In addition, for further characterization, the expression and localization of the claudins relevant to the paracellular barrier, the TJ ultrastructure and various signaling pathways were analyzed.
In Caco-2 monolayers, ellagic acid and myrrh, but not 6-shogaol, alone led to an increase in TER due to reduced permeability to sodium ions. Myrrh reduced the expression of the cation channel-forming TJ protein claudin-2 via inhibition of the PI3K/Akt signaling pathway, whereas ellagic acid reduced the expression of the TJ proteins claudin-4 and -7. In Caco-2 cells, all plant components protected against TNF-α-induced barrier dysfunction.
In HT-29/B6 monolayers, none of the plant components alone altered barrier function. The HT-29/B6 cells responded to TNF-α with a marked decrease in TER and an increased fluorescein permeability. The TER decrease was characterized by a PI3K/Akt-mediated increase in claudin-2 expression and an NFκB-mediated redistribution of the sealing TJ protein claudin-1. 6-Shogaol partially inhibited the TER decrease and prevented the PI3K/Akt-induced claudin-2 expression and the NFκB-dependent claudin-1 redistribution. Likewise, myrrh, but not ellagic acid, inhibited the TNF-α-induced TER decrease. Myrrh prevented the increase in claudin-2 expression and the claudin-1 redistribution, but inhibited neither NFκB nor PI3K/Akt activation. This work shows that STAT6 is also involved in the TNF-α-induced increase in claudin-2 expression in HT-29/B6 cells: myrrh inhibited the TNF-α-induced phosphorylation of STAT6 and the increased claudin-2 expression.
The results indicate that the plant components 6-shogaol, ellagic acid and myrrh strengthen the barrier via different mechanisms. For the treatment of intestinal diseases with barrier dysfunction, combination preparations of different plants could therefore be more effective than single-plant preparations.
Services that operate over the Internet are under constant threat of being exposed to fraudulent use. Maintaining a good user experience for legitimate users often requires classifying entities as malicious or legitimate in order to initiate countermeasures. As an example, inbound email spam filters decide between spam and non-spam. They can base their decision both on the content of each email and on features that summarize prior emails received from the sending server. In general, discriminative classification methods learn to distinguish positive from negative entities. Each decision for a label may be based on features of the entity and of related entities. When the labels of related entities have strong interdependencies---as can be assumed, e.g., for emails delivered by the same user---classification decisions should not be made independently, and the dependencies should be modeled in the decision function. This thesis addresses the formulation of discriminative classification problems that are tailored to the specific demands of the following three Internet security applications. Theoretical and algorithmic solutions are devised to protect an email service against flooding of user inboxes, to mitigate abusive usage of outbound email servers, and to protect web servers against distributed denial-of-service attacks.
In the application of filtering an inbound email stream for unsolicited emails, utilizing features that go beyond each individual email's content can be valuable. Information about each sending mail server can be aggregated over time and may help in identifying unwanted emails. However, while this information will be available to the deployed email filter, some parts of the training data that are compiled by third party providers may not contain this information. The missing features have to be estimated at training time in order to learn a classification model. In this thesis an algorithm is derived that learns a decision function that integrates over a distribution of values for each missing entry. The distribution of missing values is a free parameter that is optimized to learn an optimal decision function.
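The idea of integrating the decision function over a distribution of missing feature values can be illustrated with a minimal sketch (this is not the algorithm derived in the thesis; the linear score, the Gaussian distribution over missing entries and all names are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_score(w, x_filled, missing_mask, mu, sigma, n_samples=2000):
    # Monte Carlo estimate of E[w @ x], where the missing entries of x
    # are drawn from a distribution with free parameters (mu, sigma).
    # For a linear score the expectation has a closed form; sampling
    # illustrates the general (non-linear) case.
    xs = np.tile(x_filled, (n_samples, 1))
    xs[:, missing_mask] = rng.normal(mu, sigma, size=(n_samples, int(missing_mask.sum())))
    return float(np.mean(xs @ w))

w = np.array([1.0, -2.0, 0.5])      # hypothetical learned weights
x = np.array([2.0, np.nan, 1.0])    # second feature was not recorded
mask = np.isnan(x)
score = expected_score(w, np.where(mask, 0.0, x), mask, mu=0.5, sigma=0.1)
# By linearity, E[w @ x] = w @ E[x] = 1*2 - 2*0.5 + 0.5*1 = 1.5
```

In the thesis's setting, the parameters of the missing-value distribution (here `mu`, `sigma`) would themselves be optimized jointly with the decision function rather than fixed by hand.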
The outbound email stream of an email service provider can be separated by the customer IDs that request delivery. All emails sent by the same ID in the same period of time are related, both in content and in label. Hijacked customer accounts may send batches of unsolicited emails to other email providers, which in turn might blacklist the sender's email servers after detecting incoming spam emails. The risk of being blocked from further delivery depends on the rate of outgoing unwanted emails and the duration of high spam sending rates. An optimization problem is developed that minimizes the expected cost for the email provider by learning a decision function that assigns a limit on the sending rate to customers based on each customer's email stream.
Identifying attacking IPs during HTTP-level DDoS attacks makes it possible to block those IPs from further accessing the web servers. DDoS attacks are usually carried out by infected clients that are members of the same botnet and show similar traffic patterns. HTTP-level attacks aim at exhausting one or more resources of the web server infrastructure, such as CPU time. If the joint set of attackers cannot push resource usage close to the maximum capacity, legitimate users of the hosted web sites will experience no effect. However, if the additional load raises the computational burden towards the critical range, user experience will degrade until the service may become unavailable altogether. As the loss of missing one attacker depends on the block decisions for other attackers---if most other attackers are detected, not blocking one client will likely not be harmful---a structured output model has to be learned. In this thesis an algorithm is developed that learns a structured prediction decoder that searches the space of label assignments, guided by a policy.
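The flavor of searching the space of joint label assignments, rather than deciding per client, can be conveyed with a toy sketch (this is not the thesis's learned decoder; the objective, the coupling term and the greedy "policy" are invented for illustration):

```python
import numpy as np

def objective(y, unary, coupling):
    # unary[i]: per-client evidence of being an attacker; the coupling
    # term makes each block decision depend on the other decisions.
    return float(unary @ y + coupling * y.mean() ** 2)

def greedy_decode(unary, coupling, max_sweeps=50):
    # Greedy search over binary label assignments: the "policy" here
    # simply takes the single label flip with the largest gain.
    y = np.zeros(len(unary))
    for _ in range(max_sweeps):
        best_gain, best_i = 0.0, None
        base = objective(y, unary, coupling)
        for i in range(len(y)):
            y_try = y.copy()
            y_try[i] = 1 - y_try[i]
            gain = objective(y_try, unary, coupling) - base
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:
            break
        y[best_i] = 1 - y[best_i]
    return y

labels = greedy_decode(np.array([2.0, -1.0, -0.05]), coupling=0.3)
# Client 2's weak evidence alone would not justify blocking, but the
# coupling with client 0's block decision tips it: labels = [1, 0, 1].
```

A learned decoder would replace the hand-written best-flip rule with a trained policy guiding the search.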
Each model is evaluated on real-world data and is compared to reference methods. The results show that modeling each classification problem according to the specific demands of the task improves performance over solutions that do not consider the constraints inherent to an application.
Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Earth's radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be studied further by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index and the single scattering albedo are commonly used properties of this kind, linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). This model offers a reasonable treatment for spherically approximated particles; it does not, however, provide a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is classified as a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods, and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is computationally so costly that it would limit a retrieval analysis to an impractical degree. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations, which are further time-consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software package (SphInX: Spheroidal Inversion eXperiments), especially developed for our purposes and capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes forms the basis of our algorithms. Synthetic data retrievals are performed, simulating various atmospheric scenarios, in order to test the efficiency of different regularization methods. The gap in the contemporary literature in providing full sets of uncertainties for a wide variety of numerical instances is of major concern here. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior regarding accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases, giving further insight for future algorithm improvements.
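Why regularization is indispensable for such ill-posed inversions can be shown with a generic Tikhonov example (a standard textbook sketch, not the hybrid method or the spheroidal model of this thesis; the smoothing kernel and all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned toy forward model: a smoothing kernel A maps a fine
# "size distribution" x (30 unknowns) to only 6 optical data points b.
s = np.linspace(0.1, 1.0, 30)
A = np.exp(-np.outer(np.linspace(1.0, 5.0, 6), s))
x_true = np.exp(-((s - 0.5) ** 2) / 0.01)        # smooth target distribution
b = A @ x_true + 1e-3 * rng.standard_normal(6)   # data with small noise

def tikhonov(A, b, lam):
    # argmin ||A x - b||^2 + lam^2 ||x||^2 via the normal equations
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise-amplified, oscillatory
x_reg = tikhonov(A, b, lam=1e-2)                 # damped, stable
```

Even tiny data noise blows up the unregularized solution, while the penalized solution stays bounded and still fits the data; choosing the penalty strength is exactly the kind of trade-off the uncertainty analysis above addresses.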
Savannas cover a broad geographical range across continents and constitute a biome best described by a mix of herbaceous and woody plants. The former create a more or less continuous layer, while the latter should be sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the corresponding feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process propagating tree cover expansion. This is known as the encroachment phenomenon. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. According to this an open savanna and a tree-dominated savanna can be classified as alternative states, since they can both occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I will cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, due to the overwhelming focus on competition and disturbance another important ecological process was neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. For the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process to be used as a prevention tool for savanna conservation. In order to perform all this work I developed a mathematical ecohydrological model of Ordinary Differential Equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
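To give a flavor of such a three-variable ecohydrological ODE system, here is a toy sketch with invented coefficients (it is not the dissertation's model; the functional forms and parameter values are illustrative assumptions):

```python
def savanna_step(M, G, T, rain, dt=0.01):
    # One Euler step of a toy system: M = soil moisture, G = grass cover,
    # T = tree cover, all kept in [0, 1]. Coefficients are made up.
    dM = rain * (1 - M) - (0.5 * G + 0.8 * T) * M       # infiltration minus plant water uptake
    dG = 0.9 * M * G * (1 - G - T) - 0.2 * G            # moisture-limited growth, mortality
    dT = 0.4 * M * T * (1 - T) - 0.1 * T - 0.3 * G * T  # growth, mortality, grass-fueled fire
    return M + dt * dM, G + dt * dG, T + dt * dT

M, G, T = 0.5, 0.3, 0.1
for _ in range(20000):        # integrate towards (near) equilibrium
    M, G, T = savanna_step(M, G, T, rain=0.6)
```

Feedbacks such as grazing (extra grass mortality) or fire suppression enter the real model as changes to terms like the `0.3 * G * T` coupling, which is what makes alternative stable states possible.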
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and a general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied either; conflicting results reflect the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a 'temporal niche'. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, such as the one presented in this work, could protect both the savanna biome and the societies dependent upon it for (economic) survival. All the studies which follow are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena, be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
Molecular characterization of the centrosome-associated protein CP91 in Dictyostelium discoideum
(2016)
The Dictyostelium centrosome is a model for acentriolar centrosomes. It consists of a three-layered core structure and is surrounded by a corona containing nucleation complexes for microtubules. Duplication of the core structure is initiated once per cell cycle, at the G2/M transition. Through a proteomic analysis of isolated centrosomes, CP91 was identified, a 91 kDa coiled-coil protein that localizes to the centrosomal core structure. GFP-CP91 showed almost no mobility in FRAP experiments during interphase, indicating that CP91 is a structural component of the centrosome. In mitosis, by contrast, both GFP-CP91 and endogenous CP91 dissociate and are absent from the spindle poles from late prophase to anaphase. This behavior correlates with the disappearance of the central layer of the core structure at the onset of centrosome duplication; CP91 is therefore most likely a component of this layer. CP91 fragments of the N-terminal and C-terminal domains (GFP-CP91 N-terminus, GFP-CP91 C-terminus), expressed as GFP fusion proteins, also localize to the centrosome but do not show the same mitotic distribution as the full-length protein. The CP91 fragment of the central coiled-coil domain (GFP-CP91cc), expressed as a GFP fusion protein, localizes as a diffuse cytosolic cluster near the centrosome and shows a partially similar mitotic distribution to the full-length protein, suggesting a regulatory domain within the coiled-coil domain. Expression of the GFP fusion proteins suppresses the expression of endogenous CP91 and gives rise to supernumerary centrosomes, which was also a striking feature upon depletion of CP91 by RNAi.
In addition, CP91 RNAi cells showed strongly increased ploidy, caused by severe defects in chromosome segregation, together with increased cell size and defects in the abscission process during cytokinesis. Depletion of CP91 by RNAi also had a direct effect on the amounts of the centrosomal proteins CP39, CP55 and CEP192 and of the centromere protein Cenp68 in interphase. The results indicate that CP91 is a central centrosomal core component required for the cohesion of the two outer layers of the core structure. Moreover, CP91 plays an important role in proper centrosome biogenesis and, independently of this, in the abscission of the daughter cells during cytokinesis.
In the current paradigm of cosmology, the formation of large-scale structure is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocity field, covering both the theoretical formalism and the results of its application to cosmological simulations and to a galaxy redshift survey. Our method rests on a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter "Implementation of argo", which gives an introduction to sampling methods, paying special attention to Markov chain Monte Carlo techniques. In the chapter "Phase-Space Reconstructions with N-body Simulations", I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter, which I decompose into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, modeled by a negative binomial (NB) likelihood function that captures deviations from Poissonity. Both bias components had already been studied theoretically, but had so far never been tested in a reconstruction algorithm.
I test these new contributions against N-body simulations to quantify the improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable at wave numbers of k ≥ 0.15 h Mpc^-1 in the power spectrum in order to obtain unbiased reconstructions. In the second part of the chapter "Phase-Space Reconstructions with N-body Simulations", I describe and validate our approach to inferring the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and the two-dimensional power spectrum. Finally, in the chapter "Phase-Space Reconstructions with Galaxy Redshift Surveys", I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for observational selection effects within the reconstruction algorithm. I also demonstrate that the renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. These various refinements yield unbiased dark matter reconstructions up to scales of k ≤ 0.2 h Mpc^-1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ~ 20 h^-1 Mpc in the correlation function. We further test our cosmic velocity field reconstruction by comparing it to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities.
The applications of both the density field free of redshift-space distortions and the velocity reconstructions are very broad; they can be used for improved analyses of the baryon acoustic oscillations, environmental studies of the cosmic web, and the kinematic Sunyaev-Zel'dovich and integrated Sachs-Wolfe effects.
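The combination of a negative binomial likelihood with Markov chain Monte Carlo sampling can be sketched in a heavily simplified form (a single scalar parameter instead of a density field, a flat prior, and illustrative settings throughout; this is not the argo implementation):

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(2)

def nb_loglike(counts, mean, beta):
    # Negative binomial log-likelihood with mean `mean` and stochasticity
    # parameter `beta`; beta -> infinity recovers the Poisson limit.
    ll = 0.0
    for k in counts:
        ll += (lgamma(k + beta) - lgamma(beta) - lgamma(k + 1)
               + beta * log(beta / (beta + mean))
               + k * log(mean / (beta + mean)))
    return ll

# toy "galaxy counts in cells" with a true mean of 4
counts = rng.poisson(4.0, size=200)

# random-walk Metropolis-Hastings over the log of the mean density
log_mean, chain = 0.0, []
for _ in range(4000):
    proposal = log_mean + 0.1 * rng.standard_normal()
    log_ratio = (nb_loglike(counts, np.exp(proposal), beta=10.0)
                 - nb_loglike(counts, np.exp(log_mean), beta=10.0))
    if log(rng.uniform()) < log_ratio:
        log_mean = proposal
    chain.append(log_mean)

posterior_mean = float(np.exp(np.mean(chain[2000:])))  # recovers roughly 4
```

In the thesis the sampled quantity is the full density (and velocity) field on a mesh, with the deterministic bias relation mapping density to the NB mean in each cell.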
New systems for triphilic, fluorocarbon-free block copolymers were developed, in the form of acrylate-based thermoresponsive block copolymers as well as acrylate- or styrene-based triblock polyelectrolytes with differently chaotropic cations of the respective polyanionic block. Multicompartment micelles, micellar aggregates with an ultrastructured hydrophobic core modeled on biological structures such as human albumin, were expected to form upon self-assembly in an aqueous environment. By using apolar and polar hydrocarbon domains instead of fluorophilic fluorocarbon domains, such triphilic systems were to demonstrate for the first time whether they are capable of selectively taking up hydrophobic substances into different domains of the micellar core.
These new triphilic systems were prepared by sequential RAFT polymerization; they comprise a permanently hydrophilic block, a permanently strongly hydrophobic block and a third block that can be converted from a water-soluble into a polar-hydrophobic block by external stimuli, specifically by inducing a thermal coil-to-globule transition or by adding organic, hydrophobic counterions. 4-(Trimethylsilyl)benzyl (3-(trimethylsilyl)propyl) trithiocarbonate with two different TMS end groups was used as the RAFT agent, enabling controlled reaction conditions as well as the molecular characterization of the complex copolymers.
Both basic types of linear ternary block copolymers were each realized in two model systems, which varied slightly in their chemical properties and in the block length ratio of hydrophilic to hydrophobic polymer segments, and which exhibited different permutations of the blocks.
The first polymer type comprised amphiphilic thermoresponsive block copolymers. Model system 1 consisted of the permanently hydrophobic block poly(1,3-bis(butylthio)prop-2-yl acrylate), the permanently hydrophilic block poly(oligo(ethylene glycol) monomethyl ether acrylate), and the thermoresponsive block poly(N,N'-diethylacrylamide), whose homopolymer exhibits an LCST (lower critical solution temperature) phase transition at about 36 °C. Model system 2 consisted of the permanently hydrophilic block poly(2-(methylsulfinyl)ethyl acrylate), the permanently hydrophobic block poly(2-ethylhexyl acrylate), and again poly(N,N'-diethylacrylamide). In the ternary block copolymer, the LCST transition rose to 50-65 °C, depending on block sequence and relative block lengths. In the self-assembly studies of this polymer type, the temperature was varied in order to generate different micellar superstructures in aqueous media and to detect multicompartment micelles above the LCST transition. The differences in hydrophilicity and steric demand of the chosen hydrophilic blocks, together with the variation of the block sequences, furthermore enabled the formation of a wide range of micellar-aggregate morphologies.
The second type was based on a triblock polyelectrolyte system with polyacrylate or polystyrene backbones. Polymeric ionic liquids served as the template for the design of two model systems. One of them consisted of the permanently hydrophilic block poly(oligo(ethylene glycol) monomethyl ether acrylate), the permanently hydrophobic block poly(2-ethylhexyl acrylate), and the polyanion block poly(3-sulfopropyl acrylate). The hydrophobicity of the polyanion block was varied by using large organic counterions, namely tetrabutylammonium, tetraphenylphosphonium and tetraphenylstibonium.
Analogously, a further system was built from the permanently hydrophilic block poly(4-vinylbenzyl tetrakis(ethyleneoxy) methyl ether), the permanently hydrophobic block poly(para-methylstyrene), and poly(4-styrenesulfonate) with the corresponding counterions. Owing to the different chain stiffness of the two model systems, their self-assembly into micellar aggregates was expected to produce different superstructures.
DSC measurements demonstrated that, in all model systems, the blocks were mutually incompatible in the bulk phase, a prerequisite for the multicompartmentalisation of micellar aggregates. For all model systems, DLS was used to determine the size of the micellar aggregates and the influence of external stimuli, such as changes in temperature or in counterion hydrophobicity and size, on the hydrodynamic diameter. The results for the thermoresponsive ternary block copolymers showed that, above the phase-transition temperature of the thermoresponsive block, the structure of the micellar aggregates changed: the p(DEAm) block apparently collapsed and, together with the permanently hydrophobic block, formed the micellar core. After a certain equilibration time at room temperature, the original micellar structures were restored. For the triblock polyelectrolyte systems, by contrast, no significant difference in the size of the micellar aggregates was observed with the differently hydrophobic counterions.
For imaging the micellar aggregates by cryogenic transmission electron microscopy (cryo-TEM), one model system was designed with poly(1,3-bis(butylthio)prop-2-yl acrylate) such that the enhanced electron-density contrast of the sulfur atoms would allow ultrastructured hydrophobic micellar cores to be visualised. In the triblock polyelectrolyte systems, this effect was to be reproduced by the counterions tetraphenylphosphonium and tetraphenylstibonium. Whereas the thermoresponsive systems showed no sign of ultrastructuring even above the phase transition, superstructures were discernible in the polyelectrolyte systems, particularly with tetraphenylstibonium as counterion. With this imaging method, however, the formation of multicompartment micelles could not be verified for either polymer type; the electron-density differences between the individual blocks would presumably have to be increased further to draw firm conclusions.
Site-specific solubilisation experiments with solvatochromic fluorescent dyes, followed by steady-state fluorescence spectroscopy, were intended to provide qualitative proof of multicompartmentalisation by comparing the solubilisation sites of the triblock copolymers and polyelectrolytes with those of the homopolymer and diblock precursors. Because only small amounts of dye were used in the solubilisation experiments, DLS showed no disturbing effect of the probes on the size of the micellar aggregates. For the polyelectrolyte model systems, however, quenching effects hampered a clear interpretation of the data. For the thermoresponsive model systems, by contrast, distinct solvatochromic differences between solubilisation in the micellar aggregates below and above the phase transition were evident, which may indicate multicompartmentalisation above the LCST transition. Without a structural analysis such as small-angle X-ray or neutron scattering (SAXS or SANS), it cannot be conclusively determined whether solubilisation occurs in micellar hydrophobic domains of the collapsed poly(N,N'-diethylacrylamide) or in a mixed form of micellar aggregates with averaged polarity.
Fragmente der Unruhe
(2016)
The literary potential of Federigo Tozzi's short-prose collection "Bestie" (1917), a masterpiece of European modernism, has so far been underestimated by scholarship. Sophie Ratschow is the first since its publication to investigate its much-invoked enigmatic character comprehensively and systematically. The volume thus constitutes not only the first full-length study of "Bestie" but also the first German-language monograph on Federigo Tozzi. Its simulation-aesthetic analytical approach, drawing on philosophical concepts by Nietzsche, Derrida and others, and on a novel selection of intertextual references to Baudelaire's poetry and to Fernando Pessoa's "Book of Disquiet", yields entirely new insights into the functional mechanisms and effects of these prismatically structured, mysterious fragments of disquiet.
"Microfilm" (1971) von Andrea Zanzotto ist ein hoch komplexes, intermediales Gebilde, das seit seiner erstmaligen Veröffentlichung der Zanzotto-Forschung große interpretative Schwierigkeiten bereitet hat. Die vorliegende Arbeit ist die erste umfassende Monographie zu "Microfilm". Nach Zanzottos eigener Aussage entstand "Microfilm" als künstlerische Reaktion auf die Katastrophe von Vajont vom 9. Oktober 1963, als ein Bergrutsch eine gewaltige Flutwelle auslöste, die über den Vajont-Staudamm hinweg ins Tal von Longarone stürzte. Der Hintergrund des Werkes gibt dem Autor den Anlass zu einem simulierten Hinterfragen des tragischen Ereignisses und zugleich seines eigenen Schaffens. Dies vollzieht sich in der Form einer trugbildhaften bildschriftlichen Komposition. Die Sinnlosigkeit der Katastrophe wiederholt sich in der diskontinuierlichen, hermetischen Form von Zanzottos Text.
Intermontane valley fills
(2016)
Sedimentary valley fills are a widespread characteristic of mountain belts around the world. They transiently store material over time spans ranging from thousands to millions of years and therefore play an important role in modulating the sediment flux from the orogen to the foreland and to oceanic depocenters. In most cases, their formation can be attributed to specific fluvial conditions, which are closely related to climatic and tectonic processes. Hence, valley-fill deposits constitute valuable archives that offer fundamental insight into landscape evolution, and their study may help to assess the impact of future climate change on sediment dynamics.
In this thesis I analyzed intermontane valley-fill deposits to constrain different aspects of the climatic and tectonic history of mountain belts over multiple timescales. First, I developed a method to estimate the thickness distribution of valley fills using artificial neural networks (ANNs). Based on the assumption of geometrical similarity between exposed and buried parts of the landscape, this novel and highly automated technique allows reconstructing fill thickness and bedrock topography on the scale of catchments to entire mountain belts.
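The thesis does not spell out the network architecture here; as a rough, purely illustrative sketch of the underlying idea (learning a regression from topographic attributes of exposed terrain to fill thickness, then applying it to buried terrain), a tiny feed-forward network can be trained on synthetic data. The two-attribute input, the target function and the network size below are assumptions for illustration, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "geometrical similarity" training data:
# normalised thickness grows with distance from the valley margin
# and decreases with surface slope. Purely illustrative.
X = rng.uniform(0.0, 1.0, size=(200, 2))      # [distance, slope], normalised
y = X[:, 0] * (1.0 - 0.5 * X[:, 1])           # normalised fill thickness

def train_mlp(X, y, hidden=8, lr=0.2, epochs=4000):
    """Fit a one-hidden-layer tanh network by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # hidden activations
        err = (H @ W2 + b2) - y               # residuals
        gW2 = H.T @ err / n; gb2 = err.mean()
        gH = np.outer(err, W2) * (1.0 - H**2) # back-propagated error
        gW1 = X.T @ gH / n; gb1 = gH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

W1, b1, W2, b2 = train_mlp(X, y)
pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the actual application the predictors would be catchment-scale topographic attributes and the training targets would come from exposed bedrock geometry; the sketch only shows the regression setup.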
Second, I used the new method to estimate the spatial distribution of post-glacial sediments stored across the entire European Alps. A comparison with data from exploratory drillings and from geophysical surveys revealed that the model reproduces the measurements with a root-mean-squared error (RMSE) of 70 m and a coefficient of determination (R²) of 0.81. I used the derived sediment-thickness estimates in combination with a model of the Last Glacial Maximum (LGM) ice cap to infer the lithospheric response to deglaciation, erosion and deposition, and to deduce their relative contributions to the present-day rock-uplift rate. For a range of lithospheric and upper-mantle material properties, the results suggest that the long-wavelength uplift signal can be explained by glacial isostatic adjustment with a small erosional contribution and a substantial but localized tectonic component exceeding 50% in parts of the Eastern Alps and in the Swiss Rhône Valley. Furthermore, this study reveals the particular importance of deconvolving the potential components of rock uplift when interpreting recent movements along active orogens, and shows how this can be used to constrain physical properties of the Earth's interior.
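The two validation metrics quoted above (RMSE and R²) can be reproduced generically; a minimal sketch, not the thesis' validation code:

```python
import math

def rmse(pred, obs):
    """Root-mean-squared error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.81 thus means the model explains roughly 81% of the variance in the measured thicknesses.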
In a third study, I used the ANN approach to estimate the sediment thickness of alluviated reaches of the Yarlung Tsangpo River, upstream of the rapidly uplifting Namche Barwa massif. This allowed my colleagues and me to reconstruct the ancient river profile of the Yarlung Tsangpo and to show that, in the past, the river had already incised deeply into the eastern margin of the Tibetan Plateau. Basal sediments from drill cores that reached the paleo-river bed date to 2-2.5 Ma, consistent with mineral cooling ages from the Namche Barwa massif that indicate the initiation of rapid uplift at ~4 Ma. Hence, the formation of the Tsangpo gorge and the aggradation of the voluminous valley fill were most probably a consequence of rapid uplift of the Namche Barwa massif and thus of tectonic activity.
The fourth and last study focuses on the interaction of fluvial and glacial processes at the southeastern edge of the Karakoram. Paleo-ice-extent indicators and remnants of a more than 400-m-thick fluvio-lacustrine valley fill point to blockage of the Shyok River, a main tributary of the upper Indus, by the Siachen Glacier, the largest glacier in the Karakoram Range. Field observations and 10Be exposure dating attest to a period of recurring lake formation and outburst flooding during the penultimate glaciation, prior to ~110 ka. The interaction of rivers and glaciers along the Karakoram is considered a key factor in landscape evolution and presumably promoted headward erosion of the Indus-Shyok drainage system into the western margin of the Tibetan Plateau.
The results of this thesis highlight the strong influence of glaciation and tectonics on valley-fill formation and how this has affected the evolution of different mountain belts. In the Alps, valley-fill deposition has influenced the magnitude and pattern of rock uplift since ice retreat approximately 17,000 years ago. Conversely, the analyzed valley fills in the Himalaya are much older and reflect environmental conditions that prevailed at ~110 ka and ~2.5 Ma, respectively. Thus, the newly developed method has proven useful for inferring the role of sedimentary valley-fill deposits in landscape evolution on timescales ranging from 1,000 to 10,000,000 years.
Variations in the distribution of mass within an orogen may lead to transient sediment storage, which in turn might affect the state of stress and the level of fault activity. Distinguishing between the different forcing mechanisms that cause variations in sediment flux and tectonic activity is therefore one of the most challenging tasks in understanding the spatiotemporal evolution of active mountain belts.
The Himalayan mountain belt is one of the most significant Cenozoic collisional orogens, formed by the collision between the northward-moving Indian Plate and the Eurasian Plate over the last 55-50 Ma. Ongoing convergence of these two tectonic plates is accommodated by faulting and folding within the arc-shaped Himalayan orogen and by the continued lateral and vertical growth of the Tibetan Plateau, the mountain belts adjacent to the plateau, and regions farther north. Growth of the Himalayan orogen is manifested in the development of successive south-vergent thrust systems, which divide the orogen into different morphotectonic domains. From north to south these thrusts are the Main Central Thrust (MCT), the Main Boundary Thrust (MBT) and the Main Frontal Thrust (MFT). The growing topography interacts with moisture-bearing monsoonal winds, resulting in pronounced gradients in rainfall, weathering, erosion and sediment transport toward the foreland and beyond. However, a fraction of this sediment is trapped and transiently stored within the intermontane valleys, or 'duns', of the lower-elevation foothills of the range. An improved understanding of the spatiotemporal evolution of these sediment archives provides a unique opportunity to decipher the triggers of variations in sediment production, delivery and storage in an actively deforming mountain belt, and supports efforts to test linkages between sediment volumes in intermontane basins and changes in the shallow crustal stress field. Because sediment redistribution in mountain belts on timescales of 10²-10⁴ years can affect cultural characteristics and infrastructure in the intermontane valleys, and may even impact the seismotectonics of a mountain belt, there is heightened interest in understanding sediment-routing processes and the causal relationships between tectonism, climate and topography.
My investigation focuses on this intersection of tectonic processes with superposed climatic and sedimentary processes in the Himalayan orogenic wedge. The study area is the intermontane Kangra Basin in the northwestern Sub-Himalaya, because the characteristics of the different Himalayan morphotectonic provinces are well developed there, the area is part of a region strongly influenced by monsoonal forcing, and its numerous fluvial terraces provide excellent strain markers for assessing deformation processes within the Himalayan orogenic wedge. In addition, located in front of the Dhauladhar Range, the region is characterized by pronounced gradients in past and present-day erosion and sedimentation associated with repeatedly changing climatic conditions. In light of these conditions, I analysed climate-driven late Pleistocene-Holocene sediment cycles in this tectonically active region, which may be responsible for triggering tectonic re-organization within the Himalayan orogenic wedge, leading to out-of-sequence thrusting at least since the early Holocene.
The Kangra Basin is bounded by the MBT in the north and the Sub-Himalayan Jwalamukhi Thrust (JMT) in the south, and transiently stores sediments derived from the Dhauladhar Range. The basin contains ~200-m-thick conglomerates reflecting two distinct aggradation phases; following aggradation, several fluvial terraces were sculpted into these fan deposits. 10Be CRN surface-exposure dating of these terrace levels provides an age of 53.4±3.2 ka for the highest-preserved terrace (AF1); subsequently, this surface was incised until ~15 ka, when the second fan (AF2) began to form. AF2 fan aggradation was superseded by episodic Holocene incision, creating at least four terrace levels. We find a correlation between variations in sediment transport and δ18O records from regions affected by the Indian Summer Monsoon (ISM). During strengthened ISM phases and post-LGM glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux, whereas periods of a weakened ISM coupled with lower sediment supply coincided with renewed incision.
However, the evolution of fluvial terraces along Sub-Himalayan streams in the Kangra sector is also forced by tectonic processes. Back-tilted, folded terraces clearly document tectonic activity of the JMT. Offset of one of the terrace levels indicates a shortening rate of 5.6±0.8 to 7.5±1.0 mm/yr over the last ~10 ka. Importantly, my study reveals that late Pleistocene/Holocene out-of-sequence thrusting accommodates 40-60% of the total 14±2 mm/yr of shortening partitioned across the Sub-Himalaya. The fact that the JMT records shortening at a lower rate over longer timescales also hints at out-of-sequence activity within the Sub-Himalaya. Re-activation of the JMT could be related to changes in the tectonic stress field caused by large-scale sediment removal from the basin. I speculate that the deformation processes of the Sub-Himalaya behave according to the predictions of the critical-wedge model and assume the following: while >200 m of sediment aggradation would trigger foreland-ward propagation of the deformation front, re-incision and removal of most of the stored sediments (nearly 80-85% of the optimum basin fill) would again create a sub-critical condition of the wedge taper and trigger retreat of the deformation front.
While tectonism is responsible for the longer-term processes of erosion associated with steepening hillslopes, sediment cycles in this environment are mainly the result of climatic forcing. My new 10Be cosmogenic nuclide exposure dates and a synopsis of previous studies show that the late Pleistocene to Holocene alluvial fills and fluvial terraces studied here record periodic fluctuations of sediment supply and transport capacity on timescales of 10³-10⁵ years. To further evaluate the potential influence of climate change on these fluctuations, I compared the timing of aggradation and incision phases recorded within remnant alluvial fans and terraces with continental climate archives, such as speleothems in neighboring regions affected by monsoonal precipitation. Together with previously published OSL ages constraining the timing of aggradation, I find a correlation between variations in sediment transport and oxygen-isotope records from regions affected by the Indian Summer Monsoon (ISM). Accordingly, during periods of increased monsoon intensity (transitions from dry and cold to wet and warm periods, MIS4 to MIS3 and MIS2 to MIS1; MIS = marine isotope stage) and post-Last Glacial Maximum glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux. Conversely, periods of weakened monsoon intensity or lower sediment supply coincide with re-incision of the existing basin fill.
Finally, my study includes a low-temperature thermochronology component to assess the youngest exhumation history of the Dhauladhar Range. Zircon (U-Th)/He (ZHe) ages and existing low-temperature data sets (ZHe, apatite fission track (AFT)) across this range, together with 3D thermokinematic modeling (PECUBE), constrain the exhumation and activity of the range-bounding Main Boundary Thrust (MBT) since at least mid-Miocene time. The modeling results indicate mean slip rates on the MBT fault ramp of ~2-3 mm/yr since its activation. This has led to the growth of the >5-km-high frontal Dhauladhar Range and to continuous deep-seated exhumation and erosion. The results also provide interesting constraints on deformation patterns and their variation along strike. They point towards the absence of a time-transient 'mid-crustal ramp' in the basal décollement and of duplexing of the Lesser Himalayan sequence, unlike in nearby regions or even the central Nepal domain. A fraction of the convergence (~10-15%) is accommodated along the deep-seated MBT ramp, most likely merging into the MHT. This finding is crucial for a rigorous assessment of the overall level of tectonic activity in the Himalayan morphotectonic provinces, as it contradicts recently published geodetic shortening estimates, in which it has been proposed that the total Himalayan shortening in the NW Himalaya is accommodated within the Sub-Himalaya, with no tectonic activity assigned to the MBT.
Understanding the rates and processes of denudation is key to unraveling the dynamic processes that shape active orogens. This includes decoding the roles of tectonic and climate-driven processes in the long-term evolution of high-mountain landscapes in regions with pronounced tectonic activity and steep climatic and surface-process gradients. Well-constrained denudation rates can be used to address a wide range of geologic problems. In steady-state landscapes, denudation rates are argued to be proportional to tectonic or isostatic uplift rates and provide valuable insight into the tectonic regimes underlying surface denudation. Measuring terrestrial cosmogenic nuclides (TCN) such as 10Be has become a widely used method to quantify catchment-mean denudation rates. Because such measurements average over timescales of 10² to 10⁵ years, they are not as susceptible to stochastic changes as shorter-term denudation-rate estimates (e.g., from suspended-sediment measurements) and are therefore considered more reliable for comparison with long-term processes that operate on geologic timescales. However, the impact of various climatic, biotic, and surface processes on 10Be concentrations and the resulting denudation rates remains unclear and is subject to ongoing discussion. In this thesis, I explore the interaction of climate, the biosphere, topography, and geology in forcing and modulating denudation rates on catchment to orogen scales.
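For context, catchment-mean denudation rates are conventionally derived from 10Be concentrations via the steady-state erosion relation C = P / (λ + ρε/Λ), rearranged for the denudation rate ε. The sketch below uses typical literature values for the attenuation length, rock density and 10Be half-life; these numbers are illustrative and not necessarily the calibration used in the thesis:

```python
import math

LAMBDA_ATT = 160.0                 # spallation attenuation length, g/cm^2 (typical value)
RHO = 2.7                          # rock density, g/cm^3 (assumed)
DECAY = math.log(2) / 1.387e6      # 10Be decay constant, 1/yr (half-life ~1.387 Myr)

def denudation_rate_mm_per_yr(concentration, production_rate):
    """Steady-state catchment-mean denudation rate in mm/yr from a measured
    10Be concentration (atoms/g quartz) and a surface production rate
    (atoms/g/yr), considering spallation only."""
    eps_cm_per_yr = (LAMBDA_ATT / RHO) * (production_rate / concentration - DECAY)
    return eps_cm_per_yr * 10.0    # cm/yr -> mm/yr
```

With an illustrative production rate of 10 atoms/g/yr, a concentration of 2x10^4 atoms/g yields ~0.3 mm/yr; lower concentrations imply faster denudation, which is why dilution by landslide material biases rates high.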
Many processes in highly dynamic active orogens may affect 10Be concentrations in modern river sands and therefore impact 10Be-derived denudation rates. The calculation of denudation rates from 10Be concentrations requires a suite of simplifying assumptions that may not be valid or applicable in many orogens. I investigate how these processes affect 10Be concentrations in the Arun Valley of Eastern Nepal using 34 new 10Be measurements from the main-stem Arun River and its tributaries. The Arun Valley is characterized by steep gradients in climate and topography, with elevations ranging from <100 m asl in the foreland basin to >8,000 m asl in the high sectors to the north, coupled with a five-fold increase in mean annual rainfall across strike of the orogen. Denudation rates from tributary samples increase toward the core of the orogen, from <0.2 to >5 mm/yr from the Lesser to the Higher Himalaya. Very high denudation rates (>2 mm/yr), however, are likely the result of 10Be dilution by surface and climatic processes, such as large landsliding and glaciation, and thus may not be representative of long-term denudation rates. Main-stem Arun denudation rates increase downstream from ~0.2 mm/yr at the border with Tibet to 0.91 mm/yr at the outlet into the Sapt Kosi. However, the downstream 10Be concentrations may not be representative of the entire upstream catchment. Instead, I document evidence for downstream fining of grains from the Tibetan Plateau, resulting in an order-of-magnitude apparent decrease in the measured 10Be concentration.
In the Arun Valley and across the Himalaya, topography, climate, and vegetation are strongly interrelated. The observed increase in denudation rates at the transition from the Lesser to the Higher Himalaya corresponds to abrupt increases in elevation, hillslope gradient, and mean annual rainfall. Thus, across strike (N-S), it is difficult to decipher the potential impacts of climate and vegetation cover on denudation rates. To further evaluate these relationships, I instead took advantage of an along-strike, west-to-east increase in mean annual rainfall and vegetation density in the Himalaya. An analysis of 136 published 10Be denudation rates from along strike of the orogen revealed that median denudation rates do not vary considerably along the ~1500 km E-W extent of the Himalaya. However, the range of denudation rates generally decreases from west to east, with more variable denudation rates in the northwestern regions of the orogen than in the eastern regions. This denudation-rate variability decreases as vegetation density increases (R = -0.90) and increases proportionately with the annual seasonality of vegetation (R = 0.99). Moreover, rainfall and vegetation modulate the relationship between topographic steepness and denudation rates such that in the wet, densely vegetated regions of the Himalaya, topography responds more linearly to changes in denudation rates than in dry, sparsely vegetated regions, where the response of topographic steepness to denudation rates is highly nonlinear. Understanding the relationships between denudation rates, topography, and climate is also critical for interpreting sedimentary archives. However, it remains poorly understood how terrestrial organic matter is transported out of orogens and into sedimentary archives. Plant-wax lipid biomarkers derived from terrestrial and marine sedimentary records are commonly used as paleo-hydrologic proxies to help elucidate these problems.
I address the issue of how to interpret the biomarker record by using the plant wax isotopic composition of modern suspended and riverbank organic matter to identify and quantify organic matter source regions in the Arun Valley. Topographic and geomorphic analysis, provided by the 10Be catchment-mean denudation rates, reveals that a combination of topographic steepness (as a proxy for denudation) and vegetation density is required to capture organic matter sourcing in the Arun River.
My studies highlight the importance of a rigorous and careful interpretation of denudation rates in tectonically active orogens that are furthermore characterized by strong climatic and biotic gradients. Unambiguous information about these issues is critical for correctly decoding and interpreting the possible tectonic and climatic forces that drive erosion and denudation, and the manifestation of the erosion products in sedimentary archives.
The lakes in the Kenyan Rift Valley offer a unique opportunity to study a wide range of hydrochemical conditions, from freshwater to highly saline and alkaline lakes. Because little is known about the hydro- and biogeochemical conditions in the underlying lake sediments, the aim of this study was to extend the existing data sets with porewater and biomarker analyses. Additionally, reduced sulphur compounds and sulphate reduction rates in the sediments were determined. The new data were used to examine the anthropogenic and microbial influence on the lake sediments as well as the influence of the water chemistry on the degradation and preservation of organic matter in the sediment column. The lakes discussed in this study are: Logipi, Eight (a small crater lake in the region of Kangirinyang), Baringo, Bogoria, Naivasha, Oloiden, and Sonachi.
The biomarker compositions were similar in all studied lake sediments; nevertheless, there were some differences between the saline and freshwater lakes. One of those differences is the occurrence of a molecule related to β-carotene, which was only found in the saline lakes. This molecule most likely originates from cyanobacteria, single-celled organisms which are commonly found in saline lakes. In the two freshwater lakes, stigmasterol, a sterol characteristic for freshwater algae, was found. In this study, it was shown that Lakes Bogoria and Sonachi can be used for environmental reconstructions with biomarkers, because the absence of oxygen at the lake bottoms slowed the degradation process. Other lakes, like for example Lake Naivasha, cannot be used for such reconstructions, because of the large anthropogenic influence. But the biomarkers proved to be a useful tool to study those anthropogenic influences. Additionally, it was observed that horizons with a high concentration of elemental sulphur can be used as temporal markers. Those horizons were deposited during times when the lake levels were very low. The sulphur was deposited by microorganisms which are capable of anoxygenic photosynthesis or sulphide oxidation.
This study examines the politics of central bank independence (CBI) using the example of Turkey. It centres on theoretical and empirical questions and problems raised by CBI, discussed with reference to Turkish monetary policy. A central aim is to examine whether, and to what extent, the Turkish central bank can actually be classified as independent and depoliticised after attaining de jure institutional independence. To answer this research question, the institutional conditions, goals and rules governing Turkish monetary policy are clarified, and it is then tested empirically whether the CBRT's monetary-policy practice follows its officially prescribed framework. The main thesis of this study is that the CBRT's formal independence and rule-oriented monetary policy cannot be equated with a depoliticisation of monetary policy in Turkey. As an alternative, the study proposes analysing the CBRT's institutional status as one of relative autonomy. Even a de jure independent central bank cannot insulate itself from political interference, as the Turkish case will show.
The book turns its attention to the countless occurrences of "immer schon" ("always already") in Derrida's Of Grammatology. The strategy of the theory of writing is thereby subjected to its own initialisation.
On the zigzag course through the history of Western metaphysics staked out by the "always already", the calculus of grammatology's founding practice is turned against itself. The experiment is risky, for the "always already" invites a comparison of grammatology's genesis of origin with that of the Jewish scriptural tradition. A passing remark by Derrida about Moses Mendelssohn allows a trace to be read out that leads behind the border posts of Greek philosophy. Remarkable analogies emerge between Derrida's concept of writing and the Jewish scriptural tradition: the relation of orality and writing, and the notion, operative in this relation, of a writing that is older than writing itself.
Ephraim Carlebach
(2016)
Nathalie Hirschmann investigates how the private security industry seeks to establish itself within the security system and how successfully it does so. Her analysis shows how the industry's shady image and the limited competencies ascribed to it make it difficult, on the one hand, to stand alongside the police as an institutional bearer of public security and, on the other, to enter a more professional relationship with clients and contracting parties. Adopting a theory-guided, content-analytical and sociological-conceptual perspective, the study reveals which cognitive and social expansion efforts the security industry has undertaken and where these reach their limits.
Extreme hydro-meteorological events, such as severe droughts or heavy rainstorms, constitute primary manifestations of climate variability and exert a critical impact on the natural environment and human society. This is particularly true for high-mountain areas, such as the eastern flank of the southern Central Andes of NW Argentina, a region impacted by deep convection processes that form the basis of extreme events, often resulting in floods, a variety of mass movements, and hillslope processes. This region is characterized by pronounced E-W gradients in topography, precipitation, and vegetation cover, spanning from humid, densely vegetated areas at low to medium elevations to arid, sparsely vegetated environments at high elevations. This strong E-W gradient is mirrored by differences in the efficiency of surface processes, which mobilize and transport large amounts of sediment through the fluvial system, from the steep hillslopes to the intermontane basins and farther to the foreland. In such a highly sensitive high-mountain environment, even small changes in the spatiotemporal distribution, magnitude, and rates of extreme events may strongly impact environmental conditions, anthropogenic activity, and the well-being of mountain communities and beyond. However, although the NW Argentine Andes comprise the catchments of the La Plata river, which traverses one of the most populated and economically important areas of South America, there are only a few detailed investigations of climate variability and extreme hydro-meteorological events.
In this thesis, I focus on deciphering the spatiotemporal variability of rainfall and river discharge, with particular emphasis on extreme hydro-meteorological events in the subtropical southern Central Andes of NW Argentina during the past seven decades. I employ various methods to assess and quantify statistically significant trend patterns of rainfall and river discharge, integrating high-quality daily time series from gauging stations (40 rainfall and 8 river-discharge stations) with gridded datasets (CPC-uni and TRMM 3B42 V7) for the period between 1940 and 2015. Both the rainfall and the river-discharge time-series analyses provide evidence for a general intensification of the hydrological cycle at intermediate elevations (~0.5–3 km asl) on the eastern flank of the southern Central Andes during this period. This intensification is associated with an increase in the annual total amount of rainfall and the mean annual discharge. The most pronounced trends, however, are found at high percentiles, i.e. for extreme hydro-meteorological events, particularly during the wet season from December to February. An important outcome of my studies is the recognition of a rapid increase in river discharge during the period between 1971 and 1977, most likely linked to the 1976–77 global climate shift, which is associated with North Pacific Ocean sea-surface temperature variability. Interestingly, after this rapid increase, both rainfall and river discharge decreased at low and intermediate elevations along the eastern flank of the Andes. In contrast, during the same time interval, extensive areas at high elevations on the arid Puna de Atacama plateau recorded increasing annual rainfall totals, associated with more intense extreme hydro-meteorological events from 1979 to 2014.
This part of the study reveals that low-, intermediate-, and high-elevation sectors in the Andes of NW Argentina respond differently to changing climate conditions.
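The trend analyses above hinge on detecting statistically significant monotonic changes in daily rainfall and discharge series. The thesis does not spell out its exact test at this point; a common non-parametric choice for such hydrological series, shown here purely as an illustrative sketch (without tie or autocorrelation corrections), is the Mann-Kendall test:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie/autocorrelation correction).

    Returns the S statistic (concordant minus discordant pairs) and an
    approximate p-value from the normal approximation. Illustrative only;
    the thesis does not specify this exact test.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts upward minus downward pairwise steps
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under H0 (no ties)
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))
```

For a strictly increasing series every pair contributes +1, so S equals the number of pairs and the p-value is effectively zero; noisy stationary series yield S near zero and large p-values.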
Possible forcing mechanisms of the pronounced hydro-meteorological variability observed in the study area are also investigated. For the period between 1940 and 2015, I analyzed modes of oscillation of river discharge from small to medium drainage basins (10² to 10⁴ km²) located on the eastern flank of the orogen. First, I decomposed the relevant monthly time series using the Hilbert-Huang Transform, which is particularly appropriate for non-stationary time series that result from non-linear natural processes. I observed that discharge variability in the study region can be described by five quasi-periodic oscillatory modes on timescales varying from 1 to ~20 years. Second, I tested the link between river-discharge variations and large-scale climate modes of variability using different climate indices, such as the BEST ENSO (Bivariate El Niño-Southern Oscillation Time-series) index. This analysis reveals that, although most of the variance on the annual timescale is associated with the South American Monsoon System, a relatively large part of river-discharge variability is linked to Pacific Ocean variability (PDO phases) at multi-decadal timescales (~20 years). To a lesser degree, river-discharge variability is also linked to the Tropical South Atlantic (TSA) sea-surface temperature anomaly at multi-annual timescales (~2–5 years).
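The first stage of the Hilbert-Huang Transform is empirical mode decomposition (EMD), which splits a non-stationary series into quasi-periodic intrinsic mode functions (IMFs) by repeatedly "sifting" out the mean of the upper and lower extrema envelopes. The sketch below is a deliberately minimal version with a fixed sifting count and a crude stopping rule, not the full criteria used in production EMD implementations or in the thesis:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_sift=10):
    """Extract one IMF via a fixed number of sifting iterations (sketch)."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break
        # mean of the cubic-spline envelopes through maxima and minima
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, t, max_imfs=6):
    """Empirical mode decomposition: list of IMFs plus the final residue."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        maxima = argrelextrema(residue, np.greater)[0]
        minima = argrelextrema(residue, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:  # residue nearly monotonic
            break
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue
```

By construction the IMFs and the residue sum back to the input signal, and for well-separated tones the first IMF captures the fastest oscillation, analogous to the five quasi-periodic discharge modes described above.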
Taken together, these findings exemplify the high degree of sensitivity of high-mountain environments to climatic variability and change. This is particularly true for the topographic transitions between the humid, low-to-moderate elevations and the semi-arid to arid highlands of the southern Central Andes. These sectors of the mountain belt react to even subtle changes in the hydro-meteorological regime with major impacts on erosional hillslope processes, generating mass movements that fundamentally affect the transport capacity of mountain streams. In the wake of more severe storms in these areas, the fluvial system is characterized by pronounced variability of stream power on different timescales, leading to cycles of sediment aggradation, the loss of agricultural land, and severe impacts on infrastructure.
The collision of bathymetric anomalies, such as oceanic spreading centers, with convergent plate margins can profoundly affect subduction dynamics, magmatism, and the structural and geomorphic evolution of the overriding plate. The Southern Patagonian Andes of South America are a prime example of sustained oceanic-ridge collision and the successive formation and widening of an extensive asthenospheric slab window since the Middle Miocene. Several of the predicted upper-plate geologic manifestations of such deep-seated geodynamic processes have been studied in this region, but many topics remain highly debated. One of the main controversies concerns the interpretation of the regional low-temperature thermochronology exhumational record and its relationship with tectonic and/or climate-driven processes, ultimately manifested and recorded in the landscape evolution of the Patagonian Andes. The prominent along-strike variation in the topographic characteristics of the Andes, combined with coupled trends in low-temperature thermochronometer cooling ages, has been interpreted in very contrasting ways, invoking either purely climatic (i.e. glacial erosion) or geodynamic (slab-window related) controls.
This thesis focuses on two main aspects of these controversial topics. First, based on field observations and bedrock low-temperature thermochronology data, it addresses an existing research gap concerning the neotectonic activity of the upper plate in response to ridge collision, a mechanism that has been shown to affect upper-plate topography and exhumational patterns in similar tectonic settings. Second, the qualitative interpretation of my new and existing thermochronological data from this region is extended by inverse thermal modelling to constrain the thermal histories recorded in the data and to evaluate the relative importance of surface versus geodynamic factors and their possible relationship with the regional cooling record.
My research is centered on the Northern Patagonian Icefield (NPI) region of the Southern Patagonian Andes. This site is located inboard of the present-day location of the Chile Triple Junction, the juncture between the colliding Chile Rise spreading center and the Nazca and Antarctic Plates along the South American convergent margin. As such, the study area represents the region of most recent oceanic-ridge collision and associated slab-window formation. Importantly, this location also coincides with an abrupt rise in summit elevations and relief in the Southern Patagonian Andes. Field observations, based on geological, structural, and geomorphic mapping, are combined with bedrock apatite (U-Th)/He and apatite fission-track (AHe and AFT) cooling ages sampled along elevation transects across the orogen. These new data reveal the existence of hitherto unrecognized neotectonic deformation along the flanks of the range capped by the NPI.
This deformation is associated with the closely spaced oblique collision of successive oceanic-ridge segments in this region over the past 6 Ma. I interpret this to have caused a crustal-scale partitioning of deformation and the decoupling, margin-parallel migration, and localized uplift of a large crustal sliver (the NPI block) along the subduction margin. The location of this uplift coincides with a major increase in summit elevations and relief at the northern edge of the NPI massif. This mechanism is compatible with possible extensional processes along the topographically subdued trailing edge of the NPI block, as documented by very recent and possibly still active normal faulting. Taken together, these findings suggest a major structural control on short-wavelength variations in topography in the Southern Patagonian Andes, the region affected by ridge collision and slab-window formation.
The second research topic addressed here focuses on using my new and existing bedrock low-temperature cooling ages in forward and inverse thermal modeling. The data were implemented in the HeFTy and QTQt modeling platforms to constrain the late Cenozoic thermal history of the Southern Patagonian Andes in the upper-plate sectors of most recent ridge collision. The data set combines AHe and AFT data from three elevation transects in the region of the Northern Patagonian Icefield. Previous similar studies claimed that far-reaching thermal effects of the approaching ridge collision and slab window produced patterns of Late Miocene reheating in the modelled thermal histories. In contrast, my results show that the currently available data can be explained by a simpler thermal history than previously proposed: a reheating event is not needed to reproduce the observations. Instead, the analyzed ensemble of modelled thermal histories defines protracted Late Miocene cooling and stepwise Pliocene-to-recent exhumation. These findings agree with the geological record of this region. Specifically, this record indicates an Early Miocene phase of active mountain building associated with surface uplift and an active fold-and-thrust belt, followed by a period of stagnating deformation, peneplanation, and a lack of synorogenic deposition in the Patagonian foreland. The subsequent period of stepwise exhumation likely resulted from a combination of pulsed glacial erosion and coeval neotectonic activity. The differences between the present and previously published interpretations of the cooling record can be traced to important inconsistencies in the previously used model setups, chiefly insufficient convergence of the models and improper assumptions regarding the geothermal conditions in the region.
This analysis places methodological emphasis on the prime importance of the model setup and on the need for its thorough examination to evaluate the robustness of the final outcome.
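For intuition about what such thermal models encode: to first order, a low-temperature cooling age records the time since the sample last cooled through the system's nominal closure temperature (Dodson's closure concept). The sketch below uses illustrative closure values (~70 °C for AHe, ~110 °C for AFT; the true values depend on cooling rate, grain size, and kinetics) and simple linear interpolation. Real HeFTy/QTQt runs solve the full helium production-diffusion and fission-track annealing equations instead:

```python
import numpy as np

# Nominal closure temperatures in degrees C; approximate textbook values,
# used here only for illustration.
CLOSURE_C = {"AHe": 70.0, "AFT": 110.0}

def apparent_age(time_ma, temp_c, system):
    """First-order apparent cooling age for a given thermal history.

    time_ma runs from past to present (e.g. [15, ..., 0] Ma), temp_c is
    the temperature at each time. The age is the (interpolated) time of
    the last cooling through the nominal closure isotherm. Sketch only.
    """
    tc = CLOSURE_C[system]
    time_ma = np.asarray(time_ma, float)
    temp_c = np.asarray(temp_c, float)
    age = time_ma[0] if temp_c[0] <= tc else None
    for i in range(len(temp_c) - 1):
        t0, t1 = temp_c[i], temp_c[i + 1]
        if t0 > tc >= t1:  # cooled through closure in this interval
            frac = (t0 - tc) / (t0 - t1)
            age = time_ma[i] + frac * (time_ma[i + 1] - time_ma[i])
    if age is None:
        raise ValueError("path never cooled below closure temperature")
    return age
```

For a steady linear cooling path, the AFT age comes out older than the AHe age because the higher-temperature isotherm is crossed earlier, which is the basic pattern exploited when jointly inverting both systems.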
Möglichkeiten der Mittelstandsförderung durch Vergaberechtsgestaltung und Vergaberechtspraxis
(2016)
The worthiness and capacity of small and medium-sized enterprises (SMEs) for support is a Europe-wide economic-policy concern. This is attested, on the one hand, by numerous provisions in primary, secondary, constitutional, and ordinary statutory law and, on the other, by the significance of SMEs in the economic and social fabric. Within the European Union, not only does the slogan "priority for SMEs" ("Vorfahrt für KMU") prevail; the procurement directives adopted in spring 2014 also paid particular attention to improving SMEs' access to the public procurement market. For measured against the steering and guidance potential of public contract awards, their influence on economic innovation, and their effects on economic and competitive activity on the one hand, and the overall economic importance of SMEs on the other, SMEs remain underrepresented in award procedures despite numerous European and national initiatives. In addition to the opaque regulatory structure of German procurement law, SMEs face particular difficulties from the beginning to the end of the award procedure. These initial findings prompted a renewed examination of the possibilities of supporting SMEs through the design and practice of procurement law.
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
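Single-picture pupillometry typically quantifies, per trial, a baseline-corrected pupil dilation in an analysis window after word onset; graded dilation across degrees of featural manipulation is then the marker of gradient sensitivity. The thesis does not report its exact preprocessing at this point, so the window bounds and helper below are hypothetical:

```python
import numpy as np

def mean_dilation(trace_mm, t_ms, baseline=(-200, 0), window=(300, 2000)):
    """Baseline-corrected mean pupil dilation in an analysis window.

    trace_mm: pupil-diameter samples; t_ms: sample times in ms relative
    to word onset. Window bounds are hypothetical illustration values,
    not the thesis's reported preprocessing.
    """
    trace = np.asarray(trace_mm, float)
    t = np.asarray(t_ms, float)
    base = trace[(t >= baseline[0]) & (t < baseline[1])].mean()
    win = trace[(t >= window[0]) & (t < window[1])]
    return win.mean() - base
```

Comparing this per-trial measure across conditions (e.g. one-feature vs. multi-feature mispronunciations) is what allows a graded, rather than all-or-none, response to be detected.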
In this thesis, the properties of aqueous hemicellulose polysaccharides are investigated using computer simulations. The high swelling capacity of materials composed of these molecules allows the generation of directed motion in plant materials, controlled entirely by water uptake.
To explore the molecular origin of this swelling capacity, a computational model with atomistic resolution for hemicellulose polysaccharides is built and validated against experiments. Using this model, simulations of small polysaccharides are employed to gain an understanding of the interactions of these molecules with water, the influence of water on their conformational freedom, and the swelling capacity quantified in terms of osmotic pressure. It is revealed that branched hemicellulose polysaccharides show different hydration characteristics compared to linear polysaccharides.
To study swelling properties on length and time scales that exceed the limitations imposed by atomistic simulations, a procedure to obtain transferable coarse-grain models is developed. The transferability of the coarse-grain models across different degrees of polymerization as well as different solute concentrations is demonstrated. The procedure therefore allows the construction of large coarse-grained systems based on small atomistic reference systems. Finally, the coarse-grain model is applied to demonstrate that linear and branched polysaccharides show different swelling behavior when coupled to a water bath.
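A standard structure-based route to such transferable coarse-grain potentials (not necessarily the exact procedure developed in the thesis) is iterative Boltzmann inversion: starting from the potential of mean force U0(r) = -kT ln g_target(r), the tabulated pair potential is refined with U_{i+1}(r) = U_i(r) + kT ln[g_i(r)/g_target(r)] until the coarse-grained radial distribution function matches the atomistic target. A minimal sketch:

```python
import numpy as np

KT = 2.494  # kB*T in kJ/mol at ~300 K; illustrative value

def boltzmann_inversion(g_target, eps=1e-8):
    """Initial guess: potential of mean force U0(r) = -kT ln g_target(r)."""
    gt = np.asarray(g_target, float)
    u0 = np.full_like(gt, np.inf)  # hard repulsion where the RDF vanishes
    mask = gt > eps
    u0[mask] = -KT * np.log(gt[mask])
    return u0

def ibi_update(u, g_current, g_target, eps=1e-8):
    """One iterative-Boltzmann-inversion step for a tabulated pair potential.

    u, g_current, g_target are arrays on a common r-grid. Where either
    RDF vanishes, the potential is left unchanged. Sketch only; production
    coarse-graining adds smoothing, extrapolation, and pressure corrections.
    """
    u = np.asarray(u, float).copy()
    gc = np.asarray(g_current, float)
    gt = np.asarray(g_target, float)
    mask = (gc > eps) & (gt > eps)
    u[mask] += KT * np.log(gc[mask] / gt[mask])
    return u
```

The update has the right fixed point: if the coarse-grained RDF already matches the target, the potential is left unchanged; where the model over-structures (g_current > g_target), the potential is raised to push particles apart.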
This study aims to explore cultural and religious aspects of the renewal of Jewish life in Berlin since 1989. The developments within the capital's Jewish community since the fall of the Wall and the collapse of the Soviet Union have led part of the Jewish population in Germany to reengage with its own culture, religion, and history. In the process, the plurality of cultural, literary, and religious expressions of Jewish identities comes to light. The study illustrates this cultural and religious "renaissance" that set in in Berlin after 1989. Four key points characterize Jewish life in Berlin after 1989. First, since reunification Germany has gained a new role as a potential country of immigration for Jews. Above all with the large-scale Jewish immigration from the states of the former Soviet Union since the 1990s, Germany has gradually come to be recognized as an important centre of the European diaspora. Second, although the Shoah remains deeply anchored in the memory of the Jewish community, most children and grandchildren of Shoah survivors refuse to define their Jewish identity exclusively through the Shoah. To rediscover and reclaim their cultural, religious, and historical heritage, they have founded Jewish groups and institutions in Berlin, in most cases as alternatives to the official Jewish Community (Jüdische Gemeinde): artists' groups, Jewish cultural associations, conferences and panel discussions, religious congregations, and houses of study. Third, as a result, the official Jewish Community loses its standing as the sole representative of Berlin's Jewish community; at the same time, this cultural and religious "renaissance" outside the Community's official structures also means a growing plurality and diversification of the Jewish community in Berlin.
Fourth, Berlin plays the leading role in this process. Today many former Jewish sites are being revived: synagogues are rediscovered and renovated, memorials built, city tours organized on the trail of "Jewish Berlin", rabbinical seminaries newly founded. Berlin's topography also serves as a source of inspiration for Jewish (and non-Jewish) writers and artists. The analysis of the religious initiatives, literary works, and cultural productions that emerged after 1989 serves to illustrate aspects of this cultural and religious "renaissance" in Berlin in greater detail.
"Seid mutig und aufrecht!"
(2016)
The Centralverein deutscher Staatsbürger jüdischen Glaubens (C.V.; Central Association of German Citizens of Jewish Faith) was founded in Berlin in 1893. Its work originally focused on legal protection against antisemitic discrimination. A second pillar of its activity was educating non-Jews about the "true nature" of Judaism. Drawing on the Moscow archive holdings, Johann Nicolai analyses in this volume the development of the association from 1933 until its dissolution in 1938. At the centre of the study is the question of how the Centralverein justified the continuation of its work to its members, even though under National Socialism the fundamental civil rights of German Jews, which it had originally set out to defend, were no longer guaranteed. Particular attention is paid to the transformation of the Centralverein after the Nuremberg Laws and its subsequent renaming and restructuring. The engagement with Jewish emigration overseas is also an essential component of the study. Nicolai shows that strong tensions persisted within German Jewry deep into the 1930s.