Nowadays, innovative and entrepreneurial activities and their actors are embedded in interdependent systems to drive joint value creation. Innovation ecosystems and entrepreneurial ecosystems have become established system-level concepts in management research to explain how value transpires between different actors and institutions in distinct contexts. Despite the popularity of the concepts, researchers have critiqued their theoretical depth, conceptual distinctiveness, as well as operationalization and measurement (Autio & Thomas, 2022; Klimas & Czakon, 2022). Furthermore, in light of current-day challenges, research has yet to address how context impacts innovation and entrepreneurial ecosystems and their actors and elements (Wurth et al., 2022).
The aim of this cumulative thesis is to provide a deeper understanding of the conceptualization, operationalization, and measurement of innovation and entrepreneurial ecosystems and investigate how contextual factors can influence the overall ecosystem and its key actors. To this end, bibliometric and empirical-qualitative methods, as well as narrative and systematic literature reviews, are employed. After introducing the research scope and key concepts in Chapter 1, a systematic literature review to operationalize and measure the concept of innovation ecosystems is conducted, and an integrative framework of its composition is introduced in Chapter 2. In Chapter 3, the innovation journal network is outlined by means of science mapping to determine current and emerging research areas characterizing innovation studies. In Chapters 4 and 5, the interplay between the temporal context of the Covid-19 pandemic and the spatial context of entrepreneurial ecosystems is assessed by focusing on the role of organizational resilience and affordances. The findings shed new light on the dynamics and boundaries of entrepreneurial ecosystems as they move between the spatial and digital realm. Building on this, an integrative framework of digital entrepreneurial ecosystems is presented in Chapter 6. The concluding Chapter 7 summarizes my thesis’s conceptual, theoretical, and empirical insights, highlighting implications, limitations, and promising future research avenues.
The findings of this cumulative thesis contribute to the theoretical and conceptual advancement of ecosystems in innovation and entrepreneurship by providing insights into the measurement and operationalization of its elements. Furthermore, the results show that contextual factors, such as crisis events or institutional circumstances, influence innovation and entrepreneurial ecosystems and their actors, calling for a more nuanced consideration of ecosystem configurations and dynamics. By drawing from the theory of affordances, the elements that actually afford value to the actors and how they shift between the physical and digital realm are portrayed. Based on these findings, this thesis introduces novel frameworks and conceptual advancements of the configurations and boundaries of innovation and (digital) entrepreneurial ecosystems, laying the foundation for a renewed understanding of how to design, orchestrate, and evaluate ecosystems today and in the future.
In recent years, the ever-growing number of documents on the Web as well as in closed systems for private or business contexts has led to a considerable increase in valuable textual information about topics, events, and entities. It is a truism that the majority of information, including business-relevant data, is only available in unstructured textual form. The text mining research field comprises various practice areas that share the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is the sharing of relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services continues to grow, and the knowledge inherent in them is available to be utilized. We show that this knowledge can be used for three different tasks.
First, we demonstrate that, when searching for persons with ambiguous names, information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means of handling persons missing from the UGC source. We show that the proposed approaches outperform traditional algorithms for search result clustering. Second, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm that addresses the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is to link mentions within the documents to their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation demonstrates the added value these sources provide and confirms that leveraging user-generated content is an appropriate means of serving different information needs.
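As a rough, self-contained illustration of the first task, the sketch below groups hypothetical search-result snippets by comparing them with Wikipedia-derived person profiles via TF-IDF cosine similarity; the profile texts, snippets, and similarity threshold are invented placeholders and do not reproduce the thesis's actual models.

```python
# Illustrative sketch only: assigns web search results to candidate persons by
# comparing TF-IDF vectors of result snippets with Wikipedia article texts.
# The thesis's actual models differ; names, texts, and the threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

wikipedia_profiles = {              # hypothetical reference texts, one per known person
    "John Smith (economist)": "John Smith is an economist working on trade policy and tariffs.",
    "John Smith (guitarist)": "John Smith is a guitarist and songwriter touring with an acoustic set.",
}
search_results = [                  # hypothetical snippets returned for the query "John Smith"
    "John Smith published a new paper on tariffs and international trade.",
    "Live review: John Smith's acoustic set at the summer festival.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(wikipedia_profiles.values()) + search_results)
profile_vecs = matrix[: len(wikipedia_profiles)]
result_vecs = matrix[len(wikipedia_profiles):]

THRESHOLD = 0.15                    # below this similarity, treat the person as missing from the UGC source
for snippet, sims in zip(search_results, cosine_similarity(result_vecs, profile_vecs)):
    best = sims.argmax()
    label = list(wikipedia_profiles)[best] if sims[best] >= THRESHOLD else "unknown person"
    print(f"{label}: {snippet[:50]}...")
```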
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of African mormyrid fish. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. Their EOD is age-, sex-, and species-specific and is an important character for discriminating among species that are otherwise cryptic. After establishing a complementary set of molecular markers, I examined the radiation of Campylomormyrus through a combined approach of molecular data (sequence data from the mitochondrial cytochrome b and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci developed specifically for the genus Campylomormyrus), observation of the ontogeny and diversification of EOD waveforms, and morphometric analysis of relevant morphological traits. I built the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Using the microsatellite data, I showed that the identified phylogenetic clades are reproductively isolated biological species. In this way, I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that three cryptic species, each characterised by its own adult EOD type, are hidden under a common juvenile EOD form. In addition, I confirmed that the adult male EOD is species-specific and differs more among closely related species than among more distantly related ones. This result, together with the observation that the EOD changes with maturity, suggests its function as a reproductive isolation mechanism. Based on my morphometric shape analysis, I could assign species types to the identified reproductively isolated groups and thus produce a sound taxonomy of the group. Beyond this, I could also identify morphological traits relevant to the divergences between the identified species. Among them, the variations I found in the shape of the trunk-like snout suggest the presence of different trophic specializations; this trait might therefore have been involved in the ecological radiation of the group. In conclusion, I provide a convincing scenario envisioning an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating based on differences in EOD characteristics, but ultimately driven by divergent selection on morphological traits correlated with feeding ecology.
The African weakly electric fish genus Campylomormyrus includes 15 described species, mostly native to the Congo River and its tributaries. They are considered sympatric species because their distribution areas overlap. These species generate species-specific electric organ discharges (EODs) varying in waveform characteristics, including duration, polarity, and phase number. They also exhibit pronounced divergence in their snouts, i.e., in length, thickness, and curvature. The diversification of these two phenotypic traits (EOD and snout) has been proposed as a key factor promoting adaptive radiation in Campylomormyrus. The role of EODs as a pre-zygotic isolation mechanism driving sympatric speciation by promoting assortative mating has been examined using behavioral, genetic, and histological approaches. However, the evolutionary effects of snout morphology and its link to species divergence have not been closely examined. Hence, the main objective of this study is to investigate the diversification of snout morphology and the correlated EOD in order to better understand the sympatric speciation of these species and its evolutionary drivers. Moreover, I aim to utilize intragenus and intergenus hybrids of Campylomormyrus to better understand trait divergence as well as the underlying molecular/genetic mechanisms involved in the radiation scenario. To this end, I utilized three different approaches: feeding behavior analysis, diet assessment, and geometric morphometric analysis. I performed feeding behavior experiments to evaluate the concept of phenotype-environment correlation by testing whether Campylomormyrus species show substrate preferences. The behavioral experiments showed that the short-snouted species exhibits a preference for sandy substrate, the long-snouted species prefers a stony substrate, and the species with an intermediate snout does not exhibit any substrate preference. The experiments suggest that the diverse feeding apparatus in the genus Campylomormyrus may have evolved in adaptation to their microhabitats. I also performed diet assessments of sympatric Campylomormyrus species and a species of the sister genus (Gnathonemus petersii) with markedly different snout morphologies and EODs, using NGS-based DNA metabarcoding of their stomach contents. The diet of each species was documented, showing that aquatic insects such as dipterans, coleopterans, and trichopterans represent the major diet component. The results also showed that all species are able to exploit diverse food niches in their habitats. However, comparison of diet overlap indices showed that the different snout morphologies and the associated divergence in the EOD translate into different prey spectra. These results further support the idea that the EOD could be a ‘magic trait’ triggering both adaptation and reproductive isolation. Geometric morphometrics was also used to compare the phenotypic shape traits of F1 intragenus (Campylomormyrus) and intergenus (Campylomormyrus species and Gnathonemus petersii) hybrids relative to their parents. The hybrids of these species were well separated based on the morphological traits; however, the hybrid phenotypes were closer to those of the short-snouted species. In addition, the likelihood that the short snout is expressed in the hybrids increases with the genetic distance between the parental species. The results confirmed that additive effects produce intermediate phenotypes in F1 hybrids. It seems, therefore, that morphological shape traits in hybrids, unlike physiological traits, are not expressed in a straightforward manner.
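As an illustration of how such a diet comparison can be quantified, the sketch below computes Pianka's niche overlap index from prey-category proportions; the thesis does not specify which overlap index was used, and the proportions shown are invented for demonstration.

```python
# Illustrative only: Pianka's niche overlap index between two species' diets,
# computed from proportions of prey categories. The index choice and all numbers
# below are assumptions, not data from the thesis.
import math

def pianka_overlap(p, q):
    """O = sum(p_i * q_i) / sqrt(sum(p_i^2) * sum(q_i^2)); 0 = no overlap, 1 = identical diets."""
    num = sum(pi * qi for pi, qi in zip(p, q))
    den = math.sqrt(sum(pi * pi for pi in p) * sum(qi * qi for qi in q))
    return num / den if den else 0.0

# hypothetical proportions of metabarcoding reads per prey order (Diptera, Coleoptera, Trichoptera, other)
short_snout_sp = [0.55, 0.20, 0.15, 0.10]
long_snout_sp = [0.25, 0.15, 0.45, 0.15]
print(f"diet overlap: {pianka_overlap(short_snout_sp, long_snout_sp):.2f}")
```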
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly-electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations regarding its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After establishing a set of reference transcriptomes using Next-Generation Sequencing (NGS) technologies, I performed in silico analyses of differential expression to identify sets of genes that might be responsible for the functional differences observed between these two tissues. The results of these analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected in the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ, increasing its excitability; and iv) several myogenic factors might be down-regulated by transcription repressors in the electric organ.
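To illustrate the general logic of such a differential expression comparison, the sketch below contrasts hypothetical normalized counts between electric organ and skeletal muscle replicates using log2 fold changes and a simple Welch's t-test; this is not the thesis's pipeline (dedicated RNA-seq tools based on negative-binomial models would normally be used), and the gene labels and counts are invented.

```python
# Minimal sketch of a differential-expression comparison between electric organ (EO)
# and skeletal muscle (SM) replicates. Not the thesis's actual pipeline; the counts
# and gene labels below are invented for illustration.
import math
from scipy import stats

# hypothetical normalized counts per gene: 3 EO replicates vs. 3 SM replicates
counts = {
    "ion_channel_candidate": ([950, 1020, 880], [120, 140, 100]),
    "contractile_gene_candidate": ([35, 40, 28], [760, 690, 820]),
}

for gene, (eo, sm) in counts.items():
    log2fc = math.log2((sum(eo) / len(eo) + 1) / (sum(sm) / len(sm) + 1))
    # Welch's t-test on log-transformed counts as a rough significance check
    t, p = stats.ttest_ind([math.log2(x + 1) for x in eo],
                           [math.log2(x + 1) for x in sm],
                           equal_var=False)
    direction = "up in EO" if log2fc > 0 else "down in EO"
    print(f"{gene}: log2FC={log2fc:+.2f} ({direction}), p={p:.3g}")
```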
A secondary task of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods to test the hypothesis that putatively cryptic species, which are distinguishable only by their EOD, belong to independently evolving lineages. The results of this analysis were further validated by investigating patterns of diversification at 16 microsatellite loci. The results suggest the presence of a new, as yet undescribed species of Campylomormyrus.
The electrical resistivity tomography (ERT) method is widely used to investigate geological, geotechnical, and hydrogeological problems in inland and aquatic environments (i.e., lakes, rivers, and seas). The objective of the ERT method is to obtain reliable resistivity models of the subsurface that can be interpreted in terms of the subsurface structure and petrophysical properties. The reliability of the resulting resistivity models depends not only on the quality of the acquired data, but also on the employed inversion strategy. Inversion of ERT data results in multiple solutions that explain the measured data equally well. Typical inversion approaches rely on different deterministic (local) strategies that consider different smoothing and damping strategies to stabilize the inversion. However, such strategies suffer from the trade-off of smearing possible sharp subsurface interfaces separating layers with resistivity contrasts of up to several orders of magnitude. When prior information (e.g., from outcrops, boreholes, or other geophysical surveys) suggests sharp resistivity variations, it might be advantageous to adapt the parameterization and inversion strategies to obtain more stable and geologically reliable model solutions. Adaptations to traditional local inversions, for example, by using different structural and/or geostatistical constraints, may help to retrieve sharper model solutions. In addition, layer-based model parameterization in combination with local or global inversion approaches can be used to obtain models with sharp boundaries.
In this thesis, I study three typical layered near-surface environments in which prior information is used to adapt 2D inversion strategies to favor layered model solutions. In cooperation with the coauthors of Chapters 2-4, I consider two general strategies. Our first approach uses a layer-based model parameterization and a well-established global inversion strategy to generate ensembles of model solutions and assess uncertainties related to the non-uniqueness of the inverse problem. We apply this method to invert ERT data sets collected in an inland coastal area of northern France (Chapter 2) and offshore of two Arctic regions (Chapter 3). Our second approach consists of using geostatistical regularizations with different correlation lengths. We apply this strategy to a more complex subsurface scenario on a local intermountain alluvial fan in southwestern Germany (Chapter 4). Overall, our inversion approaches allow us to obtain resistivity models that agree with the general geological understanding of the studied field sites. These strategies are rather general and can be applied to various geological environments where a layered subsurface structure is expected. The flexibility of our strategies allows adaptations to invert other kinds of geophysical data sets such as seismic refraction or electromagnetic induction methods, and could be considered for joint inversion approaches.
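The following schematic sketch illustrates the general idea of a layer-based parameterization combined with a global search: candidate models defined by layer thicknesses and resistivities are scored against observed responses, and an ensemble of the best-fitting models is retained for uncertainty assessment. The forward operator here is a toy stand-in for a real ERT forward solver, and all values are assumptions, not results from the thesis.

```python
# Schematic sketch of a layer-based global search, not the thesis's implementation.
# forward_response() is a toy stand-in for a real ERT forward solver.
import math
import random

def forward_response(thicknesses, resistivities, n_spacings=10):
    """Toy forward operator: depth-weighted average of layer resistivities per pseudo-depth."""
    responses = []
    for i in range(1, n_spacings + 1):
        depth = 2.0 * i
        top, weighted, total = 0.0, 0.0, 0.0
        for t, rho in zip(thicknesses + [1e6], resistivities):  # last layer is a half-space
            seg = max(0.0, min(depth, top + t) - top)
            weighted += seg * math.log10(rho)
            total += seg
            top += t
        responses.append(10 ** (weighted / total))
    return responses

def rms_log_misfit(pred, obs):
    return math.sqrt(sum((math.log10(p) - math.log10(o)) ** 2
                         for p, o in zip(pred, obs)) / len(obs))

observed = forward_response([3.0, 10.0], [50.0, 5.0, 300.0])  # synthetic "data"

random.seed(1)
candidates = []
for _ in range(5000):  # brute-force global search over three-layer models
    thick = [random.uniform(1, 20), random.uniform(1, 30)]
    rho = [random.uniform(1, 1000) for _ in range(3)]
    candidates.append((rms_log_misfit(forward_response(thick, rho), observed), thick, rho))

ensemble = sorted(candidates)[:50]  # keep the best-fitting layered models as an ensemble
print("best misfit:", round(ensemble[0][0], 3), "worst kept misfit:", round(ensemble[-1][0], 3))
```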
Adaptation of nature conservation to global change: an ecosystem-based approach to priority-setting (2013)
Research on regional history has so far largely ignored the development of the Brandenburg branch of the von Trott family and its individual members. The present study aims to remedy this lack of information and, above all, to focus on the establishment of the lineage in the Brandenburg lands. Where did its early representatives come from, and when and where did they gain a foothold in the Mark? Which imperial and territorial political processes were unfolding at the same time, and how might these have influenced the family's trajectory? Was the family, in turn, able to exert influence on the course of imperial and territorial politics? What traces did it leave in the Mark Brandenburg region?
In the course of an extensive study of the literature and sources undertaken to answer these questions in depth, the figure of Adam von Trott the Elder († 1564) emerged. Born around 1500 in the Landgraviate of Hesse, his ambition and determination soon led him, amid the turbulent historical currents of the Reformation, to the court of the Brandenburg Elector Joachim II, where he pursued an almost unparalleled career. Alongside this, by taking over extensive lands of the secularized Cistercian monastery of Himmelpfort, he accumulated enormous hereditary fief holdings in the Uckermark, whose consolidation the following generation of the Brandenburg Trotts devoted itself to. The present volume seeks to clarify the difficulties and perils that Adam and his descendants had to overcome in the process.
Ada (Fishman) Maimon (2023)
Background and aims:
To succeed in competition, elite team and individual athletes often seek to develop both high levels of muscle strength and power and cardiorespiratory endurance. In this context, concurrent training (CT) is a commonly applied and effective training approach. Although they are exposed to high training loads, youth athletes (≤ 18 years) remain underrepresented in the scientific literature. In addition, immunological responses to CT have received little attention. Therefore, the aims of this work were to examine the acute (< 15 min) and delayed (≥ 6 hours) effects of different exercise orders in CT on immunological stress responses, muscular fitness, metabolic response, and rating of perceived exertion (RPE) in highly trained youth male and female judo athletes.
Methods:
A total of twenty male and thirteen female participants, with average ages of 16 ± 1.8 years and 14.4 ± 2.1 years, respectively, were included in the study. They were randomly assigned to two CT sessions: power-endurance versus endurance-power (study 1), or strength-endurance versus endurance-strength (study 2). Markers of immune response (i.e., white-blood-cells, granulocytes, lymphocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index), muscular fitness (i.e., counter-movement jump [CMJ]), metabolic responses (i.e., blood lactate, glucose), and RPE were collected at different time points (i.e., PRE12H, PRE, MID, POST, POST6H, POST22H).
Results (study 1):
There were significant time*order interactions for white-blood-cells, lymphocytes, granulocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index. The power-endurance order resulted in significantly larger PRE-to-POST increases in white-blood-cells, monocytes, and lymphocytes while the endurance-power order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte-ratio and systemic-inflammation-index. Likewise, significantly larger increases from PRE-to-POST6H in white-blood-cells and granulocytes were observed following the power-endurance order compared to endurance-power. All markers of immune response returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. Following the endurance-power order, blood lactate and glucose increased from PRE-to-MID but not from PRE-to-POST. Meanwhile, in the power-endurance order blood lactate and glucose increased from PRE-to-POST but not from PRE-to-MID. A significant time*order interaction was observed for CMJ-force with larger PRE-to-POST decreases in the endurance-power order compared to power-endurance order. Further, CMJ-power showed larger PRE-to-MID performance decreases following the power-endurance order, compared to the endurance-power order. Regarding RPE, significant time*order interactions were noted with larger PRE-to-MID values following the endurance-power order and larger PRE-to-POST values following the power-endurance order.
Results (study 2):
There were significant time*order interactions for lymphocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index. The strength-endurance order resulted in significantly larger PRE-to-POST increases in lymphocytes while the endurance-strength order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte-ratio and systemic-inflammation-index. All markers of the immune system returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. From PRE-to-MID, there was a significantly greater increase in blood lactate and glucose following the endurance-strength order compared to strength-endurance order. Meanwhile, from PRE-to-POST there was a significantly higher increase in blood glucose following the strength-endurance order compared to endurance-strength order. Regarding physical fitness, a significant time*order interaction was observed for CMJ-force and CMJ-power with larger PRE-to-MID increases following the endurance-strength order compared to the strength-endurance order. For RPE, significant time*order interactions were noted with larger PRE-to-MID values following the endurance-strength order and larger PRE-to-POST values following the strength-endurance order.
Conclusions:
The primary findings from both studies revealed order-dependent effects on immune responses. In male youth judo athletes, the results demonstrated greater immunological stress responses, both immediately (≤ 15 min) and delayed (≥ 6 hours), following the power-endurance order compared to the endurance-power order. For female youth judo athletes, the results indicated higher acute, but not delayed, order-dependent changes in immune responses following the strength-endurance order compared to the endurance-strength order. It is worth noting that in both studies, all markers of immune system response returned to baseline levels within 22 hours. This suggests that successful recovery from the exercise-induced immune stress response was achieved within 22 hours. Regarding metabolic responses, physical fitness, and perceived exertion, the findings from both studies indicated acute (≤ 15 minutes) alterations that were dependent on the exercise order. These alterations were primarily influenced by the endurance exercise component. Moreover, study 1 provided substantial evidence suggesting that internal load measures, such as immune markers, may differ from external load measures. This indicates a disparity between immunological, perceived, and physical responses following both concurrent training orders. Therefore, it is crucial for practitioners to acknowledge these differences and take them into consideration when designing training programs.
The assumed comparable environmental conditions on early Mars and early Earth 3.7 Ga ago – the time from which the first fossil records of life on Earth date – suggest the possibility of life emerging on both planets in parallel. As conditions changed, hypothetical life on Mars either became extinct or was able to adapt and might still exist in biological niches. The controversially discussed detection of methane on Mars led to the assumption that it must have a recent origin – either abiotic, through active volcanism or chemical processes, or biogenic. Spatial and seasonal variations in the detected methane concentrations, as well as correlations between locally increased methane concentrations and the presence of water vapor and geological features such as subsurface hydrogen, fueled the hypothesis of a possible biological source of the methane on Mars.
Therefore, phylogenetically old methanogenic archaea, which evolved under early Earth conditions, are often used as model organisms in astrobiological studies to investigate the potential of life to exist in possible extraterrestrial habitats on our neighboring planet. In this thesis, methanogenic archaea originating from two extreme environments on Earth were investigated to test their ability to be active under simulated Mars analog conditions. These extreme environments – the Siberian permafrost-affected soil and the chemoautotrophically based terrestrial ecosystem of Movile Cave, Romania – are regarded as analogs for possible Martian (subsurface) habitats. Two novel species of methanogenic archaea isolated from these environments were described within the frame of this thesis.
It could be shown that concentrations of up to 1 wt% of Mars regolith analogs added to the growth media had a positive influence on the methane production rates of the tested methanogenic archaea, whereas higher concentrations resulted in decreasing rates. Nevertheless, the organisms were able to metabolize when incubated on water-saturated soil matrices made of Mars regolith analogs without any additional nutrients. Long-term desiccation resistance of more than 400 days was demonstrated by reincubation and by indirect counting of viable cells through a combined treatment with propidium monoazide (to inactivate the DNA of destroyed cells) and quantitative PCR. Phyllosilicate-rich regolith analogs seem to be the best soil mixtures for the tested methanogenic archaea to be active under Mars analog conditions. Furthermore, in a simulation chamber experiment, the activity of the permafrost methanogen strain Methanosarcina soligelidi SMA-21 under Mars subsurface analog conditions was demonstrated. Real-time wavelength modulation spectroscopy measurements detected the increase in methane concentration at temperatures down to -5 °C.
The results presented in this thesis contribute to the understanding of the activity potential of methanogenic archaea under Mars analog conditions and thereby provide insights into the possible habitability of present-day Martian (near-)subsurface environments. They also contribute to the interpretation of data from future life detection missions on that planet, for example the ExoMars mission of the European Space Agency (ESA) and Roscosmos, which is planned to launch in 2018 and aims to drill into the Martian subsurface.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), with a focus on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
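As a back-of-the-envelope illustration of how such a minimum extension rate can be derived from fault-scarp measurements, the sketch below converts vertical throws to horizontal heave using an assumed fault dip and divides the summed heave by the age of the faulted unit; the throw values, dip, and age are invented and represent far fewer faults than the rift-wide data set used in the thesis.

```python
# Simplified sketch: time-averaged minimum extension rate from fault-scarp throws,
# assuming planar normal faults with a uniform dip. All numbers are assumptions.
import math

throws_m = [4.2, 1.8, 7.5, 3.1, 2.4]      # hypothetical vertical throws along one short profile (m)
fault_dip_deg = 60.0                      # assumed planar normal-fault dip
unit_age_myr = 0.5                        # age of the faulted marker unit (Ma)

heave_m = [t / math.tan(math.radians(fault_dip_deg)) for t in throws_m]
total_extension_m = sum(heave_m)
rate_mm_per_yr = total_extension_m * 1000 / (unit_age_myr * 1e6)

print(f"minimum extension: {total_extension_m:.1f} m over {unit_age_myr} Myr "
      f"-> {rate_mm_per_yr:.4f} mm/yr")
```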
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context, I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults that strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera, persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and hydrology of the intra-rift basins at a scale of less than 100 km. At present, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated because this combination of faulting and magmatic activity has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when the climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via lake spillovers and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift, and, in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains the isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research analyzing a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma. Based on this record, it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases in which volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment, and the biodiversity of extensional zones in Kenya and elsewhere.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
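The sketch below illustrates the basic mechanics of such an active evaluation: instances are drawn from an instrumental proposal distribution, their labels are queried, and the error rate is estimated with importance weights so that the estimate remains consistent with the test distribution. The uncertainty-based proposal used here is only one plausible heuristic, not the optimal sampling distributions derived in the thesis, and the data are synthetic.

```python
# Minimal sketch of active risk evaluation by instrumental (importance) sampling.
# The proposal q and the synthetic data are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# synthetic pool of unlabeled test instances with model scores in [0, 1]
n_pool = 10_000
scores = rng.uniform(0, 1, n_pool)                                     # model's P(y=1|x)
true_labels = (rng.uniform(0, 1, n_pool) < scores ** 1.3).astype(int)  # hidden ground truth
predictions = (scores >= 0.5).astype(int)

# proposal q: prefer instances the model is uncertain about
q = scores * (1 - scores) + 1e-3
q /= q.sum()
p = np.full(n_pool, 1.0 / n_pool)          # uniform test distribution

budget = 200                               # number of labels we can afford to query
drawn = rng.choice(n_pool, size=budget, replace=True, p=q)
weights = p[drawn] / q[drawn]

# self-normalized importance-weighted error-rate estimate vs. the true pool error
errors = (predictions[drawn] != true_labels[drawn]).astype(float)
est = np.sum(weights * errors) / np.sum(weights)
print(f"estimated error rate: {est:.3f}  "
      f"(true pool error rate: {np.mean(predictions != true_labels):.3f})")
```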
The PhD thesis entitled “Actions through the lens of communicative cues. The influence of verbal cues and emotional cues on action processing and action selection in the second year of life” is based on four studies, which examined the cognitive integration of another person’s communicative cues (i.e., verbal cues, emotional cues) with behavioral cues in 18- and 24-month-olds. In the context of social learning of instrumental actions, it was investigated how the intention-related coherence of either a verbally announced action intention or an emotionally signaled action evaluation with an action demonstration influenced infants’ neuro-cognitive processing (Study I) and selection (Studies II, III, IV) of a novel object-directed action. Developmental research has shown that infants benefit from another’s behavioral cues (e.g., action effect, persistency, selectivity) to infer the underlying goal or intention, respectively, of an observed action (e.g., Cannon & Woodward, 2012; Woodward, 1998). Particularly action effects support infants in distinguishing perceptual action features (e.g., target object identity, movement trajectory, final target object state) from conceptual action features such as goals and intentions. However, less is known about infants’ ability to cognitively integrate another’s behavioral cues with additional action-related communicative cues. There is some evidence showing that in the second year of life, infants selectively imitate a novel action that is verbally (“There!”) or emotionally (positive expression) marked as aligning with the model’s action intention over an action that is verbally (“Whoops!”) or emotionally (negative expression) marked as unintentional (Carpenter, Akhtar, & Tomasello, 1998; Olineck & Poulin-Dubois, 2005, 2009; Repacholi, 2009; Repacholi, Meltzoff, Toub, & Ruba, 2016). Yet, it is currently unclear which role the specific intention-related coherence of a communicative cue with a behavioral cue plays in infants’ action processing and action selection that is, whether the communicative cue confirms, contrasts, clarifies, or is unrelated to the behavioral cue. Notably, by using both verbal cues and emotional cues, we examined not only two domains of communicative cues but also two qualitatively distinct relations between behavioral cues on the one hand and communicative cues on the other hand. More specifically, a verbal cue has the potential to communicate an action intention in the absence of an action demonstration and thus a prior-intention (Searle, 1983), whereas an emotional cue evaluates an ongoing or past action demonstration and thus signals an intention-in-action (Searle, 1983). In a first research focus, this thesis examined infants’ capacity to cognitively integrate another’s intention-related communicative cues and behavioral cues, and also focused on the role of the social cues’ coherence in infants’ action processing and action selection. In a second research focus, and to gain more elaborate insights into how the sub-processes of social learning (attention, encoding, response; cf. Bandura, 1977) are involved in this coherence-sensitive integrative processing, we employed a multi-measures approach. More specifically, we used Electroencephalography (EEG) and looking times to examine how the cues’ coherence influenced the compound of attention and encoding, and imitation (including latencies to first-touch and first-action) to address the compound of encoding and response. 
Based on the action-reconstruction account (Csibra, 2007), we predicted that infants use extra-motor information (i.e., communicative cues) together with behavioral cues to reconstruct another’s action intention. Accordingly, we expected infants to possess a flexibly organized internal action hierarchy, which they adapt according to the cues’ coherence, that is, according to what they inferred to be the overarching action goal. More specifically, in a social-learning situation that comprised an adult model, who demonstrated an action on a novel object that offered two actions, we expected the demonstrated action to lead infants’ action hierarchy when the communicative (i.e., verbal, emotional) cue conveyed similar (confirming coherence) or no additional (unrelated coherence) intention-related information relative to the behavioral cue. In terms of action selection, this action hierarchy should become evident in a selective imitation of the demonstrated action. However, when the communicative cue questioned (contrasting coherence) the behaviorally implied action goal or was the only cue conveying meaningful intention-related information (clarifying coherence), the verbally/emotionally intended action should ascend infants’ action hierarchy. Consequently, infants’ action selection should align with the verbally/emotionally intended action (goal emulation). Notably, these predictions oppose the direct-matching perspective (Rizzolatti & Craighero, 2004), according to which the observation of another’s action directly resonates with the observer’s motor repertoire, with this motor resonance enabling the identification of the underlying action goal. Importantly, the direct-matching perspective predicts a rather inflexible action hierarchy inasmuch as the process of goal identification should solely rely on the behavioral cue, irrespective of the behavioral cue’s coherence with extra-motor intention-related information, as it may be conveyed via communicative cues. As to the role of verbal cues, Study I used EEG to examine the influence of a confirming (Congruent) versus contrasting (Incongruent) coherence of a verbal action intention with the same action demonstration on 18-month-olds’ conceptual action processing (as measured via mid-latency mean negative ERP amplitude) and motor activation (as measured via central mu-frequency band power). The action was demonstrated on a novel object that offered two action alternatives from a neutral position. We expected mid-latency ERP negativity to be enhanced in Incongruent compared to Congruent, because past EEG research has demonstrated enhanced conceptual processing for stimuli that mismatched rather than matched the semantic context (Friedrich & Friederici, 2010; Kaduk et al., 2016). Regarding motor activation, Csibra (2007) posited that the identification of a clear action goal constitutes a crucial basis for motor activation to occur. We therefore predicted reduced mu power (indicating enhanced motor activation) in Congruent relative to Incongruent, because in Congruent, the cues’ match provides unequivocal information about the model’s action goal, whereas in Incongruent, the conflict may render the model’s action goal more unclear. Unexpectedly, in the entire sample, 18-month-olds’ mid-latency ERP negativity during the observation of the same action demonstration did not differ significantly depending on whether this action was congruent or incongruent with the model’s verbal action intention.
Yet, post hoc analyses revealed the presence of two subgroups of infants, each of which exhibited significantly different mid-latency ERP negativity for Congruent versus Incongruent, but in opposing directions. The subgroups differed in their productive action-related language skills, with the linguistically more advanced infants exhibiting the expected response pattern of enhanced ERP mean negativity in Incongruent relative to Congruent, indicating enhanced conceptual processing of an action demonstration that was contrasted rather than confirmed by the verbal action context. As expected, central mu power in the entire sample was reduced in Congruent relative to Incongruent, indicating enhanced motor activation when the action demonstration was preceded by a confirming relative to a contrasting verbal action intention. This finding may indicate the covert preparation for a preferential imitation of the congruent relative to the incongruent action (Filippi et al., 2016; Frey & Gerry, 2006). Overall, these findings are in line with the action-reconstruction account (Csibra, 2007), because they suggest a coherence-sensitive attention to and encoding of the same perceptual features of another’s behavior and thus a cognitive integration of intention-related verbal cues and behavioral cues. Yet, because the subgroup constellation in infants’ ERPs was only discovered post hoc, future research is clearly required to substantiate this finding. Also, future research should validate our interpretation that enhanced motor activation may reflect an electrophysiological marker of subsequent imitation by employing EEG and imitation in a within-subjects design. Study II built on Study I by investigating the impact of coherence of a verbal cue and a behavioral cue on 18- and 24-month-olds’ action selection in an imitation study. When infants of both age groups observed a confirming (Congruent) or unrelated (Pseudo-word: the action demonstration was associated with a novel verb-like cue) coherence, they selectively imitated the demonstrated action over the non-demonstrated alternative action, with no difference between these two conditions. These findings suggest that, as expected, infants’ action hierarchy was led by the demonstrated action when the verbal cue provided similar (Congruent) or no additional (Pseudo-word) intention-related information relative to a meaningful behavioral cue. These findings support the above-mentioned interpretation that enhanced motor activation during action observation may reflect a covert preparation for imitation (Study I). Interestingly, infants did not seem to benefit from the intention-highlighting effect of the verbal cue in Congruent, suggesting that the verbal cue had an unspecific (e.g., attention-guiding) effect on infants’ action selection. In contrast, when infants observed a contrasting (Incongruent) or clarifying (Failed-attempt: the model failed to manipulate the object but verbally announced a certain action intention) coherence, their action selection varied with age and also varied across the course of the experiment (block 1 vs. block 2). More specifically, the 24-month-olds made stronger use of the verbal cue for their action selection in block 1 than did the 18-month-olds. However, while the 18-month-olds’ use of the verbal cue increased across blocks, particularly in Incongruent, the 24-month-olds’ use of the verbal cue decreased across blocks.
Overall, these results suggest that, as expected, infants’ action hierarchy in Incongruent (both age groups) and Failed-attempt (only 24-month-olds) drew on the verbal action intention, because in both age groups, infants emulated the verbal intention about as often as they imitated the demonstrated action or even emulated the verbal action intention preferentially. Yet, these findings were confined to certain blocks. It may be argued that the younger age group had a harder time inferring and emulating the intended, yet never observed action, because this requirement is more demanding in cognitive and motor terms. These demands may explain why the 18-month-olds needed some time to take account of the verbal action intention. In contrast, it seems that the 24-month-olds, although demonstrating their capacity in principle to take account of the verbal cue in block 1, lost trust in the model’s verbal cue, maybe because the verbal cue did not have predictive value for the model’s actual behavior. Supporting this interpretation, research on selective trust has demonstrated that even infants evaluate another’s reliability or competence, respectively, based on how that model handles familiar objects (behavioral reliability) or labels familiar objects (verbal reliability; for reviews, see Mills, 2013; Poulin-Dubois & Brosseau-Liard, 2016). Relatedly, imitation research has demonstrated that the interpersonal aspects of a social-learning situation gain increasing relevance for infants during the second year of life (Gellén & Buttelmann, 2019; Matheson, Moore, & Akhtar, 2013; Uzgiris, 1981). It may thus be argued that when the 24-month-olds were repeatedly faced with a verbally unreliable model, they devalued the verbal cue as a signal of the model’s action intention and instead relied more heavily on alternative cues such as the behavioral cue (Incongruent) or the action context (e.g., object affordances, salience; Failed-attempt). Infants’ first-action latencies were higher in Incongruent and Failed-attempt than in both Congruent and Pseudo-word, and were also higher in Failed-attempt than in Incongruent. These latency findings thus indicate that situations involving a meaningful verbal cue that deviated from the behavioral cue are cognitively more demanding, resulting in a delayed initiation of a behavioral response. In sum, the findings of Study II suggest that both age groups were highly flexible in their integration of a verbal cue and behavioral cue. Moreover, our results do not indicate a general superiority of either cue. Instead, it seems to depend on the informational gain conveyed by the verbal cue whether it exerts a specific, intention-highlighting effect (Incongruent, Failed-attempt) or an unspecific (e.g., attention-guiding) effect (Congruent, Pseudo-word). Studies III and IV investigated the impact of another’s action-related emotional cues on 18-month-olds’ action selection. In Study III, infants observed a model who demonstrated two actions on a novel object in direct succession and combined one of the two actions with a positive (happy) emotional expression and the other action with a negative (sad) emotional expression. As expected, infants imitated the positively emoted (PE) action more often than the negatively emoted (NE) action.
This preference arose from an increase in infants’ readiness to perform the PE action from the baseline period (prior to the action demonstrations) to the test period (following the action demonstrations), rather than from a decrease in readiness to perform the NE action. The positive cue thus had a stronger behavior-regulating effect than the negative cue. Notably, infants’ more general object-directed behavior in terms of first-touch latencies remained unaffected by the emotional cues’ valence, indicating that infants had linked the emotional cues specifically to the corresponding action and not the object as a whole (Repacholi, 2009). Also, infants’ looking times during the action demonstration did not differ significantly as a function of emotional valence and were characterized by a predominant attentional focus on the action/object rather than on the model’s face. Together with the findings on infants’ first-touch latencies, these results indicate a sensitivity to the notion that emotions can have very specific referents (referential specificity; Martin, Maza, McGrath, & Phelps, 2014). In sum, Study III provided evidence for selective imitation based on another’s intention-related (particularly positive) emotional cues in an action-selection task, and thus indicates that infants’ action hierarchy flexibly responds to another’s emotional evaluation of observed actions. Following Repacholi (2009), we suggest that infants used the model’s emotional evaluation to re-appraise the corresponding action (effect), for instance in terms of desirability. Study IV followed up on Study III by investigating the role of the negative emotional cue for infants’ action selection in more detail. Specifically, we investigated whether a contrasting (negative) emotional cue alone would be sufficient to differentially rank the two actions along infants’ action hierarchy or whether instead infants require direct information about the model’s action intention (in the form of a confirming action-emotion pair) to align their action selection with the emotional cues. Also, we examined whether the absence of a direct behavior-regulating effect of the negative cue in Study III was due to the negative cue itself or to the concurrently available positive cue masking the negative cue’s potential effect. To this end, we split the demonstration of the two action-emotion pairs across two trials. In each trial, one action was thus demonstrated and emoted (PE, NE action), and one action was not demonstrated and un-emoted (UE action). For trial 1, we predicted that infants who observed a PE action demonstration would selectively imitate the PE action, whereas infants who observed an NE action demonstration would selectively emulate the UE action. As to trial 2, we expected the complementary action-emotion pair to provide additional clarifying information about the model’s emotional evaluation of both actions, which should either lead to adaptive perseveration (if infants’ action selection in trial 1 had already drawn on the emotional cue) or adaptive change (if infants’ action selection in trial 1 signaled a disregard of the emotional cue). As to trial 1, our findings revealed that, as expected, infants imitated the PE action more often than they emulated the UE action. As in Study III, this selectivity arose from an increase in infants’ propensity to perform the PE action from baseline to trial 1.
Also as in Study III, infants performed the NE action about equally often in baseline and trial 1, which speaks against a direct behavior-regulating effect of the negative cue even when presented in isolation. However, after an NE action demonstration, infants emulated the UE action more often in trial 1 than in baseline, suggesting an indirect behavior-regulating effect of the negative cue. Yet, this indirect effect did not yield a selective emulation of the UE action, because infants performed both action alternatives about equally often in trial 1. Unexpectedly, infants’ action selection in trial 2 was unaffected by the emotional cue. Instead, infants perseverated their action selection of trial 1 in trial 2, irrespective of whether it was adaptive or non-adaptive with respect to the model’s emotional evaluation of the action. It seems that infants changed their strategy across trials, from an initial adherence to the emotional (particularly positive) cue towards bringing about a salient action effect (Marcovich & Zelazo, 2009). In sum, Studies III and IV indicate a dynamic interplay of different action-selection strategies, depending on valence and presentation order. Apparently, at least in infancy, action reconstruction as one basis for selective action performance reaches its limits when infants can only draw on indirect intention-related information (i.e., which action should be avoided). Overall, our findings favor the action-reconstruction account (Csibra, 2007), according to which actions are flexibly organized along a hierarchy, depending on inferential processes based on extra-motor intention-related information. At the same time, the findings question the direct-matching hypothesis (Rizzolatti & Craighero, 2004), according to which the identification (and pursuit) of action goals hinges on a direct simulation of another’s behavioral cues. Based on the studies’ findings, a preliminary working model is introduced, which seeks to integrate the two theoretical accounts by conceptualizing the routes that activation induced by social cues may take to eventually influence an infant’s action selection. Our findings indicate that it is useful to strive for a differentiated conceptualization of communicative cues, because they seem to operate at different places within the process of cue integration, depending on their potential to convey direct intention-related information. Moreover, we suggest that there is bidirectional exchange within each compound of adjacent sub-processes (i.e., between attention and encoding, and encoding and response), and between the compounds. Hence, our findings highlight the benefits of a multi-measures approach when studying the development of infants’ social-cognitive abilities, because it provides a more comprehensive picture of how the concerted use of social cues from different domains influences infants’ processing and selection of instrumental actions. Finally, this thesis points to potential future directions to substantiate our current interpretation of the findings. Moreover, an extension to additional kinds of coherence is suggested to get closer to infants’ everyday world of experience.
Leaves exhibit cells with varying degrees of shape complexity along the proximodistal axis. Heterogeneities in growth directions within individual cells bring about such complexity in cell shape. Highly complex and interconnected gene regulatory networks and signaling pathways have been identified that govern these processes. In addition, the organization of cytoskeletal networks and the mechanical properties of the cell wall greatly influence the regulation of cell shape. Research has shown that microtubules are involved in regulating cellulose deposition and the direction of cell growth. However, the role of the actin cytoskeleton in cell shape regulation has not yet been comprehensively analyzed.
This thesis provides evidence that actin regulates aspects of cell growth, division, and directional expansion that impact the morphogenesis of developing leaves. The jigsaw-puzzle-piece morphology of epidermal pavement cells further serves as an ideal system to investigate the complex morphogenetic processes occurring at the cellular level. Here we have employed live-cell imaging studies to track the development of pavement cells under actin-compromised conditions. Genetic perturbation of the two predominantly expressed vegetative actin genes ACTIN2 and ACTIN7 results in delayed emergence of the cellular protrusions in pavement cells. Perturbation of actin also impacted the organization of microtubules in these cells, which are known to promote the emergence of cellular protrusions. Further, live-cell imaging of actin organization revealed a correlation with cell shape, suggesting that actin plays a role in influencing pavement cell morphogenesis.
In addition, disruption of actin leads to an increase in cell size along the leaf midrib, with cells being highly anisotropic due to reduced cell division. The reduction in cell division further impacted the morphology of the entire leaf, with the mutant leaves being more curved. These results suggest that actin plays a pivotal role in regulating morphogenesis at the cellular and tissue scales, thereby providing valuable insights into the role of the actin cytoskeleton in plant morphogenesis.
People strive for successful communication throughout their lives. To effectively convey one’s own information to others, people employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The exploration of these linguistic cues is known as the study of information structure (IS). An important issue in children’s language acquisition is how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was performed to investigate whether three- to five-year-old Mandarin-speaking children as well as Mandarin-speaking adults could apply prosodic information to recognize focus in sentences. In the second study, not only Mandarin-speaking adults and children but also German-speaking adults and children were included to test the assumption that children can show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. In this study, the same paradigm as in the first study, the sentence-picture verification task, was employed together with the eye-tracking method. Finally, the last study investigated whether five-year-old Mandarin-speaking children could understand sentences with pre-subject only, and again whether prosodic information would help them to better understand this kind of sentence.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues in their ambient language from early on. That is, in Mandarin, a topic-prominent tone language, word order information plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow the word order information. Moreover, although German-speaking children seemed able to follow the prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason for this result is that there are more ways of marking focus in German, such as flexible word order, prosodic information, and focus particles, so it takes German-speaking children longer to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Furthermore, the eye-tracking method helped to uncover how the parser directs attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand sentences with pre-subject only, bringing a better understanding of the acquisition of the focus particle only in Mandarin-speaking children.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands; they also allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are sufficient to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetric masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signal parameters due to the presence of a foreground of many sources that overlap in the frequency band. This is recognized as one of the biggest challenges for the analysis of future detectors’ data, since biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetric masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small-velocity) approximation, which is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM, weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much larger than that of the other. These are most appropriate for binaries with highly asymmetric masses, which challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be included synergistically within the effective-one-body (EOB) formalism, in which the two-body information from each approximation is recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resulting models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how to best incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are when compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism. This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third subleading PN dynamics for the spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show in detail how this is done without incurring the divergences that affected previous attempts, and compare the resulting model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and those with highly asymmetric masses. In particular, the SMR-based models compare much better against NR than the PN-based ones, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out.
We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
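For orientation, in the linear-signal approximation the bias on the parameters of a source of interest induced by a residual or unfitted signal \(\delta h\) is commonly written in the Cutler-Vallisneri form
\[
\Delta\theta^{i} \approx \left(\Gamma^{-1}\right)^{ij}\bigl(\partial_{j}h \,\big|\, \delta h\bigr),
\qquad
\Gamma_{ij} = \bigl(\partial_{i}h \,\big|\, \partial_{j}h\bigr),
\]
where \((a|b)\) denotes the noise-weighted inner product and \(h\) the waveform model of the source of interest; whether the metrics proposed in the thesis coincide exactly with this textbook expression is not asserted here.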
This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In the first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. In analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, such as the entropy in critical processes, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, which do not contain derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but a difference is of course present at the numerical level. Our tests show that the new formulation yields results with a global truncation error that is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code to investigate critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena in the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, velocity and the polytropic index in the EOS.
In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter. Also in this case they found type-I critical behaviour, and they additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup that is as similar as possible to the one considered above. While we confirm that the merged object exhibits type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with earlier findings, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
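Two technical ingredients above can be sketched compactly; the expressions below suppress the metric and geometric factors of the full general-relativistic equations and are meant as orientation, not as the exact formulas of the thesis. Writing an axisymmetric conservation law in cylindrical coordinates as \(\partial_t q + \tfrac{1}{r}\,\partial_r(r f^{r}) + \partial_z f^{z} = s\), the standard formulation expands the radial flux and moves the non-derivative 1/r term to the source,
\[
\partial_t q + \partial_r f^{r} + \partial_z f^{z} = s - \frac{f^{r}}{r},
\]
whereas the new formulation rescales the conserved variables and fluxes by r, \(\tilde q \equiv r\,q\), \(\tilde f \equiv r\,f\), giving
\[
\partial_t \tilde q + \partial_r \tilde f^{r} + \partial_z \tilde f^{z} = r\,s .
\]
The two forms are analytically equivalent but differ at the discrete level, which is where the reported gain in truncation error arises. For the critical-collapse analysis, the type-I scaling relation whose fine structure is studied has the generic textbook form
\[
\tau(P) \simeq -\gamma\,\ln\lvert P - P_{\star}\rvert + \mathrm{const},
\]
with \(\tau\) the survival time of the near-critical solution and \(\gamma\) the inverse growth rate of its single unstable mode; the fine structure appears as a periodic modulation superimposed on this dependence.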
The unceasing impact of intense sunlight on Earth constitutes a continuous source of energy fueling countless natural processes. On a molecular level, the energy contained in the electromagnetic radiation is transferred through photochemical processes into chemical or thermal energy. In the course of such processes, photo-excitations promote molecules into thermally inaccessible excited states. This induces adaptations of their molecular geometry according to the properties of the excited state. Decay processes towards energetically lower-lying states in transient molecular geometries result in the formation of excited-state relaxation pathways. The photochemical relaxation mechanisms depend on the studied system itself, the interactions with its chemical environment and the character of the involved states. This thesis focuses on systems in which photo-induced deprotonation processes occur at specific atomic sites.
To detect these excited-state proton dynamics at the affected atoms, a local probe of the molecular electronic structure is required. Therefore, site-selective and orbital-specific K-edge soft X-ray spectroscopy techniques are used here to detect photo-induced proton dynamics in gaseous and liquid sample environments. The protonation of nitrogen (N) sites in organic molecules and of the oxygen (O) atom in the water molecule is probed locally through transitions between 1s orbitals and the p-derived molecular valence electronic structure. The techniques used are X-ray absorption spectroscopy (XAS) and resonant inelastic X-ray scattering (RIXS). Both give access to the unoccupied local valence electronic structure, whereas the latter additionally probes occupied states.
We apply these probes in optical pump X-ray probe experiments to investigate valence excited-state proton transfer capabilities of aqueous 2-thiopyridone. A characteristic shift of N K-edge X-ray absorption resonances as well as a distinct X-ray emission line are established by us as spectral fingerprints of N deprotonation in the system. We utilize them to identify photo-induced N deprotonation of 2-thiopyridone on femtosecond timescales, in optical pump N K-edge RIXS probe measurements. We further establish excited state proton transfer mechanisms on picosecond and nanosecond timescales along the dominant relaxation pathways of 2-thiopyridone using transient N K-edge XAS.
Despite being an excellent probe mechanism for valence excited-state proton dynamics, the K-edge core-excitation itself also disturbs the electronic structure at specific sites of a molecule. The rapid reaction of protons to 1s photo-excitations can yield directional structural distortions within the femtosecond core-excited state lifetime. These directional proton dynamics can change the energetic separation of eigenstates of the system and alter probabilities for radiative decay between them. Both effects yield spectral signatures of the dynamics in RIXS spectra.
Using these signatures of RIXS transitions into electronically excited states, we investigate proton dynamics induced by N K-edge excitation in the amino-acid histidine. The minor core-excited state dynamics of histidine in basic and neutral chemical environments allow us to establish XAS and RIXS spectral signatures of different N protonation states at its imidazole N sites. Based on these signatures, we identify an excitation-site-independent N-H dissociation for N K-edge excitation under acidic conditions.
Such directional structural deformations, induced by core-excitations, also make proton dynamics in electronic ground states accessible through RIXS transitions into vibrationally excited states. In that context, we interpret high resolution RIXS spectra of the water molecule for three O K-edge resonances based on quantum-chemical wave packet propagation simulations. We show that highly oriented ground state vibrational modes of coupled nuclear motion can be populated through RIXS processes by preparation of core-excited state nuclear wave packets with the same directionality. Based on that, we analytically derive the possibility to extract one-dimensional directional cuts through potential energy surfaces of molecular systems from the corresponding RIXS spectra. We further verify this concept through the extraction of the gas-phase water ground state potential along three coordinates from experimental data in comparison to quantum-chemical simulations of the potential energy surface.
This thesis also contains contributions to instrumentation development for investigations of photo-induced molecular dynamics at high brilliance X-ray light sources. We characterize the setup used for the transient valence-excited state XAS measurements of 2-thiopyridone. Therein, a sub-micrometer thin liquid sample environment is established employing in-vacuum flat-jet technology, which enables a transmission experimental geometry. In combination with a MHz-laser system, we achieve a high detection sensitivity for photo-induced X-ray absorption changes. Additionally, we present conceptual improvements for temporal X-ray optical cross-correlation techniques based on transient changes of multilayer optical properties, which are crucial for the realization of femtosecond time-resolved studies at synchrotrons and free-electron lasers.
In the present thesis, AC electrokinetic forces, such as dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Owing to their medical relevance, influenza virus and anti-influenza antibodies were chosen as model systems. Common methods to bring antibodies or proteins onto biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that, by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer row to the inner ones. Different causes for this gradient have been discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that part of the accumulated material is permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost exclusively presented as a method for temporary immobilization. The spatial distribution of the immobilized viral material or the anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: (A) the influenza virus as the bio-receptor or (B) the influenza virus as the analyte. Different sources of error were excluded by ELISA and passivation experiments. The activity of the immobilized object was then inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. On the other hand, detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to lost activity or wrong orientation of the antibodies. Thus, further examinations of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips have the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. Such a device can serve as a basis for diverse applications in diagnosing and treating influenza as well as various other pathogens.
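For context, the time-averaged dielectrophoretic force on a spherical particle of radius \(r\) in a medium of permittivity \(\varepsilon_m\) is commonly written in the textbook form
\[
\mathbf{F}_{\mathrm{DEP}} = 2\pi\,\varepsilon_m r^{3}\,\mathrm{Re}\!\left[K(\omega)\right]\nabla\lvert \mathbf{E}_{\mathrm{rms}}\rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\,\varepsilon_m^{*}},
\quad
\varepsilon^{*} = \varepsilon - \mathrm{i}\,\sigma/\omega ,
\]
where \(K(\omega)\) is the Clausius-Mossotti factor; the sign of \(\mathrm{Re}[K(\omega)]\) decides whether objects collect at the high-field regions near the electrode edges (positive DEP) or are repelled from them (negative DEP). The specific particle and medium parameters of the experiments above are not restated here.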
Abzug unter Beobachtung (Withdrawal under Observation)
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for foreign military intelligence, using intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and above all intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence at the same time pooled and expanded its analytical capacities in the 1980s. The BND's monopoly on foreign military intelligence was thus increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. The GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994 under the Two Plus Four Treaty. The treaty also prohibited the three Western powers from engaging in military activities in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their services. But what happened to this "Allied legacy"? Who took over the surveillance of the Soviet troops on the German side, and who monitored the troop withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994 and asks about cooperation and competition between armed forces and intelligence services. What military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions were dissolved? How did the requirements for the BND's foreign military intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the troop withdrawal? What role did the former Western powers play in this? The study sees itself as a contribution not only to military history but also to the history of the German intelligence services.
Using several model charge-transfer (CT) complexes in different solvents and at temperatures of 113-300 K, the influence of the environment on the shape and position of the absorption of CT complexes of different binding strengths was investigated.
For this purpose, known band profile functions were tested for their applicability. Since an optimal fit was not possible, a new profile function was developed that gave a better description.
After determining the equilibrium constant and the extinction coefficient, the transition moment could be calculated from the profile area.
The solvent dependence was investigated for different refractive indices and dielectric constants.
For solid complexes, a special preparation technique was chosen. The observed fine structures and the scattering background that occurs are discussed.
Different lake systems may reflect different climatic elements of climate change, and their responses are diverse and not yet completely understood. A comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition therefore provides an exceptional opportunity to investigate distinct natural lake-system responses to different abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution in two different lake systems during the Last Glacial-Interglacial transition (~17-11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 "Extreme events across temporal and spatial scales", which investigates extreme events using climate data, paleo-records and model-based simulations. The three main aims were to (1) establish robust chronologies for both lakes, (2) investigate how major and abrupt climate changes affected the lake systems, and (3) compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions as well as for archive comparisons. Addressing the first aim, a new chronology for Lake Gościąż was established by microscopic varve counting and Bayesian age-depth modelling in Bacon for a non-varved section, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating and pollen analysis. The varve chronology extends from the late Allerød to AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż had suggested. Varve formation throughout the complete Younger Dryas (YD) even allowed the identification of annually to decadally resolved leads and lags in proxy responses at the YD transitions.
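To illustrate the general idea of combining incremental varve counts with interpolation across a non-varved section, a deliberately simplified Python sketch with invented depths and ages is given below; the actual chronology uses the Bayesian age-depth software Bacon, which this sketch does not reproduce.

    import numpy as np

    # Varved section: cumulative varve counts give (nearly) annual ages.
    varve_depth = np.array([0.0, 0.5, 1.0, 1.5])   # depth in m (invented values)
    varve_age   = np.array([0, 480, 1010, 1530])   # varve years below top (invented)

    # Non-varved section bridged by an independent date at its base.
    gap_top, gap_bottom = 1.5, 2.0                  # m (invented)
    anchor_age_bottom = 2600                        # e.g. a calibrated 14C age (invented)

    def age_at(depth):
        """Age estimate at a given depth (simple stand-in, no uncertainties)."""
        if depth <= gap_top:
            return float(np.interp(depth, varve_depth, varve_age))
        # Linear bridge across the gap; a Bayesian model such as Bacon would
        # instead sample many accumulation histories and report uncertainties.
        frac = (depth - gap_top) / (gap_bottom - gap_top)
        return float(varve_age[-1] + frac * (anchor_age_bottom - varve_age[-1]))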
The lateglacial chronology of the Dead Sea (DS) had thus far been based mainly on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in the lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed, shifting it into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
Using microfacies analyses, stable isotopes and temperature reconstructions, the second aim was achieved at Lake Gościąż. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and reduced lake sheltering. Cooling at the YD onset took ~100 years longer than the final warming, and environmental proxies lagged the onset of cooling by ~90 years but responded contemporaneously at the termination of the YD. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually resolved proxy responses to both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This interval is marked by a large but fluctuating lake-level drop, and the complete transition into the Holocene is therefore only recorded in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake-level drops, recorded by the deposition of the UGU and AGU, were separated by one millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Furthermore, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-long rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally more instability during the deposition of the gypsum units. These results point to a complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished on the basis of the individual studies above, which jointly provide an integrated picture of different lake responses to different climatic elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically driven facies changes are more dramatic in the DS than at Lake Gościąż. Furthermore, Lake Gościąż is characterized by continuous varve formation throughout nearly the complete profile, whereas the DS record is largely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. Lateglacial sedimentation in Lake Gościąż is mainly influenced by westerly winds and to a lesser extent by changes in catchment vegetation, whereas the DS is primarily influenced by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes differ.
In summary, this doctoral thesis presents seasonally resolved records from two lake archives during the Lateglacial (ca 17-11 ka) to investigate the impact of abrupt climate changes on different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison with Lake Gościąż. This highlights the importance of constructing a robust chronology and provides a first step towards synchronizing the DS with other eastern Mediterranean archives. Furthermore, climate reconstructions from the lake sediments showed variability on different time scales in the different archives, i.e. decadal- to millennial-scale fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions in Lake Gościąż. This demonstrates the importance of comparing different lake archives to better understand the regional and local impacts of hemispheric-scale climate variability. An unprecedented example is presented of how different lake systems respond differently and react to different climatic elements of abrupt climate changes, further highlighting the importance of understanding the respective lake system for climate reconstructions.
The control of size and morphology of precipitated solid particles is a major economic issue for numerous industries. For instance, it is of interest to the nuclear industry for the recovery of radioactive species from used nuclear fuel.
The precipitate features, which are a key parameter for the subsequent processing of the precipitate, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first corresponds to drops with a high contact angle (>90°) in oil, as a model system for flowing drops; the second corresponds to sessile drops in air with a low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When the two drops come into contact, they may coalesce; the dissolved species mix and react to produce insoluble cerium oxalate. The precipitate features and their effect on hydrodynamics are investigated depending on the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient that induces a Marangoni flow from the low-surface-tension drop over the high-surface-tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence can be modified. Diol/water mixtures are used as solvents in order to fix the surface tension difference between the liquids of both drops independently of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m at identical reactant concentration, density and viscosity. Three precipitation behaviors were identified for the coalescence of water/diol/reactant drops, depending on the oxalic acid excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements. In the intermediate oxalic-excess regime, the formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and “microflowers”. Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
About the relation between implicit Theory of Mind & the comprehension of complement sentences
(2010)
Previous studies on the relation between language and social cognition have shown that children’s mastery of embedded sentential complements plays a causal role in the development of a Theory of Mind (ToM). Children start to succeed on complementation tasks, in which they are required to report the content of an embedded clause, in the second half of their fourth year. Traditional ToM tasks test the child’s ability to predict that a person who is holding a false belief (FB) about a situation will act "falsely". In these tasks, children do not represent FBs until the age of 4 years. According to the linguistic determinism hypothesis, only the unique syntax of complement sentences provides the format for representing FBs. However, experiments measuring children’s looking behavior instead of their explicit predictions provided evidence that 2-year-olds already possess an implicit ToM. This dissertation examined the question of whether there is also an interrelation between implicit ToM and the comprehension of complement sentences in typically developing German preschoolers. Two studies were conducted. In a correlational study (Study 1), 3-year-old children’s performance on a traditional (explicit) FB task, on an implicit FB task and on language tasks measuring the comprehension of tensed sentential complements was collected and tested for interdependence. Eye-tracking methodology was used to assess implicit ToM by measuring participants’ spontaneous anticipatory eye movements while they were watching FB movies. Two central findings emerged. First, predictive looking (implicit ToM) was not correlated with complement mastery, although both measures were associated with explicit FB task performance. This pattern of results suggests that explicit, but not implicit, ToM is language dependent. Second, as a group, 3-year-olds did not display implicit FB understanding. That is, previous findings of a precocious reasoning ability could not be replicated. This indicates that the characteristics of predictive-looking tasks matter for eliciting implicit FB understanding, as the current task was completely nonverbal and as complex as traditional FB tasks. Study 2 took a methodological approach by investigating whether children display an earlier comprehension of sentential complements when the same means of measurement is used as in experimental tasks tapping implicit ToM, namely anticipatory looking. Two experiments were conducted. 3-year-olds were confronted either with a complement sentence expressing the protagonist’s FB (Exp. 1) or with a complex sentence expressing the protagonist’s belief without giving any information about the truth or falsity of that belief (Exp. 2). Afterwards, their expectations about the protagonist’s future behavior were measured. Overall, the implicit measures revealed no considerably earlier understanding of sentential complementation. Whereas 3-year-olds did not display comprehension of complex sentences if these embedded a false proposition, children from 3;9 years on were proficient in processing complement sentences if the truth value of the embedded proposition could not be evaluated. This pattern of results suggests that (1) the linguistic expression of a person’s FB does not elicit implicit FB understanding and that (2) the assessment of the purely syntactic understanding of complement sentences is affected by competing reality information.
In conclusion, this dissertation found no evidence that implicit ToM is related to the comprehension of sentential complementation. The findings suggest that implicit ToM might be based on nonlinguistic processes. The results are discussed in the light of recently proposed dual-process models that assume two cognitive mechanisms accounting for different levels of ToM task performance.
Many complex systems that we encounter in the world can be formalized as networks. Consequently, they have been a focus of computer science for decades, where algorithms are developed to understand and utilize these systems.
Surprisingly, our theoretical understanding of these algorithms and their behavior in practice often diverge significantly. In fact, they tend to perform much better on real-world networks than one would expect when considering the theoretical worst-case bounds. One way of capturing this discrepancy is the average-case analysis, where the idea is to acknowledge the differences between practical and worst-case instances by focusing on networks whose properties match those of real graphs. Recent observations indicate that good representations of real-world networks are obtained by assuming that a network has an underlying hyperbolic geometry.
In this thesis, we demonstrate that the connection between networks and hyperbolic space can be utilized as a powerful tool for average-case analysis. To this end, we first introduce strongly hyperbolic unit disk graphs and identify the famous hyperbolic random graph model as a special case of them. We then consider four problems where recent empirical results highlight a gap between theory and practice and use hyperbolic graph models to explain these phenomena theoretically. First, we develop a routing scheme, used to forward information in a network, and analyze its efficiency on strongly hyperbolic unit disk graphs. For the special case of hyperbolic random graphs, our algorithm beats existing performance lower bounds. Afterwards, we use the hyperbolic random graph model to theoretically explain empirical observations about the performance of the bidirectional breadth-first search. Finally, we develop algorithms for computing optimal and nearly optimal vertex covers (problems known to be NP-hard) and show that, on hyperbolic random graphs, they run in polynomial and quasi-linear time, respectively.
Our theoretical analyses reveal interesting properties of hyperbolic random graphs, and our empirical studies present evidence that these properties, as well as our algorithmic improvements, translate back into practice.
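As background for the bidirectional breadth-first search discussed above, a minimal Python sketch of the generic algorithm is given below; it only illustrates the alternating expansion of the smaller frontier and is not the implementation or analysis of the thesis.

    def bidirectional_bfs(adj, s, t):
        """Length of a shortest s-t path in an unweighted graph, or None.
        adj maps every node to an iterable of its neighbours (undirected)."""
        if s == t:
            return 0
        dist_a, dist_b = {s: 0}, {t: 0}
        frontier_a, frontier_b = [s], [t]
        best = None
        while frontier_a and frontier_b:
            # Always expand the smaller frontier; on strongly heterogeneous
            # graphs this keeps both search trees small until they meet.
            if len(frontier_a) > len(frontier_b):
                frontier_a, frontier_b = frontier_b, frontier_a
                dist_a, dist_b = dist_b, dist_a
            next_frontier = []
            for u in frontier_a:
                for v in adj[u]:
                    if v in dist_b:                      # edge crossing the two trees
                        d = dist_a[u] + 1 + dist_b[v]
                        best = d if best is None else min(best, d)
                    if v not in dist_a:
                        dist_a[v] = dist_a[u] + 1
                        next_frontier.append(v)
            if best is not None:                         # level finished, crossing found
                return best
            frontier_a = next_frontier
        return None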
In recent years, relatively complex erosion models have been developed whose sub-processes are increasingly based on physically founded approaches. This entails a larger number of up-to-date input parameters whose determination in the field is labour-intensive and costly. Moreover, the parameters are recorded at points, i.e. at specific locations, rather than spatially as in remote sensing. This thesis shows how satellite data can be used as a comparatively inexpensive supplement or alternative to conventional parameter surveys. As examples, the leaf area index (LAI) and the degree of soil cover are derived for the physically based erosion model EROSION 3D. The focus is on demonstrating existing methods that form the basis for an operational provision of such quantities, not only for erosion models but for process models in general. The study area is the predominantly agricultural catchment of the Mehltheuer Bach, located in the Saxon loess region, for which simulation runs with conventionally collected input parameters are available for 29 rainfall events in 1999 [MICHAEL et al. 2000]. The remote sensing data consist of Landsat-5 TM scenes from 13 March, 30 April and 19 July 1999. Since the vegetation parameters are required for all rainfall events, they are interpolated in time based on the development of the LAI. To this end, the LAI is first derived for all crop types according to the semi-empirical models of CLEVERS [1986] and BARET & GUYOT [1991], with coefficients taken from the literature. Furthermore, a method is examined by which the coefficients for the Clevers model are determined from the TM data and a simplified growth model. The degree of cover is derived from the LAI according to ROSS [1981]. The temporal interpolation of the LAI is implemented by a field-specific fitting of a simplified growth model that originates from the hydrological model SWIM [KRYSANOVA et al. 1999] and uses mean daily temperatures as input. With these methods, dead plant material remains unaccounted for. Compared with conventional terrestrial parameter surveys, they allow a more differentiated representation of spatial variability and of the temporal course of the vegetation parameters. The simulation runs are carried out both with the cover degrees derived directly from the TM data (pixel-based) and with the temporally interpolated cover degrees for all events (field-based). In both approaches, an improvement in the spatial distribution of the parameters, and thus a spatial redistribution of erosion and deposition areas, is achieved compared with the previous estimation. For the spatial heterogeneity present in the study area (e.g. field size), Landsat TM data offer a sufficiently accurate spatial resolution. This demonstrates that satellite-based remote sensing can be usefully employed within the scope of these investigations. For an operational provision of the parameters at reasonable expense, the methods need to be further validated and automated as far as possible.
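As an illustration of the kind of semi-empirical LAI retrieval and LAI-to-cover conversion referred to above (a Clevers-type WDVI inversion and a Ross-type cover relation), a minimal Python sketch is given below; the coefficient values and function names are placeholder assumptions, not the calibrated values used in the study.

    import numpy as np

    def wdvi(nir, red, soil_slope=1.2):
        """Weighted Difference Vegetation Index; soil_slope is the assumed
        NIR/RED reflectance ratio of bare soil."""
        return nir - soil_slope * red

    def lai_from_wdvi(nir, red, wdvi_inf=0.55, alpha=0.35, soil_slope=1.2):
        """Clevers-type inversion: LAI = -1/alpha * ln(1 - WDVI/WDVI_inf)."""
        w = wdvi(nir, red, soil_slope)
        return -np.log(np.clip(1.0 - w / wdvi_inf, 1e-6, None)) / alpha

    def cover_fraction(lai, k=0.5):
        """Ross-type relation: canopy cover = 1 - exp(-k * LAI)."""
        return 1.0 - np.exp(-k * lai)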
This thesis, 'Abflußentwicklung in Teileinzugsgebieten des Rheins - Simulationen für den Ist-Zustand und für Klimaszenarien' (Runoff development in sub-catchments of the Rhine - simulations for the present state and for climate scenarios), investigates the effects of possible future climate changes on runoff in selected sub-catchments of the Rhine shaped by low mountain ranges: the Moselle (down to the Cochem gauge), the Sieg (down to the Menden 1 gauge) and the Main (down to the Kemmern gauge). In a first step, using the hydrological model HBV-D, important model processes are parameterized according to the catchment characteristics, creating a representation of the catchment hydrology that can simulate a time series of gauge discharges from time series of measured daily values (temperature, precipitation). The quality of the simulation of the present state (standard measurement period 1 January 1961 to 31 December 1999) is good to very good for the calibration and validation periods in all study areas. To facilitate the extensive and time-consuming catchment-related data preparation for the hydrological model HBV-D, a working environment was developed based on program extensions of the geographic information system ArcView and additional auxiliary programs. The working environment HBV-Params provides a graphical user interface and gives both experienced hydrologists and hydrologically trained users, e.g. students specializing in hydrology, flexibility and full control in deriving parameter values and editing parameter and control files. In contrast to predecessor versions with rudimentary working environments, HBV-D can thus also be used outside research for teaching and training purposes. In a second step, areal precipitation totals, areal temperatures and simulated mean discharges (MQ) of the present state are compared with the states of two climate scenarios for the scenario period 100 years later (2061-2099). The climate scenarios are based on simulated circulation patterns from one model run each of two global circulation models (GCMs), which were transferred into daily-value scenarios (temperature, precipitation) at measuring stations in the study areas using a statistical regionalization method and are used as input data for the hydrological model. For the second half of the 21st century, both regionalized climate scenarios show an increase in the annual means of areal temperature as well as an increase in the annual totals of areal precipitation, accompanied by high variability. Considering the seasonal (monthly) changes in temperature, precipitation and mean discharge between the scenario period (2061-2099) and the present state yields, in all study areas, a temperature increase (higher in summer than in winter) and a general increase in precipitation totals (with strong fluctuations between individual months), which in the hydrological simulation lead to markedly higher mean discharges from November to March and slightly increased mean discharges in the remaining months. The magnitude of the discharge increase differs between the individual climate scenarios and shows opposite tendencies in the summer and winter half-years.
The main reason for the simulated strong increase in mean discharges in the winter half-year is the low winter evapotranspiration, which persists despite the temperature increase in the climate scenarios, so that increased precipitation can be transformed directly into increased discharge. A comparison of the study areas shows, in individual months, changes in precipitation totals decreasing from west to east, which could be interpreted as an indication of the importance of continentality influences in south-west Germany even under changed climatic conditions. From the regionalized climate scenarios, change amounts are derived for modulating measured time series by means of synthetic scenarios, which can be converted into hydrological model responses with little computational effort. The direct derivation of synthetic scenarios from GCM output values (near-surface temperature and total precipitation) at individual GCM grid points produced unsatisfactory results. Only the future can show whether, to what extent and with what temporal distribution the precipitation and temperature changes used in the (synthetic) scenarios will occur. However, an assessment of how the runoff conditions, and in particular the mean discharges of the study areas, would develop under possible changes can already be made today. Scenario-based simulations are one way of partially estimating unknown future boundary conditions and regional effects of possible changes in the climate system and of developing corresponding risk mitigation strategies. Any modelling and simulation of natural systems, however, is associated with considerable uncertainties. Comparatively large uncertainties are linked to the future development of the socio-economic system and the complexity of the climate system. In addition, uncertainties in the individual components of the model chain emission scenarios/gas cycle models - global circulation models/regionalization - hydrological model, which form a cascade of uncertainties, together with data uncertainties in the measurement of hydrometeorological variables, have a considerable influence on the trustworthiness of the simulation results, which should be interpreted as one depicted value within a band of possible outcomes. For reasons of scientific rigour, but also for better comparability of the results of regional studies in the still young field of climate impact research, attention should be paid to (1) the use of robust hydrological models that adequately describe in particular temperature-influenced processes, (2) the use of long time series (at least 30 years) of measured values, and (3) the simultaneous comparative consideration of climate scenarios based on different GCMs (and, if possible, taking different emission scenarios into account).
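As an illustration of the synthetic-scenario construction described above, in which change amounts derived from the regionalized scenarios modulate measured series, a minimal Python sketch of a delta-change scheme is given below; the function and the monthly change values are illustrative assumptions, not the exact scheme of the thesis.

    def apply_delta_change(dates, temp_obs, precip_obs, d_temp, f_precip):
        """dates: datetime-like objects with a .month attribute;
        d_temp: additive monthly temperature changes in K, keyed 1..12;
        f_precip: multiplicative monthly precipitation factors, keyed 1..12."""
        temp_scen = [t + d_temp[d.month] for d, t in zip(dates, temp_obs)]
        prec_scen = [p * f_precip[d.month] for d, p in zip(dates, precip_obs)]
        return temp_scen, prec_scen

The modulated series would then be fed to the hydrological model in place of the observed ones.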
Iron-sulfur clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, which is a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein to produce cyclic pyranopterin monophosphate (cPMP) from 5’GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions, using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, when oxygen is depleted. Under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, these proteins have been identified as A-type carrier proteins (ATCs) by phylogenomic and genetic studies. So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role of providing [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, whose expression is regulated by the transcriptional regulator FNR (fumarate and nitrate reduction regulator). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of Fe-S clusters into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are sufficiently produced under the conditions tested. This observation suggests that ErpA regulates nitrate reductase expression indirectly by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit and multi-cofactor-containing enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that consists of a flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into E. coli periplasmic aldehyde oxidoreductase (PaoABC). The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters into PaoABC, respectively, under aerobic respiratory conditions.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories such as in questions, relativization and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view. The constructions are built on two basic fronting operations: relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis adds to the growing body of tonal reflexes of A-bar movement reported for a few African languages. The thesis also provides an insight into the complementizer domain (C-domain) of Igbo.
A water quality model for shallow river-lake systems and its application in river basin management
(2007)
This work documents the development and application of a new model for simulating mass transport and turnover in rivers and shallow lakes. The simulation tool called 'TRAM' is intended to complement mesoscale eco-hydrological catchment models in studies on river basin management. TRAM aims at describing the water quality of individual water bodies, using problem- and scale-adequate approaches for representing their hydrological and ecological characteristics. The need for such flexible water quality analysis and prediction tools is expected to further increase during the implementation of the European Water Framework Directive (WFD) as well as in the context of climate change research. The developed simulation tool consists of a transport and a reaction module with the latter being highly flexible with respect to the description of turnover processes in the aquatic environment. Therefore, simulation approaches of different complexity can easily be tested and model formulations can be chosen in consideration of the problem at hand, knowledge of process functioning, and data availability. Consequently, TRAM is suitable for both heavily simplified engineering applications as well as scientific ecosystem studies involving a large number of state variables, interactions, and boundary conditions. TRAM can easily be linked to catchment models off-line and it requires the use of external hydrodynamic simulation software. Parametrization of the model and visualization of simulation results are facilitated by the use of geographical information systems as well as specific pre- and post-processors. TRAM has been developed within the research project 'Management Options for the Havel River Basin' funded by the German Ministry of Education and Research. The project focused on the analysis of different options for reducing the nutrient load of surface waters. It was intended to support the implementation of the WFD in the lowland catchment of the Havel River located in North-East Germany. Within the above-mentioned study TRAM was applied with two goals in mind. In a first step, the model was used for identifying the magnitude as well as spatial and temporal patterns of nitrogen retention and sediment phosphorus release in a 100 km stretch of the highly eutrophic Lower Havel River. From the system analysis, strongly simplified conceptual approaches for modeling N-retention and P-remobilization in the studied river-lake system were obtained. In a second step, the impact of reduced external nutrient loading on the nitrogen and phosphorus concentrations of the Havel River was simulated (scenario analysis) taking into account internal retention/release. The boundary conditions for the scenario analysis such as runoff and nutrient emissions from river basins were computed by project partners using the catchment models SWIM and ArcEGMO-Urban. Based on the output of TRAM, the considered options of emission control could finally be evaluated using a site-specific assessment scale which is compatible with the requirements of the WFD. Uncertainties in the model predictions were also examined. According to simulation results, the target of the WFD -- with respect to total phosphorus concentrations in the Lower Havel River -- could be achieved in the medium-term, if the full potential for reducing point and non-point emissions was tapped. Furthermore, model results suggest that internal phosphorus loading will ease off noticeably until 2015 due to a declining pool of sedimentary mobile phosphate.
Mass balance calculations revealed that the lakes of the Lower Havel River are an important nitrogen sink. This natural retention effect contributes significantly to the efforts aimed at reducing the river's nitrogen load. If a sustainable improvement of the river system's water quality is to be achieved, enhanced measures to further reduce the immissions of both phosphorus and nitrogen are required.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill this gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience by exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionalities, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by a virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present a security management solution to prevent the misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates that it is possible to implement hands-on security laboratories on the Internet reliably, securely, and economically.
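To illustrate the kind of run-time monitoring and recovery service described above, the following Python sketch (purely illustrative; the VM names and helper functions are hypothetical placeholders, not the Tele-Lab API) shows a simple watchdog loop over lab virtual machines:

# Illustrative watchdog loop over lab virtual machines (hypothetical helpers)
import time

LAB_VMS = ["lab-vm-01", "lab-vm-02"]  # hypothetical VM names

def vm_is_healthy(name: str) -> bool:
    """Placeholder health check, e.g. ping the VM or query the hypervisor API."""
    return True  # stub for illustration

def restart_vm(name: str) -> None:
    """Placeholder recovery action, e.g. reset the VM via the hypervisor API."""
    pass  # stub for illustration

def watchdog(poll_seconds: float = 30.0, max_rounds: int = 1) -> None:
    for _ in range(max_rounds):          # bounded here; a real service would loop forever
        for vm in LAB_VMS:
            if not vm_is_healthy(vm):
                restart_vm(vm)           # detect and recover from failures at run time
        time.sleep(poll_seconds)

watchdog(poll_seconds=0.0)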
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
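As a rough illustration of the Newton-Krylov idea mentioned above (a minimal sketch, not the SpECTRE discontinuous Galerkin solver, and without the multigrid-Schwarz preconditioner), one can solve a simple nonlinear elliptic test problem matrix-free with SciPy's newton_krylov:

# Matrix-free Newton-Krylov solve of -Laplace(u) + u**3 = 1 on a 2D grid
# with homogeneous Dirichlet boundary values (finite differences, not DG).
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    res = np.zeros_like(u)
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / h**2
    res[1:-1, 1:-1] = -lap + u[1:-1, 1:-1]**3 - 1.0
    # enforce u = 0 on the boundary rows and columns
    res[0, :], res[-1, :] = u[0, :], u[-1, :]
    res[:, 0], res[:, -1] = u[:, 0], u[:, -1]
    return res

u0 = np.zeros((n, n))
sol = newton_krylov(residual, u0, method="lgmres")
print("max |residual|:", np.abs(residual(sol)).max())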
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of data and require new solutions to software and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease -- the primary cause of subsequent heart failure.
With the aim of achieving a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis as a prerequisite for collaborative research. Based on this platform, this thesis deals, on the one hand, with transferring established Systems Biology methods to the Systems Medicine context and, on the other hand, with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general software requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as the comparison of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
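The design-formula concept can be illustrated with a small, hypothetical example (the thesis applications are R/Shiny based; here a Python sketch with statsmodels is used instead, and the sample table is invented): a single formula encodes a multi-group comparison adjusted for sex, including an interaction term for sex-stratified signatures:

# A design formula encoding "disease effect, adjusted for sex, with interaction"
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-protein table: abundance per sample plus clinical factors
df = pd.DataFrame({
    "abundance": [5.1, 4.8, 6.3, 6.0, 7.2, 7.5, 5.0, 6.1],
    "disease":   ["AS", "AS", "MR", "MR", "AS", "AS", "MR", "MR"],
    "sex":       ["f", "m", "f", "m", "f", "m", "f", "m"],
})

model = smf.ols("abundance ~ C(disease) * C(sex)", data=df).fit()
print(model.params)  # disease, sex and interaction coefficients for this protein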
Eatomics subsequently supported the collaborative analysis and interpretation of the proteomic profiles of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. We find that AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism. Both effects are stronger in AS. In muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also find relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection of human tissue from in vivo collected biopsies to date. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Apart from the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load but not under volume load in the left ventricle, and we also provide the basis for a newly developed metabolic model of the heart.
A systems biological approach towards the molecular basis of heterosis in Arabidopsis thaliana
(2011)
Heterosis is defined as the superiority in performance of heterozygous genotypes compared to their corresponding genetically different homozygous parents. This phenomenon has been known since the beginning of the last century and has been widely used in plant breeding, but the underlying genetic and molecular mechanisms are not well understood. In this work, a systems biological approach based on molecular network structures is proposed to contribute to the understanding of heterosis. Hybrids are likely to contain additional regulatory possibilities compared to their homozygous parents and, therefore, they may be able to correctly respond to a higher number of environmental challenges, which leads to a higher adaptability and, thus, the heterosis phenomenon. In the network hypothesis for heterosis, presented in this work, more regulatory interactions are expected in the molecular networks of the hybrids compared to the homozygous parents. Partial correlations were used to assess this difference in the global interaction structure of regulatory networks between the hybrids and the homozygous genotypes. This network hypothesis for heterosis was tested on metabolite profiles as well as gene expression data of the two parental Arabidopsis thaliana accessions C24 and Col-0 and their reciprocal crosses. These plants are known to show a heterosis effect in their biomass phenotype. The hypothesis was confirmed for mid-parent and best-parent heterosis for both hybrids in our experimental metabolite as well as gene expression data. It was shown that this result is influenced by the cutoffs used during the analyses. Overly strict filtering resulted in sets of metabolites and genes for which the network hypothesis for heterosis does not hold for either hybrid with regard to mid-parent or best-parent heterosis. In an over-representation analysis, the genes that show the largest heterosis effects according to our network hypothesis were compared to genes of heterotic quantitative trait loci (QTL) regions. For each hybrid, regarding mid-parent as well as best-parent heterosis, a significantly larger overlap between the resulting gene lists of the two different approaches towards biomass heterosis was detected than expected by chance. This suggests that each heterotic QTL region contains many genes influencing biomass heterosis in the early development of Arabidopsis thaliana. Furthermore, this integrative analysis led to a confinement of, and an increased confidence in, the group of candidate genes for biomass heterosis in Arabidopsis thaliana identified by both approaches.
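A minimal sketch of the partial-correlation idea (with assumptions only: random example data, a generic precision-matrix estimator and an arbitrary threshold, not the thesis pipeline) could look as follows; the network hypothesis would predict more above-threshold interactions for the hybrid than for the parents:

# Count partial-correlation "interactions" above a threshold per genotype
import numpy as np

def partial_correlations(X):
    """X: samples x variables; partial correlations from the precision matrix."""
    prec = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

def n_interactions(X, threshold=0.3):
    pcor = partial_correlations(X)
    iu = np.triu_indices_from(pcor, k=1)
    return int(np.sum(np.abs(pcor[iu]) > threshold))

rng = np.random.default_rng(1)
parent = rng.normal(size=(40, 20))  # hypothetical parental metabolite profiles
hybrid = rng.normal(size=(40, 20))  # hypothetical hybrid metabolite profiles
print("parent:", n_interactions(parent), "hybrid:", n_interactions(hybrid))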
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that allow the cell to adapt to its environment and to move and distribute nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined in two genetic knockouts, one RNAi knockdown and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in the plant and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem. DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist, which constantly generate spatio-temporal data. This includes for example traffic surveillance systems, which gather movement data about human or vehicle movements, remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes, as well as sensor networks in different domains, such as logistics, animal behavior study, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first covers the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, which have been recorded over the period of a month. By applying the interactive methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enables them to find patterns in the climate data and identify e.g. clusters in the networks or flow patterns.
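As a sketch of how such a climate network could be assembled in principle (with illustrative assumptions: random time series and a simple correlation threshold, not the data structures of the visualization framework), consider:

# Build a correlation-threshold climate network and inspect its components
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes, n_time = 50, 200
series = rng.normal(size=(n_nodes, n_time))  # hypothetical grid-point time series

corr = np.corrcoef(series)
threshold = 0.3
G = nx.Graph()
G.add_nodes_from(range(n_nodes))
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        if abs(corr[i, j]) > threshold:
            G.add_edge(i, j, weight=float(corr[i, j]))

largest = max((len(c) for c in nx.connected_components(G)), default=0)
print(G.number_of_edges(), "edges; largest connected component:", largest)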
As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits became a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults, resulting in data corruption or even system failure. Typically, the SEE mitigation methods are deployed statically in processing architectures based on the worst-case radiation conditions, which is most of the time unnecessary and results in a resource overhead. Moreover, the space radiation conditions are dynamically changing, especially during Solar Particle Events (SPEs). The intensity of space radiation can differ over five orders of magnitude within a few hours or days, resulting in several orders of magnitude fault probability variation in ICs during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault resilient multiprocessing system to overcome the static mitigation overhead issue. This work mainly addresses the following topics: (1) Design of on-chip radiation particle monitor for real-time radiation environment detection, (2) Investigation of space environment predictor, as support for solar particle events forecast, (3) Dynamic mode configuration in the resilient multiprocessing system. Therefore, according to detected and predicted in-flight space radiation conditions, the target system can be configured to use no mitigation or low-overhead mitigation during non-critical periods of time. The redundant resources can be used to improve system performance or save power. On the other hand, during increased radiation activity periods, such as SPEs, the mitigation methods can be dynamically configured appropriately depending on the real-time space radiation environment, resulting in higher system reliability. Thus, a dynamic trade-off in the target system between reliability, performance and power consumption in real-time can be achieved. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during run-time. Proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process. Successful implementation of the proposed design in the quad-core multiprocessor shows its application perspective also in the other designs.
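The dynamic mode configuration can be sketched as a simple policy that maps a monitored particle flux to a mitigation mode (illustrative only; the thresholds, mode names and units are hypothetical and not taken from the thesis):

# Map a monitored particle flux to a mitigation mode (hypothetical thresholds)
from enum import Enum

class Mode(Enum):
    PERFORMANCE = 0  # no mitigation, cores run independent tasks
    DMR = 1          # dual modular redundancy with checking
    TMR = 2          # triple modular redundancy with voting

def select_mode(flux_per_cm2_s: float) -> Mode:
    """Hypothetical decision thresholds; real values depend on device cross-sections."""
    if flux_per_cm2_s < 1e2:
        return Mode.PERFORMANCE
    if flux_per_cm2_s < 1e4:
        return Mode.DMR
    return Mode.TMR

for flux in (5e1, 3e3, 2e6):  # e.g. quiet conditions, elevated activity, SPE peak
    print(flux, "->", select_mode(flux).name)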
Pichia pastoris (syn. Komagataella phaffii) is a distinguished expression system widely used in industrial production processes. Recent molecular research has focused on numerous approaches to increase recombinant protein yield in P. pastoris, for example the design of expression vectors and synthetic genetic elements, gene copy number optimization, or the co-expression of helper proteins (transcription factors, chaperones, etc.). However, high clonal variability of transformants and low screening throughput have hampered significant success.
To enhance screening capacities, display-based methodologies offer the potential for efficient isolation of producer clones via fluorescence-activated cell sorting (FACS). Therefore, this study focused on developing a novel clone selection method, based on the non-covalent attachment of Fab fragments to the P. pastoris cell surface, that is applicable to FACS.
Initially, a P. pastoris display system was developed, which is a prerequisite for the surface capture of secreted Fabs. A Design of Experiments approach was applied to analyze the influence of various genetic elements on antibody fragment display. The combination of the P. pastoris formaldehyde dehydrogenase promoter (PFLD1), the Saccharomyces cerevisiae invertase 2 signal peptide (ScSUC2), the α-agglutinin (ScSAG1) anchor protein, and the ARS of Kluyveromyces lactis (panARS) conferred the highest display levels.
Subsequently, eight single-chain variable fragments (scFv) specific for the constant part of the Fab heavy or light chain were individually displayed in P. pastoris. Among the tested scFvs, the anti-human CH1 IgG domain scFv allowed the most efficient Fab capture detected by flow cytometry.
Irrespective of the Fab sequence, exogenously added as well as simultaneously secreted Fabs were successfully captured on the cell surface. Furthermore, Fab secretion capacities were shown to correlate with the level of surface-bound Fabs, as demonstrated for characterized producer clones.
Flow-sorted clones presenting high amounts of Fabs showed an increase in median Fab titers (factor of 21 to 49) compared to unsorted clones when screened in deep-well plates. For selected candidates, improved functional Fab yields of sorted cells vs. unsorted cells were confirmed in an upscaled shake flask production. Since the scFv capture matrix was encoded on an episomal plasmid with inherently unstable autonomously replicating sequences (ARS), efficient plasmid curing was observed after removing the selective pressure. Hence, sorted clones could be immediately used for production without the need to modify the expression host or vector. The resulting switchable display/secretion system provides a streamlined approach for the isolation of Fab producers and subsequent Fab production.
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within the dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated with a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that graphene on Ni(111) is, after intercalation of Au, spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented. It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
A phagocyte-specific Irf8 gene enhancer establishes early conventional dendritic cell commitment
(2011)
Haematopoietic development is a complex process that is strictly hierarchically organized. Within it, the phagocyte lineages constitute a very heterogeneous cell compartment with specialized functions in innate immunity and the induction of adaptive immune responses. Their generation from a common precursor must be tightly controlled. Interference with lineage formation programs, for example by mutation or changes in the expression levels of transcription factors (TFs), can cause leukaemia. However, the molecular mechanisms driving specification into distinct phagocytes remain poorly understood. In the present study I identify the transcription factor Interferon Regulatory Factor 8 (IRF8) as the specification factor of dendritic cell (DC) commitment in early phagocyte precursors. Employing an IRF8 reporter mouse, I showed the distinct Irf8 expression pattern during haematopoietic lineage diversification and isolated a novel bone-marrow-resident progenitor which selectively differentiates into CD8α+ conventional dendritic cells (cDCs) in vivo. This progenitor strictly depends on Irf8 expression to properly establish its transcriptional DC program while suppressing a lineage-inappropriate neutrophil program. Moreover, I demonstrated that Irf8 expression during this cDC commitment step depends on a newly discovered myeloid-specific cis-enhancer which is controlled by the haematopoietic transcription factors PU.1 and RUNX1. Interference with their binding leads to abrogation of Irf8 expression and subsequently to disturbed cell fate decisions, demonstrating the importance of these factors for proper phagocyte development. Collectively, these data delineate a transcriptional program establishing cDC fate choice with IRF8 at its center.
With his proposal of a Somaesthetics, articulated fundamentally into analytic, pragmatic and practical branches, Richard Shusterman intends first of all to provide and create a methodological framework, a unifying orientation capable of tracing, reconstructing and bringing to light, within heterogeneous theoretical reflections and somatic practices, the common need to restore the bodily dimension as a primary mode of being in the world. Recovering Baumgarten's conception of Aesthetica as inferior gnoseology, art of the analogue of reason, science of sensible knowledge, somaesthetics intends to give new impulse to the deepest root of aesthetics and philosophy, which grasps life in its process of continuous metamorphosis and regeneration, in that vital breath which, however conscious it may become, is never fully graspable by discursive reason, being situated rather in that primordial space in which consciousness and body belong to each other, in which the subject is not yet individualizable because fused with the environment, and not entirely privatizable because intrinsically shaped by the social fabric to which the subject itself dynamically gives form. Starting, then, from the revaluation of the concept of Aisthesis, the discipline of somaesthetics aims at an intensification of sensoriality, perception, emotion and affect, locating precisely in the Soma the source of those "inferior" faculties, irreducible to the purely intellectual ones, which give access to the qualitative dimensions of experience and allow the human being to manifest and mature as an indivisible being that cannot be encountered by a mode of thought that denies its unity in the name of fictitious and lacerating dichotomous distinctions. In the body, in fact, rules, conventions, norms and socio-cultural values take root silently, determining and sometimes limiting the configuration and expression of the sensations, perceptions, cognitions, thoughts, actions, volitions and dispositions of a subject who has always been embedded in a Mitwelt (shared world); and it is therefore precisely to the body that one must turn in order to reconfigure more authentic modes of expression of a subject who creates dynamic equilibria in order to maintain a relationship of coherence with the wider social, cultural and environmental context. Its openness to engagement with heterogeneous philosophical positions and its intrinsic multidisciplinarity explain the centrality of Somaesthetics in the contemporary international aesthetic debate; addressing both theoretical formulation and concrete practical application, it intends to revalue the soma as intelligent, sentient, intentional and active corporeality, not reducible to the sinful sense of caro (mere physical flesh devoid of life and sensation). Through reflection on and the practice of techniques of somatic consciousness, the ways are brought to the fore in which an increasingly aware relationship with one's own corporeality, as mediately experienced and immediately lived and felt, offers authentic opportunities for the progressive realization of oneself first of all as a person, capable of self-cultivation, of conscious reflection on one's own embodied habits, of creative self-restructuring, and of intensified sensory perception and appreciation both in concrete everyday action and in the more properly aesthetic dimension of artistic reception, enjoyment and creation.
The essentially pragmatist orientation of Shusterman's reflection thus traces a fundamentally relational conception of aesthetics, one that places itself precisely within the movement and the continuously becoming relationship of genuine transformation and passage between the physical, bodily, psychic and spiritual dimensions of the subject, whose interaction, and whose mutual flowing into one another, can be profoundly enriched through a progressive and ever-growing awareness of the richness of the bodily dimension as intentional, perceptive, sentient and volitional, as much as vulnerable, limiting, transient and pathic. The present work aims to retrace and deepen some of Shusterman's principal points of reference, focusing mainly on the pragmatist root of his proposal and on the engagement with the German-language debate between aesthetics, philosophical anthropology, neophenomenology and medical anthropology, in order to regain a notion of the soma that, precisely starting from the contrast, from the irreducible impact with the annihilating power of limit situations and of crisis, can acquire a more complex and richer harmonizing value for the intrinsic and manifold dimensions that constitute the fabric of embodied subjectivity. In particular, the first chapter (1. Somaesthetics) clarifies the essentially pragmatist roots of Shusterman's proposal and shows how it is possible to de-structure and thus reconfigure deeply rooted modes of experience, bringing to consciousness habits and ways of living that become fixed at the somatic level largely unnoticed. The comparison with the notion of Habitus, whose invisible and socially determined somatic matrix Pierre Bourdieu brilliantly brings to light, allows us to see how every human manifestation is sustained by the incorporation of norms, beliefs and values that determine and sometimes limit the expression, the development, and even the predispositions and inclinations of individuals. And it is precisely by intervening at this level that freedom can be restored to choices, opening up to the essentially qualitative dimensions of experience which, in Dewey's sense, is a unitary and cohesive holistic whole that forms the background of organism-environment relations, an inextricable interweaving of theory and praxis, particular and universal, psyche and soma, reason and emotion, the perceptual and the conceptual, in short that immediate bodily knowledge which structures the background against which consciousness manifests itself.
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, a lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives, makes reconstructing past vegetation and land cover of the savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia, where despite being the region most affected by shrub invasion, particularly since the 21st century, little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was run to quantitatively reconstruct past vegetation cover. For this, we first made pollen productivity estimates (PPE) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur over a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of Combretum open woodlands, making them more susceptible to change to an encroached state by stochastic events such as consecutive years of precipitation and drought, and by high concentrations of pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
This thesis provides a novel view on the early stage of crystallization utilizing calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance, because it is a major part of water hardness, the most abundant biomineral, and forms huge amounts of geological sediments, thus binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. The time-dependent measurement of the Ca2+ potential and concurrent pH = constant titration facilitate the calculation of the amount of calcium and carbonate ions bound in pre-nucleation stage clusters, which had never been detected experimentally before, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation stage clusters and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of altogether 70 calcium and carbonate ions. Both experiments show that pre-nucleation stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster formation equilibrium is physico-chemically characterized by means of a multiple-binding equilibrium of calcium ions to a 'lattice' of carbonate ions. The evaluation yields the Gibbs standard energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximal value of approximately 17.2 kJ mol^-1 at pH = 9.75 and relates to a minimal binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline. At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength in clusters, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are obtained additionally. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low binding strength and at high binding strength in clusters (ACC I: 3.1 x 10^-8 M^2, ACC II: 3.8 x 10^-8 M^2), respectively, indicating the precipitation of at least two different ACC species, while the clusters provide the precursor species of ACC. It is plausible that ACC I relates to calcitic ACC, i.e. ACC exhibiting short-range order similar to the long-range order of calcite, and that ACC II relates to vateritic ACC, which will subsequently transform into the particular crystalline polymorph, as discussed in the literature. Detailed analysis of nucleated particles forming at minimal binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite with traces of calcite forms. The crystalline particles of early stages are composed of nano-crystallites of approximately 5 to 10 nm size, which are aligned in high mutual order as in mesocrystals. The analysis of precipitation at pH = 9.75 in the presence of additives (polyacrylic acid (pAA) as a model compound for scale inhibitors, and peptides exhibiting calcium carbonate binding affinity as model compounds for crystal modifiers) shows that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution for the benefit of crystals that form from ACC I, and exclusively calcite is finally obtained.
Concurrently, the peptide additives analogously inhibit the formation of calcite, and in the case of one of the peptide additives exclusively vaterite is finally obtained. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate. The metastable system is stabilized remarkably due to cluster formation, and the clusters forming by means of equilibrium thermodynamics, rather than the ions, are the nucleation-relevant species. Most likely, the concept of cluster formation is a common phenomenon occurring during the precipitation of hardly soluble compounds, as qualitatively shown for calcium oxalate and calcium phosphate. This finding is important for the fundamental understanding of crystallization as well as of nucleation inhibition and modification by additives, with impact on materials of huge scientific and industrial importance, and for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and also of scale inhibition, need to be considered on the basis of the newly reported reaction channel.
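The multiple-binding picture used above can be summarized, as a generic sketch rather than the thesis' exact formalism, by the standard mass-action relations for ion-pair formation in clusters and the solubility product of the nucleated phase:

\mathrm{Ca^{2+} + CO_3^{2-}} \rightleftharpoons \mathrm{(CaCO_3)_{cluster}}, \qquad K = \frac{a_{\mathrm{bound}}}{a_{\mathrm{Ca^{2+}}}\, a_{\mathrm{CO_3^{2-}}}}, \qquad \Delta G^{\circ} = -RT \ln K, \qquad K_{\mathrm{sp}}(\mathrm{ACC}) = a_{\mathrm{Ca^{2+}}}\, a_{\mathrm{CO_3^{2-}}}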
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. Both sides are closely related and affect each other from small scale to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time it is difficult to access the spatial distribution and temporal dynamics of these parameter. Noninvasive imaging techniques with high spatial and temporal resolution overcome these limitations. And new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil, but also the complex physical and chemical processes in structured soils. In this study we developed an efficient non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil. This is a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study where we used imaging data, aiming to simulate biodegradation under anaerobic, nitrate reducing conditions.
In the past, floods were mainly managed by flood control mechanisms. The focus was set on the reduction of flood hazard. The potential consequences were of minor interest. Nowadays river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk developments and the (re-)insurance industry. Therefore, it is widely accepted that risk-orientated flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Due to the lack of damage data, the probability of peak discharge or rainfall is additionally used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long term and continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, where the synthetic discharge time series is used as input into flood impact models and flood risk is directly derived from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof-of-concept, a continuous simulation was performed for the period of 1990-2003. Results were evaluated / validated as far as possible with available observed data in this period. Although each model part introduced its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of 'derived flood risk assessment based on continuous simulations'. To this end, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. Thereby, a virtual time series of climate data of 100 x 100 years was generated and served as input to RFM, providing subsequent 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analysis, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk. The spatial variability of occurrence probability is respected. Additionally, data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as well as physical processes like storage effects, flood attenuation or channel-floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years and the consequently large data set of flood loss events enabled the calculation of flood risk directly from damage distributions. Problems associated with the transfer of probabilities in rainfall or peak runoff to probabilities in damage, as often used in traditional approaches, are bypassed.
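The final step, deriving risk measures directly from the synthetic damage series, can be sketched in a few lines of Python (illustrative only: the damage series is randomly generated here and the plotting-position choice is one of several possibilities):

# Derive a risk curve and expected annual damage from a synthetic damage series
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000  # e.g. 100 x 100 synthetic years
# Hypothetical annual damage: most years without loss, rare large losses
damage = np.where(rng.random(n_years) < 0.05,
                  rng.lognormal(mean=15.0, sigma=1.0, size=n_years), 0.0)

ead = damage.mean()  # expected annual damage

# Empirical risk curve: damage versus return period (Weibull plotting positions)
sorted_damage = np.sort(damage)[::-1]
exceed_prob = np.arange(1, n_years + 1) / (n_years + 1)
return_period = 1.0 / exceed_prob

for T in (10, 100, 1000):
    idx = int(np.argmin(np.abs(return_period - T)))
    print(f"T = {T:5d} a: damage ~ {sorted_damage[idx]:.3e}")
print(f"EAD ~ {ead:.3e}")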
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, reinsurance purposes and other questions where spatially consistent, large-scale assessments are required.
Quantified Boolean formulas (QBFs) play an important role in theoretical computer science. QBF extends propositional logic in such a way that many advanced forms of reasoning can be easily formulated and evaluated. In this dissertation we present ZQSAT, an algorithm for evaluating quantified Boolean formulas. ZQSAT is based on ZBDDs (Zero-Suppressed Binary Decision Diagrams), a variant of BDDs, and an adapted version of the DPLL algorithm. It has been implemented in C using the CUDD (Colorado University Decision Diagram) package. The capability of ZBDDs to store sets of subsets efficiently enabled us to store the clauses of a QBF very compactly and allowed us to embed memoization into the DPLL algorithm. As a result, the search algorithm can store and reuse the results of all previously solved subformulas with little overhead. ZQSAT can solve some sets of standard QBF benchmark problems (known to be hard for DPLL-based algorithms) faster than the best existing solvers. In addition to prenex-CNF, ZQSAT accepts prenex-NNF formulas, and we show and prove how this capability can be exponentially beneficial.
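The following toy sketch illustrates the underlying idea of DPLL-style evaluation of a prenex-CNF QBF with memoization of already solved subformulas; it is deliberately naive Python and does not use ZBDDs or the compact clause representation on which ZQSAT relies.

```python
from functools import lru_cache

def solve_qbf(prefix, clauses):
    """Evaluate a closed prenex-CNF QBF (every variable quantified).
    prefix  : sequence of (quantifier, variable), e.g. [('A', 1), ('E', 2)]
    clauses : iterable of clauses; a clause is a collection of literals (+v / -v).
    Memoizing on (remaining prefix, remaining clauses) mimics the reuse of
    previously solved subformulas."""
    @lru_cache(maxsize=None)
    def solve(prefix, clauses):
        if frozenset() in clauses:           # empty clause -> formula is false
            return False
        if not clauses:                      # no clauses left -> formula is true
            return True
        (quant, var), rest = prefix[0], prefix[1:]
        results = []
        for lit in (var, -var):              # assign var = True, then var = False
            reduced = frozenset(
                frozenset(l for l in c if l != -lit)
                for c in clauses if lit not in c)
            results.append(solve(rest, reduced))
        return all(results) if quant == 'A' else any(results)

    return solve(tuple(prefix), frozenset(frozenset(c) for c in clauses))

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2)  -> true (choose x2 = not x1)
print(solve_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))
```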
Additive manufacturing (AM) in terms of laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and thereby enables the production of lattice structures. Such lattice structures are to be implemented in various industrial applications (e.g. gas turbines) for reasons of material savings or cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by L-PBF was investigated non-destructively using a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the building blocks of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing their complementary information but also the need for AM-specific surface parameters.
The mechanical behaviour of the lattice structures was investigated by in-situ CT under compression with subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated; the lattice therefore folds layer-wise, one unit-cell layer at a time.
Residual stress was determined experimentally for the first time in such lattice structures. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significant uniaxial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven measurement directions were needed to obtain reliable principal stress directions.
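To illustrate why several measurement directions are needed, the sketch below fits the six independent components of a symmetric tensor (e.g. strain) from normal components measured along unit directions and extracts principal values and directions by eigendecomposition; the conversion from diffraction strains to stresses via elastic constants, and the actual evaluation procedure of the thesis, are not reproduced here.

```python
import numpy as np

def fit_tensor_from_directions(directions, normal_values):
    """Least-squares fit of a symmetric 3x3 tensor T from normal components
    measured along unit vectors n_i:  value_i = n_i^T T n_i.
    At least 6 independent directions are needed for the 6 unknowns; using 7 or
    more adds redundancy and makes the principal directions more reliable."""
    rows = []
    for n in directions:
        nx, ny, nz = np.asarray(n, float) / np.linalg.norm(n)
        # unknowns ordered as [Txx, Tyy, Tzz, Txy, Txz, Tyz]
        rows.append([nx*nx, ny*ny, nz*nz, 2*nx*ny, 2*nx*nz, 2*ny*nz])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.asarray(normal_values, float), rcond=None)
    Txx, Tyy, Tzz, Txy, Txz, Tyz = coeffs
    T = np.array([[Txx, Txy, Txz],
                  [Txy, Tyy, Tyz],
                  [Txz, Tyz, Tzz]])
    principal_values, principal_dirs = np.linalg.eigh(T)   # columns = principal directions
    return T, principal_values, principal_dirs
```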
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using this bound to solve a fixed-point problem in an appropriate Banach space. The resulting uniqueness domain consists of those model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses the cell activates several stress-specific molecules such as sigma factors. They reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes binding between sigma factors and core RNAP at equilibrium, transcription, non-specific binding to DNA and the modulation of the availability of the molecular components.
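As a minimal illustration of the competition idea only, the sketch below partitions a limited pool of core RNAP among competing sigma factors at mass-action equilibrium; the concentrations and dissociation constants are hypothetical, and the thesis model additionally includes transcription, non-specific DNA binding and the modulators discussed below.

```python
import numpy as np
from scipy.optimize import brentq

def holoenzyme_levels(E_total, sigma_totals, K_d):
    """Equilibrium partitioning of core RNAP (E_total) among competing sigma
    factors with total concentrations sigma_totals and dissociation constants
    K_d (same units, e.g. uM). Returns the holoenzyme concentration E:sigma_i."""
    sigma_totals, K_d = np.asarray(sigma_totals, float), np.asarray(K_d, float)

    def core_balance(E_free):
        bound = E_free * sigma_totals / (K_d + E_free)   # core bound in each holoenzyme
        return E_free + bound.sum() - E_total

    E_free = brentq(core_balance, 0.0, E_total)          # solve the mass balance
    return E_free * sigma_totals / (K_d + E_free)

# Hypothetical numbers: abundant, high-affinity housekeeping sigma vs. two
# alternative sigma factors competing for the same core pool.
print(holoenzyme_levels(E_total=2.0, sigma_totals=[1.0, 0.5, 0.5], K_d=[0.05, 0.5, 0.5]))
```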
Association of the housekeeping sigma factor with RNAP is generally favored by its abundance and its higher binding affinity for the core. To promote transcription by alternative sigma subunits, the bacterial cell reversibly modulates transcriptional efficiency through several strategies, such as anti-sigma factors, 6S RNA and, more generally, transcriptional regulators of any kind (e.g. activators or inhibitors). By shifting the outcome of the sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in the case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex crosstalk effects and raising the question of how relevant in vitro affinity measurements are in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of the increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible, but also displays hypersensitivity based on the sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during this global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
An important goal in biotechnology and (bio-)medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required, and this precise differentiation is a challenge for the growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this is a parameter that cannot be addressed, so new methods for the identification and isolation of target cells are required. Consequently, a variety of new flow-based methods that use 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
For the elucidation of the dynamics of signal transduction processes induced by cellular interactions, defined events along the signal transduction cascade and subsequent activation steps have to be analyzed and correlated with each other. This cannot be achieved by ensemble measurements, because averaging biological data ignores the variability in timing and response patterns of individual cells and leads to highly blurred results. Only a multi-parameter analysis at the single-cell level can exploit the information that is crucially needed for deducing the signaling pathways involved. The aim of this work was to develop a process line that allows the initiation of cell-cell or cell-particle interactions while, at the same time, the induced cellular reactions can be analyzed at various stages along the signal transduction cascade and correlated with each other. As this approach requires the gentle handling of individually addressable cells, a dielectrophoresis (DEP)-based microfluidic system was employed that provides the manipulation of microscale objects with very high spatiotemporal precision and without the need to contact the cell membrane. The system offers a high potential for automation and parallelization, which is essential for achieving the robustness and reproducibility required to qualify this approach for biomedical applications.

T cell activation was chosen as an example process for intercellular communication. The activation of single T cells was triggered by contacting them individually with microbeads coated with antibodies directed against specific cell surface proteins, namely the T cell receptor-associated CD3 complex and the costimulatory molecule CD28 (CD: cluster of differentiation). Stimulation of the cells with the functionalized beads led to a rapid rise of their cytosolic Ca2+ concentration, which was analyzed by dual-wavelength ratiometric fluorescence measurements of the Ca2+-sensitive dye Fura-2. After Ca2+ imaging, the cells were isolated individually from the microfluidic system and cultivated further. Cell division and expression of the marker molecule CD69, a late activation event of great significance, were analyzed the following day and correlated with the previously recorded Ca2+ traces for each individual cell. It turned out that the temporal profiles of the Ca2+ traces differed significantly between activated and non-activated cells, as well as between dividing and non-dividing cells. This shows that the pattern of Ca2+ signals in T cells can provide early information about a later reaction of the cell.

As isolated cells are highly delicate objects, a precondition for these experiments was the successful adaptation of the system to maintain the vitality of single cells during and after manipulation. In this context, the influences of the microfluidic environment and of the applied electric fields on the vitality of the cells and on the cytosolic Ca2+ concentration, as crucially important physiological parameters, were thoroughly investigated. While short-term DEP manipulation did not affect the vitality of the cells, they showed irregular Ca2+ transients upon exposure to the DEP field alone. The rate and the strength of these Ca2+ signals depended on exposure time, electric field strength and field frequency. By minimizing their occurrence rate, experimental conditions were identified that caused the least interference with the physiology of the cells.
The possibility to precisely control the exact time point of stimulus application, to simultaneously analyze short-term reactions and to correlate them with later events of the signal transduction cascade on the level of individual cells makes this approach unique among previously described applications and offers new possibilities to unravel the mechanisms underlying intercellular communication.
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way the amount of characterization data in the SET sensitivity database is reduced significantly.
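The sketch below illustrates the general idea of storing fitted coefficients per cell and input instead of full look-up tables of simulation results; the quadratic surface, the parameters (drive strength, supply voltage) and all names are assumptions for illustration and do not reproduce the actual models of the thesis.

```python
import numpy as np

class SETSensitivityModel:
    """Stores, per (cell, input), only the fitting coefficients of a SET
    sensitivity metric (e.g. critical charge) instead of a full LUT of
    characterization results. Illustrative sketch with an assumed quadratic fit."""
    def __init__(self):
        self.coeffs = {}   # (cell_name, input_idx) -> coefficient vector

    @staticmethod
    def _features(drive_strength, vdd):
        x, v = drive_strength, vdd
        return np.array([1.0, x, v, x * v, x**2, v**2])

    def fit(self, cell, inp, samples):
        """samples: list of (drive_strength, vdd, metric) from SPICE characterization."""
        X = np.array([self._features(x, v) for x, v, _ in samples])
        y = np.array([m for _, _, m in samples])
        self.coeffs[(cell, inp)], *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, cell, inp, drive_strength, vdd):
        """Evaluate the stored model instead of looking up a raw simulation result."""
        return float(self._features(drive_strength, vdd) @ self.coeffs[(cell, inp)])
```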
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, a significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For the SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work therefore introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on the SET robustness improvement, as well as the area, delay and power overhead they introduce per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in the subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of the particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, further advantages of the proposed particle detector are its low detection latency and power consumption, and its immunity to error accumulation.
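A small post-processing sketch of this monitoring idea is given below: the number of detected SETs yields a flux estimate, while the pulse-width distribution serves as a proxy for LET variation. The linear width-to-LET calibration and all names are placeholders, not values from the thesis.

```python
import numpy as np

def analyze_detector_output(pulse_widths_ns, sensitive_area_cm2, exposure_s):
    """Post-process the SETs registered by a bank of pulse-stretching inverters.
    pulse_widths_ns : measured widths of the stretched SET pulses (one per event).
    Returns a flux estimate and summary statistics of the pulse width distribution,
    which acts as a proxy for the particle LET variation."""
    widths = np.asarray(pulse_widths_ns, float)
    flux = widths.size / (sensitive_area_cm2 * exposure_s)   # particles / (cm^2 * s)
    counts, edges = np.histogram(widths, bins=10)
    let_proxy = 0.1 * widths                                 # hypothetical calibration
    return {
        "flux_per_cm2_s": flux,
        "pulse_width_mean_ns": widths.mean(),
        "pulse_width_std_ns": widths.std(),
        "width_histogram": (counts, edges),
        "let_proxy_mean": let_proxy.mean(),
    }
```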
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results in irradiation experiments.
Recognizing, understanding, and responding to quantities are important skills for human beings. We can easily communicate quantities, and we are extremely efficient in adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols such as digits in number-related tasks. To solve tasks involving digits, we have to rely on our previously learned internal number representations.
This thesis elaborates on the process of number comparison with the use of noisy mental representations of numbers, the interaction of number and size representations and how we use mental number representations strategically. For this, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger and had to respond with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and the variance of response time for the full matrix of 72 digit pairs. In addition, the random walk model predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, in contrast to standard assumptions often made in random walk models, this account required the distributions of step sizes of the induced random walks to be asymmetric in order to capture this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework to investigate the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. A comparison of the fits of the proposed models with linear and logarithmic mapping suggests that the logarithmic mapping is to be preferred.
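As an illustration of the basic mechanism only, the toy simulation below shows how a random walk whose drift is proportional to the (log-)distance between internal magnitude representations produces the numerical distance effect in speed and accuracy; the symmetric Gaussian steps and all parameter values are simplifications and assumptions, whereas the model of the thesis in particular requires asymmetric step-size distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def compare_digits(a, b, threshold=10.0, drift_scale=0.8, noise=1.0, log_mapping=True):
    """Simulate one trial: accumulate noisy evidence until a decision threshold
    is reached. Returns (correct, number_of_steps)."""
    f = np.log if log_mapping else (lambda x: x)
    drift = drift_scale * (f(a) - f(b))          # signed distance on the internal scale
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + noise * rng.standard_normal()
        steps += 1
    return (evidence > 0) == (a > b), steps

# Distance effect: larger numerical distance -> faster and more accurate responses
for a, b in [(5, 4), (8, 4), (9, 1)]:
    results = [compare_digits(a, b) for _ in range(2000)]
    acc = np.mean([r[0] for r in results])
    rt = np.mean([r[1] for r in results])
    print(f"{a} vs {b}: accuracy {acc:.2f}, mean steps {rt:.1f}")
```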
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, namely the saccadic overshoot effect: the participants responded by saccadic eye movements, and the amplitude of these saccadic responses decreases with numerical distance.
For the second study, an experimental design was developed that allows us to apply signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. An open question is whether the benefit in congruent conditions (numerical magnitude and physical size) reflects better perception than in incongruent conditions, or whether the number-size congruency effect is mediated by response biases due to number magnitude. Signal detection theory is a perfect tool to distinguish between these two alternatives: it provides two parameters, sensitivity and response bias. Changes in sensitivity reflect actual differences in task performance due to real differences in perceptual processes, whereas changes in response bias simply reflect strategic effects such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the number-size congruency effect.
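For illustration, the sketch below computes the two standard signal detection parameters, sensitivity (d') and criterion (c), from a 2x2 outcome table; the counts in the example are hypothetical and not data from the study.

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c) from a 2x2
    outcome table, using a simple correction against extreme 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                 # perceptual sensitivity
    criterion = -0.5 * (z_hit + z_fa)      # response bias
    return d_prime, criterion

# Hypothetical counts for congruent vs. incongruent number-size pairings
print(sdt_parameters(hits=88, misses=12, false_alarms=10, correct_rejections=90))  # congruent
print(sdt_parameters(hits=78, misses=22, false_alarms=18, correct_rejections=82))  # incongruent
```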
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied while the local transition probability of the relevant attribute (parity) and the global probability of occurrence of each stimulus were kept constant. Participants were quite sensitive to the underlying local transition probability of the irrelevant attribute: a gain in performance was observed for actual repetitions of the irrelevant attribute relative to changes of the irrelevant attribute, in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? To answer these questions is the aim of my cumulative thesis. This study aims to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', my co-authors and I measure the causal effect of job loss on health and the role of the family and of welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models, and we undertake separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across different welfare regimes. As an identification strategy, we first implement static fixed-effect panel models, which control for time-varying observables and for indirect health selection, i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effect panel models, adding a lagged dependent variable to the model.
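A minimal sketch of the within (fixed-effects) transformation underlying such static panel models is shown below, written directly with pandas and numpy rather than with the estimation software used in the paper; the variable names are hypothetical, and the dynamic specifications with a lagged dependent variable and the interaction terms are not reproduced.

```python
import numpy as np
import pandas as pd

def within_estimator(df, y, xvars, entity="person_id"):
    """Static fixed-effects (within) estimator: demean all variables within each
    person so that constant unobserved heterogeneity drops out, then run OLS on
    the demeaned data. Illustrative sketch only."""
    cols = [y] + xvars
    demeaned = df[cols] - df.groupby(entity)[cols].transform("mean")
    X = demeaned[xvars].to_numpy()
    Y = demeaned[y].to_numpy()
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return dict(zip(xvars, beta))

# Hypothetical panel with columns: person_id, bad_health (0/1), job_loss (0/1),
# partner_employed (0/1):
# effects = within_estimator(panel_df, "bad_health", ["job_loss", "partner_employed"])
```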
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also report an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensation effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the Southern regimes, a high level of familialism.
The first paper also found important gender differences concerning the effects of job loss, the family and the welfare state. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates the unemployment-driven gender differential in health in more depth.
As this is a highly contested issue in the literature, we aim to study whether men are more penalized than women, or the other way around, and which mechanisms may explain the gender difference. To do so, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through commitment to 'alternative roles', which can provide the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women hold in society.
Further, we merge the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and could thus explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into, or remain in, unemployment (i.e., to self-select) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a menace for women than for their male counterparts.
Finally, relying on the idea of different gender regimes, we extend these arguments to a comparison across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized and taking up alternative roles is more socially accepted for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill health, the separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, which represent radically different gender regimes and thus provide institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany, comparing East and West Germany as well as older and younger cohorts for West Germany (SOEP data 1995-2017). Next, to assess the differential impact of unemployment on men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is to estimate random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection; we do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts, with models estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing a direct test of whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2) using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection in the relationship between unemployment and health by comparing the partially adjusted and fully adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
We found support for our first hypothesis: the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany, and younger and older cohorts in West Germany.
On the contrary, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more selected out of employment than men. In contrast, in Italy, health selection does not seem to be the primary mechanism behind the gender differential—Italian men and women seem to be selected out of employment to the same extent. Namely, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp2), despite the fact that the institutional and the cultural context would offer them a more comprehensive range of 'alternative roles' relative to men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where the cross-country heterogeneity is reduced to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, the interest in precarious employment, defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections, emerged only later. Since the 1980s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and about the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, with a particular focus on gender differences.
With this paper, I aim at overcoming three major shortcomings of earlier studies. The first concerns the cross-sectional nature of the data, which prevents the authors from ruling out unobserved heterogeneity as a mechanism for the association between precarious employment and health. Indeed, several unmeasured individual characteristics, such as cognitive abilities, may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping this relationship. Moreover, the available results on the gender differential are mixed and inconsistent: some found precarious employment to be more detrimental for women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness". However, researchers have predominantly focused on one-dimensional indicators (e.g., temporary employment or subjective job insecurity) to measure EP and to study the association with health. Besides the fact that this approach captures the phenomenon's complexity only partially, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves from SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Indeed, since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and work agencies—have increased over time while wages have lowered, even among workers with standard work. In addition, the power of unions has also fallen over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles, in which being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has barely changed, moving only towards a modified version. Thus, women usually take on the double role of workers (the so-called secondary earners) and caregivers, while men still devote most of their time to paid work. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, notably moving towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identity and a stable, well-rewarded job, i.e., a job that confers the role of primary family provider, a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived by themselves (and by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Engaging in behaviors that contradict one's stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to the theory and previous empirical research: Income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, worker's vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
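As a purely illustrative sketch of this construction, the code below derives a one-dimensional composite score by correspondence analysis of the one-hot indicator matrix, which is the core computation behind MCA; the column names are hypothetical, and the actual analysis may use a dedicated MCA routine with more careful coding and weighting of the eight indicators.

```python
import numpy as np
import pandas as pd

def mca_first_dimension(df_categorical):
    """Return row scores on the first dimension of a multiple correspondence
    analysis, computed as correspondence analysis of the one-hot indicator
    matrix. Minimal sketch for a continuous composite precariousness score."""
    Z = pd.get_dummies(df_categorical.astype("category")).to_numpy(float)  # indicator matrix
    P = Z / Z.sum()                                    # correspondence matrix
    r = P.sum(axis=1)                                  # row masses
    c = P.sum(axis=0)                                  # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c)) # standardized residuals
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)
    row_scores = (U[:, 0] * sing[0]) / np.sqrt(r)      # principal row coordinates, 1st axis
    return row_scores

# Hypothetical use with SOEP-based insecurity items (column names invented):
# ep_score = mca_first_dimension(panel_df[["income_insec", "contract_type", "union_coverage"]])
```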
Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models, which add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender to the fully adjusted model in the pooled sample.
My correlated random-effects models show a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although not statistically significant, this evidence is in line with the previous cross-sectional literature and supports the hypothesis that employment precariousness can be detrimental to workers' health. Further, my results show the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence is still mostly descriptive.
Moreover, my results reveal a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence contradicts the previous theory according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. In contrast, it seems to confirm the idea that men in precarious work experience role conflict to a larger extent than women, as their self-standard is supposed to be the stereotypical breadwinner worker with a good and well-rewarded job. Finally, the results from the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can express a latent variable of EP.
All in all, the results on unemployment and employment precariousness reveal complementarities that have two implications. Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs in terms of both individuals' health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in those contexts where the welfare state is less developed.
Moreover, my results support the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment generally have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and further distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of the gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. My results indeed suggest that women's health advantage may be a transitory phenomenon, contingent on the predominant gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant normative standard, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is a desirable aspect of contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and maybe more crucial question: What kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality should focus not only on improving women's integration into the labor market, but also on implementing policies targeting men and facilitating their involvement in the private sphere of life. An equal redistribution of social roles could activate a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, such a graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. Graph queries use graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures; these subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or in the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches of complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived when the graph data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. However, it is exactly these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. Other kinds of discrimination networks exist that can perform better in time and space than Rete networks, but they are currently not used for incremental graph pattern matching.
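The following toy sketch illustrates the core principle that condition tests store (partial) matches so that, on each change, only the affected joins are recomputed; it hard-codes a single two-edge path pattern and does not reflect the generalized network structures, the modeling language, or the maintenance algorithm contributed by the thesis.

```python
from collections import defaultdict

class TwoEdgePathNetwork:
    """Incrementally maintains all matches of the pattern a -> b -> c.
    Two condition tests store the single-edge (partial) matches; the join node
    stores the complete matches. On edge insertion only the joins involving the
    new edge are performed, instead of re-matching the whole pattern."""
    def __init__(self):
        self.out_edges = defaultdict(set)   # partial matches indexed by source node
        self.in_edges = defaultdict(set)    # partial matches indexed by target node
        self.matches = set()                # complete matches (a, b, c)

    def insert_edge(self, src, dst):
        # new edge as first pattern edge: join with stored edges leaving dst
        for c in self.out_edges[dst]:
            self.matches.add((src, dst, c))
        # new edge as second pattern edge: join with stored edges entering src
        for a in self.in_edges[src]:
            self.matches.add((a, src, dst))
        self.out_edges[src].add(dst)
        self.in_edges[dst].add(src)

net = TwoEdgePathNetwork()
for edge in [("x", "y"), ("y", "z"), ("w", "x")]:
    net.insert_edge(*edge)
print(net.matches)   # {('x', 'y', 'z'), ('w', 'x', 'y')}
```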
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
This work analyzes the saving and consumption behavior of agents faced with the possibility of unemployment in a dynamic and stochastic life cycle model. The intertemporal optimization is based on dynamic programming with a backward recursion algorithm. The implemented uncertainty is not based on income shocks, as in traditional life cycle models, but on Markov probabilities, where the probability of the agent's next employment status depends on the current status. The utility function is of the CRRA (constant relative risk aversion) type combined with a CES (constant elasticity of substitution) aggregator over several consumption goods, and includes a subsistence level, money and a bequest function.
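For illustration only, the stylized sketch below performs backward recursion over a finite life cycle with a two-state Markov employment process and CRRA utility over a single consumption good; all parameter values are hypothetical, and the richer features of the thesis model (CES aggregation of several goods, subsistence level, money, bequest function) are omitted.

```python
import numpy as np

T, beta, gamma = 60, 0.96, 2.0                  # horizon, discount factor, CRRA coefficient
income = np.array([0.3, 1.0])                   # income when unemployed / employed (illustrative)
P = np.array([[0.6, 0.4],                       # Markov transitions, rows = current status
              [0.1, 0.9]])                      # (unemployed, employed)
assets = np.linspace(0.0, 10.0, 201)            # asset grid
R = 1.02                                        # gross interest rate

def u(c):
    c_safe = np.maximum(c, 1e-9)
    return np.where(c > 1e-9, c_safe**(1 - gamma) / (1 - gamma), -1e12)  # penalize infeasible c

V = np.zeros((2, assets.size))                  # terminal value (no bequest in this sketch)
policy = np.zeros((T, 2, assets.size), int)

for t in reversed(range(T)):                    # backward recursion
    V_new = np.empty_like(V)
    for s in range(2):                          # employment status today
        cash = R * assets + income[s]           # resources available this period
        c = cash[:, None] - assets[None, :]     # consumption implied by each asset choice
        EV = P[s, 0] * V[0] + P[s, 1] * V[1]    # expected continuation value
        value = u(c) + beta * EV[None, :]
        policy[t, s] = value.argmax(axis=1)     # optimal next-period asset index
        V_new[s] = value.max(axis=1)
    V = V_new
```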
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe does not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filamentary structures. Density and temperature profiles, as well as velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams. This includes the time evolution of filaments as well as possible quenching mechanisms. In this context, the halo mass range in which cold stream accretion occurs is of particular interest.

In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength L of the initial perturbation. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to the UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations to three dimensions, we obtain, instead of a pancake structure, a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core.
Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process. Furthermore, the cross-section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.