Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes among adolescents with chronic conditions. The primary aim of this study is therefore to gain a deeper understanding of the dynamic interplay of resilience factors, coping strategies and psychosocial adjustment in adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – “inconspicuous” (PHQ-9 and GAD-7 < 7) vs. “conspicuous” (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and 12-month follow-up. At two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor Covid-19 pandemic effects, participants are also invited to take part in an intermediate online survey.
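As a minimal illustration of the screening rule quoted above, the sketch below assigns a participant to the "inconspicuous" or "conspicuous" group from PHQ-9 and GAD-7 sum scores. It is a reading of the protocol's stated cut-offs only; the function name and the example values are illustrative, not part of the study materials.

```python
def assign_group(phq9_score: int, gad7_score: int) -> str:
    """Screening rule as stated in the protocol:
    'inconspicuous' if both PHQ-9 and GAD-7 are below 7, otherwise 'conspicuous'."""
    if phq9_score < 7 and gad7_score < 7:
        return "inconspicuous"
    return "conspicuous"

# Hypothetical example scores
print(assign_group(phq9_score=4, gad7_score=6))   # inconspicuous
print(assign_group(phq9_score=3, gad7_score=11))  # conspicuous
```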
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
Models are useful tools for understanding and predicting ecological patterns and processes. Under ongoing climate and biodiversity change, they can greatly facilitate decision-making in conservation and restoration and help design adequate management strategies for an uncertain future. Here, we review the use of spatially explicit models for decision support and identify key gaps in current modelling in conservation and restoration. Of 650 reviewed publications, 217 had a clear management application and were included in our quantitative analyses. Overall, modelling studies were biased towards static models (79%), towards the species and population level (80%) and towards conservation (rather than restoration) applications (71%). Correlative niche models were the most widely used model type. Dynamic models as well as the gene-to-individual level and the community-to-ecosystem level were underrepresented, and explicit cost optimisation approaches were used in only 10% of the studies. We present a new model typology for selecting models for animal conservation and restoration, characterising model types according to organisational levels, biological processes of interest and desired management applications. This typology will help to link models more closely to management goals. Additionally, future efforts need to overcome important challenges related to data integration, model integration and decision-making. We conclude with five key recommendations, suggesting that wider usage of spatially explicit models for decision support can be achieved by 1) developing a toolbox with multiple, easier-to-use methods, 2) improving calibration and validation of dynamic modelling approaches and 3) developing best-practice guidelines for applying these models. Further, more robust decision-making can be achieved by 4) combining multiple modelling approaches to assess uncertainty, and 5) placing models at the core of adaptive management. These efforts must be accompanied by long-term funding for modelling and monitoring, and by improved communication between research and practice to ensure optimal conservation and restoration outcomes.
More than a century ago, the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel's laws, was first reported. In the plant kingdom, three genomic compartments, the nucleus, chloroplast, and mitochondrion, can participate in such a phenomenon. High-throughput sequencing (HTS) has proved to be a key technology for investigating NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis and interpretation of such datasets remain challenging due to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze the resulting data.
In the first study, I implemented a novel post-assembly pipeline, called Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. Using Illumina Mate Pair and PacBio RSII data, the stoichiometry of the RRPs was quantified and found to differ by up to three-fold.
In the second study, I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the Oenothera genus and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
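To make the idea of correlation mapping more concrete, the sketch below computes, for each window of a toy multiple sequence alignment, the number of nucleotide differences from a reference sequence and correlates these per-window counts with a numeric phenotype across accessions. It is a minimal sketch of the principle only, assuming a simple windowing scheme; the window size, toy data and all names are illustrative and not taken from the published CM implementation.

```python
import numpy as np

def correlation_mapping(alignment, phenotype, window=10):
    """Correlate per-window nucleotide differences with a numeric phenotype.

    alignment : list of equally long aligned sequences (strings), one per accession
    phenotype : numeric phenotype value per accession
    Returns a list of (window_start, Pearson r) tuples.
    """
    seqs = np.array([list(s) for s in alignment])
    ref = seqs[0]                        # first accession serves as reference here
    diffs = (seqs != ref).astype(float)  # per-position differences from the reference
    phenotype = np.asarray(phenotype, dtype=float)

    results = []
    for start in range(0, seqs.shape[1], window):
        counts = diffs[:, start:start + window].sum(axis=1)
        if counts.std() == 0:            # no variation in this window
            continue
        r = np.corrcoef(counts, phenotype)[0, 1]
        results.append((start, r))
    return results

# Toy example with three accessions and an arbitrary phenotype
aln = ["ACGTACGTACGTACGTACGT",
       "ACGTACGAACGTACGTACCT",
       "ACGAACGAACGTACGTACCT"]
print(correlation_mapping(aln, phenotype=[1.0, 2.5, 3.1], window=10))
```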
For the third study, I programmed a pipeline to investigate an NMI phenomenon known as paramutation in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the associated phenotype by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a prerequisite for studying mitochondria-associated phenotypes, by identifying genes (CM) causing interplastidial competition, and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
Vegetation change at high latitudes is one of the central present-day issues with respect to ongoing climate change and the feedbacks it may trigger. In high-latitude ecosystems, the expected changes include boreal treeline advance as well as changes in composition, phenology, plant physiology, biomass (phytomass) and productivity. However, the rate and extent of these changes under climate change are still poorly understood, and projections are necessary for effective adaptation strategies and pre-emptive minimisation of possible negative feedbacks.
The vegetation itself and the environmental conditions that shape its development and distribution are diverse from the Subarctic to the Arctic. Among the least investigated areas is central Chukotka in North-Eastern Siberia, Russia. Chukotka has mountainous terrain and a wide variety of vegetation types along the gradient from treeless tundra to northern taiga forests. The treeline there, in contrast to subarctic North America and north-western and central Siberia, is formed by a deciduous conifer, Larix cajanderi Mayr. The vegetation varies from prostrate lichen and Dryas octopetala L. tundra to open graminoid (hummock and non-hummock) tundra, tall Pinus pumila (Pall.) Regel shrublands, and sparse and dense larch forests.
Hence, this thesis presents investigations on recent compositional and above-ground biomass (AGB) changes, as well as potential future changes in AGB, in central Chukotka. The aim is to assess how tundra-taiga vegetation develops under changing climate conditions, particularly in Far East Russia, central Chukotka. Three main research questions were therefore considered:
1) What changes in vegetation composition have recently occurred in central Chukotka?
2) How have AGB rates and distribution changed in central Chukotka?
3) What are the spatial dynamics and rates of tree AGB change in the upcoming millennia in the northern tundra-taiga of central Chukotka?
Remote sensing provides information on the spatial and temporal variability of vegetation. I used Landsat satellite data together with field data (foliage projective cover and AGB) from two expeditions to Chukotka in 2016 and 2018 to upscale vegetation types and AGB for the study area. More specifically, I used Landsat spectral indices (Normalised Difference Vegetation Index (NDVI), Normalised Difference Water Index (NDWI) and Normalised Difference Snow Index (NDSI)) and constrained ordination (redundancy analysis, RDA) for subsequent k-means-based land-cover classification and generalised additive model (GAM)-based AGB maps for 2000/2001/2002 and 2016/2017. I also used TanDEM-X DEM data for a topographical correction of the Landsat satellite data and to derive slope, aspect and Topographical Wetness Index (TWI) data for forecasting AGB.
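A minimal sketch of the index-then-cluster part of such a workflow is shown below: compute normalised difference indices from reflectance bands and run an unsupervised k-means classification on them. It is illustrative only; the thesis clusters RDA scores rather than raw indices, NDWI exists in more than one formulation (the green/NIR variant is used here), and the random rasters stand in for real Landsat scenes.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_indices(green, red, nir, swir1):
    """Normalised difference indices from surface-reflectance bands.
    NDWI follows the green/NIR formulation; other variants exist."""
    ndvi = (nir - red) / (nir + red)
    ndwi = (green - nir) / (green + nir)
    ndsi = (green - swir1) / (green + swir1)
    return ndvi, ndwi, ndsi

# Placeholder reflectance rasters (values 0-1); real inputs would be Landsat bands
rng = np.random.default_rng(0)
green, red, nir, swir1 = (rng.uniform(0.01, 0.6, (100, 100)) for _ in range(4))

ndvi, ndwi, ndsi = spectral_indices(green, red, nir, swir1)

# Unsupervised classification into four clusters, mirroring the four
# land-cover classes established in the thesis
features = np.stack([ndvi, ndwi, ndsi], axis=-1).reshape(-1, 3)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
land_cover = labels.reshape(ndvi.shape)
print(np.bincount(labels))  # pixel count per cluster
```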
Firstly, in 2016, taxa-specific projective cover data were collected during a Russian-German expedition. I processed the field data and coupled them with Landsat spectral indices in the RDA model that was used for k-means classification. I could establish four meaningful land-cover classes: (1) larch closed-canopy forest, (2) forest tundra and shrub tundra, (3) graminoid tundra and (4) prostrate herb tundra and barren areas, and accordingly produced land-cover maps for 2000/2001/2002 and 2016/2017. Changes in land-cover classes between the beginning of the century (2000/2001/2002) and the present time (2016/2017) were estimated and interpreted as recent compositional changes in central Chukotka. The transition from graminoid tundra to forest tundra and shrub tundra was interpreted as shrubification and amounts to a 20% area increase in the tundra-taiga zone and a 40% area increase in the northern taiga. Major contributors to shrubification are alder, dwarf birch and some species of the heather family. Land-cover change from the forest tundra and shrub tundra class to the larch closed-canopy forest class is interpreted as tree infilling and is notable in the northern taiga. We find almost no land-cover changes in the present treeless tundra.
Secondly, total AGB state and change were investigated for the same areas. In addition to the total vegetation AGB, I provided estimates for the different taxa present at the field sites. As an outcome, AGB in the study region of central Chukotka ranged from 0 kg m-2 in barren areas to 16 kg m-2 in closed-canopy forests, with larch trees contributing the most. A comparison of changes in AGB within the investigated period from 2000 to 2016 shows that the greatest changes (up to 1.25 kg m-2 yr-1) occurred in the northern taiga and in areas where land cover changed to larch closed-canopy forest. Our estimates indicate a general increase in total AGB throughout the investigated tundra-taiga and northern taiga, whereas the tundra showed no evidence of change in AGB within the 15 years from 2002 to 2017.
In the third manuscript, potential future AGB changes were estimated based on simulations with the individual-based, spatially explicit vegetation model LAVESI under different climate scenarios, following the Representative Concentration Pathways (RCPs) RCP 2.6, RCP 4.5 and RCP 8.5, with or without cooling after 2300 CE. LAVESI-based AGB was simulated from the current state until 3000 CE for the northern tundra-taiga study area for larch, because we expect the most notable changes to be associated with forest expansion in the treeline ecotone. The spatial distribution and current state of tree AGB were validated against AGB field data, AGB extracted from Landsat satellite data and a high-spatial-resolution image with individual trees visible. The simulation results indicate plot-wise differences in tree AGB dynamics depending on the distance to the current treeline. The simulated tree AGB dynamics are in concordance with fundamental ecological (migrational and successional) processes: tree stand formation in the simulations starts with seed dispersal, followed by tree stand establishment, densification and episodic thinning. Our results suggest mostly densification of existing tree stands in the study region within the current century and a lagged forest expansion (up to 39% of the total area under RCP 8.5) under all considered climate scenarios without cooling, varying locally with the distance to the current treeline. In scenarios with cooling air temperatures after 2300 CE, forests stopped expanding at 2300 CE (at up to 10% of the area, RCP 8.5) and then gradually retreated to their pre-21st-century position. The average rates of tree AGB increase are strongest during the first 300 simulated years, up to 2300 CE. The rates depend on the RCP scenario and, as expected, are highest under RCP 8.5.
Overall, this interdisciplinary thesis shows a successful integration of field data, satellite data and modelling for tracking recent and predicting future vegetation changes in mountainous subarctic regions. The obtained results are unique for the focus area in central Chukotka and, more broadly, for mountainous high-latitude ecosystems.
In my doctoral thesis, I examine continuous gravity measurements for monitoring the geothermal site at Þeistareykir in North Iceland. With the help of high-precision superconducting gravity meters (iGravs), I investigate underground mass changes that are caused by the operation of the geothermal power plant (i.e. by the extraction of hot water and the reinjection of cold water). The overall goal of this research project is to assess the sustainable use of the geothermal reservoir, which should also benefit the Icelandic energy supplier and power plant operator Landsvirkjun.
As a first step, to investigate the performance and measurement stability of the gravity meters, I performed comparative measurements at the gravimetric observatory J9 in Strasbourg in summer 2017. From the three-month gravity time series, I examined the calibration, noise and drift behaviour of the iGravs in comparison to the stable long-term time series of the observatory's superconducting gravity meters. After preparatory work in Iceland (setup of gravity stations, additional measuring equipment and infrastructure, discussions with Landsvirkjun and meetings with the Icelandic partner institute ISOR), gravity monitoring at Þeistareykir was started in December 2017. Using the iGrav records from the first 18 months of measurements, I carried out the same investigations (of calibration, noise and drift behaviour) as at J9 to understand how the transport of the superconducting gravity meters to Iceland may have influenced instrumental parameters.
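A minimal sketch of how an instrumental drift could be estimated and removed from a gravity time series is shown below, using a low-order polynomial fit. The synthetic data, the daily sampling and the linear-drift assumption are illustrative only and do not reproduce the actual iGrav processing.

```python
import numpy as np

# Synthetic daily gravity residuals (nm/s^2): tides assumed already removed,
# leaving a linear instrumental drift plus noise -- purely illustrative values
days = np.arange(0.0, 540.0)                  # ~18 months of daily samples
true_drift_rate = 0.15                        # nm/s^2 per day (assumed)
gravity = true_drift_rate * days + np.random.default_rng(1).normal(0, 2.0, days.size)

# Estimate the drift with a first-order polynomial fit and remove it
drift_rate, offset = np.polyfit(days, gravity, deg=1)
detrended = gravity - (drift_rate * days + offset)

print(f"estimated drift: {drift_rate:.3f} nm/s^2 per day")
print(f"residual std after drift removal: {detrended.std():.2f} nm/s^2")
```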
In the further course of this work, I focus on the modelling and reduction of local gravity contributions at Þeistareykir. These comprise additional mass changes due to rain, snowfall and vertical surface displacements that are superimposed on the geothermal signal in the gravity measurements. For this purpose, I used data sets from additional monitoring sensors installed at each gravity station and adapted scripts for hydro-gravitational modelling. The third part of my thesis targets geothermal signals in the gravity measurements.
Together with my PhD colleague Nolwenn Portier from France, I carried out additional gravity measurements with a Scintrex CG5 gravity meter at 26 measuring points within the geothermal field in the summers of 2017, 2018 and 2019. These annual time-lapse gravity measurements are intended to extend the spatial coverage of the gravity data from the three continuous monitoring stations to the entire geothermal field. The combination of CG5 and iGrav observations, together with annual reference measurements with an FG5 absolute gravity meter, represents the hybrid gravimetric monitoring approach for Þeistareykir. Comparison of the gravimetric data with local borehole measurements (of groundwater levels and geothermal extraction and injection rates) is used to relate the observed gravity changes to the actually extracted (and reinjected) geothermal fluids. An approach to explaining the observed gravity signals by means of forward modelling of the geothermal production rate is presented at the end of the third (hybrid gravimetric) study. Further modelling with the help of the processed gravity data is planned by Landsvirkjun. In addition, the experience from time-lapse and continuous gravity monitoring will be used for future gravity measurements at the Krafla geothermal field 22 km south-east of Þeistareykir.
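The forward-modelling idea can be caricatured with the simplest possible case: the vertical gravity change produced at a surface station by a net mass change concentrated at a point in the reservoir. This is only a conceptual sketch; the reservoir depth, the mass value and the point-mass approximation are assumptions and not the model actually used for Þeistareykir.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_dg(delta_mass_kg, depth_m, horizontal_offset_m):
    """Vertical gravity change (in microGal) at the surface caused by a
    point mass change at the given depth and horizontal offset."""
    r = np.hypot(horizontal_offset_m, depth_m)
    dg_si = G * delta_mass_kg * depth_m / r**3   # m/s^2
    return dg_si * 1e8                           # 1 m/s^2 = 1e8 microGal

# Assumed example: net mass loss of 1e9 kg (~1 Mt of fluid) at 1200 m depth
for offset in (0.0, 500.0, 1000.0):
    print(offset, "m offset:", round(point_mass_dg(-1e9, 1200.0, offset), 2), "microGal")
```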
The centrosome of Dictyostelium is acentriolar, measures about 500 nm, and consists of a three-layered core structure surrounded by a corona at which microtubules nucleate. In this work, the centrosomal protein Cep192 and possible interaction partners at the centrosome were examined in detail. The initial localisation analysis of Cep192 showed that it localises to the spindle poles throughout mitosis and is the most strongly expressed of the structural proteins of the core structure. Permanent localisation at the spindle poles during mitosis is assumed for proteins that localise in the two identically organised outer core layers, which form the mitotic centrosome. A knockdown of Cep192 led to the formation of supernumerary microtubule-organising centres (MTOCs) and to slightly increased ploidy; a destabilisation of the centrosome due to reduced Cep192 expression is therefore assumed. Two small tags, the SpotH6 and BioH6 tags, were established on Cep192 and could be labelled with small fluorescent detection conjugates. With the proteins tagged in this way, high-resolution expansion microscopy could be optimised for the centrosome, and the core structure was visualised protein-specifically by fluorescence microscopy for the first time. Cep192 localises in the outer core layers. Combined labelling of Cep192 and the centrosomal proteins CP39 and CP91 in expansion microscopy allowed the three-layered organisation of the centrosomal core structure to be visualised, with CP39 and CP91 localising between Cep192 in the inner core layer. The corona was also examined by expansion microscopy: the corona protein CDK5RAP2 localises in spatial proximity to Cep192 in the inner corona. A comparison of the corona proteins CDK5RAP2, CP148 and CP224 in expansion microscopy revealed distinguishable sublocalisations of the proteins within the corona and relative to the core structure. In biotinylation assays with the centrosomal core proteins CP39 and CP91 and the corona protein CDK5RAP2, Cep192 was identified as a possible interaction partner.
The results of this work demonstrate the important function of the protein Cep192 in the Dictyostelium centrosome and, through the combination of biotinylation assays and expansion microscopy of the proteins investigated, provide an improved understanding of the topology of the centrosome.
The present material is intended to introduce pupils in Latin lessons, who in our healthcare system hardly need to worry about their care in case of illness, to a culture in which a doctor's practice a few streets away was by no means a matter of course. Given the small number of reading booklets dealing with the topic of "medicine in antiquity", this topic is newly prepared here for use in schools. The material is designed so that it can be used unchanged within a teaching sequence, but individual chapters can also easily be used independently.
Children’s physical fitness development and related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third-graders. Within this single prepubertal year of life and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
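As a sketch of how a school-level age effect and its correlation with school fitness could be computed, the snippet below fits a within-school regression of score on exact age and correlates the resulting slopes with the school means. The simulated data are invented and deliberately have independent gains and levels (so the correlation will be near zero); the snippet only illustrates the computation, not the reported r = 0.48.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-child records: school id, exact age (years), fitness score
n_schools, children_per_school = 50, 200
school_gain = rng.normal(1.0, 0.3, n_schools)      # per-school age effect (arbitrary)
school_level = rng.normal(100.0, 5.0, n_schools)   # per-school mean fitness (arbitrary)

age_effects, mean_fitness = [], []
for s in range(n_schools):
    age = rng.uniform(8.0, 9.0, children_per_school)
    score = (school_level[s] + school_gain[s] * (age - 8.5)
             + rng.normal(0, 3, children_per_school))
    slope, intercept = np.polyfit(age, score, deg=1)   # within-school age effect (gain)
    age_effects.append(slope)
    mean_fitness.append(score.mean())

r = np.corrcoef(mean_fitness, age_effects)[0, 1]
print(f"correlation between school fitness and school age effect: r = {r:.2f}")
```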
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight into two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and the landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system's erosional efficiency, which represents many factors such as lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
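The coupling between tectonics and surface processes can be pictured with a deliberately simple toy: a prescribed rock uplift rate alternating with linear hillslope diffusion on a shared topography array. The sketch below only illustrates the alternating update loop; it is not ASPECT or FastScape, the parameter values are arbitrary, and a full two-way coupling would also let the tectonic step respond to the redistributed mass, which this toy does not do.

```python
import numpy as np

def run_coupled_toy(nx=200, dx=500.0, dt=1000.0, n_steps=2000,
                    uplift_rate=1e-3, diffusivity=0.5):
    """1-D uplift + diffusion toy. Units: metres and years; values arbitrary."""
    z = np.zeros(nx)                                     # initial flat topography
    for _ in range(n_steps):
        z[1:-1] += uplift_rate * dt                      # "tectonic" step: uniform uplift
        curvature = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
        z[1:-1] += diffusivity * curvature * dt          # "surface" step: erosion/deposition
    return z

topo = run_coupled_toy()
print(f"maximum elevation after 2 Myr of uplift and diffusion: {topo.max():.1f} m")
```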
The Andes are a ~7000 km long, N-S trending mountain range developed along the western continental margin of South America. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the oceanic plate changes its subduction angle between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across-strike variations in tectonic deformation patterns. These include a systematic decrease of topographic elevation, crustal shortening, and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned recorded along and across the strike of the subduction zone. Moreover, the SCA are a very seismically active region. The continental plate is characterized by relatively shallow seismicity (< 30 km depth), which is mainly focussed at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interior of the northern foreland. Additionally, frequent seismicity is also recorded in the shallow parts of the oceanic plate and in a sector of the flat-slab segment between 31°S and 33°S. The observed spatial heterogeneity in tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures and changes in the subduction angle of the oceanic slab. However, no study has yet examined the relationship between the long-term rheological configuration of the SCA and the spatial deformation patterns. Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle on the rheological state of the lithosphere have not yet been thoroughly investigated. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated by temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using the previously generated models as input.
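As a rough illustration of what a long-term lithospheric strength calculation involves, the sketch below combines a frictional (Byerlee-type) strength proxy with a dislocation-creep flow law and takes the weaker mechanism at each depth, yielding a simple yield strength envelope. The flow-law parameters, geotherm and strain rate are generic textbook-style assumptions, not the values used in the dissertation.

```python
import numpy as np

# Generic yield-strength-envelope sketch: brittle (frictional) versus
# ductile (dislocation creep) strength; the minimum governs at each depth.
g, rho = 9.81, 2800.0          # gravity (m/s^2), crustal density (kg/m^3)
friction = 0.6                  # friction coefficient (Byerlee-type proxy)
strain_rate = 1e-15             # background strain rate (1/s)
A, n, Q = 1e-26, 3.0, 2.2e5     # creep pre-factor (Pa^-n s^-1), stress exponent, activation energy (J/mol)
R = 8.314                       # gas constant (J/mol/K)

depth = np.linspace(0.0, 40e3, 200)                 # 0-40 km
temperature = 288.0 + 20e-3 * depth                 # linear geotherm, 20 K/km

brittle = friction * rho * g * depth                # frictional strength proxy (Pa)
ductile = (strain_rate / A) ** (1.0 / n) * np.exp(Q / (n * R * temperature))
strength = np.minimum(brittle, ductile)

bdt = depth[np.argmax(ductile < brittle)] / 1e3     # brittle-ductile transition depth
print(f"brittle-ductile transition at ~{bdt:.1f} km for these assumed parameters")
```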
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, the largest temperature variations occur where the descending slab is detected, which implies that the deep thermal field is mainly affected by the slab dip geometry.
The outcomes of this thesis suggest that the long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments at the subduction interface and dehydration reactions in the slab. To that end, I carried out a qualitative analysis of the state of hydration in the mantle using the ratio between compressional- and shear-wave velocities (vp/vs ratio) from a previous seismic tomography. The results of this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and the overlying continental mantle, with the exception of the cluster within the flat-slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle.
First-order variations in the observed tectonic patterns also seem to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, owing to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous analyses from geodynamic models pointing to their control on the observed heterogeneities in the deformation style of the orogen and foreland. These characteristics include the lithospheric and crustal thickness, the presence of weak sediments and variations in gravitational potential energy.
Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events. Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.
Quantitative geomorphic research depends on accurate topographic data, often collected via remote sensing. Lidar and photogrammetric methods like structure-from-motion provide the highest quality data for generating digital elevation models (DEMs). Unfortunately, these data are restricted to relatively small areas, and may be expensive or time-consuming to collect. Global and near-global DEMs with 1 arcsec (∼30 m) ground sampling from spaceborne radar and optical sensors offer an alternative gridded, continuous surface at the cost of resolution and accuracy. Accuracy is typically defined with respect to external datasets, often, but not always, in the form of point or profile measurements from sources like differential Global Navigation Satellite System (GNSS), spaceborne lidar (e.g., ICESat), and other geodetic measurements. Vertical point or profile accuracy metrics can miss the pixel-to-pixel variability (sometimes called DEM noise) that is unrelated to the true topographic signal and instead reflects sensor-, orbital-, and/or processing-related artifacts. This is most concerning when selecting a DEM for geomorphic analysis, as this variability can affect derivatives of elevation (e.g., slope and curvature) and impact flow routing. We use (near) global DEMs at 1 arcsec resolution (SRTM, ASTER, ALOS, TanDEM-X, and the recently released Copernicus) and develop new internal accuracy metrics to assess inter-pixel variability without reference data. Our study area is in the arid, steep Central Andes, and is nearly vegetation-free, creating ideal conditions for remote sensing of the bare-earth surface. We use a novel hillshade-filtering approach to detrend long-wavelength topographic signals and accentuate short-wavelength variability. Fourier transformations of the spatial signal to the frequency domain allow us to quantify: 1) artifacts in the un-projected 1 arcsec DEMs at wavelengths greater than the Nyquist (twice the nominal resolution, so > 2 arcsec); and 2) the relative variance of adjacent pixels in DEMs resampled to 30-m resolution (UTM projected). We translate these results into their impact on hillslope and channel slope calculations, and we highlight the quality of the five DEMs. We find that the Copernicus DEM, which is based on a carefully edited commercial version of the TanDEM-X, provides the highest quality landscape representation and should become the preferred DEM for topographic analysis in areas without sufficient coverage of higher-quality local DEMs.
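A minimal sketch of this kind of frequency-domain check is shown below: detrend a small DEM patch (here with a simple plane fit rather than the paper's hillshade-filtering approach) and inspect its 2-D Fourier power spectrum for energy at short wavelengths that is unlikely to be real topography. The synthetic DEM, the plane-detrending choice and the band limits are assumptions for illustration.

```python
import numpy as np

# Synthetic 30 m DEM patch: smooth topography plus short-wavelength striping,
# standing in for sensor/processing artifacts
nx = 256
x, y = np.meshgrid(np.arange(nx), np.arange(nx))
dem = 1500.0 + 0.5 * x + 0.2 * y + 20.0 * np.sin(2 * np.pi * x / 64.0)
dem += 1.5 * np.sin(2 * np.pi * x / 3.0)              # pixel-scale striping "noise"

# Least-squares plane fit and removal (detrending)
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
coeffs, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
residual = dem - (A @ coeffs).reshape(dem.shape)

# 2-D FFT power spectrum; strong power in the short-wavelength band flags
# pixel-to-pixel variability unrelated to true topographic signal
power = np.abs(np.fft.fftshift(np.fft.fft2(residual)))**2
freq = np.fft.fftshift(np.fft.fftfreq(nx, d=30.0))    # cycles per metre along x
short = power[:, np.abs(freq) > 0.6 * freq.max()].mean()
long_ = power[:, (np.abs(freq) < 0.1 * freq.max()) & (np.abs(freq) > 0)].mean()
print(f"short-wavelength / long-wavelength power ratio: {short / long_:.3f}")
```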
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Such fast changes require immediate feedback. It should also be possible to parse the developed languages quickly in order to handle large bodies of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it is unable to handle left recursion properly. Existing solutions either rewrite left-recursive rules only partially and forbid the remaining cases, or use complex extensions to packrat parsing that are hard to understand and costly. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and enhance it to work for general left-recursive grammars. The second approach is to design a grammar compilation process that finds left recursion before parsing and, in this way, reduces computational costs wherever possible and generates ready-to-use parser classes.
Rewriting parsing expression grammars in a general way uncovers so many cases that any rewriting algorithm surpasses the complexity of other left-recursion-capable parsing algorithms. Lookahead operators introduce this complexity. However, most languages have only small portions that are left-recursive and, in virtually all cases, no indirect or hidden left recursion. This means that separating the left-recursive parts of a grammar from the non-left-recursive ones holds great improvement potential for existing parsers.
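To make the rewriting idea concrete, the sketch below removes direct left recursion from a grammar given as a dictionary of alternatives, using the classic transformation A → Aα | β ⇒ A → β A', A' → α A' | ε. It handles only the simple direct case and deliberately ignores PEG-specific issues such as ordered choice semantics and lookahead operators; the grammar representation is an assumption for illustration, not the framework described in the report.

```python
def eliminate_direct_left_recursion(grammar):
    """Classic rewrite A -> A a | b  ==>  A -> b A', A' -> a A' | eps.

    grammar: dict mapping a nonterminal to a list of alternatives,
             each alternative being a list of symbols. Only direct
             left recursion is handled; PEG ordered choice and
             lookahead are ignored here.
    """
    rewritten = {}
    for nt, alternatives in grammar.items():
        recursive = [alt[1:] for alt in alternatives if alt and alt[0] == nt]
        others = [alt for alt in alternatives if not alt or alt[0] != nt]
        if not recursive:
            rewritten[nt] = alternatives
            continue
        tail = nt + "'"
        rewritten[nt] = [alt + [tail] for alt in others]
        rewritten[tail] = [alt + [tail] for alt in recursive] + [[]]  # [] is epsilon
    return rewritten

# Example: expr -> expr "+" term | term
grammar = {"expr": [["expr", "+", "term"], ["term"]]}
print(eliminate_direct_left_recursion(grammar))
# {'expr': [['term', "expr'"]], "expr'": [['+', 'term', "expr'"], []]}
```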
In this report, we list all the steps required for grammar rewriting to handle left recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. We also describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and its possible interactions with the already existing parser Ohm/S. We benchmarked this framework quantitatively, focusing on parsing time and the ability to use it in a live programming context. Compared with Ohm, we achieved large parsing-time improvements while preserving the ability to use our parser as a live programming tool.
This work matters because it outlines the difficulties and complexity that come with grammar rewriting, and because it removes the existing limitations around left recursion by eliminating it before parsing.
Boolean Satisfiability (SAT) is one of the problems at the core of theoretical computer science. It was the first problem proven to be NP-complete by Cook and, independently, by Levin. Nowadays it is conjectured that SAT cannot be solved in sub-exponential time. Thus, it is generally assumed that SAT and its restricted version k-SAT are hard to solve. However, state-of-the-art SAT solvers can solve even huge practical instances of these problems in a reasonable amount of time.
Why is SAT hard in theory, but easy in practice? One approach to answering this question is investigating the average runtime of SAT. In order to analyze this average runtime the random k-SAT model was introduced. The model generates all k-SAT instances with n variables and m clauses with uniform probability. Researching random k-SAT led to a multitude of insights and tools for analyzing random structures in general. One major observation was the emergence of the so-called satisfiability threshold: A phase transition point in the number of clauses at which the generated formulas go from asymptotically almost surely satisfiable to asymptotically almost surely unsatisfiable. Additionally, instances around the threshold seem to be particularly hard to solve.
In this thesis we analyze a more general model of random k-SAT that we call non-uniform random k-SAT. In contrast to the classical model, each of the n Boolean variables now has its own probability of being drawn. For each of the m clauses we draw k variables according to the variable distribution and choose their signs uniformly at random. Non-uniform random k-SAT gives us more control over the distribution of Boolean variables in the resulting formulas. This allows us to tailor distributions to the ones observed in practice. Notably, non-uniform random k-SAT contains the previously proposed models random k-SAT, power-law random k-SAT and geometric random k-SAT as special cases.
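A minimal generator for this model might look like the sketch below: each clause draws k distinct variables according to the given probability distribution and negates each chosen literal with probability 1/2. Whether repeated variables within a clause are allowed differs between model variants, so drawing without replacement here is an assumption, as is the power-law-like example distribution.

```python
import numpy as np

def non_uniform_random_ksat(n, m, k, probs, seed=None):
    """Generate a k-CNF formula: m clauses over variables 1..n.

    probs: probability of drawing each variable (length n, sums to 1).
    Each clause draws k distinct variables according to probs and gives
    each a random sign. Returns a list of clauses; a literal is +v or -v.
    """
    rng = np.random.default_rng(seed)
    probs = np.asarray(probs, dtype=float)
    clauses = []
    for _ in range(m):
        vars_drawn = rng.choice(np.arange(1, n + 1), size=k, replace=False, p=probs)
        signs = rng.choice([-1, 1], size=k)
        clauses.append(list(signs * vars_drawn))
    return clauses

# Example: power-law-like variable distribution over 10 variables
weights = 1.0 / np.arange(1, 11) ** 0.75
print(non_uniform_random_ksat(n=10, m=5, k=3, probs=weights / weights.sum(), seed=42))
```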
We analyze the satisfiability threshold in non-uniform random k-SAT depending on the variable probability distribution. Our goal is to derive conditions on this distribution under which an equivalent of the satisfiability threshold conjecture holds. We start with the arguably simpler case of non-uniform random 2-SAT. For this model we show under which conditions a threshold exists, if it is sharp or coarse, and what the leading constant of the threshold function is. These are exactly the three ingredients one needs in order to prove or disprove the satisfiability threshold conjecture. For non-uniform random k-SAT with k=3 we only prove sufficient conditions under which a threshold exists. We also show some properties of the variable probabilities under which the threshold is sharp in this case. These are the first results on the threshold behavior of non-uniform random k-SAT.
The current generation of ground-based instruments has rapidly extended the limits of the range accessible to us with very-high-energy (VHE) gamma-rays, and more than a hundred sources have now been detected in the Milky Way. These sources represent only the tip of the iceberg, but their number has reached a level that allows population studies. In this work, a model of the global population of VHE gamma-ray sources will be presented, based on the most comprehensive census of Galactic sources in this energy regime, the H.E.S.S. Galactic plane survey (HGPS). A population synthesis approach was followed in the construction of the model. Particular attention was paid to correcting for the strong observational bias inherent in the sample of detected sources. The methods developed for estimating the model parameters have been validated with extensive Monte Carlo simulations and will be shown to provide unbiased estimates of the model parameters. With these methods, five models for different spatial distributions of sources have been constructed. To test the validity of these models, their predictions for the composition of sources within the sensitivity range of the HGPS are compared with the observed sample. With one exception, similar results are obtained for all spatial distributions, showing that the modelled longitude profile and the source distribution over photon flux are in fair agreement with observation. Regarding the latitude profile and the source distribution over angular extent, it becomes apparent that the model needs to be further adjusted to bring its predictions into agreement with observation. Based on the model, predictions of the global properties of the Galactic population of VHE gamma-ray sources and the prospects for the Cherenkov Telescope Array (CTA) will be presented.
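A population-synthesis calculation of this kind can be caricatured in a few lines: draw sources from assumed spatial and luminosity distributions, convert luminosity and distance to photon flux, and count how many exceed a survey's sensitivity threshold. Every distribution and number below is a placeholder assumption, not a parameter of the actual HGPS-based model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder population: galactocentric radius (kpc) and luminosity (photons/s)
n_sources = 100_000
radius = rng.gamma(shape=2.0, scale=4.0, size=n_sources)      # assumed radial profile
angle = rng.uniform(0.0, 2.0 * np.pi, n_sources)
luminosity = 10 ** rng.uniform(32.0, 35.0, n_sources)         # assumed luminosity function

# Distance from an observer 8.5 kpc from the Galactic centre (in-plane approximation)
x, y = radius * np.cos(angle), radius * np.sin(angle)
distance_cm = np.hypot(x - 8.5, y) * 3.086e21                 # kpc -> cm

flux = luminosity / (4.0 * np.pi * distance_cm**2)            # photons / cm^2 / s

sensitivity = 2e-12                                           # assumed survey threshold
detected = flux > sensitivity
print(f"{detected.sum()} of {n_sources} simulated sources above the assumed threshold")
```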
CTA will significantly increase our knowledge of VHE gamma-ray sources by lowering the threshold for source detection, primarily through a larger detection area compared to current-generation instruments. In ground-based gamma-ray astronomy, the sensitivity of an instrument depends strongly, in addition to the detection area, on the ability to distinguish images of air showers produced by gamma-rays from those produced by cosmic rays, which constitute a strong background. This means that the number of detectable sources depends on the background rejection algorithm used and may therefore also be increased by improving the performance of such algorithms. In this context, in addition to the population model, this work presents a study on the application of deep-learning techniques to the task of gamma-hadron separation in the analysis of data from ground-based gamma-ray instruments. Based on a systematic survey of different neural-network architectures, it is shown that robust classifiers can be constructed with competitive performance compared to the best existing algorithms. Despite the broad coverage of neural-network architectures discussed, only part of the potential offered by the application of deep-learning techniques to the analysis of gamma-ray data is exploited in the context of this study. Nevertheless, it provides an important basis for further research on this topic.
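For readers unfamiliar with the setup, the sketch below shows a minimal convolutional classifier for binary gamma/hadron separation in Keras. Treating camera images as 32x32 single-channel arrays and training on random stand-in data are simplifications for illustration; this is not one of the architectures surveyed in the thesis.

```python
import numpy as np
import tensorflow as tf

# Minimal CNN for binary gamma/hadron classification of shower images
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(gamma)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data; in practice these would be calibrated shower images
images = np.random.rand(1000, 32, 32, 1).astype("float32")
labels = np.random.randint(0, 2, size=(1000, 1))
model.fit(images, labels, epochs=1, batch_size=64, verbose=0)
print(model.predict(images[:3], verbose=0).ravel())  # predicted gamma probabilities
```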
The NAC transcription factor (TF) JUNGBRUNNEN1 (JUB1) is an important negative regulator of plant senescence, as well as of gibberellic acid (GA) and brassinosteroid (BR) biosynthesis, in Arabidopsis thaliana. Overexpression of JUB1 promotes longevity and enhances tolerance to drought and other abiotic stresses. A similar role of JUB1 has been observed in other plant species, including tomato and banana. Our data show that JUB1 overexpressors (JUB1-OXs) accumulate higher levels of proline than WT plants under control conditions, during the onset of drought stress, and thereafter. We identified that overexpression of JUB1 induces key proline biosynthesis genes and suppresses key proline degradation genes. Furthermore, bZIP63, a transcription factor involved in proline metabolism, was identified as a novel downstream target of JUB1 by yeast one-hybrid (Y1H) analysis and chromatin immunoprecipitation (ChIP). However, based on an electrophoretic mobility shift assay (EMSA), direct binding of JUB1 to bZIP63 could not be confirmed. Our data indicate that JUB1-OX plants exhibit reduced stomatal conductance under control conditions. However, selective overexpression of JUB1 in guard cells did not improve drought stress tolerance in Arabidopsis. Moreover, the drought-tolerant phenotype of JUB1 overexpressors does not solely depend on the transcriptional control of the DREB2A gene. Thus, our data suggest that JUB1 confers tolerance to drought stress by regulating multiple components. To date, none of the previous studies of JUB1's regulatory network has focused on identifying protein-protein interactions. We therefore performed a yeast two-hybrid (Y2H) screen, which identified several protein interactors of JUB1, two of which are the calcium-binding proteins CaM1 and CaM4. Both proteins interact with JUB1 in the nucleus of Arabidopsis protoplasts. Moreover, JUB1, CaM1 and CaM4 are expressed under the same conditions. Since CaM1.1 and CaM4.1 encode proteins with identical amino acid sequences, all further experiments were performed with constructs involving the CaM4 coding sequence. Our data show that JUB1 harbors multiple CaM-binding sites, which are localized in both the N-terminal and C-terminal regions of the protein. One of the CaM-binding sites, localized in the DNA-binding domain of JUB1, was identified as a functional CaM-binding site, since its mutation strongly reduced the binding of CaM4 to JUB1. Furthermore, JUB1 transactivates expression of the stress-related gene DREB2A in mesophyll cells; this effect is significantly reduced when the calcium-binding protein CaM4 is expressed as well. Overexpression of both genes in Arabidopsis results in early senescence, observed through lower chlorophyll content and enhanced expression of senescence-associated genes (SAGs) compared with single JUB1 overexpressors. Our data also show that JUB1 and CaM4 proteins interact in senescent leaves, which have increased Ca2+ levels compared to young leaves. Collectively, our data indicate that JUB1 activity towards its downstream targets is fine-tuned by calcium-binding proteins during leaf senescence.
Owing to demographic change, the labour force potential, and with it the number of employed persons, in particular the number of skilled workers, will decline in Germany in the coming years. As a result, it will become more difficult for employers to find qualified young staff. Because of its age structure and increasing work intensification, the public service, including the subsector of public administration, is confronted more strongly than other employers with the need to recruit external personnel in the medium term. Against this background, this thesis examined to what extent public administration has already implemented social media personnel marketing, an innovative instrument suited to this purpose, and how the observed result can be explained. Regarding current use, it was found that social media personnel marketing has only recently been implemented in public administration and is therefore currently used primarily for operational recruitment. An empirical study identified the following explanatory factors: the low relevance attributed to personnel marketing as a task of public administration, the current staffing situation and its digital competencies, and the hierarchically shaped communication channels within public administration. With the exception of the communication channels, these factors coincide with those found in the private sector. Public administration is called upon to critically question the current degree of official hierarchy in order to realise the full potential of social media personnel marketing in the future.
Computation of the instantaneous phase and amplitude via the Hilbert transform is a powerful data analysis tool. This approach finds many applications in various science and engineering branches but is not suitable for causal estimation because it requires knowledge of the signal's past and future. However, several problems require real-time estimation of phase and amplitude; an illustrative example is phase-locked or amplitude-dependent stimulation in neuroscience. In this paper, we discuss and compare three causal algorithms that do not rely on the Hilbert transform but exploit two well-known physical phenomena, synchronization and resonance. After testing the algorithms on a synthetic data set, we illustrate their performance by computing phase and amplitude for accelerometer tremor measurements and for the beta-band brain activity of a Parkinsonian patient.
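The resonance idea can be illustrated with a linear damped oscillator driven by the measured signal: when the oscillator's natural frequency matches the signal's frequency, its state tracks the signal's amplitude and phase, and both can be read off causally from past samples only. The sketch below is a bare-bones illustration of that principle, assuming a known oscillation frequency and arbitrary parameter values; it is not a reimplementation of the algorithms compared in the paper.

```python
import numpy as np

# Causal amplitude/phase tracking with a resonant linear oscillator:
#   x'' + 2*gamma*x' + w0^2 * x = s(t)
# Driven at resonance, x oscillates 90 degrees behind the signal with an
# amplitude proportional to the signal's amplitude (factor 1/(2*gamma*w0)).
fs = 1000.0                      # sampling rate (Hz)
dt = 1.0 / fs
f_signal = 5.0                   # assumed tremor-like frequency (Hz)
t = np.arange(0.0, 10.0, dt)
amplitude_true = 1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t)   # slowly varying amplitude
signal = amplitude_true * np.cos(2 * np.pi * f_signal * t)

w0 = 2 * np.pi * f_signal        # oscillator tuned to the signal frequency
gamma = 2 * np.pi * 0.5          # damping sets how fast the estimate adapts

x, v = 0.0, 0.0
amp_est, phase_est = np.zeros_like(t), np.zeros_like(t)
for i, s in enumerate(signal):   # causal loop: only current and past samples are used
    a = s - 2 * gamma * v - w0**2 * x
    v += a * dt                  # semi-implicit Euler integration
    x += v * dt
    amp_est[i] = 2 * gamma * w0 * np.hypot(x, v / w0)   # undo the resonant gain
    phase_est[i] = np.arctan2(w0 * x, v)                # approximate phase of s(t)

print(f"mean absolute amplitude error after settling: "
      f"{np.abs(amp_est[2000:] - amplitude_true[2000:]).mean():.3f}")
```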
Semi-natural habitats (SNHs) are becoming increasingly scarce in modern agricultural landscapes. This may reduce natural ecosystem services such as pest control, with its putatively positive effect on crop production. In agreement with other studies, we recently reported wheat yield reductions at field borders which were linked to the type of SNH and the distance to the border. In this experimental landscape-wide study, we asked whether these yield losses have a biotic origin, analyzing fungal seed and fungal leaf pathogens, herbivory by cereal leaf beetles, and weed cover as hypothesized mediators between SNHs and yield. We established experimental winter wheat plots of a single variety within conventionally managed wheat fields at fixed distances either to a hedgerow or to an in-field kettle hole. For each plot, we recorded the fungal infection rate on seeds, fungal infection and herbivory rates on leaves, and weed cover. Using several generalized linear mixed-effects models as well as a structural equation model, we tested the effects of SNHs at a field scale (SNH type and distance to SNH) and at a landscape scale (percentage and diversity of SNHs within a 1000-m radius). In the dry year of 2016, we detected one putative biotic culprit: weed cover was negatively associated with yield at a 1-m and 5-m distance from the field border with an SNH. None of the fungal and insect pests, however, significantly affected yield, neither on their own nor depending on the type of or distance to an SNH. However, the pest groups themselves responded differently to SNHs at the field scale and at the landscape scale. Our findings highlight that crop losses at field borders may be caused by biotic culprits; however, their negative impact seems weak and is putatively reduced by conventional farming practices.
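To make the modelling approach concrete, here is a simplified Gaussian mixed-effects sketch in Python with statsmodels; the study itself used several GLMMs and a structural equation model, so this is only an illustration of the structure, and the file and column names are hypothetical.

# Simplified linear mixed-effects sketch with a random intercept per wheat field.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("plots.csv")   # hypothetical file: one row per experimental plot

model = smf.mixedlm(
    "grain_yield ~ distance * snh_type + weed_cover + fungal_seed + fungal_leaf + herbivory",
    data=df,
    groups=df["field_id"],      # random intercept per wheat field
)
result = model.fit()
print(result.summary())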
Electrical muscle stimulation (EMS) is an increasingly popular training method and has become the focus of research in recent years. New EMS devices offer a wide range of mobile applications for whole-body EMS (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. The present study aimed to determine the differences in exercise intensity between conventional and WB-EMS-superimposed walking (WB-EMS-W) and between conventional and WB-EMS-superimposed Nordic walking (WB-EMS-NW) during a treadmill test. Eleven participants (52.0 ± years; 85.9 ± 7.4 kg, 182 ± 6 cm, BMI 25.9 ± 2.2 kg/m2) performed a 10 min treadmill test at a given velocity (6.5 km/h) in four test conditions: walking (W) and Nordic walking (NW), each performed conventionally and with superimposed WB-EMS. Oxygen uptake in absolute terms (VO2) and relative to body weight (rel. VO2), lactate, and the rate of perceived exertion (RPE) were measured before and after the test. WB-EMS intensity was adjusted individually according to the feedback of the participant. Descriptive statistics are given as mean ± SD. For the statistical analyses, a one-factorial ANOVA for repeated measures and a two-factorial ANOVA [factors EMS, W/NW, and their combination (EMS*W/NW)] were performed (α = 0.05). Significant effects were found for the EMS and W/NW factors for the outcome variables VO2 (EMS: p = 0.006, r = 0.736; W/NW: p < 0.001, r = 0.870), relative VO2 (EMS: p < 0.001, r = 0.850; W/NW: p < 0.001, r = 0.937), and lactate (EMS: p = 0.003, r = 0.771; W/NW: p = 0.003, r = 0.764), with both factors associated with higher values. However, the difference in VO2 and relative VO2 is within the range of biological variability of ± 12%. The factor combination EMS*W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values (p = 0.035, r = 0.613); RPE differences for W/NW and EMS*W/NW were not significant. The current study results indicate that WB-EMS influences the parameters of exercise intensity. However, the impact on exercise intensity and the clinical relevance of WB-EMS-superimposed walking (WB-EMS-W) exercise are questionable because of the marginal differences in the outcome variables.
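A minimal sketch of the described two-factor repeated-measures analysis (factors EMS on/off and walking mode) in Python with statsmodels follows; the data file and column names are illustrative assumptions, not the study's actual dataset.

# Two-factor repeated-measures ANOVA sketch mirroring the described design.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("wb_ems_treadmill.csv")   # long format: one row per participant x condition

res = AnovaRM(
    data=df,
    depvar="rel_vo2",          # e.g. relative oxygen uptake
    subject="participant",
    within=["ems", "mode"],    # EMS (on/off) and walking vs. Nordic walking
).fit()
print(res.anova_table)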
Der Klang der Erinnerung
(2021)
Gründung von Mensa AG's im Rahmen der Qualitätsoffensive Schulverpflegung im Land Brandenburg
(2020)
Background
The metabolic syndrome (MetS) is a risk cluster for a number of secondary diseases. The implementation of prevention programs requires early detection of individuals at risk. However, access to health care providers is limited in structurally weak regions. Brandenburg, a rural federal state in Germany, has an especially high MetS prevalence and disease burden. This study aims to validate and test the feasibility of a setup for mobile diagnostics of MetS and its secondary diseases, to evaluate the MetS prevalence and its association with moderating factors in Brandenburg and to identify new ways of early prevention, while establishing a “Mobile Brandenburg Cohort” to reveal new causes and risk factors for MetS.
Methods
In a pilot study, setups for mobile diagnostics of MetS and secondary diseases will be developed and validated. A van will be equipped as an examination room using point-of-care blood analyzers and by mobilizing standard methods. In study part A, these mobile diagnostic units will be placed at different locations in Brandenburg to locally recruit 5000 participants aged 40-70 years. They will be examined for MetS and advice on nutrition and physical activity will be provided. Questionnaires will be used to evaluate sociodemographics, stress perception, and physical activity. In study part B, participants with MetS, but without known secondary diseases, will receive a detailed mobile medical examination, including MetS diagnostics, medical history, clinical examinations, and instrumental diagnostics for internal, cardiovascular, musculoskeletal, and cognitive disorders. Participants will receive advice on nutrition and an exercise program will be demonstrated on site. People unable to participate in these mobile examinations will be interviewed by telephone. If necessary, participants will be referred to general practitioners for further diagnosis.
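The protocol summary does not state which MetS definition will be applied; purely as an illustration of the screening step, the sketch below classifies MetS according to the commonly used harmonized criteria (at least three of five components). All thresholds, units, and parameter names are assumptions for illustration only, not taken from the study protocol.

# Illustrative MetS classification assuming the harmonized ">= 3 of 5 components" rule.
def has_metabolic_syndrome(waist_cm, sex, triglycerides_mgdl, hdl_mgdl,
                           systolic_bp, diastolic_bp, fasting_glucose_mgdl):
    components = [
        waist_cm >= (102 if sex == "m" else 88),        # abdominal obesity
        triglycerides_mgdl >= 150,                      # elevated triglycerides
        hdl_mgdl < (40 if sex == "m" else 50),          # reduced HDL cholesterol
        systolic_bp >= 130 or diastolic_bp >= 85,       # elevated blood pressure
        fasting_glucose_mgdl >= 100,                    # elevated fasting glucose
    ]
    return sum(components) >= 3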
Discussion
The mobile diagnostics approach enables early detection of individuals at risk, and their targeted referral to local health care providers. Evaluation of the MetS prevalence, its relation to risk-increasing factors, and the “Mobile Brandenburg Cohort” create a unique database for further longitudinal studies on the implementation of home-based prevention programs to reduce mortality, especially in rural regions.
Trial registration
German Clinical Trials Register, DRKS00022764; registered 07 October 2020—retrospectively registered.
Leonhard Frank (1882-1961) already had a life rich in experience behind him when he returned to Germany in 1950. The writer left his long years of exile behind not least because of a lack of publishing and earning opportunities and the rigid immigration legislation in the USA. Above all, however, he was drawn back to his homeland as the country of his language. And only there did he know his audience, whose resonance he felt to be existential.
This study traces the efforts that an author who was renowned and economically successful during the Weimar Republic undertook, after his forced exile during the National Socialist era, to find his position as a left-wing writer within the particular literary and cultural-political conditions of the FRG and the GDR during the decade of the 1950s. Drawing on extensive archival material, Frank's living and working circumstances in the last decade of his life are placed in their respective biographical, social, political, and cultural contexts.
Particular attention is paid to the fact that Frank, as a citizen of the Federal Republic, maintained continuing relations with institutions and individuals in the GDR. Given the political and historical processes in which it was embedded, this constellation gave rise to a field of tension whose specific features are made transparent.
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in listening to the recorded answers in our production task (chapter 4), and (iii) a self-paced reading task to gather on-line processing data (chapter 5).
Based on the results of the conducted experiments, multiple conclusions are made in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations regarding focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of implying causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted. As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
Job satisfaction has been found to impact behavioral choices at the workplace. Since levels of satisfaction are not guaranteed to remain high, understanding the consequences of job dissatisfaction is essential. Hence, I analyze the relationship between a worker's job satisfaction and her training investments. Based on my theoretical model, I expect a U-shaped relationship if dissatisfied workers attempt to improve the situation or plan to quit. In contrast, there is an overall positive relationship if dissatisfied workers neglect their duties. Using logit regressions with the Household, Income and Labour Dynamics in Australia (HILDA) survey, I find tentative evidence of an overall positive relationship on average, with a one-standard-deviation increase in job satisfaction being associated with a 1.5% higher likelihood of participating in training. A closer inspection of the reasons for training as well as quit intentions reveals some hints of a U-shaped relationship. My results highlight the importance of considering the source of dissatisfaction, as there are heterogeneous effects along different job satisfaction facets.
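A common way to test for such a U-shape is to add a squared satisfaction term to the logit specification. The following Python sketch illustrates this idea with statsmodels; the variable names are placeholders, not actual HILDA variable codes, and the control set is illustrative.

# Logit sketch testing for a U-shaped relation between job satisfaction and training participation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hilda_extract.csv")      # hypothetical prepared panel extract
df["jobsat_sq"] = df["jobsat"] ** 2

model = smf.logit("training ~ jobsat + jobsat_sq + age + tenure + C(year)", data=df)
result = model.fit()
print(result.summary())   # a U-shape implies a negative coefficient on jobsat and a positive one on jobsat_sq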
Prospects for Cherenkov Telescope Array Observations of the Young Supernova Remnant RX J1713.7-3946
(2017)
We perform simulations for future Cherenkov Telescope Array (CTA) observations of RX J1713.7-3946, a young supernova remnant (SNR) and one of the brightest sources ever discovered in very high energy (VHE) gamma rays. Special attention is paid to exploring possible spatial (anti) correlations of gamma rays with emission at other wavelengths, in particular X-rays and CO/H I emission. We present a series of simulated images of RX J1713.7-3946 for CTA based on a set of observationally motivated models for the gamma-ray emission. In these models, VHE gamma rays produced by high-energy electrons are assumed to trace the nonthermal X-ray emission observed by XMM-Newton, whereas those originating from relativistic protons delineate the local gas distributions. The local atomic and molecular gas distributions are deduced by the NANTEN team from CO and H I observations. Our primary goal is to show how one can distinguish the emission mechanism(s) of the gamma rays (i.e., hadronic versus leptonic, or a mixture of the two) through information provided by their spatial distribution, spectra, and time variation. This work is the first attempt to quantitatively evaluate the capabilities of CTA to achieve various proposed scientific goals by observing this important cosmic particle accelerator.
Bifurcations of dynamos in rotating and buoyancy-driven spherical Rayleigh-Bénard convection in an electrically conducting fluid are investigated numerically. Both nonmagnetic and magnetic solution branches comprised of rotating waves are traced by path-following techniques, and their bifurcations and interconnections for different Ekman numbers are determined. In particular, the question of whether the dynamo branches bifurcate supercritically or subcritically and whether a direct link to the primary purely convective states exists is answered.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer to use modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving behind a considerable amount of data to be analyzed. Researchers recently started exploring this shared social media data to understand online users better and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and records of emotions or reactions were used to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilize the engineered features correlated with the Big Five personality traits. The final predictive performance improved on the prediction accuracy of state-of-the-art approaches, and the models were evaluated against established benchmarks in the domain. The research experiments were implemented with ethical and privacy considerations in mind. Furthermore, the research aims to raise awareness of privacy among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
In the second part of the thesis, variation in personality development is studied within a cross-platform environment comprising Facebook and Twitter. The personality profiles constructed on these platforms are compared to evaluate the effect of the platform used on a user's personality development. Likewise, personality continuity and stability analyses are performed using samples from the two social media platforms. The experiments are based on ten-year longitudinal samples, aiming to understand users' long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
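To make the feature-based prediction setup of the first part concrete, here is a generic scikit-learn sketch that predicts a single trait score from engineered social-media features; the file, column names, and model choice are illustrative assumptions, not the thesis' actual pipeline.

# Predicting one Big Five trait from engineered features with cross-validated ridge regression.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

df = pd.read_csv("facebook_features.csv")   # rows: users; columns: engineered features + trait score
X = df.drop(columns=["openness"])           # predict one trait at a time
y = df["openness"]

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())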
fMRI studies of reward find increased neural activity in the ventral striatum and medial prefrontal cortex (mPFC), whereas other regions, including the dorsolateral prefrontal cortex (dlPFC), anterior cingulate cortex (ACC), and anterior insula, are activated when anticipating aversive exposure. Although these data suggest differential activation during anticipation of pleasant or of unpleasant exposure, they also arise in the context of different paradigms (e.g., preparation for reward vs. threat of shock) and participants. To determine overlapping and unique regions active during emotional anticipation, we compared neural activity during anticipation of pleasant or unpleasant exposure in the same participants. Cues signalled the upcoming presentation of erotic/romantic, violent, or everyday pictures while BOLD activity during the 9-s anticipatory period was measured using fMRI. The ventral striatum and a ventral mPFC subregion were activated when anticipating pleasant, but not unpleasant or neutral, pictures, whereas activation in other regions was enhanced when anticipating either appetitive or aversive scenes.
Processes having the same bridges as a given reference Markov process constitute its reciprocal class. In this paper we study the reciprocal class of compound Poisson processes whose jumps belong to a finite set . We propose a characterization of the reciprocal class as the unique set of probability measures on which a family of time and space transformations induces the same density, expressed in terms of the reciprocal invariants. The geometry of plays a crucial role in the design of the transformations, and we use tools from discrete geometry to obtain an optimal characterization. We deduce explicit conditions for two Markov jump processes to belong to the same class. Finally, we provide a natural interpretation of the invariants as short-time asymptotics for the probability that the reference process makes a cycle around its current state.
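Since some of the abstract's set notation was lost in extraction, the following standard formulation of a reciprocal class may help; the notation is chosen here for illustration and is not taken from the paper. A probability measure P on path space over [0, T] belongs to the reciprocal class \mathcal{R}(R) of the reference process R when it has the same bridges as R:

P \in \mathcal{R}(R) \quad\Longleftrightarrow\quad P(\,\cdot \mid X_0 = x,\ X_T = y) = R(\,\cdot \mid X_0 = x,\ X_T = y) \quad \text{for } P\text{-almost every } (x, y),

equivalently  P(\cdot) = \int R^{x,y}(\cdot)\, P\big((X_0, X_T) \in \mathrm{d}(x,y)\big),  where R^{x,y} denotes the bridge of R pinned at X_0 = x and X_T = y.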
We report the results from two experiments examining native and non-native German speakers' sensitivity to crossover constraints on pronoun resolution. Our critical stimulus sentences contained personal pronouns in either strong (SCO) or weak crossover (WCO) configurations. Using eye-movement monitoring during reading and a gender-mismatch paradigm, Experiment 1 investigated whether a fronted wh-phrase would be considered as a potential antecedent for a pronoun intervening between the wh-phrase and its canonical position. Both native and non-native readers initially attempted coreference in WCO but not in SCO configurations, as evidenced by early gender-mismatch effects in our WCO conditions. Experiment 2 was an offline antecedent judgement task whose results mirrored the SCO/WCO asymmetry observed in our reading-time data. Taken together, our results show that the SCO constraint immediately restricts pronoun interpretation in both native and non-native comprehension, and further suggest that SCO and WCO constraints derive from different sources.
Patients with heart disease suffer from numerous cognitive deficits, which increase with age and with the severity of the cardiac condition. The genesis of cognitive deficits and their interaction with heart disease is multifactorial, but they are potentially modifiable through adequate medical treatment of the heart disease. Neuropsychological impairments such as deficits in attention, memory, or executive functions often have lasting effects on quality of life and on the outcome of cardiac rehabilitation measures, and they can aggravate heart disease (e.g., by maintaining an unhealthy lifestyle or through insufficient medication adherence). A routinely applied neuropsychological screening could help to identify cognitively impaired patients in order to optimize medical and rehabilitative measures.
The Leopard cat Prionailurus bengalensis is a habitat generalist that is widely distributed across Southeast Asia. Based on morphological traits, this species has been subdivided into 12 subspecies. Thus far, there have been few molecular studies investigating intraspecific variation, and those were limited in geographic scope. For this reason, we aimed to study the genetic structure and evolutionary history of this species across its very large distribution range in Asia. We employed both PCR-based (short mtDNA fragments, 94 samples) and high-throughput sequencing-based methods (whole mitochondrial genomes, 52 samples) on archival, noninvasively collected, and fresh samples to investigate the distribution of intraspecific genetic variation. Our comprehensive sampling, coupled with the improved resolution of mitochondrial genome analyses, provided strong support for a deep split between Mainland and Sundaic Leopard cats. Although we identified multiple haplogroups within the species' distribution, we found no matrilineal evidence for the distinction of 12 subspecies. In the context of Leopard cat biogeography, we cautiously recommend a revision of the Prionailurus bengalensis subspecific taxonomy: namely, a reduction to 4 subspecies (2 mainland and 2 Sundaic forms).
Dyskalkulie
(2017)
Background
Pronounced difficulties in acquiring basic arithmetic skills despite otherwise average school performance are referred to as arithmetic disorder or dyscalculia. About 5% of the primary school population is affected. The causes and symptoms are as varied as the methods of differential intervention and therapy.
Material and Methods
Selective literature review on developmental dyscalculia drawing on the various scientific disciplines concerned with the subject.
Results
The acquisition of number processing and arithmetic skills is understood as an experience-dependent neuroplastic maturation process that leads to a complex, specialized neuronal network and gives rise to different cognitive number representations. The development of these domain-specific abilities depends on the development of domain-general abilities such as attention, working memory, language, and visuospatial skills. Disturbances of these maturation processes can affect different components of this complex cognitive system at different developmental stages and therefore present a heterogeneous clinical picture. Special education, learning therapy, and, where appropriate, medical measures require differential diagnostics and indication. Modern computer-based learning software can support both classroom didactics and learning-therapy approaches.
Conclusion
Early identification as well as differential and individualized support can reduce the risk of secondary emotional disorders. Diagnostics and treatment of dyscalculia should be evidence-based and guideline-oriented and should take into account the complexity and diversity of symptom presentations.
Alternative electron acceptors are being actively explored in order to advance the development of bulk-heterojunction (BHJ) organic solar cells (OSCs). The indene-C60 bisadduct (ICBA) has been regarded as a promising candidate, as it provides high open-circuit voltage in BHJ solar cells; however, the photovoltaic performance of such ICBA-based devices is often inferior when compared to cells with the omnipresent PCBM electron acceptor. Here, by pairing the high performance polymer (FTAZ) as the donor with either PCBM or ICBA as the acceptor, we explore the physical mechanism behind the reduced performance of the ICBA-based device. Time delayed collection field (TDCF) experiments reveal reduced, yet field-independent free charge generation in the FTAZ:ICBA system, explaining the overall lower photocurrent in its cells. Through the analysis of the photoluminescence, photogeneration, and electroluminescence, we find that the lower generation efficiency is neither caused by inefficient exciton splitting, nor do we find evidence for significant energy back-transfer from the CT state to singlet excitons. In fact, the increase in open-circuit voltage when replacing PCBM by ICBA is entirely caused by the increase in the CT energy, related to the shift in the LUMO energy, while changes in the radiative and nonradiative recombination losses are nearly absent. On the other hand, space charge limited current (SCLC) and bias-assisted charge extraction (BACE) measurements consistently reveal a severely lower electron mobility in the FTAZ:ICBA blend. Studies of the blends with resonant soft X-ray scattering (R-SoXS), grazing-incidence wide-angle X-ray scattering (GIWAXS), and scanning transmission X-ray microscopy (STXM) reveal very little difference in the mesoscopic morphology but significantly less nanoscale molecular ordering of the fullerene domains in the ICBA-based blends, which we propose as the main cause for the lower generation efficiency and smaller electron mobility. Calculations of the JV curves with an analytical model, using measured values, show good agreement with the experimentally determined JV characteristics, proving that these devices suffer from slow carrier extraction, resulting in significant bimolecular recombination losses. Therefore, this study highlights the importance of high charge carrier mobility for newly synthesized acceptor materials, in addition to having suitable energy levels.
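For context on the SCLC mobility measurements mentioned above, single-carrier SCLC data are commonly analyzed with the Mott-Gurney law (the abstract does not state the exact fitting procedure used in this study):

J_{\mathrm{SCLC}} = \frac{9}{8}\,\varepsilon_0 \varepsilon_r\, \mu\, \frac{V^2}{L^3},

where \varepsilon_0 \varepsilon_r is the permittivity of the blend, \mu the charge-carrier mobility, V the (built-in-corrected) applied voltage, and L the active-layer thickness.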
We contribute to the theoretical understanding of randomized search heuristics by investigating their optimization behavior on satisfiable random k-satisfiability instances both in the planted solution model and the uniform model conditional on satisfiability. Denoting the number of variables by n, our main technical result is that the simple (1+1) evolutionary algorithm with high probability finds a satisfying assignment in time when the clause-variable density is at least logarithmic. For low density instances, evolutionary algorithms seem to be less effective, and all we can show is a subexponential upper bound on the runtime for densities below . We complement these mathematical results with numerical experiments on a broader density spectrum. They indicate that, indeed, the (1+1) EA is less efficient on lower densities. Our experiments also suggest that the implicit constants hidden in our main runtime guarantee are low. Our main result extends and considerably improves the result obtained by Sutton and Neumann (Lect Notes Comput Sci 8672:942-951, 2014) in terms of runtime, minimum density, and clause length. These improvements are made possible by establishing a close fitness-distance correlation in certain parts of the search space. This approach might be of independent interest and could be useful for other average-case analyses of randomized search heuristics. While the notion of a fitness-distance correlation has been around for a long time, to the best of our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.
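For readers unfamiliar with the algorithm, here is a minimal, generic Python sketch of a (1+1) EA maximizing the number of satisfied clauses of a k-CNF formula; it is a textbook formulation for illustration, not necessarily the exact variant analyzed in the paper.

# Minimal (1+1) EA on a MAX-k-SAT-style fitness (number of satisfied clauses).
import random

def count_satisfied(assignment, clauses):
    # each clause is a list of signed 1-based literals, e.g. [3, -7, 12]
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def one_plus_one_ea(clauses, n, max_iters=100_000):
    x = [random.random() < 0.5 for _ in range(n)]        # random initial assignment
    fx = count_satisfied(x, clauses)
    for _ in range(max_iters):
        y = [not b if random.random() < 1.0 / n else b for b in x]  # flip each bit with prob. 1/n
        fy = count_satisfied(y, clauses)
        if fy >= fx:                                      # accept if not worse
            x, fx = y, fy
        if fx == len(clauses):                            # satisfying assignment found
            break
    return x, fx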
Objective: Psychotherapy for hypochondriasis has greatly improved over the last decades and cognitive-behavioral treatments are most promising. However, research on predictors of treatment outcome for hypochondriasis is rare. Possible predictors of treatment outcome in cognitive therapy (CT) and exposure therapy (ET) for hypochondriasis were investigated. Method: Characteristics and behaviors of 75 patients were considered as possible predictors: sociodemographic variables (sex, age, and cohabitation); psychopathology (pretreatment hypochondriacal symptoms, comorbid mental disorders, and levels of depression, anxiety, and somatic symptoms); and patient in-session interpersonal behavior. Results: Severity of pretreatment hypochondriacal symptoms, comorbid mental disorders, and patient in-session interpersonal behavior were significant predictors in multiple hierarchical regression analyses. Interactions between the predictors and the treatment (CT or ET) were not found. Conclusions: In-session interpersonal behavior is an important predictor of outcome. Furthermore, there are no specific contraindications to treating hypochondriasis with CT or ET.
Perovskite solar cells now compete with their inorganic counterparts in terms of power conversion efficiency, not least because of their small open-circuit voltage (V-OC) losses. A key to surpass traditional thin-film solar cells is the fill factor (FF). Therefore, more insights into the physical mechanisms that define the bias dependence of the photocurrent are urgently required. In this work, we studied charge extraction and recombination in efficient triple cation perovskite solar cells with undoped organic electron/hole transport layers (ETL/HTL). Using integral time of flight we identify the transit time through the HTL as the key figure of merit for maximizing the fill factor (FF) and efficiency. Complementarily, intensity-dependent photocurrent and V-OC measurements elucidate the role of the HTL on the bias dependence of non-radiative and transport-related loss channels. We show that charge transport losses can be completely avoided under certain conditions, yielding devices with FFs of up to 84%. Optimized cells exhibit power conversion efficiencies of above 20% for 6 mm²-sized pixels and 18.9% for a device area of 1 cm². These are record efficiencies for hybrid perovskite devices with dopant-free transport layers, highlighting the potential of this device technology to avoid charge-transport limitations and to approach the Shockley-Queisser limit.
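As a reminder of the figures of merit discussed here (these are the standard definitions, not quantities specific to this paper), the fill factor and power conversion efficiency are

\mathrm{FF} = \frac{J_{\mathrm{MPP}}\, V_{\mathrm{MPP}}}{J_{\mathrm{SC}}\, V_{\mathrm{OC}}}, \qquad \mathrm{PCE} = \frac{\mathrm{FF}\, J_{\mathrm{SC}}\, V_{\mathrm{OC}}}{P_{\mathrm{in}}},

where J_MPP and V_MPP are the current density and voltage at the maximum power point, J_SC the short-circuit current density, V_OC the open-circuit voltage, and P_in the incident light power density.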
From the contents:
- Collectively to one's rights: the collective complaints mechanism under the European Social Charter
- 30 years since ratification of the UN Convention on the Rights of the Child. Where does the principle of the primacy of the child's best interests stand? On the state of implementation in Germany, using child-friendly justice as an example
- ECtHR, Vavřička and Others v. the Czech Republic (47621/13), judgment of 8 April 2021 – compulsory vaccinations for children
Primary school educators take part in specific movement-oriented continuing education programmes. Numerous studies in the context of further and continuing education describe the conditions under which participation has beneficial or hindering effects. Didactic-conceptual considerations frequently discuss how external circumstances, for instance with regard to temporal, spatial, or content-related dimensions, should be designed so that educational offerings achieve particular effects in the school system. Under which conditions, so to speak, can specific content be conveyed to teachers in a favourable way, so that a transfer effect of (system-)relevant knowledge into the school system can take place through them?
This research does not foreground such a discourse on conditions as a basis for discussing effective transfer strategies for educational offerings. At its centre is the question of educators' own reasons for participating and learning, and how they relate to their continuing education. This approach shifts the perspective on the topic and allows an engagement with subjects within the framework of a discourse of reasons. In an empirical-qualitative study, narrative interviews were conducted with eleven graduates of a movement-oriented continuing education programme; the data were analysed using the documentary method. The results of the reconstruction are presented in the form of two case descriptions and four typical figures of reasoning developed in the study: the figure of learning, the figure of knowledge management, the figure of curious searching, and the figure of physical activity. Beyond the reconstruction of patterns of reasons for participation and learning, it becomes clear that participating and learning do not follow different logics of access with regard to meaning-reason relations. Rather, both expansive and defensive reasons for learning can be identified within reasons for participation.
We investigate the effect of the COVID-19 pandemic on self-employed people’s mental health. Using representative longitudinal survey data from Germany, we reveal differential effects by gender: whereas self-employed women experienced a substantial deterioration in their mental health, self-employed men displayed no significant changes up to early 2021. Financial losses are important in explaining these differences. In addition, we find larger mental health responses among self-employed women who were directly affected by government-imposed restrictions and bore an increased childcare burden due to school and daycare closures. We also find that self-employed individuals who are more resilient coped better with the crisis.
In light of climate change mitigation efforts, revenues from climate policies are growing, with no consensus yet on how they should be used. Potential efficiency gains from reducing distortionary taxes and the distributional implications of different revenue recycling schemes are currently debated. To account for household heterogeneity and dynamic trade-offs, we study the macroeconomic and welfare performance of different revenue recycling schemes using an Environmental Two-Agent New-Keynesian model, calibrated to the German economy. We find that, in the long run, welfare gains are higher when revenues are used to reduce distortionary taxes on capital, but this comes at the cost of higher inequality: while all households prefer labor income tax reductions to lump-sum transfers, only financially unconstrained households are better off when taxes on capital income are reduced. Interestingly, we find that over the transition period relevant for meeting short- to medium-run climate targets, labor income tax cuts are the most efficient and equitable instrument.
Digital transformation (DT) has not only been a major challenge in recent years, it is also supposed to continue to enormously impact our society and economy in the forthcoming decade. On the one hand, digital technologies have emerged, diffusing and determining our private and professional lives. On the other hand, digital platforms have leveraged the potentials of digital technologies to provide new business models. These dynamics have a massive effect on individuals, companies, and entire ecosystems. Digital technologies and platforms have changed the way persons consume or interact with each other. Moreover, they offer companies new opportunities to conduct their business in terms of value creation (e.g., business processes), value proposition (e.g., business models), or customer interaction (e.g., communication channels), i.e., the three dimensions of DT. However, they also can become a threat for a company's competitiveness or even survival. Eventually, the emergence, diffusion, and employment of digital technologies and platforms bear the potential to transform entire markets and ecosystems.
Against this background, IS research has explored and theorized the phenomena in the context of DT in the past decade, but not to its full extent. This is not surprising, given the complexity and pervasiveness of DT, which still requires far more research to further understand DT with its interdependencies in its entirety and in greater detail, particularly through the IS perspective at the confluence of technology, economy, and society. Consequently, the IS research discipline has determined and emphasized several relevant research gaps for exploring and understanding DT, including empirical data, theories as well as knowledge of the dynamic and transformative capabilities of digital technologies and platforms for both organizations and entire industries.
Hence, this thesis aims to address these research gaps on the IS research agenda and consists of two streams. The first stream of this thesis includes four papers that investigate the impact of digital technologies on organizations. In particular, these papers study the effects of new technologies on firms (paper II.1) and their innovative capabilities (II.2), the nature and characteristics of data-driven business models (II.3), and current developments in research and practice regarding on-demand healthcare (II.4). Consequently, the papers provide novel insights on the dynamic capabilities of digital technologies along the three dimensions of DT. Furthermore, they offer companies some opportunities to systematically explore, employ, and evaluate digital technologies to modify or redesign their organizations or business models.
The second stream comprises three papers that explore and theorize the impact of digital platforms on traditional companies, markets, and the economy and society at large. Here, paper III.1 examines the implications for the business of traditional insurance companies of the emergence and diffusion of multi-sided platforms, particularly in terms of value creation, value proposition, and customer interaction. Paper III.2 approaches the platform impact more holistically and investigates how the ongoing digital transformation and "platformization" in healthcare lastingly transform value creation in the healthcare market. Paper III.3 moves on from the level of single businesses or markets to the regulatory problems that the platform economy creates for the economy and society, and proposes appropriate regulatory approaches for addressing these problems. Hence, these papers bring new insights to the table about the transformative capabilities of digital platforms for incumbent companies in particular and entire ecosystems in general.
Altogether, this thesis contributes to the understanding of the impact of DT on organizations and markets through multiple case-study analyses that are systematically reflected against the current state of the art in research. On this empirical basis, the thesis also provides conceptual models, taxonomies, and frameworks that help describe, explain, or predict the impact of digital technologies and digital platforms on companies, markets, and the economy or society at large from an interdisciplinary viewpoint.
Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk with any of these usage types comes from the contact with groundwater or surface water. It causes an uncontrolled dissolution of salt rock, which in the worst case can result in the flooding or collapse of underground facilities. Especially along potash seams, cavernous structures can spread quickly, because potash salts show a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. Therefore, the objective of the present thesis is to describe the evolution of cavernous structures along potash seams in space and time in order to improve hazard mitigation during the utilization of salt deposits.
The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous sponge like structure. With the saturation of the solution increasing, various secondary minerals are formed, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where solution is entering. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called ‘interchange’. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate and mineralogy are systematically investigated, taking also heterogeneous potash seams into account. The results show that basically four different cases can be distinguished, with mixed forms being a frequent occurrence in nature. The classification scheme is based on the dimensionless numbers Péclet and Damköhler, and allows for a first assessment of the hazard potential. In future, the model can be applied to any field case, using measurement data for calibration.
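For orientation, the classification sketched above relies on the dimensionless Péclet and Damköhler numbers; in one common convention (the thesis may define them slightly differently) they read

\mathrm{Pe} = \frac{v\,L}{D}, \qquad \mathrm{Da} = \frac{k\,L}{v},

where v is the flow velocity, L a characteristic length of the cavernous structure, D the diffusion/dispersion coefficient, and k an effective dissolution rate constant; Pe compares advective to diffusive transport, while Da compares the reaction rate to the rate of advective transport.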
The presented research work provides a reactive transport model that is able to spatially and temporally characterize the propagation of cavernous structures along potash seams for the first time. Furthermore, it allows to determine thickness and composition of transition zones between cavern center and unaffected salt rock. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding can thus be reduced. The models may also contribute to an improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions regarding the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as integrity or stability loss, as well as of suitable mitigation measures.
Plant metabolism is the main process of converting assimilated carbon into the different compounds that are crucial for plant growth and therefore crop yield, which makes it an important research topic. Although major advances in understanding the genetic principles contributing to metabolism and yield have been made, little is known about the genetics responsible for trait variation or canalization, although the concepts have been known for a long time. In light of a growing global population and progressing climate change, understanding the canalization of metabolism and yield seems ever more important to ensure food security. Our group has recently found canalization metabolite quantitative trait loci (cmQTL) for tomato fruit metabolism, showing that the concept of canalization applies to metabolism. In this work, two approaches to investigate plant metabolic canalization and one approach to investigate yield canalization are presented.
In the first project, primary and secondary metabolic data from Arabidopsis thaliana and Phaseolus vulgaris leaf material, obtained from plants grown under different conditions, were used to calculate cross-environment coefficients of variation (CV) or fold changes of metabolite levels per genotype, which served as input for genome-wide association studies. While primary metabolites have lower CVs across conditions and show few and mostly weak associations with genomic regions, secondary metabolites have higher CVs and show more and stronger metabolite-genome associations. Both potential regulatory genes and metabolic genes can be found as candidates, albeit most metabolic genes are rarely directly related to the target metabolites, suggesting a role both for potential regulatory mechanisms and for metabolic network structure in the canalization of metabolism.
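A minimal sketch of how such a cross-environment CV per genotype and metabolite could be computed in Python with pandas follows; the file and column names are illustrative assumptions, and the thesis may compute CV on transformed values.

# Cross-environment coefficient of variation per genotype and metabolite.
import pandas as pd

df = pd.read_csv("metabolite_levels.csv")     # columns: genotype, environment, metabolite, level

cv = (
    df.groupby(["genotype", "metabolite"])["level"]
      .agg(lambda x: x.std() / x.mean())      # CV = sd / mean across environments
      .rename("cv")
      .reset_index()
)
cv.to_csv("cv_per_genotype.csv", index=False)  # e.g. to be used as a GWAS phenotype table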
In the second project, candidate genes from the Solanum lycopersicum cmQTL mapping were selected and CRISPR/Cas9-mediated gene-edited tomato lines were created to validate the genes' roles in the canalization of metabolism. The obtained mutants either showed strongly aberrant developmental phenotypes or appeared wild type-like. One phenotypically inconspicuous mutant of a pantothenate kinase, selected as a candidate for malic acid canalization, shows a significant increase of CV across different watering conditions. Another such mutant of a protein putatively involved in amino acid transport, selected as a candidate for phenylalanine canalization, shows a similar tendency towards increased CV without statistical significance. This potential role of two genes involved in metabolism supports the hypothesis that the structure of metabolism is relevant for its own stability.
In the third project, a mutant of a putative disulfide isomerase, important for thylakoid biogenesis, is characterized by a multi-omics approach. The mutant had been characterized previously in a yield stability screening and showed a variegated leaf phenotype, ranging from green leaves with wild type levels of chlorophyll over differently patterned variegated leaves to completely white leaves almost devoid of photosynthetic pigments. White mutant leaves show wild type transcript levels of photosystem assembly factors, with the exception of ELIP and DEG orthologs, indicating a stagnation at an etioplast-to-chloroplast transition state. Green mutant leaves show an upregulation of these assembly factors, possibly acting as overcompensation for the partially defective disulfide isomerase, which seems sufficient for proper chloroplast development, as confirmed by a wild type-like proteome. Likely as a result of this phenotype, a general stress response, a shift towards sink-like tissue, and abnormal thylakoid membranes strongly alter the metabolic profile of white mutant leaves. As the severity and pattern of variegation vary from plant to plant and may be affected by external factors, the yield instability may be caused by a decanalized ability to fully exploit the whole leaf surface area for photosynthetic activity.
The olfactomotor system is mainly investigated by examining sniffing in reaction to olfactory stimuli. The motor output of respiratory-independent muscles has seldom been considered with regard to possible influences of smells. The Adaptive Force (AF) characterizes the capability of the neuromuscular system to adapt to external forces in a holding manner and has been suggested to be more vulnerable to possible interfering stimuli due to the underlying complex control processes. The aim of this pilot study was to measure the effects of olfactory inputs on the AF of the hip and elbow flexors, respectively. The AF of 10 subjects was examined manually by experienced testers while the subjects smelled sniffing sticks with neutral, pleasant, or disgusting odours. The reaction force and the limb position were recorded by a handheld device. The results show, inter alia, a significantly lower maximal isometric AF and a significantly higher AF at the onset of oscillations when perceiving disgusting odours compared with pleasant or neutral odours (p < 0.001). The adaptive holding capacity seems to reflect the functionality of neuromuscular control, which can be impaired by disgusting olfactory inputs. An undisturbed, functioning neuromuscular system appears to be characterized by proper length-tension control and by an earlier onset of mutual oscillations during an external force increase. This highlights the strong connection between olfaction and motor control, also with regard to respiratory-independent muscles.
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied to gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and offers a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e., uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated against each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking especially of prior knowledge approaches, which facilitates their applicability and, for the first time, enables a comprehensive assessment of their effectiveness.
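As a rough illustration of what such a benchmark compares (this is a generic sketch and not Comprior's actual interface; the data, selector choices and parameters below are hypothetical), the following Python snippet evaluates two standard feature selection approaches on a synthetic gene-expression-like matrix with respect to classification performance and selection robustness:

# Generic feature-selection benchmark sketch (not the Comprior API):
# compare two selectors by cross-validated accuracy and selection stability.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=100, n_features=2000, n_informative=30,
                           random_state=0)  # stand-in for expression data

selectors = {"anova": f_classif, "mutual_info": mutual_info_classif}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, score_func in selectors.items():
    pipe = make_pipeline(SelectKBest(score_func, k=50),
                         LogisticRegression(max_iter=1000))
    acc = cross_val_score(pipe, X, y, cv=cv).mean()  # classification performance

    # Robustness: mean pairwise Jaccard overlap of the gene panels selected
    # on the training part of each fold (higher = more stable selection).
    panels = []
    for train_idx, _ in cv.split(X, y):
        sel = SelectKBest(score_func, k=50).fit(X[train_idx], y[train_idx])
        panels.append(set(np.flatnonzero(sel.get_support())))
    stability = np.mean([len(a & b) / len(a | b)
                         for i, a in enumerate(panels) for b in panels[i + 1:]])
    print(f"{name}: accuracy={acc:.3f}, stability={stability:.3f}")

A full benchmark along the lines described above would additionally report runtime and a biological-relevance score for the selected genes, e.g. by querying a knowledge base.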
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there have led to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example is the freshwater fish genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne, relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
The intensification of Northern Hemisphere glaciations at the end of the Pliocene epoch marks one of the most substantial climatic shifts of the Cenozoic. Despite global cooling, sea surface temperatures in the high-latitude North Atlantic Ocean rose between 2.9 and 2.7 million years ago. Here we present sedimentary geochemical proxy data from the Gulf of Cadiz to reconstruct the variability of Mediterranean Outflow Water, an important heat source to the North Atlantic. We find evidence for enhanced production of Mediterranean Outflow from the mid-Pliocene to the late Pliocene, which we infer could have driven a sub-surface heat channel into the high-latitude North Atlantic. We then use Earth System Models to constrain the impact of enhanced Mediterranean Outflow production on the northward heat transport in the North Atlantic. In accord with the proxy data, the numerical model results support the formation of a sub-surface channel that pumped heat from the subtropics into the high-latitude North Atlantic. We further suggest that this mechanism could have delayed ice sheet growth at the end of the Pliocene.
Against the backdrop of a world fighting dramatic global warming caused by human activities, research towards the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in satisfying the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as light absorber have seen impressive development within the last decade, nowadays reaching efficiencies comparable to mature photovoltaic technologies like silicon solar cells. Yet, there are still several roadblocks to overcome before widespread commercialization of such devices becomes possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. In order for the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization results in adjusted energy level alignment, reduced interfacial losses, and improved stability.
First, a strategy to tune the perovskite's energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously obtaining a shift in the vacuum level position and a saturation of the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole. The magnitude of the shift can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
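For orientation, the magnitude of such a vacuum-level shift produced by a layer of oriented molecular dipoles is commonly estimated with the Helmholtz relation (a textbook expression, not a result quoted from this dissertation), where N is the areal dipole density, \mu_\perp the dipole component normal to the surface, and \varepsilon_r an effective dielectric constant of the monolayer:

\Delta\Phi = \frac{e\, N\, \mu_\perp}{\varepsilon_0\, \varepsilon_r}

For typical monolayer densities of a few molecules per square nanometre and dipole moments of a few debye, this relation yields shifts of a few hundred meV, consistent with the range reported above.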
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions ("triple cation perovskite" and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport layer interface. Upon tailored perovskite surface functionalization, the MAPbBr3 devices show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the magnitude of the energetic offset relative to the other losses in the devices.
Finally, the method presented above is further developed by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. The HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties, except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and improved operational stability during maximum-power-point tracking, in addition to reduced hysteresis. XB promotes the formation of a high-quality interface by anchoring to the halide ions and forming a stable and ordered interfacial layer, making it a particularly interesting interaction for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this frame, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
Strong as a Hippo’s Heart: Biomechanical Hippo Signaling During Zebrafish Cardiac Development
(2021)
The heart is comprised of multiple tissues that contribute to its physiological functions. During development, the growth of myocardium and endocardium is coupled and morphogenetic processes within these separate tissue layers are integrated. Here, we discuss the roles of mechanosensitive Hippo signaling in growth and morphogenesis of the zebrafish heart. Hippo signaling is involved in defining numbers of cardiac progenitor cells derived from the secondary heart field, in restricting the growth of the epicardium, and in guiding trabeculation and outflow tract formation. Recent work also shows that myocardial chamber dimensions serve as a blueprint for Hippo signaling-dependent growth of the endocardium. Evidently, Hippo pathway components act at the crossroads of various signaling pathways involved in embryonic zebrafish heart development. Elucidating how biomechanical Hippo signaling guides heart morphogenesis has direct implications for our understanding of cardiac physiology and pathophysiology.
Implementing innovation laboratories to leverage intrapreneurship is an increasingly popular organizational practice. A typical feature of these creative environments is semi-autonomous teams in which multiple members collectively exert leadership influence, thereby challenging traditional command-and-control conceptions of leadership. An extensive body of research on the team-centric concept of shared leadership has recognized the potential of pluralized leadership structures for enhancing team effectiveness; however, little empirical work has been conducted in organizational contexts in which creativity is key. This study set out to explore antecedents of shared leadership and its influence on team creativity in an innovation lab. Building on extant shared leadership and innovation research, we propose antecedents customary to creative teamwork, that is, experimental culture, task reflexivity, and voice. Multisource data were collected from 104 team members and 49 evaluations of 29 coaches nested in 21 teams working in a prototypical innovation lab. We identify factors specific to creative teamwork that facilitate the emergence of shared leadership by providing room for experimentation, encouraging team members to speak up in the creative process, and cultivating a reflective application of entrepreneurial thinking. We provide specific exemplary activities for innovation lab teams to increase levels of shared leadership.
Macrophages play an integral role in the innate immune system. For basic research and therapeutic applications, it is critically important to find approaches to modulate their function as the first line of defense. Transient genetic engineering via delivery of synthetic mRNA can serve such purposes as a robust, reliable and safe technology to modulate macrophage functions. However, a major drawback, particularly in the transfection of sensitive immune cells such as macrophages, is the immunogenicity of exogenous IVT-mRNAs. Consequently, the direct modulation of human macrophage activity by mRNA-mediated genetic engineering was the aim of this work. Synthetic mRNA can instruct macrophages to synthesize specific target proteins, which can steer macrophage activity in a tailored fashion. Thus, the focus of this dissertation was to identify parameters triggering unwanted immune activation of macrophages, and to find approaches to minimize such effects. When comparing different carrier types as well as mRNA chemistries, the latter unequivocally had a more pronounced impact on activation of human macrophages and monocytes. Exploratory investigations revealed that the choice of nucleoside chemistry, particularly of modified uridine, plays a crucial role for IVT-mRNA-induced immune activation, in a dose-dependent fashion. Additionally, the contribution of the various 5' cap structures tested was only minor. Moreover, to address the technical aspects of the delivery of multiple genes, which is often mandatory for advanced gene delivery studies, two different strategies of payload design were investigated, namely "bicistronic" delivery and "monocistronic" co-delivery. The side-by-side comparison of mRNA co-delivery via a bicistronic design (two genes, one mRNA) with a monocistronic design (two genes, two mRNAs) unexpectedly revealed that, despite the intrinsic equimolar nature of the bicistronic approach, it was outperformed by the monocistronic approach in terms of reliable co-expression when quantified on the single-cell level. Overall, the incorporation of chemical modifications into IVT-mRNA by using the respective building blocks, primarily with the aim of minimizing immune activation as exemplified in this thesis, has the potential to facilitate the selection of the proper mRNA chemistry to address specific biological and clinical challenges. The technological aspects of gene delivery evaluated and validated by the quantitative methods allowed us to shed light on crucial process parameters and mRNA design criteria required for reliable co-expression schemes of IVT-mRNA delivery.
Objective
Insulin regulates mitochondrial function, thereby propagating an efficient metabolism. Conversely, diabetes and insulin resistance are linked to mitochondrial dysfunction with a decreased expression of the mitochondrial chaperone HSP60. The aim of this investigation was to determine the effect of a reduced HSP60 expression on the development of obesity and insulin resistance.
Methods
Control and heterozygous whole-body HSP60 knockout (Hsp60+/−) mice were fed a high-fat diet (HFD, 60% calories from fat) for 16 weeks and subjected to extensive metabolic phenotyping. To understand the effect of HSP60 on white adipose tissue (WAT), microarray analysis of gonadal WAT and ex vivo experiments were performed, and a lentiviral knockdown of HSP60 in 3T3-L1 cells was conducted to gain detailed insights into the effect of reduced HSP60 levels on adipocyte homeostasis.
Results
Male Hsp60+/− mice exhibited lower body weight with lower fat mass. These mice exhibited improved insulin sensitivity compared to control, as assessed by Matsuda Index and HOMA-IR. Accordingly, insulin levels were significantly reduced in Hsp60+/− mice in a glucose tolerance test. However, Hsp60+/− mice exhibited an altered adipose tissue metabolism with elevated insulin-independent glucose uptake, adipocyte hyperplasia in the presence of mitochondrial dysfunction, altered autophagy, and local insulin resistance.
Conclusions
We discovered that the reduction of HSP60 in mice predominantly affects adipose tissue homeostasis, leading to beneficial alterations in body weight, body composition, and adipocyte morphology, albeit exhibiting local insulin resistance.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the field of tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations, such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations, such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation − an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified according to their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low-performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models, thereby diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking are developed. The impact of the platform economy is demonstrated using the example of the market entry of Bigtech. The role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Organic solar cells (OSCs) have in recent years reached high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives had long been the centerpiece of the acceptor materials used throughout organic photovoltaic (OPV) research; since 2015, however, novel NFAs have been a game-changer and have overtaken fullerenes. Yet the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs are still topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason behind the reduced FF in the NFA (ITIC-based) devices, namely faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA Y-series derivative, which exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we therefore illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses non-geminate recombination towards a non-Langevin regime. In the third work, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices, namely the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance dependent Photovoltage (RPV), and steady-state techniques such as Bias Assisted Charge Extraction (BACE), Temperature-Dependent Space Charge Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
Together, the outcomes of this thesis draw a complex picture of multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of the thick NFA device through adjustment of the amount of solvent additive in the active blend solution. The thesis also highlights potentially critical gaps in the current experimental understanding of fundamental charge interactions and recombination dynamics.
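For orientation, the "reduced-Langevin" terminology used here relates the measured bimolecular recombination coefficient k_2 to the classical Langevin expression (a standard textbook relation, not a formula quoted from the thesis), with q the elementary charge, \mu_n and \mu_p the electron and hole mobilities, and \varepsilon_0\varepsilon_r the permittivity of the blend; a reduction factor \gamma well below one marks a non-Langevin system:

k_\mathrm{L} = \frac{q\,(\mu_n + \mu_p)}{\varepsilon_0\,\varepsilon_r}, \qquad \gamma = \frac{k_2}{k_\mathrm{L}} < 1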
Complex networks like the Internet or social networks are fundamental parts of our everyday lives. It is essential to understand their structural properties and how these networks are formed. A game-theoretic approach to network design problems has attracted great interest in the last decades. The reason is that many real-world networks are the outcome of decentralized strategic behavior of independent agents without central coordination. Fabrikant, Luthra, Maneva, Papadimitriou, and Shenker proposed a game-theoretic model aiming to explain the formation of Internet-like networks. In this model, called the Network Creation Game, agents are associated with the nodes of a network. Each agent seeks to maximize her centrality by establishing costly connections to other agents. The model is relatively simple but shows high potential in modeling complex real-world networks. In this thesis, we contribute to the line of research on variants of the Network Creation Game. Inspired by real-world networks, we propose and analyze several novel network creation models. We aim to understand the impact of certain realistic modeling assumptions on the structure of the created networks and the involved agents' behavior.
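As a minimal sketch of the base model (the sum version of the Network Creation Game; the variants studied in this thesis modify this cost, and the graph, edge ownership and price alpha below are purely illustrative), an agent's cost is the price alpha for every edge she buys plus the sum of her graph distances to all other agents:

# Agent cost in the (sum) Network Creation Game:
# cost(v) = alpha * (#edges bought by v) + sum of shortest-path distances from v.
import networkx as nx

alpha = 2.0                                              # illustrative edge price
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])           # small example network (a 4-cycle)
bought = {0: 1, 1: 1, 2: 1, 3: 1}                        # illustrative: each agent bought one edge

def cost(v):
    dist = nx.single_source_shortest_path_length(G, v)  # distances to all nodes
    return alpha * bought[v] + sum(dist.values())

print({v: cost(v) for v in G.nodes})                     # {0: 6.0, 1: 6.0, 2: 6.0, 3: 6.0}

A strategy change, e.g. an agent buying or dropping an edge, is an improving move exactly when it lowers this cost; pure Nash equilibria are networks in which no agent has such a move.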
The first natural additional objective that we consider is the network’s robustness. We consider a game where the agents seek to maximize their centrality and, at the same time, the stability of the created network against random edge failure.
Our second point of interest is a model that incorporates an underlying geometry. We consider a network creation model where the agents correspond to points in some underlying space and where edge lengths are equal to the distances between the endpoints in that space. The geometric setting captures many physical real-world networks like transport networks and fiber-optic communication networks.
We focus on the formation of social networks and consider two models that incorporate particular realistic behavior observed in real-world networks. In the first model, we embed the anti-preferential attachment link formation. Namely, we assume that the cost of the connection is proportional to the popularity of the targeted agent. Our second model is based on the observation that the probability of two persons to connect is inversely proportional to the length of their shortest chain of mutual acquaintances.
For each of the four models above, we provide a complete game-theoretic analysis. In particular, we focus on distinctive structural properties of the equilibria, the hardness of computing a best response, and the quality of equilibria in comparison to centrally designed, socially optimal networks. We also analyze the game dynamics, i.e., the process of sequential strategic improvements by the agents, and study the convergence to an equilibrium state and its properties.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, the so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, and its index can be calculated by a formula similar to the Riemannian case.
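For orientation, the classical Atiyah–Patodi–Singer index formula for the Dirac operator D on a compact Riemannian spin manifold M with boundary (twisted by a bundle E, with product metric near the boundary and APS conditions induced by the boundary operator A) has the standard form

\operatorname{ind} D_{\mathrm{APS}} = \int_M \hat{A}(M)\wedge\operatorname{ch}(E) \;-\; \frac{h(A)+\eta(A)}{2},

where h(A) = \dim\ker A and \eta(A) is the eta invariant of A; this textbook statement is given here only as background and is not quoted from the works discussed in the thesis.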
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first-order elliptic differential operator and that still yield regularity of solutions as well as the Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting in order to understand more general types of boundary conditions for the Dirac operator, preserving the Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator compared to the Riemannian setting. It will be shown that, in contrast to the Riemannian case, passing from a Fredholm boundary condition to its orthogonal complement works out well in the Lorentzian setting. On the other hand, in order to deduce the Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions on the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final chapter, Chapter 4, a few examples of boundary conditions are worked out by applying the results of the previous chapters. By restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than in the Riemannian setting.
This dissertation is concerned with the relation between qualitative phonological organization in the form of syllabic structure and continuous phonetics, that is, the spatial and temporal dimensions of vocal tract action that express syllabic structure. The main claim of the dissertation is twofold. First, we argue that syllabic organization exerts multiple effects on the spatio-temporal properties of the segments that partake in that organization. That is, there is no unique or privileged exponent of syllabic organization. Rather, syllabic organization is expressed in a pleiotropy of phonetic indices. Second, we claim that a better understanding of the relation between qualitative phonological organization and continuous phonetics is reached when one considers how the string of segments (over which the nature of the phonological organization is assessed) responds to perturbations (scaling of phonetic variables) of localized properties (such as durations) within that string. Specifically, variation in phonetic variables and more specifically prosodic variation is a crucial key to understanding the nature of the link between (phonological) syllabic organization and the phonetic spatio-temporal manifestation of that organization. The effects of prosodic variation on segmental properties and on the overlap between the segments, we argue, offer the right pathway to discover patterns related to syllabic organization. In our approach, to uncover evidence for global organization, the sequence of segments partaking in that organization as well as properties of these segments or their relations with one another must be somehow locally varied. The consequences of such variation on the rest of the sequence can then be used to unveil the span of organization. When local perturbations to segments or relations between adjacent segments have effects that ripple through the rest of the sequence, this is evidence that organization is global. If instead local perturbations stay local with no consequences for the rest of the whole, this indicates that organization is local.
Extending synchrotron X-ray refraction techniques to the quantitative analysis of metallic materials
(2022)
In this work, two X-ray refraction based imaging methods, namely synchrotron X-ray refraction radiography (SXRR) and synchrotron X-ray refraction computed tomography (SXRCT), are applied to quantitatively analyze cracks and porosity in metallic materials. SXRR and SXRCT exploit the refraction of X-rays at inner surfaces of the material, e.g., the surfaces of cracks and pores, for image contrast. Both methods are therefore sensitive to smaller defects than their absorption-based counterparts, X-ray radiography and computed tomography, and can detect defects of nanometric size. So far the methods have been applied to the analysis of ceramic materials and fiber-reinforced plastics. The analysis of metallic materials requires higher photon energies to achieve sufficient X-ray transmission due to their higher density. Because the refractive index depends on the photon energy, this causes smaller refraction angles and, thus, lower image contrast. Here, for the first time, a conclusive study is presented exploring the possibility of applying SXRR and SXRCT to metallic materials. It is shown that both methods can be optimized to overcome the reduced contrast caused by the smaller refraction angles. Hence, the only remaining limitation is the achievable X-ray transmission, which is common to all X-ray imaging methods. Further, a model for the quantitative analysis of the inner surfaces is presented and verified.
For this purpose, four case studies are conducted, each posing a specific challenge to the imaging task. Case study A investigates cracks in a coupon taken from an aluminum weld seam. This case study primarily serves to verify the model for quantitative analysis and to prove the sensitivity to sub-resolution features. In case study B, the damage evolution in an aluminum-based particle-reinforced metal-matrix composite is analyzed. Here, the accuracy and repeatability of subsequent SXRR measurements are investigated, showing that measurement errors of less than 3 % can be achieved. Further, case study B marks the first application of SXRR in combination with in-situ tensile loading. Case study C comes from the highly topical field of additive manufacturing. Here, porosity in additively manufactured Ti-Al6-V4 is analyzed with a special interest in the pore morphology. A classification scheme based on SXRR measurements is devised which makes it possible to distinguish binding defects from keyhole pores even if the defects cannot be spatially resolved. In case study D, SXRCT is applied to the analysis of hydrogen-assisted cracking in steel. Due to the high X-ray attenuation of steel, a comparatively high photon energy of 50 keV is required here. This causes increased noise and lower contrast in the data compared to the other case studies. However, despite the lower data quality, a quantitative analysis of the occurrence of cracks as a function of hydrogen content and applied mechanical load is possible.
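For orientation, the energy dependence noted above follows from the standard form of the X-ray refractive index, n = 1 - \delta + i\beta (a textbook relation, not a formula taken from this thesis): the decrement \delta scales with the square of the wavelength and thus falls off with the inverse square of the photon energy, which is why higher photon energies lead to smaller refraction angles and weaker refraction contrast. Here r_e is the classical electron radius and \rho_e the electron density of the material:

\delta = \frac{r_e\, \lambda^2}{2\pi}\, \rho_e \;\propto\; \frac{1}{E^2}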
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: These languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers on executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurance, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing the respective processes. They include flexibility during design and execution, ad-hoc adaptation to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited for this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending fragment-based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics assert that a case adheres to the constraints in both the process fragments and the data models. Among other things, multiplicity constraints must not be violated. Furthermore, the semantics are extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
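As a generic illustration of the target formalism (an ordinary, uncolored Petri net with the usual firing rule; this is not the Wickr-to-colored-Petri-net translation itself, and the place and transition names are hypothetical), a transition is enabled when each of its input places holds a token, and firing it consumes those tokens and produces tokens on its output places:

# Minimal (uncolored) Petri net: places hold token counts, transitions fire
# when all their input places are marked, consuming and producing tokens.
marking = {"case_created": 1, "data_collected": 0, "decision_made": 0}

transitions = {  # hypothetical transitions: (input places, output places)
    "collect_data": (["case_created"], ["data_collected"]),
    "decide": (["data_collected"], ["decision_made"]),
}

def enabled(t):
    inputs, _ = transitions[t]
    return all(marking[p] > 0 for p in inputs)

def fire(t):
    inputs, outputs = transitions[t]
    assert enabled(t), f"transition {t} is not enabled"
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("collect_data")
fire("decide")
print(marking)  # {'case_created': 0, 'data_collected': 0, 'decision_made': 1}

A colored Petri net additionally attaches typed data values to tokens and guards to transitions, which is what lets a translation of this kind also represent data objects, associations, and multiplicity constraints.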
Objective:
We examined the subjective bio-psycho-social effects of chronic heart and vascular diseases, coping strategies, and forms of social support among rehabilitation patients facing particular occupational problems (BBPL).
Methods:
For the qualitative study, 17 patients (48.9 ± 7.0 years, 13 male) with BBPL (SIMBO-C > 30) were questioned in guideline-based interviews. The analysis was carried out with software support following Mayring's approach to qualitative content analysis.
Results:
Regarding the effects of their illness, patients named social aspects, including occupational ones, considerably more often (62% of statements) than physical or psychological factors (9% and 29%, respectively). The coping strategies applied and the support received, however, were directed predominantly at physical limitations (70% and 45%, respectively).
Conclusion:
Although the social effects of their illness were subjectively significant for the rehabilitation patients interviewed, they were only insufficiently able to develop suitable coping strategies.
On the basis of many years of personal experience the paper describes Buddhist meditation (Zazen, Vipassanā) as a mystical practice. After a short discussion of the role of some central concepts (longing, suffering, and love) in Buddhism, William James’ concept of religious experience is used to explain the goal of meditators as the achievement of a special kind of an experience of this kind. Systematically, its main point is to explain the difference between (on the one hand) a craving for pleasant ‘mental events’ in the sense of short-term moods, and (on the other) the long-term project of achieving a deep change in one’s attitude to life as a whole, a change that allows the acceptance of suffering and death. The last part argues that there is no reason to call the discussed practice irrational in a negative sense. Changes of attitude of the discussed kind cannot be brought about by argument alone. Therefore, a considered use of age-old practices like meditation should be seen as an addition, not as an undermining of reason.
Twenty-two species of ectoparasites (Family Nycteribiidae: Nycteribia (Listropoda) schmidlii schmidlii, Nycteribia (Nycteribia) latreillii, Nycteribia (Nycteribia) pedicularia, Penicillidia (Penicillidia) dufourii, and Phthiridium biarticulatum; Family Streblidae: Brachytarsina (Brachytarsina) flavipennis and Raymondia huberi; Order Siphonaptera: Rhinolophopsylla unipectinata arabs, Nycteridopsylla longiceps, Araeopsylla gestroi, Ischnopsyllus intermedius, and Ischnopsyllus octactenus; Order Heteroptera: Cimex pipistrelli, Cimex lectularius, and Cacodmus vicinus; Class Arachnida: Order Mesostigmata: Spinturnix myoti and Eyndhovenia euryalis; Order Ixodida: Family Argasidae: Argas transgariepinus and Argas vespertilionis; Family Ixodidae: Hyalomma dromedarii, Ixodes ricinus, and Ixodes vespertilionis) were recovered from 19 bat species in Algeria. New host records for bats are reported: N. schmidlii from Rh. clivosus and R. cystops; N. latreillii from Rh. blasii and P. gaisleri; R. huberi from Rh. clivosus; C. pipistrelli from E. isabellinus and H. savii; C. vicinus from E. isabellinus; S. myoti from P. gaisleri; E. euryalis from P. gaisleri and Rh. blasii; A. vespertilionis from P. gaisleri; I. ricinus from T. teniotis and Rh. hipposideros; and H. dromedarii from P. kuhlii. Raymondia huberi is recorded for the first time from Algeria.
An ‛Aukward’ tale
(2017)
One hundred and seventy-three years ago, the last two Great Auks, Pinguinus impennis, ever reliably seen were killed. Their internal organs can be found in the collections of the Natural History Museum of Denmark, but the location of their skins has remained a mystery. In 1999, Great Auk expert Errol Fuller proposed a list of five potential candidate skins in museums around the world. Here we take a palaeogenomic approach to test which—if any—of Fuller’s candidate skins likely belong to either of the two birds. Using mitochondrial genomes from the five candidate birds (housed in museums in Bremen, Brussels, Kiel, Los Angeles, and Oldenburg) and the organs of the last two known individuals, we partially solve the mystery that has been on Great Auk scholars’ minds for generations and make new suggestions as to the whereabouts of the still-missing skin from these two birds.
Molecularly imprinted polymers (MIPs) have the potential to complement antibodies in bioanalysis, are more stable under harsh conditions, and are potentially cheaper to produce. However, the affinity and especially the selectivity of MIPs are in general lower than those of their biological pendants. Enzymes are useful tools for the preparation of MIPs for both low and high-molecular weight targets: As a green alternative to the well-established methods of chemical polymerization, enzyme-initiated polymerization has been introduced and the removal of protein templates by proteases has been successfully applied. Furthermore, MIPs have been coupled with enzymes in order to enhance the analytical performance of biomimetic sensors: Enzymes have been used in MIP-sensors as tracers for the generation and amplification of the measuring signal. In addition, enzymatic pretreatment of an analyte can extend the analyte spectrum and eliminate interferences.
Year-to-year variations in crop yields can have major impacts on the livelihoods of subsistence farmers and may trigger significant global price fluctuations, with severe consequences for people in developing countries. Fluctuations can be induced by weather conditions, management decisions, weeds, diseases, and pests. Although an explicit quantification and deeper understanding of weather-induced crop-yield variability is essential for adaptation strategies, so far it has only been addressed by empirical models. Here, we provide conservative estimates of the fraction of reported national yield variabilities that can be attributed to weather by state-of-the-art, process-based crop model simulations. We find that observed weather variations can explain more than 50% of the variability in wheat yields in Australia, Canada, Spain, Hungary, and Romania. For maize, weather sensitivities exceed 50% in seven countries, including the United States. The explained variance exceeds 50% for rice in Japan and South Korea and for soy in Argentina. Avoiding water stress by simulating yields assuming full irrigation shows that water limitation is a major driver of the observed variations in most of these countries. Identifying the mechanisms leading to crop-yield fluctuations is not only fundamental for dampening fluctuations, but is also important in the context of the debate on the attribution of loss and damage to climate change. Since process-based crop models not only account for weather influences on crop yields, but also provide options to represent human-management measures, they could become essential tools for differentiating these drivers, and for exploring options to reduce future yield fluctuations.
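As a minimal sketch of this kind of attribution (illustrative only: synthetic data and a simple linear detrending, not the study's actual crop-model setup), the weather-explained share of yield variability can be expressed as the squared correlation between detrended observed yields and detrended yields simulated with observed weather:

# Illustrative calculation: fraction of observed yield variability explained
# by a weather-driven simulation, after removing a linear technology trend.
import numpy as np

years = np.arange(1980, 2011)
rng = np.random.default_rng(0)
weather_anomaly = rng.normal(0.0, 0.3, years.size)          # synthetic weather-driven effect (t/ha)
observed = 3.0 + 0.05 * (years - 1980) + weather_anomaly + rng.normal(0.0, 0.15, years.size)
simulated = 2.8 + 0.05 * (years - 1980) + weather_anomaly   # process model run with observed weather

def detrend(y, t):
    slope, intercept = np.polyfit(t, y, 1)                  # remove linear trend (management/technology)
    return y - (slope * t + intercept)

obs_anom, sim_anom = detrend(observed, years), detrend(simulated, years)
r = np.corrcoef(obs_anom, sim_anom)[0, 1]
print(f"share of observed yield variability explained by weather: R^2 = {r**2:.2f}")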
Sampling
(2022)
Zwischenbericht
(2022)
Around 1.8 million people with refugee experience are currently registered as seeking protection in Germany; their integration is a task for society as a whole. Many of these individuals are highly qualified and worked as teachers in their countries of origin. The qualification programme Lehrkräfte Plus enables migrant teachers to re-enter their profession in Germany. Since there is as yet little scientific evidence on the effectiveness of such qualification programmes, Lehrkräfte Plus is being examined in a research project at the University of Potsdam. This interim report presents first results of the accompanying scientific research, based on the initial surveys.
Organic solar cells demonstrate external quantum efficiencies and fill factors approaching those of conventional photovoltaic technologies. However, as compared with the optical gap of the absorber materials, their open-circuit voltage is much lower, largely due to the presence of significant non-radiative recombination. Here, we study a large data set of published and new material combinations and find that non-radiative voltage losses decrease with increasing charge-transfer-state energies. This observation is explained by considering non-radiative charge-transfer-state decay as electron transfer in the Marcus inverted regime, being facilitated by a common skeletal molecular vibrational mode. Our results suggest an intrinsic link between non-radiative voltage losses and electron-vibration coupling, indicating that these losses are unavoidable. Accordingly, the theoretical upper limit for the power conversion efficiency of single-junction organic solar cells would be reduced to about 25.5% and the optimal optical gap increases to (1.45-1.65) eV, that is, (0.2-0.3) eV higher than for technologies with minimized non-radiative voltage losses.
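For orientation, the classical Marcus expression for a non-adiabatic electron-transfer rate, to which the inverted regime mentioned above refers, has the standard form (textbook relation; the paper's actual treatment additionally invokes a common skeletal vibrational mode), with V the electronic coupling, \lambda the reorganization energy and \Delta G^0 the driving force:

k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,|V|^2\,\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\;\exp\!\left[-\frac{(\Delta G^{0}+\lambda)^2}{4\lambda k_{\mathrm{B}}T}\right]

In the inverted regime, |\Delta G^0| > \lambda, so the rate of non-radiative charge-transfer-state decay decreases as the charge-transfer-state energy, and hence the driving force for decay to the ground state, increases, in line with the trend reported above.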
This longitudinal study examined relationships between student-perceived teaching for meaning, support for autonomy, and support for competence in mathematics classrooms (Time 1), and students' achievement goal orientations and engagement in mathematics 6 months later (Time 2). We tested whether student-perceived instructional characteristics at Time 1 were indirectly related to student engagement at Time 2, via students' achievement goal orientations (Time 2), and whether student gender moderated these relationships. Participants were ninth and tenth graders (55.2% girls) from 46 classrooms in ten secondary schools in Berlin, Germany. Only data from students who participated at both time points were included (N = 746 out of 1118 at Time 1; dropout 33.27%). Longitudinal structural equation modeling showed that student-perceived teaching for meaning and support for competence indirectly predicted intrinsic motivation and effort, via students' mastery goal orientation. These paths were equivalent for girls and boys. The findings are significant for mathematics education in identifying motivational processes that partly explain the relationships between student-perceived teaching for meaning and competence support and intrinsic motivation and effort in mathematics.
Martin Heideggers Hölderlin-Lesungen – im Zeichen von Norbert von Hellingrath and Stefan George
(2017)
In the early 1960s, Martin Heidegger recorded ten of Hölderlin's poems for a spoken-word record produced by the Günther Neske publishing house in Pfullingen. The long-playing record, lasting about 50 minutes in total, was sold commercially from 1964 onwards. What moved a philosopher to step back behind the poet and serve merely as his mouthpiece? In his understanding of Hölderlin, Heidegger drew on Norbert von Hellingrath's conception of the poet as prophet and of poetry as sacred word. In terms of the history of recitation, his rhythmic readings in a monotonously psalmodic style derive from Hellingrath and the George circle.
Novel metal-doped bacteriostatic hybrid clay composites for point-of-use disinfection of water
(2017)
This study reports the facile microwave-assisted thermal preparation of novel metal-doped hybrid clay composite adsorbents consisting of kaolinite clay, Carica papaya seeds and/or plantain peels (Musa paradisiaca), and ZnCl2. Fourier transform IR spectroscopy, X-ray diffraction, scanning electron microscopy and Brunauer-Emmett-Teller (BET) analysis are employed to characterize these composite adsorbents. The physicochemical analysis of these composites suggests that they act as bacteriostatic rather than bactericidal agents. This bacteriostatic action is induced by the ZnO phase in the composites, whose amount correlates with the efficacy of the composite. The composite prepared with papaya seeds (PS-HYCA) provides the best disinfection efficacy (compared with the composite prepared with Musa paradisiaca peels, PP-HYCA) against gram-negative enteric bacteria, with breakthrough times of 400 and 700 min for the removal of 1.5 × 10^6 cfu/mL S. typhi and V. cholerae from water, respectively. At 10^3 cfu/mL of each bacterium in solution, 2 g of either composite adsorbent kept the levels of the bacteria in effluent solutions at zero for up to 24 h. Steam regeneration of 2 g of the bacteria-loaded Carica papaya composite adsorbent shows a loss of ca. 31% of its capacity even after the 3rd regeneration cycle of 25 h of service time. The composite adsorbent prepared with Carica papaya seeds will be useful for developing simple point-of-use water treatment systems for water disinfection. This composite adsorbent performs comparatively well, offers relatively long hydraulic contact times, and is expected to minimize the need for energy-intensive traditional treatment processes.
Molecules often fragment after photoionization in the gas phase. Usually, this process can only be investigated spectroscopically as long as electron correlation between the photofragments persists; important parameters, such as their kinetic energy after separation, cannot be investigated. We report on a femtosecond time-resolved Auger electron spectroscopy study of the photofragmentation dynamics of thymine. We observe the appearance of clearly distinguishable signatures from thymine's neutral photofragment, isocyanic acid. Furthermore, we observe a time-dependent shift of its spectrum, which we attribute to the influence of the charged fragment on the Auger electron. This allows us to map our time-dependent dataset onto the fragmentation coordinate. The time dependence of the shift supports efficient transformation of the excess energy gained from photoionization into kinetic energy of the fragments. Our method is broadly applicable to the investigation of photofragmentation processes.
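The mapping from the observed spectral shift to a fragmentation coordinate can be illustrated with a simple point-charge Coulomb model, in which the magnitude of the Auger-line shift scales as 1/R with the distance R between the neutral and the charged fragment. The delays and shift values in the sketch below are hypothetical placeholders, not data from the study.

```python
# A minimal sketch (not the authors' analysis code): converting a time-dependent
# Auger-line shift into an estimated fragment separation with a point-charge
# Coulomb model. The delay and shift values are hypothetical placeholders.
import numpy as np

COULOMB_EV_ANGSTROM = 14.3996  # e^2 / (4*pi*eps0) in eV*Angstrom

def separation_from_shift(delta_e_ev):
    """Estimate the neutral-to-charged fragment distance (Angstrom) from the
    magnitude of the Auger-electron energy shift (eV), assuming |shift| ~ 1/R."""
    return COULOMB_EV_ANGSTROM / np.asarray(delta_e_ev, dtype=float)

delays_fs = np.array([50.0, 100.0, 200.0, 400.0])   # hypothetical pump-probe delays
shifts_ev = np.array([2.0, 1.2, 0.6, 0.3])          # hypothetical |shift| values
for t, r in zip(delays_fs, separation_from_shift(shifts_ev)):
    print(f"delay {t:6.1f} fs -> estimated separation {r:5.1f} Angstrom")
```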
From the 1940s well into the 1960s, a new sociocultural constellation let American Jews redefine their relationship to the religious tradition. This article analyzes the response of a religious elite of rabbis and intellectuals to this process, which was driven by various factors. Many American Jews were at least one generation away from traditional Judaism, which seemed out of place in postwar America. Liberal Judaism, with its narrow concept of religion, on the other hand, while fitting a larger social consensus, did not satisfy many Jews' spiritual and identity needs. Sensing this deficit, rabbis and other religious thinkers explored broader concepts of Judaism. Religious journals that sprang up in the postwar decades served as vehicles for the attempt to understand Judaism in broader, cultural terms while preserving a religious core. The article shows how in this search religious thinkers turned to the Eastern European past as a resource. As other groups similarly tried to mine this past for the sake of their present agendas, its reconstruction became a key process in the transformation of postwar American Judaism and its relationship to the tradition.
The North Tabriz Fault is a seismically active fault with present-day right-lateral strike-slip movement. Restricted exposures of mafic to intermediate Late Cretaceous igneous rocks occur along the North Tabriz Fault. Whole-rock and clinopyroxene phenocryst geochemistry was studied in order to characterize the petrogenesis of these mafic rocks and their possible relation to an oceanic crust. The results indicate a tholeiitic parental magma that formed in an evolved mid-ocean ridge tectonic setting similar to that of the Icelandic Mid-Atlantic Ridge basalts. The ocean-floor basalt characteristics provide evidence for an oceanic crust along the North Tabriz Fault. Therefore, the trend of the North Tabriz Fault more likely marks a suture zone related to the closure of a branch of the Neo-Tethys Ocean in NW Iran. This fault, in addition to the Caucasus and Zagros suture zones, accommodates an important part of the convergence between the Arabian and Eurasian plates resulting from the Red Sea divergence. It is concluded that the North Tabriz Fault appears to be a possible southeastern continuation of the North Anatolian suture zone.
In this paper, we analyze stochastic dynamic pricing and advertising differential games in special oligopoly markets with constant price and advertising elasticities. We consider the sale of perishable as well as durable goods and include adoption effects in the demand. Based on a unique stochastic feedback Nash equilibrium, we derive closed-form solutions for the value functions and the optimal feedback policies of all competing firms. Efficient simulation techniques are used to evaluate optimally controlled sales processes over time. In this way, the evolution of the optimal controls as well as the firms' profit distributions is analyzed. Moreover, we are able to compare feedback solutions of the stochastic model with those of its deterministic counterpart. We show that the market power of the competing firms is exactly the same as in the deterministic version of the model. Further, we discover two fundamental effects that determine the relation between the two models. First, volatility in demand results in a decline of expected profits compared to the deterministic model. Second, we find that saturation effects in demand work in the opposite direction. We show that the second effect can be strong enough to exactly balance or even overcompensate the first. As a result, we are able to identify cases in which the feedback solutions of the deterministic model provide useful approximations to those of the stochastic model.
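To give a flavour of how such optimally controlled sales processes can be simulated, the sketch below generates one stochastic sales path under a constant price elasticity and a simple, hypothetical feedback pricing rule; it does not reproduce the paper's closed-form equilibrium policies, the advertising control, or the oligopoly setting.

```python
# A minimal sketch (assumptions, not the paper's model): one simulated sales
# path with isoelastic demand and a simple, hypothetical feedback pricing rule.
import numpy as np

rng = np.random.default_rng(0)

A, EPS = 50.0, 2.0      # demand scale and (constant) price elasticity
X0 = 100.0              # initial inventory
T, DT = 1.0, 1e-3       # horizon and time step
STEPS = int(T / DT)

def feedback_price(remaining_stock: float) -> float:
    """Hypothetical feedback rule: price increases as inventory is depleted."""
    return 1.0 + 9.0 * (1.0 - remaining_stock / X0)

stock, revenue = X0, 0.0
for k in range(STEPS):
    p = feedback_price(stock)
    intensity = A * p ** (-EPS)           # isoelastic demand rate at price p
    sales = rng.poisson(intensity * DT)   # stochastic demand in [t, t + dt)
    sales = min(sales, int(stock))        # cannot sell more than remaining stock
    stock -= sales
    revenue += sales * p
    if stock <= 0:
        break

print(f"units left: {stock:.0f}, revenue: {revenue:.2f}")
```

Repeating such runs many times yields the kind of profit distributions that the paper evaluates for its optimally controlled processes.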
The Influence of Land Use Intensity on the Plant-Associated Microbiome of Dactylis glomerata L.
(2017)
In this study, we investigated the impact of different land use intensities (LUI) on the root-associated microbiome of Dactylis glomerata (orchardgrass). For this purpose, eight sampling sites with different land use intensity levels but comparable soil properties were selected in the southwest of Germany. The experimental plots covered land use levels ranging from natural grassland to intensively managed meadows. We used 16S rRNA gene-based barcoding to assess the plant-associated community structure in the endosphere, rhizosphere, and bulk soil of D. glomerata. Samples were taken at the reproductive stage of the plant in early summer. Our data indicated that roots harbor a distinct bacterial community that clearly differed from the microbiome of the rhizosphere and bulk soil. Our results revealed Pseudomonadaceae, Enterobacteriaceae, and Comamonadaceae as the most abundant endophytes, independently of land use intensity. The rhizosphere and bulk soil were also dominated by Proteobacteria, but the most abundant families differed from those obtained from the root samples. In the soil, the effect of land use intensity was more pronounced than among the root endophytes, leading to clearly distinct bacterial community patterns under different LUI in the rhizosphere and bulk soil but not among the endophytes. Overall, a change of community structure at the plant-soil interface was observed, as the number of OTUs shared between the three compartments investigated increased with decreasing land use intensity. Thus, our findings suggest a stronger interaction of the plant with its surrounding soil under low land use intensity. Furthermore, the amount and quality of available nitrogen were identified as major drivers of shifts in microbiome structure in all compartments.
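The compartment-overlap comparison described above can be illustrated with a small sketch that counts shared OTUs from a presence/absence table; the table layout and numbers below are hypothetical, not the study's data or pipeline.

```python
# A minimal sketch (hypothetical data, not the study's pipeline): counting OTUs
# shared by the endosphere, rhizosphere, and bulk-soil compartments.
import pandas as pd

# Rows = OTUs, columns = compartments, values = read counts (toy numbers).
otu_table = pd.DataFrame(
    {
        "endosphere": [12, 0, 3, 0, 7],
        "rhizosphere": [5, 8, 0, 2, 9],
        "bulk_soil": [0, 4, 1, 6, 11],
    },
    index=[f"OTU_{i}" for i in range(1, 6)],
)

present = otu_table > 0                  # presence/absence per compartment
shared_all = present.all(axis=1).sum()   # OTUs detected in every compartment
print(f"OTUs shared by all three compartments: {shared_all}")

# Repeating this count for samples from each land-use-intensity level would
# reproduce the kind of comparison reported above (more shared OTUs at lower LUI).
```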