Although the search for promising business models (BMs) is crucial for every profit-oriented venture, this search challenges entrepreneurs in particular. Limited resources, missing expertise and absolute uncertainty compel entrepreneurs to rely strongly on their cognition when searching for a promising BM. However, because prior studies have examined cognitive search activities in isolation and neglected cognitive differences, explanations of how cognitive factors affect the BM process and its outcomes are thus far insufficient.
Addressing the overall question of how BMs emerge, the dissertation contributes to the cognitive perspective in entrepreneurship and BM research. Building on dual-process theory from cognitive psychology, the micro-foundations of managerial decision-making and insights from the framing literature, this dissertation explicitly investigates the impacts of different cognitive dispositions, search activities and visual framing effects. The core assumption is that cognitive dispositions and entrepreneurs’ searches for information determine their BM decision-making. Furthermore, BM visualisations have become popular instruments with which to explain and manage today’s complex business interactions. Because they abstract from reality, they can also affect cognitive processes.
This dissertation offers new explanations of these aspects and consists of three studies and one reflective article. The first study explores the impacts of differences in search activities and cognitive dispositions in a qualitative study with 70 entrepreneurship students. The second qualitative study explores the cognitive impacts of 103 BM visualisations. Third, a quantitative PLS-SEM experiment with 197 entrepreneurs illuminates the link between BM visualisations and cognition. The reflective article discusses what the results mean for the teaching of BMs.
In sum, the studies have resulted in a new theory of stabilising factors that explains how cognitive dispositions, search activities and visual framing determine entrepreneurs’ decisions to imitate or deviate from existing BMs. It indicates that the decision depends on the context-dependent strategic orientation and on disposition-dependent cognitive safety, that is, the correspondence between the characteristics of cognitive dispositions and search activities. Moreover, the studies identified five visual framing effects that are independent of cognitive dispositions and prior experiences. This contributes to the literature on BM methods and on how BM visualisations affect decisions. Most importantly, BM visualisations provide an emotionally stabilising function to rational entrepreneurs, a cognitively stabilising function to experiential participants, and do not affect indifferent participants in general.
RESTful choreographies
(2019)
Business process management has become a key instrument to organize work as many companies represent their operations in business process models. Recently, business process choreography diagrams have been introduced as part of the Business Process Model and Notation standard to represent interactions between business processes, run by different partners. When it comes to the interactions between services on the Web, Representational State Transfer (REST) is one of the primary architectural styles employed by web services today. Ideally, the RESTful interactions between participants should implement the interactions defined at the business choreography level.
The problem, however, is the conceptual gap between business process choreography diagrams and RESTful interactions. Choreography diagrams, on the one hand, are modeled by business domain experts with the purpose of capturing, communicating and, ideally, driving the business interactions. RESTful interactions, on the other hand, depend on RESTful interfaces that are designed by web engineers with the purpose of facilitating the interaction between participants on the internet. In most cases, however, business domain experts are unaware of the technology behind web service interfaces, and web engineers tend to overlook the overall business goals of web services. While there is considerable work on using process models during process implementation, there is little work on using choreography models to implement interactions between business processes. This thesis addresses this research gap by raising the following research question: How can the conceptual gap between business process choreographies and RESTful interactions be closed? This thesis offers several research contributions that jointly answer the research question.
The main research contribution is the design of a language that captures RESTful interactions between participants, the RESTful choreography modeling language. Formal completeness properties (with respect to REST) are introduced to validate its instances, called RESTful choreographies. A systematic, semi-automatic method for deriving RESTful choreographies from business process choreographies is proposed. The method employs natural language processing techniques to translate business interactions into RESTful interactions. The effectiveness of the approach is shown by developing a prototypical tool that evaluates the derivation method over a large number of choreography models.
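To give a rough intuition for this kind of derivation, the sketch below heuristically maps a choreography task label to an HTTP method and resource. It is an illustrative simplification under assumed conventions, not the NLP-based method of the thesis; the verb table and naming scheme are hypothetical.

```python
# Illustrative only: a naive heuristic for turning a choreography task label
# such as "Submit purchase order" into a RESTful interaction. The thesis
# proposes a more sophisticated, NLP-based derivation.

# Hypothetical mapping from action verbs to HTTP methods.
VERB_TO_METHOD = {
    "request": "POST", "submit": "POST", "create": "POST",
    "update": "PUT", "cancel": "DELETE", "retrieve": "GET", "check": "GET",
}

def derive_restful_interaction(task_label: str) -> tuple:
    """Return an (HTTP method, resource URI) pair for a task label."""
    words = task_label.lower().split()
    verb, nouns = words[0], words[1:]
    method = VERB_TO_METHOD.get(verb, "POST")                 # default to POST
    resource = "/" + "-".join(nouns) + "s" if nouns else "/tasks"
    return method, resource

print(derive_restful_interaction("Submit purchase order"))    # ('POST', '/purchase-orders')
```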
In addition, the thesis proposes solutions towards implementing RESTful choreographies. In particular, two RESTful service specifications are introduced for aiding, respectively, the execution of choreographies' exclusive gateways and the guidance of RESTful interactions.
Without pragmatics, communication would not be possible, since we could not interpret linguistic utterances. For any learner of a language they have not yet mastered, being linguistically competent is not enough, since their goal is to communicate with other people and in specific contexts. Only teaching that fosters the ability to produce and understand utterances in order to perform speech acts, selecting those most appropriate to a given context, can claim to be effective.
The work presented here aims to introduce to the scientific community, and especially to those directly and indirectly involved in the teaching process, the concept of verbal pragmatics and to contrast it with others such as grammar, culture and interculturality, as well as to raise awareness of the importance and pressing need of establishing pragmatics as a relevant discipline in the communicative process and, in particular, of including it systematically and explicitly in textbooks of Spanish as a foreign language designed for the school context. To this end, the presence of pragmatic elements and the promotion of pragmatic competence are investigated in textbooks for beginners, since these are the material used par excellence in schools and because of their relevance in specifying contents, type of progression and methodology.
One method of embedding groups into skew fields was introduced by A. I. Mal'tsev and B. H. Neumann (cf. [18, 19]). If G is an ordered group and F is a skew field, the set F((G)) of formal power series over F in G with well-ordered support forms a skew field into which the group ring F[G] can be embedded. Unfortunately, it is not sufficient that G is left-ordered, since in this case F((G)) is only an F-vector space, as there is no natural way to define a multiplication on F((G)). One way to extend the original idea to left-ordered groups is to examine the endomorphism ring of F((G)), as explored by N. I. Dubrovin (cf. [5, 6]). It is possible to embed any crossed product ring F[G; η, σ] into the endomorphism ring of F((G)) such that each non-zero element of F[G; η, σ] defines an automorphism of F((G)) (cf. [5, 10]). Thus, the rational closure of F[G; η, σ] in the endomorphism ring of F((G)), which we will call the Dubrovin ring of F[G; η, σ], is a potential candidate for a skew field of fractions of F[G; η, σ]. The methods of N. I. Dubrovin made it possible to show that specific classes of groups can be embedded into a skew field. For example, N. I. Dubrovin devised special criteria which are applicable to the universal covering group of SL(2, R). These methods have also been explored by J. Gräter and R. P. Sperner (cf. [10]) as well as N. H. Halimi and T. Ito (cf. [11]).

Furthermore, it is of interest to know whether skew fields of fractions are unique. For example, left and right Ore domains have unique skew fields of fractions (cf. [2]). This is not the case in general, as for example the free group with 2 generators can be embedded into non-isomorphic skew fields of fractions (cf. [12]). It seems likely that Ore domains are the most general case for which unique skew fields of fractions exist. One approach to gain uniqueness is to restrict the search to skew fields of fractions with additional properties. I. Hughes has defined skew fields of fractions of crossed product rings F[G; η, σ] with locally indicable G which fulfil a special condition. These are called Hughes-free skew fields of fractions, and I. Hughes has proven that they are unique if they exist [13, 14]. This thesis connects the ideas of N. I. Dubrovin and I. Hughes.

The first chapter contains the basic terminology and concepts used in this thesis. We present methods provided by N. I. Dubrovin, such as the complexity of elements in rational closures and special properties of endomorphisms of the vector space of formal power series F((G)). To combine the ideas of N. I. Dubrovin and I. Hughes, we introduce Conradian left-ordered groups of maximal rank and examine their connection to locally indicable groups. Furthermore, we provide notation for crossed product rings, skew fields of fractions and Dubrovin rings, and prove some technical statements which are used in later parts.

The second chapter focuses on Hughes-free skew fields of fractions and their connection to Dubrovin rings. For that purpose we introduce series representations to interpret elements of Hughes-free skew fields of fractions as skew formal Laurent series. This allows us to prove that for Conradian left-ordered groups G of maximal rank the statement "F[G; η, σ] has a Hughes-free skew field of fractions" implies "The Dubrovin ring of F[G; η, σ] is a skew field". We also prove the reverse implication and apply the results to give a new proof of Theorem 1 in [13].
Furthermore, we show how to extend injective ring homomorphisms of some crossed product rings onto their Hughes-free skew fields of fractions. Finally, we answer the open question of whether Hughes-free skew fields are strongly Hughes-free (cf. [17, page 53]).
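For orientation, the Mal'tsev-Neumann construction referred to above can be stated compactly as follows; this is a standard formulation for the untwisted group ring F[G], not a quotation from the thesis, and for a crossed product F[G; η, σ] the multiplication is additionally twisted by η and σ.

```latex
F((G)) \;=\; \Bigl\{\, f\colon G \to F \;\Bigm|\; \operatorname{supp}(f) := \{\, g \in G : f(g) \neq 0 \,\} \text{ is well-ordered} \,\Bigr\},
\qquad
(f \cdot h)(g) \;=\; \sum_{g_1 g_2 \,=\, g} f(g_1)\, h(g_2).
```

The well-ordering of the supports ensures that each coefficient of the product is a finite sum when G is ordered; it is precisely this step that fails when G is merely left-ordered, which is why F((G)) then only carries an F-vector space structure.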
Introduction: Cystic fibrosis (CF) is a genetic disease that disrupts the function of an epithelial surface anion channel, CFTR (cystic fibrosis transmembrane conductance regulator). Impairment of this channel leads to inflammation and infection in the lung, causing the majority of morbidity and mortality. However, CF is a multiorgan disease affecting many tissues, including vascular smooth muscle. Studies have revealed that young people with cystic fibrosis who lack inflammation and infection still demonstrate vascular endothelial dysfunction, as measured by flow-mediated dilation (FMD). In other disease cohorts, e.g. diabetic and obese patients, endurance exercise interventions have been shown to improve or attenuate this impairment. However, long-term exercise interventions are risky, as well as costly in terms of time and resources. Nevertheless, emerging research has correlated the acute effects of exercise with its long-term benefits and advocates studying the acute effects of exercise on FMD prior to longitudinal studies. The acute effects of exercise on FMD have not previously been examined in young people with CF, but could yield insights into the potential benefits of long-term exercise interventions.
The aims of these studies were to 1) develop and test the reliability of the FMD method and its applicability to study acute exercise effects; 2) compare baseline FMD and the acute exercise effect on FMD between young people with and without CF; and 3) explore associations between the acute effects of exercise on FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity or inflammatory hsCRP levels.
Methods: Thirty young volunteers (10 people with CF, 10 non-CF and 10 non-CF active matched controls) between the ages of 10 and 30 years completed blood draws, pulmonary function tests, maximal exercise capacity tests and baseline FMD measurements, before returning approximately one week later to perform 30 minutes of constant-load training at 75% HRmax. FMD measurements were taken prior to, immediately after, 30 minutes after and 1 hour after constant-load training. ANOVAs and repeated-measures ANOVAs were employed to explore differences between groups and timepoints, respectively. Linear regression was used to assess associations between FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity and inflammatory hsCRP levels. For all comparisons, statistical significance was set at p < 0.05.
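As a rough illustration of this analysis design, the sketch below runs a between-group ANOVA on baseline FMD, a repeated-measures ANOVA across the four timepoints, and a simple regression of post-exercise FMD on VO2peak. It is not the thesis's analysis code; the file and column names are hypothetical placeholders.

```python
# Illustrative analysis sketch; columns (subject, group, timepoint, FMD, VO2peak)
# are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("fmd_study.csv")

# Between-group comparison of baseline FMD (CF vs non-CF vs non-CF active).
baseline = df[df.timepoint == "baseline"]
groups = [g.FMD.values for _, g in baseline.groupby("group")]
print(stats.f_oneway(*groups))

# Repeated-measures ANOVA: FMD across the four measurement timepoints.
print(AnovaRM(df, depvar="FMD", subject="subject", within=["timepoint"]).fit())

# Association between FMD immediately post-exercise and maximal exercise capacity.
post = df[df.timepoint == "post0"]
print(stats.linregress(post.VO2peak, post.FMD))
```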
Results: Young people with CF presented with decreased lung function and maximal exercise capacity compared to matched controls. Baseline FMD was also significantly decreased in the CF group (CF: 5.23% vs. non-CF: 8.27% vs. non-CF active: 9.12%). Immediately post-training, FMD was significantly attenuated (by approximately 40%) in all groups, with the CF group still demonstrating the lowest FMD. Follow-up measurements of FMD revealed a slow recovery towards baseline values 30 min post-training and improvements in the CF and non-CF active groups 60 min post-training. Linear regression revealed significant correlations between maximal exercise capacity (VO2peak), BMI and FMD immediately post-training.
Conclusion: These new findings confirm that CF vascular endothelial dysfunction can be acutely modified by exercise and underline the importance of exercise in CF populations. The potential benefits of long-term exercise interventions on vascular endothelial dysfunction in young people with CF warrant further investigation.
In recent decades, the coatings industry has also seen a shift in thinking towards more environmentally friendly paints and varnishes. However, even new solutions are mostly not based on biopolymers, and an even smaller share is based on water-based coating systems made from renewable raw materials. This is the starting point of this work, in which it was investigated whether the biopolymer starch has the potential to serve as a water-based film former for paints and varnishes. In line with established synthetic market products, the following criteria must be fulfilled: the aqueous dispersion must have a solids content of at least 30%, must be processable at room temperature and must exhibit viscosities between 10^2 and 10^3 mPa·s. The final coating must form a closed film and show very good adhesion to a specific surface, in this work glass. A combination of molecular degradation and chemical functionalisation was chosen as the basis for modifying the starch. Since it was not known what influence the type of starch, the chosen degradation reaction and different substituents might have on the preparation and properties of the dispersions as well as on the coating properties, these structural parameters were investigated separately.
The first topic covered the oxidative degradation of potato and smooth pea starch by hypochlorite degradation (OCl⁻) and ManOx degradation (H2O2, KMnO4). Both degradation reactions yielded comparable weight-average molar masses (Mw) of 2·10^5-10^6 g/mol (GPC-MALS). However, the reaction conditions chosen for the ManOx degradation led to the formation of gel particles. These were in the µm range (DLS and cryo-SEM measurements) and caused the ManOx samples to have markedly increased viscosities (c: 7.5%; 9-260 mPa·s) compared to the OCl⁻ samples (4-10 mPa·s), with shear-thinning behaviour and the properties of viscoelastic gels (G‘ > G‘‘). Furthermore, they showed reduced hot-water solubilities (95 °C, predominantly 70-99%). The OCl⁻ degradation led to more hydrophilic degraded starches (carboxyl group content up to 6.1%; ManOx: up to 3.1%) that were completely water-soluble after treatment at 95 °C and showed Newtonian flow behaviour with the properties of a viscoelastic liquid (G‘‘ > G‘). Compared to the ManOx products (10-20%), the OCl⁻ samples could be processed into more concentrated dispersions (20-40%), which at the same time required restricting the application-relevant Mw to < 7·10^5 g/mol (the concentration should be > 30%). Moreover, only the OCl⁻ samples of potato starch gave transparent (all others were opaque), closed coating films. The combination of OCl⁻ degradation and potato starch therefore stands out with regard to the final application.
The second topic comprised investigations into the influence of ester and hydroxyalkyl ether substituents, based on an industrially degraded potato starch (Mw: 1.2·10^5 g/mol), primarily on dispersion preparation, the rheological properties of the dispersions and the coating properties in combination with glass substrates. For this purpose, esters and ethers with DS/MS values of 0.07-0.91 were synthesised. The derivatives could be processed into water-based dispersions with concentrations of 30-45%, although a co-solvent, diethylene glycol monobutyl ether (DEGBE), had to be used for the more hydrophobic derivatives. The solids contents decreased for both derivative classes, especially with increasing alkyl chain length. Owing to interactions, the application-relevant viscosities (323-1240 mPa·s) tended to increase with DS/MS and alkyl chain length. Regarding the coating properties, the esters proved to be the preferred substituent class compared to the ethers, since only the esters formed closed, defect-free and mostly transparent coating films with excellent to very good adhesion (ISO classes 0 and 1) to glass. The ethers mostly formed brittle films. Based on the combined results from solvent exchange, the rheological investigations and additional surface tension measurements (30-61 mN/m), it could be concluded that missing or poor adhesion is probably due primarily to water accumulated in the coating films (visually cloudy or white), while the brittleness can presumably be attributed to interactions (hydrogen bonding, hydrophobic interactions) between the polymers.
Overall, the combination of potato starch degraded via the OCl⁻ route to Mw < 7·10^5 g/mol with an ester substituent appears to be a good choice for water-based dispersions with high solids concentrations (> 30%), good film formation and excellent adhesion to glass.
The business model has emerged as a construct to understand how firms drive innovation through emerging technologies. It is defined as the ‘architecture of the firm’s value creation, delivery and appropriation mechanisms’ (Foss & Saebi, 2018, p. 5). The architecture is characterized by complex functional interrelations between activities that are conducted by various actors, some within and some outside of the firm. In other words, a firm’s value architecture is embedded within a wider system of actors that all contribute to the output of the value architecture.
The question of what drives innovation within this system and how the firm can shape and navigate this innovation is an essential question within innovation management research. This dissertation is a compendium of four individual research articles that examine how the design of a firm’s value architecture can facilitate system-wide innovation in the context of Artificial Intelligence and Blockchain Technology. The first article studies how firms use Blockchain Technology to design a governance infrastructure that enables innovation within a platform ecosystem. The findings propose a framework for blockchain-enabled platform ecosystems that addresses the essential problem of opening the platform to allow for innovation while also ensuring that all actors get to capture their share of the value. The second article analyzes how German Artificial Intelligence startups design their business models. It identifies three distinct types of startups with different underlying business models. The third article aims to understand the role of a firm’s value architecture during the socio-technical transition process of Artificial Intelligence. It identifies three distinct ways in which Artificial Intelligence startups create a shared understanding of the technology. The last article examines how corporate venture capital units configure value-adding services for their venture portfolios. It derives a taxonomy of different corporate venture capital types, driven by different strategic motivations.
Ultimately, this dissertation provides novel empirical insights into how a firm’s value architecture determines its role within a wider system of actors and how that role enables the firm to facilitate innovation. In that way, it contributes to both the business model and the innovation management literature.
The African weakly electric fish genus Campylomormyrus is a well-investigated group of the species-rich family Mormyridae. These fish are able to generate species-specific electric organ discharges (EODs), which vary in their waveform characteristics including polarity, phase number and duration. In mormyrid species, EODs are used for communication, species discrimination and mate recognition, and it is thought that they serve as a pre-zygotic isolation mechanism driving sympatric speciation by promoting assortative mating. The EOD diversification, its evolutionary effects and the link to species divergence have been examined histologically, behaviorally and genetically. Molecular analyses are a major tool to identify species and their phenotypic traits by studying the underlying genes. The genetic variability between species further provides information from which evolutionary processes, such as speciation, can be deduced. Hence, the ultimate aim of this study is the investigation of genetic variability within the African weakly electric fish genus Campylomormyrus, to better understand their sympatric speciation and its evolutionary drivers. In order to extend the current knowledge and gain more insights into the history of this genus, karyological and genomic approaches are pursued that take species differences into account.

Previous studies have shown that species with different EOD duration have specific gene expression patterns and single nucleotide polymorphisms (SNPs). As EODs play a crucial role during the evolution of Campylomormyrus species, identifying the underlying genes may suggest how the EOD diversity evolved and whether this trait is based on a complex network of genetic processes or is regulated by only a few genes. The results obtained in this study suggest that genes with non-synonymous SNPs, which are exclusive to C. tshokwe with an elongated EOD, frequently have functions associated with tissue morphogenesis and transcriptional regulation. Therefore, it is proposed that these processes likely co-determine EOD characteristics of Campylomormyrus species. Furthermore, genome-wide analyses confirm the genetic difference among most Campylomormyrus species. In contrast, the same analyses reveal genetic similarity among individuals of the alces-complex showing different EOD waveforms. It is therefore hypothesized that this low genetic variability combined with high EOD diversity represents incipient sympatric speciation.

The karyological description of a Campylomormyrus species provides crucial information about chromosome number and shapes. Its diploid chromosome number of 2n=48 supports the conservation of this trait within Mormyridae. Differences have been detected in the number of bi-armed chromosomes, which is unusually high compared to other mormyrid species. This high number may be due to chromosome rearrangements, which could cause genetic incompatibility and reproductive isolation. Hence, an alternative hypothesis regarding the processes that cause sympatric speciation is that chromosome differences are involved in the speciation process of Campylomormyrus by acting as a postzygotic isolation mechanism. In summary, the karyological and genomic investigations conducted in this study contributed to increasing the knowledge about Campylomormyrus species, to resolving some existing ambiguities such as phylogenetic relationships, and to raising new hypotheses explaining the sympatric speciation of these African weakly electric fish. This study provides a basis for future genomic research to obtain a complete picture of the causes and outcomes of evolutionary processes in Campylomormyrus.
The ecological consequences of cryptic diversity in rotifers are well understood, but an in-depth comprehension of the underlying molecular mechanisms and of the forces driving speciation is still missing. Temperature has repeatedly been found to affect species' spatio-temporal distribution and organisms' performance, but we lack information on the mechanisms that provide thermal tolerance to rotifers. High cryptic diversity was recently found in the freshwater rotifer “Brachionus calyciflorus”, showing that the complex comprises at least four species: B. calyciflorus sensu stricto (s.s.), B. fernandoi, B. dorcas and B. elevatus. The temporal succession among species observed in sympatry led to the idea that temperature might play a crucial role in species differentiation.
The central aim of this study was to unravel differences in thermal tolerance between species of the former B. calyciflorus species complex by comparing phenotypic and gene expression responses. More specifically, I used the critical maximum temperature as a proxy for inter-species differences in heat tolerance; this was modeled as a bi-dimensional phenotypic trait taking into consideration the intensity and the duration of heat stress. Significant differences in heat tolerance between species were detected, with B. calyciflorus s.s. being able to tolerate higher temperatures than B. fernandoi.
Based on evidence of within-species neutral genetic variation, I further examined adaptive genetic variability within two different mtDNA lineages of the heat-tolerant B. calyciflorus s.s. to identify SNPs and genes under selection that might reflect their adaptive history. These analyses did not reveal adaptive genetic variation related to heat; however, they did show putatively adaptive genetic variation which may reflect local adaptation. Functional enrichment of putatively positively selected genes revealed signals of adaptation in genes related to “lipid metabolism”, “xenobiotics biodegradation and metabolism” and “sensory system”, comprising candidate genes which can be utilized in studies on local adaptation. The absence of genetically based differences in thermal adaptation between the two mtDNA lineages, together with our knowledge that B. calyciflorus s.s. can withstand a broad range of temperatures, led to the idea of further investigating shared transcriptomic responses to long-term exposure to high and low temperature regimes. With this, I identified candidate genes that are involved in the response to temperature-imposed stress. Lastly, I used comparative transcriptomics to examine responses to imposed heat stress in heat-tolerant and heat-sensitive Brachionus species. I found considerably different patterns of gene expression in the two species. Most striking are the expression patterns of the heat shock proteins (hsps). In the heat-tolerant B. calyciflorus s.s., significant up-regulation of hsps at low temperatures was indicative of a stress response at the cooler end of the temperature regimes tested here. In contrast, in the heat-sensitive B. fernandoi, hsps were generally up-regulated with rising temperatures. Overall, the identified differences in gene expression suggest that suppression of protein biosynthesis is a mechanism to increase thermal tolerance. Observed patterns in population growth correlate with the hsp gene expression differences, indicating that this physiological stress response is indeed related to phenotypic life-history performance.
A contemporary challenge in ecology and evolutionary biology is to anticipate the fate of populations of organisms in the context of a changing world. Climate change and landscape changes due to anthropic activities have been major concerns in contemporary history. Organisms facing these threats are expected to respond by local adaptation (i.e., genetic changes or phenotypic plasticity) or by shifting their distributional range (migration). However, there are limits to their responses. For example, isolated populations will have more difficulties in developing adaptive innovations by means of genetic changes than interconnected metapopulations. Similarly, the topography of the environment can limit dispersal opportunities for crawling organisms compared to those that rely on wind. Thus, populations of species with different life-history strategies may differ in their ability to cope with changing environmental conditions. However, depending on the taxon, empirical studies investigating organisms' responses to environmental change may become too complex, long and expensive, in addition to the complications that arise when dealing with endangered species. Consequently, eco-evolutionary modeling offers an opportunity to overcome these limitations and complement empirical studies, to understand the action and limitations of the underlying mechanisms, and to project into possible future scenarios.

In this work I take a modeling approach and investigate the effect and relative importance of evolutionary mechanisms (including phenotypic plasticity) on the ability for local adaptation of populations with different life strategies experiencing climate change scenarios. For this, I performed a review of the state of the art of eco-evolutionary individual-based models (IBMs) and identified gaps for future research. I then used the results from the review to develop an eco-evolutionary individual-based modeling tool to study the role of genetic and plastic mechanisms in promoting local adaptation of populations of organisms with different life strategies experiencing scenarios of climate change and environmental stochasticity. The environment was simulated through a climate variable (e.g., temperature) defining a phenotypic optimum moving at a given rate of change. The rate of change was varied to simulate different scenarios of climate change (no change, slow, medium and rapid climate change), and several scenarios of stochastic noise color resembling different climatic conditions were explored. Results show that populations of sexual species will rely mainly on standing genetic variation and phenotypic plasticity for local adaptation. Populations of species with relatively slow growth rates (e.g., large mammals), especially small populations, are the most vulnerable, particularly if their plasticity is limited (i.e., specialist species). In addition, whenever organisms from these populations are capable of adaptive plasticity, they can buffer fitness losses under reddish climatic conditions. Likewise, whenever they can adjust their plastic response (e.g., a bet-hedging strategy), they can cope with bluish environmental conditions as well. In contrast, life strategies with high fecundity can rely on non-adaptive plasticity for their local adaptation to novel environmental conditions, unless the rate of change is too rapid.
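For intuition, the sketch below generates such a moving phenotypic optimum: a directional trend whose rate can be varied, plus an AR(1) noise term whose autocorrelation mimics noise color. It is a generic illustration under assumed parameter values, not the thesis's model.

```python
# Minimal sketch (assumptions, not the thesis model): a phenotypic optimum that
# drifts at a fixed rate of change, overlaid with autocorrelated (colored) noise.
# rho ~ 0 gives white noise, rho > 0 reddened (reddish) noise, rho < 0 bluish noise.
import numpy as np

def moving_optimum(n_years=200, rate=0.02, rho=0.5, sigma=0.3, seed=1):
    rng = np.random.default_rng(seed)
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        # AR(1) process: the color of the noise is controlled by rho.
        noise[t] = rho * noise[t - 1] + np.sqrt(1 - rho**2) * rng.normal(0.0, sigma)
    trend = rate * np.arange(n_years)   # directional climate change
    return trend + noise                # optimum phenotype per year

optimum = moving_optimum(rate=0.05, rho=0.8)   # e.g. rapid change with reddish noise
print(optimum[:5])
```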
A recommended management measure is to guarantee the interconnection of isolated populations into metapopulations, so that the supply of useful genetic variation is increased while, at the same time, organisms are given movement opportunities to follow their preferred niche when local adaptation becomes problematic. This is particularly important under bluish and reddish climatic conditions when the rate of change is slow, or under any climatic condition when the level of stress (rate of change) is relatively high.
By examining the rewritings of history for the citizen in the German-speaking world and in France during the Enlightenment and the Revolution, this book offers a new and detached perspective on the public uses of history today, particularly in France, where the debate around the "roman national" remains lively. The first part of the book, devoted to the exemplarity of a history illustrated with engravings that have had a lasting impact on representations of the past, revisits the question of great men and reproduces, translates and analyses the circulation of edifying examples between the two spaces.
The second part deals with a mode of pedagogical representation of history that aroused, and still arouses, fascination while posing a methodological challenge: the pedagogical use of a table that makes it possible to grasp at a single glance the entire history of a people, or even of all humanity, and to draw political lessons from it. The idea, still structuring today, of a German or French political or pedagogical model of history writing coupled, or not, with geography is examined here through the prism of the precise contexts in which it was conceived.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, a probabilistic deep learning, and a probabilistic deep metric learning approach.

In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from the psychological literature; fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model stays close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state of the art.

In the second phase, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or give a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin.

Finally, so that the model can identify any subject, not just the set of subjects it was trained on, a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space where sequences of the same individual are close and sequences of different individuals are further apart.
This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
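As a rough illustration of the quantile-layer idea described above, the sketch below summarizes each feature channel of a sequence by several quantiles of its distribution over time instead of a single mean or max. It is a deliberately non-learnable simplification for intuition, not the thesis's implementation, and the dimensions are hypothetical.

```python
# Conceptual sketch of quantile-based aggregation over a sequence of deep features.
import numpy as np

def quantile_aggregate(features, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """features: array of shape (timesteps, channels) holding local patterns
    extracted from one eye-movement sequence. Returns a fixed-size embedding
    of shape (len(qs) * channels,) that describes each channel's distribution."""
    return np.quantile(features, qs, axis=0).ravel()

sequence_features = np.random.randn(500, 32)   # hypothetical features for one sequence
embedding = quantile_aggregate(sequence_features)
print(embedding.shape)                          # (160,)
```

In the thesis, the quantile layer is learnable and trained end-to-end with the feature extractor, and the metric learning stage compares such distributional representations with a Wasserstein-based loss; the fixed quantile grid above is only meant to convey the pooling idea.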
Phytoplankton growth depends not only on the mean intensity but also on the dynamics of the light supply. The nonlinear light-dependency of growth is characterized by a small number of basic parameters: the compensation light intensity PARcomp,µ, where production and losses are balanced, the growth efficiency at sub-saturating light αµ, and the maximum growth rate at saturating light µmax. In surface mixed layers, phytoplankton may rapidly move between high light intensities and almost darkness. Because of the different frequency distribution of light and/or acclimation processes, the light-dependency of growth may differ between constant and fluctuating light. Very few studies have measured growth under fluctuating light at a sufficient number of mean light intensities to estimate the parameters of the growth-irradiance relationship. Hence, the influence of light dynamics on µmax, αµ and PARcomp,µ is still largely unknown. By extension, accurate model predictions of phytoplankton development under fluctuating light exposure remain difficult to make. This PhD thesis does not intend to directly extrapolate a few experimental results to aquatic systems, but rather to improve the mechanistic understanding of how the light-dependency of growth varies under light fluctuations and how this affects phytoplankton development.
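For reference, one common saturating formulation that is consistent with the three parameters named above is shown below; the abstract does not state which functional form the thesis uses, so this is only an illustrative choice.

```latex
\mu(\mathrm{PAR}) \;=\; \mu_{\max}\left(1 - \exp\!\left[-\,\frac{\alpha_{\mu}\,\bigl(\mathrm{PAR} - \mathrm{PAR}_{\mathrm{comp},\mu}\bigr)}{\mu_{\max}}\right]\right)
```

With this form, growth is zero at the compensation light intensity PARcomp,µ, increases with initial slope αµ at sub-saturating light, and saturates at µmax at high light.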
In Lake TaiHu and at the Three Gorges Reservoir (China), we incubated phytoplankton communities in bottles placed either at fixed depths or moved vertically through the water column to mimic vertical mixing. Phytoplankton at fixed depths received only the diurnal changes in light (defined as the constant light regime), while the vertically moved phytoplankton received rapidly fluctuating light created by superimposing the vertical light gradient on the natural sinusoidal diurnal sunlight. The vertically moved samples followed a circular movement with 20 min per revolution, replicating to some extent the full overturn of typical Langmuir cells. Growth, photosynthesis, oxygen production and respiration of the communities (at Lake TaiHu) were measured. To complement these investigations, a physiological experiment was performed in the laboratory on a toxic strain of Microcystis aeruginosa (FACBH 1322) incubated under fluctuating light with a 20 min period. Here, we measured electron transport rates and net oxygen production at a much higher time resolution (single-minute timescale).
The present PhD thesis provides evidence for substantial effects of fluctuating light on the eco-physiology of phytoplankton. Both experiments performed under semi-natural conditions in Lake TaiHu and at the Three Gorges Reservoir gave similar results. The significant decline in community growth efficiencies αµ under fluctuating light was caused to a large extent by the different frequency distribution of light intensities, which shortened the effective daylength for production. The remaining gap in community αµ was attributed to species-specific photoacclimation mechanisms and to light-dependent respiratory losses. In contrast, community maximal growth rates µmax were similar between incubations at constant and fluctuating light. At a daily light supply saturating growth, differences in losses for biosynthesis between the two light regimes were observed. Phytoplankton experiencing constant light suffered photoinhibition, leading to foregone photosynthesis and additional respiratory costs for photosystem repair. In contrast, intermittent exposure to low and high light intensities prevented photoinhibition of the mixed algae but forced them to develop an alternative light-use strategy: they harvested and exploited surface irradiance better by enhancing their photosynthesis. In the laboratory, we showed that Microcystis aeruginosa increased its oxygen consumption by dark respiration in the light only a few minutes after exposure to increasing light intensities. Moreover, we showed that within a simulated Langmuir cell, the net production at saturating light and the compensation light intensity for production at limiting light are positively related. These results are best explained by an accumulation of photosynthetic products at increasing irradiance and the mobilization of these fresh resources through a rapid enhancement of dark respiration for maintenance and biosynthesis at decreasing irradiance. At the daily timescale, we showed that the enhancement of photosynthesis at high irradiance for biosynthesis increased species' maintenance respiratory costs at limiting light. The species-specific growth rate at saturating light µmax and the compensation light intensity for growth PARcomp,µ of species incubated in Lake TaiHu were positively related. Because of this species-specific physiological tradeoff, species displayed different affinities to limiting and saturating light, thereby exhibiting a gleaner-opportunist tradeoff. In Lake TaiHu, we showed that inter-specific differences in light acquisition traits (µmax and PARcomp,µ) allowed coexistence of species on a gradient of constant light while avoiding competitive exclusion. More interestingly, we demonstrated for the first time that vertical mixing (inducing a fluctuating light supply for phytoplankton) may alter or even reverse the light utilization strategies of species within a couple of days. The intra-specific variation in traits under fluctuating light increased the niche space for acclimated species, precluding competitive exclusion.
Overall, this PhD thesis contributes to a better understanding of phytoplankton eco-physiology under fluctuating light supply. This work could enhance the quality of predictions of phytoplankton development under certain weather conditions or climate change scenarios.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research in the last decade aimed at developing a selective immunoassay for the diagnosis and treatment of iron disorders and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work focuses on structurally characterizing metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. In order to overcome the low solubility of Hep-25 at neutral pH, we introduced a C-terminal DEDEDE solubility tag. The metal binding was investigated through a series of NMR spectroscopic experiments to identify the most affected amino acids that mediate metal coordination. Based on the obtained NMR data, a structural calculation was performed in order to generate a model structure of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structural calculation due to a lack of NMR restraints. The dynamic nature and fast exchange of some of the amide protons with the solvent reduced the overall number of NMR restraints needed for a high-quality structure. The NMR data revealed that the 20 C-terminal Hep-25 amino acids experienced no significant conformational changes, compared to published results, as a result of the pH change from pH 3 to pH 7 and of metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) complex and the Hep-25-DEDEDE-Ni(II) complex, in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently of the rest of the structure. The 3D model structure of metal-bound Hep-25 allows future work to elucidate hepcidin's interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
For millennia, humans have affected landscapes all over the world. Due to its horizontal expansion, agriculture plays a major role in the process of fragmentation. This process is caused by the substitution of natural habitats by agricultural land, leading to agricultural landscapes. These landscapes are characterized by an alternation of agriculture and other land uses such as forests. In addition, there are landscape elements of natural origin such as small water bodies. Areas of different land use lie beside each other like patches, or fragments. They are physically distinguishable, which makes them look like a patchwork from an aerial perspective. Each of these fragments is an ecosystem of its own, with conditions and properties that differ from those of its adjacent fragments. As open systems, they exchange information, matter and energy across their boundaries. These boundary areas are called transition zones. Here, the habitat properties and environmental conditions are altered compared to the interior of the fragments. This changes the abundance and the composition of species in the transition zones, which in turn has a feedback effect on the environmental conditions.
The literature mainly offers information and insights on species abundance and composition in forested transition zones. Abiotic effects, the gradual changes in energy and matter, have received less attention. In addition, little is known about non-forested transition zones. For example, the effects of an altered microclimate, matter dynamics or different light regimes on agricultural yield in transition zones are hardly researched or understood. The processes in transition zones are closely connected with altered provisioning and regulating ecosystem services. To disentangle the mechanisms and to upscale the effects, models can be used.
My thesis provides insights into these topics: the literature was reviewed and a conceptual framework for the quantitative description of gradients of matter and energy in transition zones was introduced. Results are presented from measurements of environmental gradients, such as microclimate, aboveground biomass and soil carbon and nitrogen content, spanning from within the forest into the arable land. Neither the measurements nor the literature review could validate a transition zone of 100 m for abiotic effects. Although this value is often reported and used in the literature, the transition zone is likely to be smaller.
Further, the measurements suggest that on the one hand trees in transition zones are smaller compared to those in the interior of the fragments, while on the other hand less biomass was measured in the arable lands’ transition zone. These results support the hypothesis that less carbon is stored in the aboveground biomass in transition zones. The soil at the edge (zero line) between adjacent forest and arable land contains more nitrogen and carbon content compared to the interior of the fragments. One-year measurements in the transition zone also provided evidence that microclimate is different compared to the fragments’ interior.
To predict the possible yield decreases that transition zones might cause, a modelling approach was developed. Using a small virtual landscape, I modelled the shading of adjacent arable land by a forest fragment and its effect on yield with the MONICA crop growth model. In the transition zone, yield was lower than in the interior due to shading. The results of the simulations were upscaled to the landscape level and calculated, as an example, for the arable land of an entire region in Brandenburg, Germany.
The major findings of my thesis are: (1) transition zones are likely to be much smaller than assumed in the scientific literature; (2) transition zones are not solely a phenomenon of forested ecosystems, but extend significantly into arable land as well; (3) empirical and modelling results show that transition zones encompass biotic and abiotic changes that are likely to be important to a variety of agricultural landscape ecosystem services.
Increasing concerns regarding the environmental impact of our chemical production have shifted attention towards possibilities for sustainable biotechnology. One-carbon (C1) compounds, including methane, methanol, formate and CO, are promising feedstocks for a future bioindustry. CO2 is another interesting feedstock, as it can be transformed, using renewable energy, into other C1 feedstocks. While formaldehyde is not suitable as a feedstock due to its high toxicity, it is a central intermediate in the process of C1 assimilation. This thesis explores formaldehyde metabolism and aims to engineer formaldehyde assimilation in the model organism Escherichia coli for the future C1-based bioindustry.
The first chapter of the thesis aims to establish growth of E. coli on formaldehyde via the most efficient naturally occurring route, the ribulose monophosphate pathway. Linear variants of the pathway were constructed in multiple-gene knockout strains, coupling E. coli growth to the activities of the key enzymes of the pathway. Formaldehyde-dependent growth was achieved in rationally designed strains. In the final strain, the synthetic pathway provides the cell with almost all of its biomass and energy requirements.
In the second chapter, taking advantage of formaldehyde's unique reactivity, its assimilation via condensation with glycine and pyruvate by two promiscuous aldolases was explored. Facilitated by these two reactions, the newly designed homoserine cycle is expected to support higher yields of a wide array of products than its counterparts. By dividing the pathway into segments and coupling them to the growth of dedicated strains, all pathway reactions were demonstrated to be sufficiently active. The work paves the way for the future implementation of a highly efficient route from C1 feedstocks to commodity chemicals.
In the third chapter, the in vivo rate of the spontaneous condensation of formaldehyde with tetrahydrofolate to methylene-tetrahydrofolate was assessed in order to evaluate its applicability as a biotechnological process. Tested within an E. coli strain deleted in essential genes for native methylene-tetrahydrofolate biosynthesis, the reaction was shown to support the production of this essential intermediate. However, only low growth rates were observed, and only at high formaldehyde concentrations. Computational analysis based on the in vivo evidence from this strain deduced the slow rate of this spontaneous reaction, thus ruling out its substantial contribution to growth on C1 feedstocks.
The reactivity of formaldehyde makes it highly toxic. In the last chapter, the formation of thioproline, the condensation product of cysteine and formaldehyde, was confirmed to contribute to this toxicity. Xaa-Pro aminopeptidase (PepP), which is genetically linked to folate metabolism, was shown to hydrolyze thioproline-containing peptides. Deleting pepP increased the strain's sensitivity to formaldehyde, pointing towards the toxicity of thioproline-containing peptides and the importance of their removal. The characterization in this study could be useful in handling this toxic intermediate.
Overall, this thesis identified challenges related to formaldehyde metabolism and provided novel solutions towards a future bioindustry based on sustainable C1 feedstocks in which formaldehyde serves as a key intermediate.
Accurate weather observations are the keystone of many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, and climate studies, as well as of understanding precipitation-driven natural hazards (e.g. floods, landslides, debris flows). Weather radars have been an increasingly popular tool since the 1940s for providing precipitation data with high spatial and temporal resolution at the mesoscale, bridging the gap between synoptic and point-scale observations. Yet, many institutions still struggle to tap the potential of their large archives of reflectivity data, as there is still much to understand about the factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rains are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country’s ground radar network. We had a first look into the dataset from one of the longest-running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012, caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the spatial distribution of rainfall revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% had it not been adjusted by rain gauge observations, a technique that is not only affected by other uncertainties, but which is also not feasible in other regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, i.e. the assessment of bias from the reflectivity of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) mission, are accurate enough to serve as a calibration reference for ground radars over low to mid latitudes (±35 deg for TRMM; ±65 deg for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires careful consideration of differences in measurement geometry and instrument specifications, as well as of temporal coincidence. For this purpose, we apply a 3-D volume matching method, developed by Schwaller and Morris (2011) and extended by Warren et al. (2018), to 5 years' worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Yet, other sources of systematic measurement errors are highly heterogeneous in space, and can either enhance or balance the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging is able to increase the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increasing the precision of the bias estimates.
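To illustrate the quality-weighted averaging step, the following minimal Python/NumPy sketch computes a calibration bias from hypothetical matched SR–GR volumes. Mapping quality as 1 - BBF and the synthetic values are assumptions for illustration only, not the thesis's actual wradlib-based workflow.

```python
import numpy as np

def quality_weighted_bias(gr_dbz, sr_dbz, bbf):
    """Estimate the GR calibration bias as a quality-weighted average of
    matched GR-SR reflectivity differences (in dB).

    gr_dbz, sr_dbz : reflectivity of matched GR and SR volumes (dBZ)
    bbf            : beam blockage fraction per matched volume, in [0, 1]

    Using q = 1 - BBF as the quality weight is an illustrative assumption;
    any monotonically decreasing function of BBF could serve instead.
    """
    gr_dbz, sr_dbz, bbf = map(np.asarray, (gr_dbz, sr_dbz, bbf))
    diff = gr_dbz - sr_dbz                     # positive -> GR reads too hot
    quality = 1.0 - np.clip(bbf, 0.0, 1.0)
    bias = np.sum(quality * diff) / np.sum(quality)
    # weighted standard deviation as a simple precision indicator
    spread = np.sqrt(np.sum(quality * (diff - bias) ** 2) / np.sum(quality))
    return bias, spread

# Example with synthetic matched volumes
bias, spread = quality_weighted_bias(
    gr_dbz=[28.0, 31.5, 25.2], sr_dbz=[30.0, 33.0, 27.0], bbf=[0.0, 0.1, 0.6])
print(f"estimated bias: {bias:+.2f} dB (spread {spread:.2f} dB)")
```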
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength and is therefore more affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias is also established for the Tagaytay radar.
The Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, the Subic radar is largely affected by beam blockage in the overlapping region, but, being an S-band radar, attenuation is considered negligible. These coincidentally independent uncertainty contributions of each radar in the region of overlap provide an ideal environment to experiment with different scenarios of quality filtering when comparing reflectivities from the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. However, combining them multiplicatively resulted in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
The overlap between the two radars and the instances of the SR passing over both radars at the same time allow for verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after bias correction is applied. For cases when all three radars are coincident during a significant rainfall event, the correction of GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars, which had shown incoherent observations before correction. We also show that for cases where adequate SR coverage is unavailable, interpolating the calibration biases using a moving average can be used to correct the GR observations for any point in time, to some extent. By using the interpolated biases to correct GR observations, we demonstrate that bias correction reduces the absolute value of the mean difference in most cases, and therefore improves the consistency between the two ground radars.
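As an illustration of correcting archived GR data between overpasses, the sketch below interpolates and smooths a hypothetical time series of overpass-derived bias estimates with a centered moving average using pandas. The dates, bias values and the 31-day window are assumptions for illustration, not the settings used in the thesis.

```python
import pandas as pd

# Hypothetical calibration bias estimates (dB) from individual SR overpasses
overpasses = pd.Series(
    data=[-2.1, -1.6, -2.4, -1.9],
    index=pd.to_datetime(["2014-01-05", "2014-02-11", "2014-03-02", "2014-04-20"]),
)

# Resample to a daily grid, interpolate in time, then smooth with a
# centered moving average (the 31-day window is an arbitrary choice here).
daily = overpasses.resample("D").mean().interpolate(method="time")
smoothed = daily.rolling(window=31, center=True, min_periods=1).mean()

def bias_at(when):
    """Look up the smoothed calibration bias for an arbitrary time."""
    return smoothed.asof(pd.to_datetime(when))

# Subtracting the GR-SR bias from an archived GR reflectivity corrects it
corrected_dbz = 30.0 - bias_at("2014-03-15")
print(f"corrected reflectivity: {corrected_dbz:.2f} dBZ")
```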
This thesis demonstrates that in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints that there are still other sources of errors that remain unexplored. Nevertheless, the increase in consistency between SR and GR as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) and quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3D matching procedure and the analysis workflows, as well as sample data, are made available in public repositories. Open-source tools such as Python and wradlib are used for all radar data processing in this thesis. This approach towards open science provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
Business process management (BPM) deals with modeling, executing, monitoring, analyzing, and improving business processes. During execution, the process communicates with its environment to get relevant contextual information represented as events. Recent development of big data and the Internet of Things (IoT) enables sources like smart devices and sensors to generate tons of events which can be filtered, grouped, and composed to trigger and drive business processes.
The industry standard Business Process Model and Notation (BPMN) provides several event constructs to capture the interaction possibilities between a process and its environment, e.g., to instantiate a process, to abort an ongoing activity in an exceptional situation, to take decisions based on the information carried by the events, as well as to choose among alternative paths for further process execution. The specifications of such interactions are termed event handling. However, in a distributed setup, the event sources are most often unaware of the status of process execution and therefore an event is produced irrespective of whether the process is ready to consume it. BPMN semantics does not support such scenarios and thus increases the chance of processes being delayed or deadlocked by missing out on event occurrences which might still be relevant.
The work in this thesis reviews the challenges and shortcomings of integrating real-world events into business processes, especially subscription management. The basic integration is achieved with an architecture consisting of a process modeler, a process engine, and an event processing platform. Further, points of subscription and unsubscription along the process execution timeline are defined for different BPMN event constructs. Semantic and temporal dependencies among event subscription, event occurrence, event consumption and event unsubscription are considered. To this end, an event buffer is introduced, with policies for updating the buffer, retrieving the most suitable event for the current process instance, and reusing events, which supports the issuing of early subscriptions.
The Petri net mapping of the event handling model provides our approach with a formal semantics from a business process perspective. Two applications based on this formal foundation are presented to demonstrate the significance of different event handling configurations for correct process execution and the reachability of a process path. Prototype implementations of the approaches show that realizing flexible event handling is feasible with minor extensions of off-the-shelf process engines and event platforms.
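A minimal Python sketch of such a buffered event handling scheme is given below. The class names and the specific update, retrieval and reuse policies are illustrative assumptions rather than the thesis's prototype implementation.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class BufferedEvent:
    payload: Any
    timestamp: float

@dataclass
class EventBuffer:
    """Buffers events that arrive before the process instance is ready.

    The policies below (bounded capacity, latest/earliest retrieval,
    optional reuse) are illustrative choices, not the exact configuration
    space described in the thesis.
    """
    capacity: int = 10
    retrieve: str = "latest"   # or "earliest"
    reuse: bool = False        # if True, a consumed event stays buffered
    _events: deque = field(default_factory=deque)

    def publish(self, event: BufferedEvent) -> None:
        """Update policy: keep at most `capacity` events, drop the oldest."""
        if len(self._events) >= self.capacity:
            self._events.popleft()
        self._events.append(event)

    def consume(self) -> Optional[BufferedEvent]:
        """Retrieval policy: hand the latest or earliest buffered event to the
        process; remove it unless reuse is allowed."""
        if not self._events:
            return None   # nothing buffered yet, the process has to wait
        event = self._events[-1] if self.retrieve == "latest" else self._events[0]
        if not self.reuse:
            self._events.remove(event)
        return event

# Early subscription: the buffer is already filled when the process reaches
# its catching event, so an earlier occurrence is not lost.
buffer = EventBuffer(retrieve="latest")
buffer.publish(BufferedEvent({"temperature": 21.3}, timestamp=1.0))
buffer.publish(BufferedEvent({"temperature": 22.1}, timestamp=2.0))
print(buffer.consume().payload)
```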
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally and discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect; the experimental setup was implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of iron platinum, and a sample consisting of iron platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences. A first exponential recovery slows down with an increasing amount of energy absorbed in the system until an approximately linear time dependence is observed; this is followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence the first recovery is slowed down and can be described as a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery. From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements, we construct the transient magnetisation versus temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve in the continuous transition metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular iron platinum sample the M(Tp) curves for different fluences coincide, i.e. the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
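As an illustration of the two-stage recovery described above, the following sketch fits a hypothetical bi-exponential model M(t) = M_inf - A1*exp(-t/tau1) - A2*exp(-t/tau2) to synthetic Kerr data with SciPy. The functional form, parameter values and noise level are assumptions for illustration only, not the analysis performed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential_recovery(t, m_inf, a1, tau1, a2, tau2):
    """Two-stage recovery: M(t) = M_inf - A1*exp(-t/tau1) - A2*exp(-t/tau2)."""
    return m_inf - a1 * np.exp(-t / tau1) - a2 * np.exp(-t / tau2)

# Synthetic normalized Kerr signal over time (picoseconds); values are made up.
t = np.linspace(1, 500, 200)
m_meas = biexponential_recovery(t, 1.0, 0.3, 15.0, 0.2, 200.0)
m_meas += np.random.default_rng(0).normal(scale=0.005, size=t.size)

popt, _ = curve_fit(biexponential_recovery, t, m_meas,
                    p0=(1.0, 0.3, 10.0, 0.2, 100.0))
print("fitted time constants tau1, tau2 (ps):", popt[2], popt[4])
```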
Bildungsort Familie
(2019)
In educational and family research, the intergenerational transmission of education within the family is addressed mainly from the perspective of the school success of the younger generation. How exactly education-related transfer processes take place within the family, however, has remained largely unexplored in the German research landscape. This is where this qualitative study comes in. The aim of this work is to examine education-related transfer processes between the grandparent, parent and grandchild generations within Russian three-generation families that emigrated from the former Soviet Union to Berlin after 1989. Behind these transfer processes lie, in Bourdieu's sense, conscious and unconscious educational strategies of the interviewed family members. Within this study, two late-resettler (Spätaussiedler) families, the Hoffmann and Popow families, and two Russian-Jewish families, the Rosenthal and Buchbinder families, were interviewed. Group discussions were conducted with the members of the four three-generation families studied, as well as guided individual interviews with one representative of each generation. The data collection phase took place in Berlin between 2010 and 2012. The empirical material obtained in this way was analysed using Bohnsack's documentary method. This made it possible to capture and reconstruct the implicit self-evidence with which education, following Bourdieu, is habitually enacted in families. The study offers a habitus-theoretical interpretation of the Russian three-generation families and the corresponding field analysis following Bourdieu. In this context, the social space of the families studied in the receiving society was reconstructed with respect to their horizon of comparison in the society of origin. Furthermore, the educational transfer was examined against the experiential background of each family, and a typology was developed on this basis.
This study yielded new insights into the previously unexplored field of educational transfer in Russian three-generation families in Berlin. A key finding is that applying Bourdieu's class theory can be productive even for groups that were socialised in a socialist society and emigrated to a capitalist-oriented society. Another central result is that in two of the four families studied, migration influenced the intergenerational educational transfer. In this context, the Rosenthal family exhibits a 'split' habitus as a result of migration. This is because, when planning the granddaughter's occupation in Berlin, the family oriented itself towards the practical and the necessary. While the conscious educational strategy of the grandparent and parent generations for the grandchild generation in the receiving country can be assigned to the habitus of necessity that Bourdieu ascribes to the working class, the leisure behaviour of the Rosenthal family corresponds to the habitus of distinction typical of the ruling class. A further finding is that, in contrast to the Rosenthal granddaughter, a so-called discrepancy of spheres was reconstructed for the Popow granddaughter. In the outer sphere of school, the Popow granddaughter is left to her own devices, since the grandparent and parent generations have little knowledge of the German school system. On the one hand, she distances herself from her family (inner sphere) and from German school dropouts (outer sphere); on the other hand, in her attempt at upward social mobility she orients herself towards Russian-speaking peers attending the upper secondary level of the Gymnasium (third sphere). For the Popow granddaughter, it is therefore the peer group, not the family, that functions as the central site of education. It should be noted that the intergenerational educational transfer was influenced by migration in both a Russian-Jewish family and a late-resettler family. While the Rosenthal family belonged to the intelligentsia in the society of origin, the Popow family belonged to the working class. It follows that the intergenerational educational transfer of the families studied proceeds independently both of late-resettler or quota-refugee status and of the social status specific to the place of origin. Accordingly, it can be concluded that within this study migration is a central factor for the intergenerational educational transfer.
In the dissertation entitled "Eine Hypothese über die Grundlagen von Moral und einige Implikationen" ("A hypothesis on the foundations of morality and some implications"), the author attempts to work out the anthropological premises of moral action. A hypothesis is put forward and elaborated which claims that moral action only becomes intelligible if the agent, first, possesses the capacity of imagination, second, can draw on experiences (by means of memory), and third, has interacted and continues to interact with other persons through conversation. Only on the basis of these three foundations of morality can those capacities develop which must be regarded as preconditions of moral action: self-consciousness, freedom, the development of a sense of 'we', the genesis of a moral ideal, and the ability to orient one's decisions and actions towards this ideal. In addition, the dissertation discusses some implications of this hypothesis at the individual and interpersonal level.
For half a century, cytometry has been a major scientific discipline in the field of cytomics, the study of systems biology at the single-cell level. It enables the investigation of physiological processes, functional characteristics and rare events by analysing multiple protein parameters on an individual cell basis. In the last decade, mass cytometry has been established, which increased the number of proteins measured in parallel to up to 50. This has shifted the analysis strategy from conventional consecutive manual gates towards multi-dimensional data processing. Novel algorithms have been developed to tackle these high-dimensional protein combinations in the data. They are mainly based on clustering or non-linear dimension reduction techniques, or both, often combined with an upstream downsampling procedure. However, these tools have obstacles either in comprehensible interpretability, reproducibility, computational complexity or in comparability between samples and groups.
To address this bottleneck, a reproducible, semi-automated cytometric data mining workflow, PRI (pattern recognition of immune cells), is proposed, which combines three main steps: i) data preparation and storage; ii) bin-based combinatorial variable engineering of three protein markers, the so-called triploTs, and subsequent sectioning of these triploTs into four parts; and iii) deployment of a data-driven supervised learning algorithm, cross-validated elastic-net regularized logistic regression, with these triploT sections as input variables. As a result, the variables selected by the models, which potentially have discriminative value, are ranked by their prevalence. The purpose is to significantly facilitate the identification of meaningful subpopulations that best distinguish between two groups. The proposed workflow PRI is exemplified on a recently published public mass cytometry data set. The authors found a T cell subpopulation which discriminates between effective and ineffective treatment of breast carcinomas in mice. With PRI, that subpopulation was not only validated, but was further narrowed down to a particular Th1 cell population. Moreover, additional insights into combinatorial protein expression are revealed in a traceable manner. An essential element of the workflow is the reproducible variable engineering. These variables serve as the basis for a clearly interpretable visualization, for a structured variable exploration and as input layers in neural network constructs.
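The final modelling step, cross-validated elastic-net regularized logistic regression on engineered bin variables, can be sketched with scikit-learn as below. The synthetic input matrix stands in for the triploT section variables, and the grid of regularization settings is an assumption for illustration, not the configuration used in PRI.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# X: one row per sample, one column per triploT section (engineered bin variable);
# y: group label (e.g. effective vs. ineffective treatment). Synthetic stand-ins.
rng = np.random.default_rng(42)
X = rng.normal(size=(40, 200))
y = rng.integers(0, 2, size=40)

# Cross-validated elastic-net regularized logistic regression, as in step iii).
model = LogisticRegressionCV(
    penalty="elasticnet",
    solver="saga",              # the scikit-learn solver supporting elastic net
    l1_ratios=[0.2, 0.5, 0.8],  # grid over the L1/L2 mixing parameter
    Cs=5,
    cv=5,
    max_iter=5000,
)
model.fit(X, y)

# Variables with non-zero coefficients are candidates for discriminative
# triploT sections; ranking them across repeated runs yields their prevalence.
selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} candidate variables selected")
```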
PRI facilitates the determination of marker levels in a semi-continuous manner. Jointly with the combinatorial display, it allows a straightforward observation of correlating patterns and thus of the dominantly expressed markers and cell hierarchies. Furthermore, it enables the identification and complex characterization of discriminating subpopulations due to its reproducible and pseudo-multi-parametric pattern presentation. This supports its applicability as a tool for unbiased investigations of cell subsets within multi-dimensional cytometric data sets.
While the shirt-sleeved reporter merely writes for the day, the literary author creates texts for eternity. Two authors who are successful in both fields deliberately break with this cliché, handed down over centuries: Joseph Roth and Tom Wolfe, journalists and literary writers in one person, question the difference in value commonly drawn between the two fields. Fanny Opitz traces the discursive environments in which the debate on the achievements of journalism and literature was widely conducted: in the Germany of the 1920s in the context of the broader cultural current of Neue Sachlichkeit (New Objectivity), and in the American New Journalism of the 1960s and 1970s.
Magmatic-hydrothermal fluids are responsible for numerous mineralization types, including porphyry copper and granite-related tin-tungsten (Sn-W) deposits. Ore formation depends on various factors, including the pressure and temperature regime of the intrusions, the chemical composition of the magma and hydrothermal fluids, and fluid-rock interaction during the ascent. Fluid inclusions have the potential to provide direct information on the temperature, salinity, pressure and chemical composition of fluids responsible for ore formation. Numerical modeling allows the parametrization of pluton features that cannot be analyzed directly via geological observations.
Microthermometry of fluid inclusions from the Zinnwald Sn-W deposit, Erzgebirge, Germany / Czech Republic, provides evidence that the greisen mineralization is associated with a low-salinity (2-10 wt.% NaCl eq.) fluid with homogenization temperatures between 350°C and 400°C. Quartzes from numerous veins host inclusions with the same temperatures and salinities, whereas cassiterite- and wolframite-hosted assemblages show slightly lower temperatures (around 350°C) and higher salinities (ca. 15 wt.% NaCl eq.). Further, rare quartz samples contained boiling assemblages consisting of coexisting brine and vapor phases. The formation of ore minerals within the greisen is driven by invasive fluid-rock interaction, resulting in the loss of complexing agents (Cl-) and leading to precipitation of cassiterite. The fluid inclusion record in the veins suggests boiling as the main driver of cassiterite and wolframite mineralization. Ore and coexisting gangue minerals host different types of fluid inclusions, and the onset of boiling is preserved solely in the ore minerals, emphasizing the importance of microthermometry in ore minerals. Further, the study indicates that boiling as a precipitation mechanism can only occur in mineralization related to shallow intrusions, whereas deeper plutons prevent the fluid from boiling and can therefore form tungsten mineralization in the distal regions.
The tin mineralization in the Hämmerlein deposit, Erzgebirge, Germany, occurs within a skarn horizon and the underlying schist. Cassiterite within the skarn contains highly saline (30-50 wt% NaCl eq.) fluid inclusions with homogenization temperatures up to 500°C, whereas cassiterites from the schist and additional greisen samples contain inclusions of lower salinity (~5 wt% NaCl eq.) and temperature (between 350 and 400°C). Inclusions in the gangue minerals (quartz, fluorite) preserve homogenization temperatures below 350°C, and sphalerite showed the lowest homogenization temperatures (ca. 200°C), whereas all minerals (cassiterite from schist and greisen, gangue minerals and sphalerite) show similar salinity ranges (2-5 wt% NaCl eq.). Similar trace element contents and linear trends in the chemistry of the inclusions suggest a common source fluid. The inclusion record in the Hämmerlein deposit documents an early exsolution of hot brines from the underlying granite, which is responsible for the mineralization hosted by the skarn. Cassiterites in schist and greisen formed mainly through fluid-rock interaction at lower temperatures. The low-temperature inclusions documented in the sphalerite mineralization, as well as their generally low trace element contents in comparison to the other minerals, suggest that their formation was induced by mixing with meteoric fluids.
Numerical simulations of magma chambers and the overlying copper distribution document the importance of incremental growth by sills. We analyzed the cooling behavior at variable injection intervals as well as sill thicknesses. The models suggest that magma accumulation requires volumetric injection rates of at least 4 × 10^-4 km³/yr. These injection rates are further needed to form a stable magmatic-hydrothermal fluid plume above the magma chamber to ensure constant copper precipitation and enrichment within a confined location, in order to form high-grade ore shells within a narrow geological timeframe between 50 and 100 kyr, as suggested for porphyry copper deposits. The highest copper enrichment can be found in regions with steep temperature gradients, typical of regions where the magmatic-hydrothermal fluid meets the cooler ambient fluids.
Since 1980, Iraq has passed through various wars and conflicts, including the Iraq-Iran war, Saddam Hussein's Anfal and Halabja campaigns against the Kurds, the killing campaigns against the Shia in 1986, Saddam Hussein's invasion of Kuwait in August 1990, the Gulf war of 1990, the Iraq war of 2003 and the fall of Saddam, the conflicts and chaos over the transfer of power after Saddam's death, and the war against ISIS. All these wars left severe impacts on most households in Iraq, on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural and religious sectors. The social structure, norms and attitudes have been intensely affected. Many women, and divorced women in particular, found themselves facing difficult social as well as economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Considering that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was regarded as appropriate to develop a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in Sulaimani city in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological framework of the research, and the main data collection method was individual intensive narrative interviews with divorced women.
Women in general, and divorced women in particular, live in a patriarchal society in Iraqi Kurdistan that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in these circumstances and the forms of social insecurity they experience. The research focuses on social institutions, from the family as a highly significant institution for women, to the governmental and non-governmental institutions working to support women, as well as on coping strategies. The main argument is that the family plays an ambivalent role in divorced women's lives: on the one hand, families are revealed to be an essential source of security for most respondents; on the other hand, families also pose many threats and place many restrictions on these women. This argument is supported by what Suad Joseph calls "the paradox of support and suppression". Another important finding is that state institutions (laws, constitutions, and the offices for combating violence against women and the family) support women to some extent and offer them protection from insecurities, but it is clear that the existence of laws does not stop violence against women in Iraqi Kurdistan. As Pateman explains, this is because the law, or the contract, is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in influencing the rule of law.
It is noteworthy that the analysis of the interviews showed that, although divorced women live with insecurities and face difficulties, most of the respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining, and sometimes compromising or resisting. Different theories are used to explain these coping strategies, such as bargaining with patriarchy: Kandiyoti states that women living under certain constraints struggle to find ways and strategies to improve their situations. The research findings also reveal that the Western liberal feminist view of agency is limited, which agrees with Saba Mahmood and her account of Muslim women's agency. For my respondents, who are divorced women, agency reveals itself in different ways: in resisting or compromising with, or even obeying, the power of male relatives and the normative system in the society. Agency also explains the behavior of women contacting formal state institutions, such as the police or the offices for combating violence against women and the family, in cases of violence.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm, whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention. Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units (EDUs) of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and a Recursive Dirichlet Process.
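For readers unfamiliar with the lexicon-based paradigm mentioned above, here is a minimal, purely illustrative message-level polarity scorer with naive negation handling. The tiny German lexicon and the scoring rules are assumptions for illustration and bear no relation to the lexicons or classifiers developed in the thesis.

```python
# A minimal lexicon-based polarity scorer with simple negation handling.
POLARITY = {"gut": 1.0, "super": 1.0, "schlecht": -1.0, "schlimm": -1.0}
NEGATORS = {"nicht", "kein", "keine"}

def message_polarity(tokens):
    score, negate = 0.0, False
    for tok in (t.lower() for t in tokens):
        if tok in NEGATORS:
            negate = True            # flip the polarity of the next lexicon hit
            continue
        if tok in POLARITY:
            score += -POLARITY[tok] if negate else POLARITY[tok]
        negate = False
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

print(message_polarity("Der Akku ist nicht gut".split()))   # -> negative
print(message_polarity("Der Service war super".split()))    # -> positive
```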
Academic freedom is a fundamental right whose meaning and interpretation, in the course of reforms of the higher education system, repeatedly give rise to discussion not only in the judiciary but also within academia itself, as in the course of the introduction of so-called quality management of studying and teaching at German universities. This dissertation presents the results of an empirical study that contributes to this discussion with a sociological examination of quality management at different universities.
Based on the premise that the course and consequences of an organisational innovation can only be understood if the organisation members' everyday handling of the new structures and processes is included in the analysis, the study starts from the question of how actors at German universities use their organisations' quality management systems. The qualitative content analysis of 26 guided interviews with vice-rectors, quality management staff, and deans of studies at nine universities shows that the strategies of the actor groups at the universities, in interplay with structural aspects, give rise to different dynamics with implications for the freedom of teaching: while quality management supports the autonomy of teaching staff at some universities, at others both autonomy and responsibility for studying and teaching are the subject of ongoing conflicts that also involve quality management.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for the implementation and automation of processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
However, all of those notions only consider the control-flow structure of a process model. This poses a problem given that, with the recent release and the ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by respective decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different models: process models and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models, but additionally take into account complementary decision models. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are a standardized means of DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
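To illustrate what determining a decision table's possible output values can look like, the sketch below uses a simple rule-based representation. The table content, the first-hit evaluation and the class names are assumptions for illustration, not the thesis's translation of DMN decision tables.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the table inputs
    output: Any

@dataclass
class DecisionTable:
    rules: list

    def possible_outputs(self) -> set:
        """All output values the table can produce (useful, e.g., for checking
        that every output is handled by an outgoing sequence flow)."""
        return {rule.output for rule in self.rules}

    def evaluate(self, inputs: dict):
        # First-hit policy assumed: the first matching rule wins.
        for rule in self.rules:
            if rule.condition(inputs):
                return rule.output
        return None                      # undefined input combination

# Illustrative table deciding how to handle an insurance claim
claim_table = DecisionTable(rules=[
    Rule(lambda d: d["amount"] <= 1000, "auto-approve"),
    Rule(lambda d: d["amount"] > 1000 and d["risk"] == "low", "manual-review"),
    Rule(lambda d: d["risk"] == "high", "reject"),
])
print(claim_table.possible_outputs())
print(claim_table.evaluate({"amount": 500, "risk": "low"}))
```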
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. Then, we conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings from this thesis revealed that (1) German-speaking adults show a strong weighting of prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weight these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-old infants relied more strongly on prosodic cues, while nine-month-olds did not show any preference for either of the cues in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities in the signal, which enables them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody compared to statistical learning in early word segmentation.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails the incorporation of measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo techniques have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and computational costs arising from resampling.
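For reference, the following is a minimal bootstrap particle filter, the baseline that feedback particle filters are later compared against, for a scalar Ornstein-Uhlenbeck-type model observed with Gaussian noise. The model, noise levels and ensemble size are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(y_obs, n_particles=500, dt=0.01,
                              sigma_model=0.3, sigma_obs=0.2):
    """Bootstrap particle filter for the scalar SDE dx = -x dt + sigma dW,
    observed with additive Gaussian noise at each time step."""
    particles = rng.normal(size=n_particles)
    estimates = []
    for y in y_obs:
        # Propagate particles through an Euler-Maruyama model step
        particles += -particles * dt + sigma_model * np.sqrt(dt) * rng.normal(size=n_particles)
        # Weight particles by the observation likelihood
        weights = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling -- the step that feedback particle filters avoid
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Synthetic truth and observations
truth = np.cumsum(rng.normal(scale=0.05, size=100))
y_obs = truth + rng.normal(scale=0.2, size=truth.size)
print(bootstrap_particle_filter(y_obs)[:5])
```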
The goal of this thesis is to: i) review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; ii) study feedback particle filters as a way of avoiding resampling in particle filters; iii) study joint state and parameter estimation in time-continuous settings; and iv) apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô integrals and stochastic partial differential equations and those of Stratonovich is introduced in anticipation of feedback particle filters. With these ideas and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations, whose velocity is spatially varying. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
The fabrication of 1D nanostrands composed of stimuli-responsive microgels has been shown in this work. Microgels are well-known materials able to respond to various stimuli from the outer environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current).
This work aimed to create flexible nano-filaments which are capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen due to their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method which enables the fabrication of nanostructures in a reproducible manner and with a high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was shown using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si-wafers and glass slides). It was shown that the fabrication of nanoarrays was feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Towards the fabrication of 1D microgel strands, interparticle connectivity was sought. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, it was found that these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, with VCL/AAEM-based microgels it was possible both to UV cross-link them, based on the keto-enol tautomerism of the AAEM copolymer, and to detach them from the substrate due to the lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It has also been shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach by incorporating cyclodextrin comonomers into a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After successfully testing the cross-linking capability in solution, the cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, once the single arrays came into contact with each other, they agglomerated. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies were tried (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) but were unsuccessful. On further consideration, entropy effects were taken into account, which favor the release of the complexed diazobenzene linker and lead to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results regarding a less pronounced agglomeration (Figure 77). Thus, this approach could be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in a zigzag formation provided the best compromise in terms of stability in dispersion (see Figure 44a and Figure 51) while maintaining sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction after a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid cell setup. It could be observed that the strands required a higher load force than single microgels to be detached from the surface. However, it was not possible to detach the strands in a controlled manner with the AFM; attempts resulted in the complete removal of single microgel particles or in tearing the strands off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, coating the substrate surface with a repulsive polymer film was found to be beneficial in hindering adsorption of the strands. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, with these methods the detection of the anisotropic stimuli-responsive contraction of the freely moving microgel strands was not possible. To summarize, the flexibility of the microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, having few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can be conducted in the future to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coating). This measure would make the discussed alignment methods more versatile. As shown herein, the assembly methods enable a versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles, Figure 14) to expand the possibilities of microgel alignment and to precisely control aspect ratios (e.g. microgel rods with homogeneous size distributions).
The author conceives of modality as a semantic-functional category, independent of the elements that express it and of the level of grammatical structure to which they belong. To define modality, he also takes into account its structural characteristics as well as phenomena at higher and lower cognitive levels. This makes it possible to take a critical look at previous research, to develop a theoretical framework reconciling the different approaches, and to analyse systematically the expressions of modality in French (modal verbs and adverbs, verbal moods, etc.). The interplay between several modal elements in the same utterance can trigger three types of interaction and produces particularly complex modal phenomena.
This work focused on analyses characterizing the periplasmic aldehyde oxidoreductase from E. coli. Kinetic investigations with ferricyanide as the electron acceptor showed a higher activity for this enzyme under anaerobic than under aerobic conditions. The hypothesis that PaoABC is able to pass electrons on to molecular oxygen was confirmed. For the turnover of aromatic aldehydes with molecular oxygen, an optimum of pH 6.0 was determined. This is in contrast to the reaction with ferricyanide, for which a pH optimum of 4.0 was shown. The reaction of PaoABC with molecular oxygen generates hydrogen peroxide, whereas the production of superoxide could not be observed. This work also established that aerobic conditions influence the induction of PaoABC expression.
In connection with the production of ROS by PaoABC, the function of a [4Fe4S] cluster recently identified within electron transfer distance of the FAD was investigated. Exchanging the cysteines responsible for binding the cluster led to instability of the protein variants, so no further investigations were carried out on them. At least a structure-stabilizing role of the [4Fe4S] cluster is therefore assumed. To further investigate the function of this cluster, an arginine located between the FAD and the [4Fe4S] cluster was exchanged for an alanine. This protein variant showed a reduced reaction rate compared with the wild type. Here, too, the formation of superoxide could not be observed. Nevertheless, the assumption that this cluster supports an electron-collecting mechanism that prevents radical formation cannot be ruled out. Since further charged and aromatic amino acids are located in the vicinity of the arginine, these could take over the necessary electron transfer.
In addition to identifying a physiological electron acceptor and its influence on PaoABC expression, this work also shows that the chaperones PaoD and MocA bind jointly to PaoABC during maturation of the MCD cofactor. An arginine was described in the active site of PaoABC which, owing to its close proximity to the MCD cofactor and to the glutamate (PaoABC-EC692), is involved in the process of substrate binding. In connection with the exchange of this arginine for a histidine or a lysine, the enzyme specificity and the influence of physiological conditions, such as pH and ionic strength, on the reaction of the enzyme were investigated. Compared with the wild type, the variants showed a lower affinity for the substrate with molecular oxygen but also a higher reaction rate. Unstable behavior was found across the entire pH range, especially for the histidine variant. The reason for this was revealed by solving the structure of the histidine variant: exchanging the amino acid removes the stabilizing effect of the arginine's delocalized electrons and leads to a conformational change in the active site.
In addition to the reaction of PaoABC with a variety of aromatic aldehydes, the conversion of salicylaldehyde to salicylic acid by PaoABC could also be determined in a colorimetric reaction. By excluding molecular oxygen as the terminal electron acceptor in an enzyme-coupled reaction, electrons were transferred to ferrocene carboxylic acid. The combination of both methods allowed ferrocene derivatives to be used to generate an enzyme-coupled reaction with PaoABC.
The investigations of PaoABC show that the variety of reactions catalyzed by the enzyme offers further possibilities for the enzymatic monitoring of biocatalytic processes.
Parlamentarier als Beruf
(2019)
Within the institutional framework, political professionalisation has led to the social figure of the career politician. This development is derived in the theoretical part of the thesis and substantiated with extensive data in the empirical part. Remarkably, this has not been accompanied by substantial changes in the recruitment patterns and career paths of members of parliament. Rather, the career types worked out by Dietrich Herzog still prove valid today and required only moderate adjustment. This reveals an astonishing continuity in the formation of political elites. The highly consolidated institutional framework in Germany, which determines the access to and attractiveness of political careers, has evidently also led to a stabilisation of the career types.
The male waist is an area so far neglected in research, yet one from which essential impulses for the development of men's fashion emanated. At the centre of Julia Burde's book is the male fashion body, changing with fashion, as a discourse of tailoring in the 18th and 19th centuries. Burde shows how men's fashion developed from a body that was first arched in a sickle shape and then narrowly waisted, modelled out of wadding and fabric, to a modern body in straight-cut clothing, from whose anatomy the cutting pattern has become detached. Contemporary sources make clear how men's tailors constructed bodies through cutting and influenced fashion by the targeted launching of fashion reports.
Selenite pseudomorphs
(2019)
Pillars of Salt
(2019)
During lower sea levels in glacial periods, deep permafrost formed on large continental shelf areas of the Arctic Ocean. Subsequent sea level rise and coastal erosion created subsea permafrost, which generally degrades after inundation under the influence of a complex suite of marine, near-shore processes. Global warming is especially pronounced in the Arctic, and will increase the transition to and the degradation of subsea permafrost, with implications for atmospheric climate forcing, offshore infrastructure, and aquatic ecosystems.
This thesis combines new geophysical, borehole observational and modelling approaches to enhance our understanding of subsea permafrost dynamics. Three specific areas for advancement were identified: (I) the sparsity of observational data, (II) the lack of salt infiltration mechanisms in models, and (III) the poor understanding of regional differences in key driving parameters. This study tested the combination of spectral ratios of the ambient vibration seismic wavefield with shear wave velocities estimated from seismic interferometry analysis for estimating the thickness of the unfrozen sediment overlying the ice-bonded permafrost offshore. Mesoscale numerical calculations (10^1 to 10^2 m, thousands of years) were employed to develop and solve the coupled heat diffusion and salt transport equations, including phase change effects. Model soil parameters were constrained by borehole data, and the impact of a variety of influences during the transgression was tested in modelling studies. In addition, two inversion schemes (particle swarm optimization and a least-squares method) were used to reconstruct temperature histories for the past 200-300 years in the Laptev Sea region in Siberia from two permafrost borehole temperature records. These data were evaluated against larger-scale reconstructions from the region.
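To make the modelling component more concrete, the sketch below solves a bare 1-D heat diffusion problem for sediment after inundation with an explicit finite-difference scheme. Salt transport and the latent-heat effects of phase change, which are central to the thesis's model, are deliberately omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

# 1-D heat diffusion into sediment after inundation (explicit scheme).
depth, n_cells = 100.0, 100             # domain depth (m), number of grid cells
dz = depth / n_cells
kappa = 1.0e-6                           # thermal diffusivity (m^2/s), assumed
dt = 0.4 * dz**2 / kappa                 # time step within the explicit stability limit
years = 500
steps = int(years * 3.15e7 / dt)

T = np.full(n_cells, -10.0)              # initial permafrost temperature (deg C)
T_bottom_water = -1.0                    # seabed boundary after inundation (deg C)

for _ in range(steps):
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0] = T_bottom_water            # Dirichlet condition at the seabed
    T_new[-1] = T_new[-2]                # zero-flux condition at depth
    T = T_new

print("temperature at 20 m depth after %d years: %.2f degC" % (years, T[int(20 / dz)]))
```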
It was found (I) that peaks in spectral ratios modelled for three-layer, one-dimensional systems corresponded with thaw depths. Around Muostakh Island in the central Laptev Sea, seismic receivers were deployed on the seabed. Derived depths of the ice-bonded permafrost table were between 3.7 and 20.7 m ± 15 %, increasing with distance from the coast. (II) Temperatures modelled during the transition to subsea permafrost resembled isothermal conditions after about 2000 years of inundation at Cape Mamontov Klyk, consistent with observations from offshore boreholes. Stratigraphic scenarios showed that salt distribution and infiltration had a large impact on the ice saturation in the sediments. Three key factors were identified that, when changed, shifted the modelled permafrost thaw depth most strongly: bottom water temperatures, shoreline retreat rate and initial temperature before inundation. Salt transport based on diffusion and contributions from arbitrary density-driven mechanisms only accounted for about 50 % of observed thaw depths at offshore sites hundreds to thousands of years after inundation. This bias was found consistently at all three sites in the Laptev Sea region. (III) In the temperature reconstructions, distinct differences in the local temperature histories between the western Laptev Sea and the Lena Delta sites were recognized, such as a transition to warmer temperatures a century later in the western Laptev Sea as well as a peak in warming three decades later. The local permafrost surface temperature history at Sardakh Island in the Lena Delta was reminiscent of the circum-Arctic regional average trends. However, Mamontov Klyk in the western Laptev Sea was consistent with Arctic trends only in the most recent decade and was more similar to northern hemispheric mean trends. Both sites were consistent with a rapid recent synoptic warming.
In conclusion, the consistency between modelled response, expected permafrost distribution, and observational data suggests that the passive seismic method is promising for determining the thickness of unfrozen sediment on the Arctic continental shelf. The quantified gap between currently modelled and observed thaw depths means that the impact of degradation on climate forcing, ecosystems, and infrastructure is larger than current models predict. This discrepancy points to further mechanisms of salt penetration and thaw that have not yet been considered – either pre-inundation, post-inundation, or both. In addition, any meaningful modelling of subsea permafrost would have to constrain the identified key factors and their regional differences well. The shallow permafrost boreholes provide hitherto missing, well-resolved short-scale temperature information for the coastal permafrost tundra of the Arctic. As local deviations from circum-Arctic reconstructions, such as later warming and higher warming magnitude, were shown to exist in this region, these results provide a basis for parameterizing local surface temperature records in climate and, in particular, permafrost models. The results of this work bring us one step closer to understanding the full picture of the transition from terrestrial to subsea permafrost.
This study assesses and explains international bureaucracies’ performance and their role as policy advisors and expert authorities from the perspective of domestic stakeholders. International bureaucracies are the secretariats of international organizations; they carry out the organizations’ work, including generating knowledge, providing policy advice and implementing policy programs and projects. Scholars increasingly regard them as governance actors that are able to influence global and domestic policy making. In order to explain this influence, research has mainly focused on international bureaucracies’ formal features and/or staff characteristics. The way in which they are actually perceived by their domestic stakeholders, in particular by national bureaucrats, has not been systematically studied. Yet, this is equally important, given that domestic stakeholders are international bureaucracies’ addressees and are actors that (potentially) make use of international bureaucracies’ policy advice, which can be seen as an indicator of international bureaucracies’ influence. Accordingly, I argue that domestic stakeholders’ assessments can likewise contribute to explaining international bureaucracies’ influence.
The overarching research questions the study addresses are: What are national stakeholders’ perspectives on international bureaucracies, and under which conditions do they consider international bureaucracies’ policy advice? In answering these questions, I focus on three specific organizational features that the literature has considered important for international bureaucracies’ independent influence, namely international bureaucracies’ performance and their role as policy advisors and as expert authorities. These three features are studied separately in three independent articles, which are presented in Part II of this article-based dissertation.
To answer the research questions, I draw on novel data from a global survey among ministry officials of 121 countries. The survey captures ministry officials’ assessments of international bureaucracies’ features and their behavior with respect to international bureaucracies’ policy advice. The overall sample comprises the bureaucracies of nine global and nine regional international organizations in eight thematic areas in the policy fields of agriculture and finance.
The overall finding of this study is that international bureaucracies’ performance and their role as policy advisors and expert authorities, as perceived by ministry officials, are highly context-specific and relational. These features vary not only across international bureaucracies but, much more, intra-organizationally across the different thematic areas that an international bureaucracy addresses, i.e. across different thematic contexts. As regards the relational nature of international bureaucracies’ features, the study generally finds strong variation across the assessments by ministry officials from different countries and across thematic areas. Hence, the findings highlight that it is likewise important to study international bureaucracies through the perspective of their stakeholders and to take account of the different thematic areas and contexts in which international bureaucracies operate.
The study contributes to current research on international bureaucracies in various ways. First, it directly surveys one important type of domestic stakeholder, namely national ministry officials, as to how they evaluate certain aspects of international bureaucracies, instead of inferring these evaluations from structural features, policy documents or assessments by international bureaucracies’ staff. Furthermore, the study empirically tests a range of theoretical hypotheses derived from the literature on international bureaucracies’ influence, as well as related literature. Second, the study advances methods of assessing international bureaucracies through a large-N, cross-national expert survey among ministry officials. A survey of this type of stakeholder and of this scope is – to my knowledge – unprecedented. Yet, as argued above, their perspectives are equally important for assessing and explaining international bureaucracies’ influence. Third, the study adapts common theories of international bureaucracies’ policy influence and expert authority to the assessments by ministry officials. In so doing, it tests hypotheses that are rooted in both rationalist and constructivist accounts and combines perspectives on international bureaucracies from both International Relations and Public Administration. Empirically supporting and challenging these hypotheses further complements the theoretical understanding of the determinants of international bureaucracies’ influence among national bureaucracies from both rationalist and constructivist perspectives.
Overall, this study advances our understanding of international bureaucracies by systematically taking into account ministry officials’ perspectives in order to determine under which conditions international bureaucracies are perceived to perform well and are able to have an effect as policy advisors and expert authorities among national bureaucracies. Thereby, the study helps to specify to what extent international bureaucracies – as global governance actors – are able to permeate domestic governance via ministry officials and, thus, contributes to answering the question of why some international bureaucracies play a greater role and are ultimately able to exert more influence than others.
Force plays a fundamental role in the regulation of biological processes. Cells can sense the mechanical properties of the extracellular matrix (ECM) by applying forces and transmitting mechanical signals. They further use mechanical information for regulating a wide range of cellular functions, including adhesion, migration, proliferation, as well as differentiation and apoptosis. Even though it is well understood that mechanical signals play a crucial role in directing cell fate, surprisingly little is known about the range of forces that define cell-ECM interactions at the molecular level.
Recently, synthetic molecular force sensor (MFS) designs have been established for measuring the molecular forces acting at the cell-ECM interface. MFSs detect the traction forces generated by cells and convert this mechanical input into an optical readout. They are composed of calibrated mechanoresponsive building blocks and are usually equipped with a fluorescence reporter system. To date, many different MFS designs have been introduced and successfully used for measuring forces involved in the adhesion of mammalian cells. These MFSs utilize different molecular building blocks, such as double-stranded deoxyribonucleic acid (dsDNA) molecules, DNA hairpins and synthetic polymers like polyethylene glycol (PEG). However, the currently available MFS designs lack ECM-mimicking properties.
In this work, I introduce a new MFS building block for cell biology applications, derived from the natural ECM. It combines mechanical tunability with the ability to mimic the native cellular microenvironment. Inspired by structural ECM proteins with load-bearing function, this new MFS design utilizes coiled coil (CC)-forming peptides. CCs are involved in structural and mechanical tasks in the cellular microenvironment, and many of the key protein components of the cytoskeleton and the ECM contain CC structures. The well-understood folding motif of CC structures, their straightforward synthesis via solid-phase methods and the many roles CCs play in biological processes have inspired studies using CCs as tunable model systems for protein design and assembly. All these properties make CCs ideal candidates as building blocks for MFSs. In this work, a series of heterodimeric CCs were designed, characterized and further used as molecular building blocks for establishing a novel, next-generation MFS prototype.
A mechanistic molecular understanding of their structural response to mechanical load is essential for revealing the sequence-structure-mechanics relationships of CCs. Here, synthetic heterodimeric CCs of different lengths were loaded in shear geometry and their mechanical response was investigated using a combination of atomic force microscope (AFM)-based single-molecule force spectroscopy (SMFS) and steered molecular dynamics (SMD) simulations. SMFS showed that the rupture forces of short heterodimeric CCs (3-5 heptads) lie in the range of 20-50 pN, depending on CC length, pulling geometry and the applied loading rate (dF/dt). In the SMD simulations, shearing produced an initial rise in force, followed by a force plateau and ultimately strand separation. A detailed structural analysis revealed that the CC response to shear load depends on the loading rate and involves helix uncoiling, uncoiling-assisted sliding in the direction of the applied force and uncoiling-assisted dissociation perpendicular to the force axis.
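As a rough illustration of the reported loading-rate dependence, the following sketch evaluates the most probable rupture force in the standard Bell-Evans picture commonly used to interpret SMFS data; whether and how the thesis applies this particular model is not stated here, and the distance to the transition state and the intrinsic off-rate below are hypothetical values chosen only so that the output falls in the reported 20-50 pN range.

import numpy as np

# Bell-Evans estimate of the most probable rupture force versus loading rate.
# x_beta and k_off are hypothetical, not fitted values from the thesis.
kBT = 4.11e-21          # thermal energy at ~298 K (J)
x_beta = 1.0e-9         # assumed distance to the transition state (m)
k_off = 0.1             # assumed intrinsic off-rate (1/s)

def rupture_force(loading_rate):
    """Most probable rupture force (N) for a loading rate dF/dt (N/s)."""
    return (kBT / x_beta) * np.log(loading_rate * x_beta / (k_off * kBT))

for rate in (1e-10, 1e-9, 1e-8):    # ~0.1, 1 and 10 nN/s, a typical AFM range
    print(f"dF/dt = {rate:.0e} N/s -> F* = {rupture_force(rate) * 1e12:.0f} pN")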
The application potential of these mechanically characterized CCs as building blocks for MFSs was tested in 2D cell culture applications with the goal of determining the threshold force for cell adhesion. Fully calibrated, 4- to 5-heptad long CC motifs (CC-A4B4 and CC-A5B5) were used for functionalizing glass surfaces with MFSs. 3T3 fibroblasts and endothelial cells carrying mutations in a signaling pathway linked to cell adhesion and mechanotransduction processes were used as model systems for time-dependent adhesion experiments. The A5B5-MFS efficiently supported cell attachment to the functionalized surfaces for both cell types, while the A4B4-MFS failed to maintain attachment of 3T3 fibroblasts after the first 2 hours of initial cell adhesion. This difference in cell adhesion behavior demonstrates that the magnitude of cell-ECM forces varies depending on the cell type and further supports the application potential of CCs as mechanoresponsive and tunable molecular building blocks for the development of next-generation protein-based MFSs. This novel CC-based MFS design is expected to provide a powerful new tool for observing cellular mechanosensing processes at the molecular level and to deliver new insights into the mechanisms and forces involved. This MFS design, utilizing mechanically tunable CC building blocks, will not only allow for measuring the molecular forces acting at the cell-ECM interface, but will also yield a new platform for the development of mechanically controlled materials for a large number of biological and medical applications.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material belongs to the class of half-metals and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a specific density of states: majority electrons show a metallic-like structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material well suited as a model-like system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs. We used two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy to the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an optical parametric amplifier (OPA) for generating the laser irradiation in the long-wavelength regime. We tested it at the FEMTOSPEX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics and found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We associate this result with the existence of the energy gap for minority electrons and explain it with Elliott-Yafet scattering events. Additionally, we applied a new probing method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the pure electronic dynamics can be separated from the magnetic dynamics. We used circularly polarized light with the photon energy fixed at the L3 edge of the corresponding elements; the appropriate incidence angle was determined from static measurements. Using this probing method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
Membrane adhesion is a fundamental biological process in which membranes are attached to neighboring membranes or surfaces. Membrane adhesion emerges from a complex interplay between the binding of membrane-anchored receptors/ligands and the membrane properties. In this work, we study membrane adhesion mediated by lipid-anchored saccharides using microsecond-long full-atomistic molecular dynamics simulations. Motivated by neutron scattering experiments on membrane adhesion via lipid-anchored saccharides, we investigate the role of LeX, Lac1, and Lac2 saccharides and membrane fluctuations in membrane adhesion.
We study the binding of saccharides in three different systems: for saccharides in water, for saccharides anchored to essentially planar membranes at fixed separations, and for saccharides anchored to apposing fluctuating membranes. Our simulations of two saccharides in water indicate that the saccharides engage in weak interactions to form dimers. We find that the binding occurs in a continuum of bound states instead of a certain number of well-defined bound structures, which we term "diffuse binding".
The binding of saccharides anchored to essentially planar membranes strongly depends on the separation of the membranes, which is fixed in our simulation system. We show that the binding constants for trans-interactions of two lipid-anchored saccharides monotonically decrease with increasing separation. Saccharides anchored to the same membrane leaflet engage in cis-interactions with binding constants comparable to the trans-binding constants at the smallest membrane separations. The interplay of cis- and trans-binding can be investigated in simulation systems with many lipid-anchored saccharides. For Lac2, our simulation results indicate a positive cooperativity of trans- and cis-binding: the trans-binding constant is enhanced by the cis-interactions. For LeX, in contrast, we observe no cooperativity between trans- and cis-binding. In addition, we determine the forces generated by trans-binding of lipid-anchored saccharides in planar membranes from the binding-induced deviations of the lipid anchors. We find that the forces acting on trans-bound saccharides increase with increasing membrane separation to values of the order of 10 pN.
The binding of saccharides anchored to the fluctuating membranes results from an interplay between the binding properties of the lipid-anchored saccharides and membrane fluctuations. Our simulations, which have the same average separation of the membranes as obtained from the neutron scattering experiments, yield a binding constant larger than in planar membranes with the same separation. This result demonstrates that membrane fluctuations play an important role at average membrane separations which are seemingly too large for effective binding. We further show that the probability distribution of the local separation can be well approximated by a Gaussian distribution. We calculate the relative membrane roughness and show that our results are in good agreement with the roughness values reported from the neutron scattering experiments.
Ministerial administrations are pivotal in the process of defining problems and developing policy solutions due to their technocratic expertise, particularly when this process is applied to climate policy. This innovative book explores how and why policies are changed or continued by employing in-depth studies from a diverse range of EU countries.
Climate Policy in Denmark, Germany, Estonia and Poland works to narrow the research gap surrounding administrative institutions within the field of climate policy change by integrating ideas, discourses and institutions to provide a better understanding of both climate policy and policy change. Differences in approach to democratization and Europeanization between Western and Central Eastern European countries provide rich empirical material for the study of policy formulation. This timely book demonstrates how the substance and formation of policies are shaped by their political and administrative institutional contexts.
Analytical and accessible, this discerning book will be of value to scholars and students of climate policy, public policy and public administration alike. Providing lessons on institutional reform in climate and energy policy, this explorative book will also be of interest to practitioners and policy-makers.
The usage of mobile devices is growing rapidly, with Android being the most prevalent mobile operating system. Thanks to the vast variety of mobile applications, users prefer smartphones over desktops for day-to-day tasks like Internet surfing. Consequently, smartphones store a plethora of sensitive data. This data, together with the high monetary value of smartphones, makes them an attractive target for device and data theft by thieves and malicious applications.
Unfortunately, state-of-the-art anti-theft solutions do not work if they have no active network connection, e.g., if the SIM card was removed from the device. In the majority of these cases, device owners permanently lose their smartphone and, even worse, their personal data.
Apart from physical theft, malevolent applications perform malicious activities to steal sensitive information from smartphones. Recent research has considered static program analysis to detect dangerous data leaks. These analyses work well for data leaks due to inter-component communication, but suffer from shortcomings for inter-app communication with respect to precision, soundness, and scalability.
This thesis focuses on enhancing users' privacy on Android against physical device loss/theft and (un)intentional data leaks. It presents three novel frameworks: (1) ThiefTrap, an anti-theft framework for Android, (2) IIFA, a modular inter-app intent information flow analysis of Android applications, and (3) PIAnalyzer, a precise approach for PendingIntent vulnerability analysis.
ThiefTrap is based on a novel concept of an anti-theft honeypot account that protects the owner's data while preventing a thief from resetting the device.
We implemented the proposed scheme and evaluated it through an empirical user study with 35 participants. In this study, the owner's data could be protected and recovered, and the anti-theft functionality could be performed unnoticed by the thief in all cases.
IIFA proposes a novel approach for Android's inter-component/inter-app communication (ICC/IAC) analysis. Our main contribution is the first fully automatic, sound, and precise ICC/IAC information flow analysis that is scalable for realistic apps thanks to its modularity, which avoids combinatorial explosion: our approach determines communicating apps using short summaries rather than inlining intent calls between components and apps, which would require simultaneously analyzing all apps installed on a device.
We evaluate IIFA in terms of precision and recall, and demonstrate its scalability on a large corpus of real-world apps. IIFA reports 62 problematic ICC-/IAC-related information flows via two or more apps/components.
PIAnalyzer proposes a novel approach to analyze PendingIntent related vulnerabilities. PendingIntents are a powerful and universal feature of Android for inter-component communication. We empirically evaluate PIAnalyzer on a set of 1000 randomly selected applications and find 1358 insecure usages of PendingIntents, including 70 severe vulnerabilities.
German business associations are coming under increasing competitive pressure. More and more associations court the same member clientele and cover similar thematic areas. In addition, a growing number of member companies carry out their own political lobbying. Based on the population ecology approach, which originates in evolutionary biology, this study examines how the associations deal with this competition.
In two-tier secondary school systems in which both tracks lead to the same qualifications – as are found in a growing number of German federal states – the individual school with its specific characteristics moves into the focus of school choice. In one of the first studies of its kind, Anne Jurczok examines which school characteristics parents prefer, which information they use, and which individual schools they choose or reject under the conditions of the two-tier system. Based on expectancy-value considerations and frame selection, the process of choosing an individual school is conceptualized, taking into account the institutional, local and socio-cultural conditions of the choice situation.
This dissertation aims, in general, to justify the application of the dialectical methodology to the field of the philosophy of language and to carry out a systematic treatment of a limited part of the philosophy of language with the help of dialectics. In order to clarify and establish this approach, which is scarcely, if at all, represented in current research, I will first draw on the philosophical reflections of two authors: Hegel and Wittgenstein.
At first glance, Hegel and Wittgenstein are authors who have little in common, apart from the fact that both engaged with philosophy as a discipline and inevitably addressed a shared topic, language, without either a substantive or a methodological connection between them being evident. The first premise of this dissertation, with respect to the history of ideas, is to indicate that Hegel's concept of spirit and Wittgenstein's form of life are two approaches to, and results of, a philosophical effort that jointly accomplish the necessary resolution or overcoming of skeptical argumentation. Indeed, in his Philosophical Investigations Wittgenstein developed an argument that has been labelled the 'paradox of rule-following' and that has been regarded in the secondary literature (chiefly by Kripke) as a kind of skeptical argument. Accordingly, Wittgenstein's theory of language is interpreted either as a resolution of this skepticism or simply as a skeptical text itself (Brandom). The first aim of my dissertation is to show that this paradox, taken as a skeptical argument, has remained incomplete, and that it can be regarded as the first decisive step towards the highest form of the skeptical challenge, the antinomy. A complete skeptical argument would mean that the sole resolution of the paradox, dispositionalism, and the negation of this theory are both provable. Starting from the resolution of the paradox of rule-following presented in the Philosophical Investigations, I will therefore attempt to establish the completion of an antinomy of the concept of normativity with respect to the rules of language, analogous to the cosmological antinomy developed by Kant (thesis cum antithesis). The second aim of my dissertation is consequently to show, first, that the Kantian resolution of the antinomy is ineffective with respect to the antinomy of normativity; second, that this antinomy entails a necessary confrontation with a radical skepticism and that we are logically compelled not merely to redefine some theory within the philosophy of language, but to question our methodology itself – that is, the application of the usual norms of rationality – in a fundamentally deeper way; and third, that Hegel's dialectic presents itself as the methodological resolution of such a radical skeptical challenge, or indeed of an antinomy as such. It is in the course of this methodological revision that Hegel's dialectic is taken up.
Nevertheless, the purpose of this dissertation is not limited to presenting an interpretation of Hegel's dialectic or an overcoming of Wittgenstein's form of life; rather, it is to take up the problems and principles underlying the concept of the form of life and of theoretical spirit and, by means of Hegel's dialectic, to move beyond them in order to better understand the place and function of language. This work is undertaken, rather, within the framework of a scientific project; in other words, it uses the methodological results of two philosophical authors in order to present a scientific programme. The claim of this work is accordingly to gain new knowledge about language by drawing on Hegel's dialectic, whereby the two contradictory moments of cognition – the normativity effected through consciousness and that effected through dispositions – are constructively combined. The concrete gain of this methodology is thus the possibility of establishing a philosophy of language as a system, a system that makes it possible to grasp linguistic phenomena in all their aspects in a coherent manner. In terms of content, this programme aims to derive dialectically the general stage of the concept of language as a moment of the concept of spirit, i.e. to determine the proper sense of language. A complete treatment of the philosophy of language with the help of dialectics could not, however, be carried out here; the scope of the linguistic categories derived by means of dialectics is limited to the doctrine of the imagination, which encompasses the doctrine of general semiology and of grammar.
Light-switchable proteins are being used increasingly to understand and manipulate complex molecular systems. The success of this approach has fueled the development of tailored photo-switchable proteins, to enable targeted molecular events to be studied using light. The development of novel photo-switchable tools has to date largely relied on rational design. Complementing this approach with directed evolution would be expected to facilitate these efforts. Directed evolution, however, has been relatively infrequently used to develop photo-switchable proteins due to the challenge presented by high-throughput evaluation of switchable protein activity. This thesis describes the development of two genetic circuits that can be used to evaluate libraries of switchable proteins, enabling optimization of both the on- and off-states. A screening system is described, which permits detection of DNA-binding activity based on conditional expression of a fluorescent protein. In addition, a tunable selection system is presented, which allows for the targeted selection of protein-protein interactions of a desired affinity range. This thesis additionally describes the development and characterization of a synthetic protein that was designed to investigate chromophore reconstitution in photoactive yellow protein (PYP), a promising scaffold for engineering photo-controlled protein tools.
When working with literary texts, contexts naturally play a prominent role in understanding. Teachers are confronted with a variety of problems here: Which contexts are sensible and productive at which grade levels? When does enriching a text with contexts overload the process of understanding, and when is it indispensable? How can learners be brought to use contexts not merely schematically but flexibly for understanding? How can the demands of educational standards and curricula for intensive contextualization be implemented concretely in the classroom? This volume gives application-oriented answers to these questions on three levels: first, the concept of context is systematized via an analytical approach (covering the context domains of genre, literary history, author biography and intertextuality); second, an empirical study demonstrates the effectiveness of reflective context use for text comprehension; and third, tried-and-tested models for concrete classroom use are presented and discussed. In this way, contexts can be made available as a productive tool for teaching.
This book provides an overview of findings from research on instructional effectiveness in the subject of physical education and addresses the determinants and consequences of students' performance in PE lessons. Starting from the supply-use model (after Helmke, 2010), Sara Seiler analyses distal and proximal characteristics of the teaching-learning process and their influence on students' learning outcomes in a multilevel model. Rather than reducing learning outcomes to cognitive achievement, motor, motivational, volitional and personal aspects are discussed as learning outcomes in physical education.
Skarn deposits are found on every continent and were formed at different times from the Precambrian to the Tertiary. Typically, the formation of a skarn is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, which results in the formation of calc-silicate minerals at the expense of carbonates. Those newly formed minerals generally develop in a zoned metamorphic aureole with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in the magmatic fluids precipitate due to the change in fluid composition. The temperature decrease of the entire system, due to the cooling of the magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with skarn formation during regional metamorphism and retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion, and that a first Sn enrichment occurred during skarn formation by regional metamorphic fluids. In a second step, at ca. 320 Ma, tin-bearing fluids were produced with the intrusion of the Eibenstock granite. Tin, which was added by the granite and remobilized from skarn calc-silicates, precipitated as cassiterite.
Compared to clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, the isotopic and chemical data of the skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that it is possible to form a tin-rich skarn without an associated granite, when tin has already been transported from tin-bearing sediments by aqueous metamorphic fluids during regional metamorphism. Such skarns are economically uninteresting if tin is only contained in the skarn minerals. Later alteration of the skarn (the heat and fluid source is not necessarily a granite), however, can lead to the formation of secondary cassiterite (SnO2), which can make the skarn economically highly attractive.
Does an ethical engagement with their suffering from problems and conflicts matter to persons, as opposed to strategic and technical solutions? This treatise shows that approaches in philosophical ethics that start from formal principles, human forms of life or social practices answer this question insufficiently. To answer it, the study instead discusses ethical subjectivity in the lament over suffering, ethical reflection as the negation of suffering, and ethical dialogue as the overcoming of suffering.
Totalsynthese benzoannellierter Sauerstoffheterocyclen durch Mikrowellen induzierte Tandem-Sequenzen
(2019)
The employees of the French watch manufacturer LIP caused a stir across Europe in 1973: in their fight against layoffs they called negotiation routines and hierarchies into question and took the production and sale of wristwatches into their own hands. A few years later, the 'LIPs' founded several production cooperatives.
Jens Beckmann examines these struggles from their beginnings to everyday working life in the 1980s. He shows which ideas of self-management took shape here and provides a thorough contextualization within the industry and the region – from the revolt of 1968 to short-time work and redundancy schemes.
Geomagnetic paleosecular variations (PSVs) are an expression of geodynamo processes inside the Earth’s liquid outer core. These paleomagnetic time series provide insights into the properties of the Earth’s magnetic field, ranging from normal behavior with a dominating dipolar geometry, through field crises such as pronounced intensity lows and geomagnetic excursions with a distorted field geometry, to the complete reversal of the dominating dipole contribution. In particular, long-term high-resolution and high-quality PSV time series are needed for properly reconstructing the higher-frequency components in the spectrum of geomagnetic field variations and for a better understanding of the smoothing effects that occur when such paleomagnetic records are acquired by sedimentary archives.
In this doctoral study, full-vector paleomagnetic records were derived from 16 sediment cores recovered from the southeastern Black Sea. Age models are based on radiocarbon dating and on correlating warming/cooling cycles, monitored by high-resolution X-ray fluorescence (XRF) elemental ratios as well as ice-rafted debris (IRD) in Black Sea sediments, with the sequence of Dansgaard-Oeschger (DO) events defined from Greenland ice core oxygen isotope stratigraphy.
In order to identify the carriers of magnetization in Black Sea sediments, core MSM33-55-1 from the southeastern Black Sea was subjected to detailed rock magnetic and electron microscopy investigations. The younger part of core MSM33-55-1 has been deposited continuously since 41 ka. Before 17.5 ka, the magnetic minerals were dominated by a mixture of greigite (Fe3S4) and titanomagnetite (Fe3-xTixO4) in samples with SIRM/κLF >10 kAm-1, or exclusively by titanomagnetite in samples with SIRM/κLF ≤10 kAm-1. Greigite was found to be generally present as crustal aggregates in locally reducing micro-environments. From 17.5 ka to 8.3 ka, the dominant magnetic mineral in this transition phase changed from greigite (17.5 to ~10.0 ka) to probably silicate-hosted titanomagnetite (~10.0 to 8.3 ka). After 8.3 ka, the anoxic Black Sea was a favorable environment for the formation of non-magnetic pyrite (FeS2) framboids.
To avoid compromising the paleomagnetic data with erroneous directions carried by greigite, data from samples with SIRM/κLF >10 kAm-1, shown by various methods to contain greigite, were removed from the obtained records. Consequently, full-vector paleomagnetic records, comprising directional data and relative paleointensity (rPI), were derived only from samples with SIRM/κLF ≤10 kAm-1 from the 16 Black Sea sediment cores. The obtained data sets were used to create a stack covering the time window between 68.9 and 14.5 ka, with a temporal resolution between 40 and 100 years depending on sedimentation rates.
According to the results obtained from Black Sea sediments, the second deepest minimum in relative paleointensity of the past 69 ka occurred at 64.5 ka. This field minimum during MIS 4 is associated with large declination swings beginning about 3 ka before the minimum. While a swing to 50°E is associated with steep inclinations (50-60°) for the coring site at 42°N, the subsequent declination swing to 30°W is associated with shallow inclinations of down to 40°. Nevertheless, these large deviations from the direction of a geocentric axial dipole field (I=61°, D=0°) cannot yet be termed 'excursional', since the latitudes of the corresponding VGPs only reach down to 51.5°N (120°E) and 61.5°N (75°W), respectively. However, these VGP positions at opposite sides of the globe imply VGP drift rates of up to 0.2° per year in between. These extreme secular variations might be the mid-latitude expression of the Norwegian–Greenland Sea excursion found at several sites much further north in Arctic marine sediments between 69°N and 81°N.
At about 34.5 ka, the Mono Lake excursion is evidenced in the stacked Black Sea PSV record by both an rPI minimum and directional shifts. Associated VGPs from the stacked Black Sea data migrated from Alaska, via central Asia and the Tibetan Plateau, to Greenland, performing a clockwise loop. This agrees with data recorded in the Wilson Creek Formation, USA, and in Arctic sediment core PS2644-5 from the Iceland Sea, suggesting a dominant dipole field. On the other hand, the Auckland lava flows, New Zealand, the Summer Lake record, USA, and an Arctic sediment core from ODP Site 919 yield distinct VGPs located in the central Pacific Ocean, due to a presumably non-dipole (multi-pole) field configuration.
A directional anomaly at 18.5 ka, associated with pronounced swings in inclination and declination, as well as a low in rPI, is probably contemporaneous with the Hilina Pali excursion, originally reported from Hawaiian lava flows. However, virtual geomagnetic poles (VGPs) calculated from Black Sea sediments are not located at latitudes lower than 60° N, which denotes normal, though pronounced secular variations. During the postulated Hilina Pali excursion, the VGPs calculated from Black Sea data migrated clockwise only along the coasts of the Arctic Ocean from NE Canada (20.0 ka), via Alaska (18.6 ka) and NE Siberia (18.0 ka) to Svalbard (17.0 ka), then looping clockwise through the Eastern Arctic Ocean.
In addition to the Mono Lake and the Norwegian–Greenland Sea excursions, the Laschamp excursion was evidenced in the Black Sea PSV record with the lowest paleointensities at about 41.6 ka and a short-term (~500 years) full reversal centered at 41 ka. These excursions are further evidenced by an abnormal PSV index, though only the Laschamp and the Mono Lake excursions exhibit excursional VGP positions. The stacked Black Sea paleomagnetic record was also converted into one component parallel to the direction expected from a geocentric axial dipole (GAD) and two components perpendicular to it, representing only non-GAD components of the geomagnetic field. The Laschamp and the Norwegian–Greenland Sea excursions are characterized by extremely low GAD components, while the Mono Lake excursion is marked by large non-GAD contributions. Notably, negative values of the GAD component, indicating a fully reversed geomagnetic field, are observed only during the Laschamp excursion.
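The decomposition mentioned above can be illustrated with a minimal sketch that projects a measured direction onto the direction expected from a geocentric axial dipole at the site latitude and onto two axes perpendicular to it; the choice of the perpendicular axes and the example direction below are assumptions for illustration only and may differ from the convention used in the thesis.

import numpy as np

# Minimal sketch: project a measured paleomagnetic direction onto the GAD
# direction expected at the site latitude and onto two perpendicular axes.
# The axis convention and the example direction are illustrative assumptions.
site_lat = np.radians(42.0)                     # approximate Black Sea latitude
inc_gad = np.arctan(2.0 * np.tan(site_lat))     # GAD inclination: tan I = 2 tan(latitude)

def to_cartesian(dec_deg, inc_deg):
    """Convert declination/inclination (degrees) to a north, east, down unit vector."""
    d, i = np.radians([dec_deg, inc_deg])
    return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])

gad = to_cartesian(0.0, np.degrees(inc_gad))    # GAD points north with I of about 61 deg
east = np.array([0.0, 1.0, 0.0])                # first perpendicular axis (geographic east)
perp = np.cross(gad, east)                      # second perpendicular axis

x = to_cartesian(30.0, 45.0)                    # hypothetical measured direction
components = np.dot(x, gad), np.dot(x, east), np.dot(x, perp)
print("GAD-parallel, east and third components:", np.round(components, 3))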
In summary, this doctoral thesis reconstructed high-resolution and high-fidelity PSV records from SE Black Sea sediments. The obtained record comprises three geomagnetic excursions, the Norwegian–Greenland Sea excursion, the Laschamp excursion, and the Mono Lake excursion, which are characterized by abnormal secular variations of different amplitudes centered at about 64.5 ka, 41.0 ka and 34.5 ka, respectively. In contrast, the obtained PSV record from the Black Sea does not provide evidence for the postulated 'Hilina Pali excursion' at about 18.5 ka. Overall, the Black Sea paleomagnetic record, covering field fluctuations from normal secular variations, through excursions, to a short but full reversal, points to a geomagnetic field characterized by a large dynamic range in intensity and a highly variable superposition of dipole and non-dipole contributions from the geodynamo during the period from 68.9 to 14.5 ka.
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view, which only distinguishes between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first we analyse the concept of the degree of formality (including a discussion of the respective benefits of each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students learn how to prove. Third, we show how the advantages of proofs of different degrees of formality can be combined through the development of so-called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
Carbonate-rich silicate and carbonate melts play a crucial role in deep Earth magmatic processes and their melt structure is a key parameter, as it controls physical and chemical properties. Carbonate-rich melts can be strongly enriched in geochemically important trace elements. The structural incorporation mechanisms of these elements are difficult to study because such melts generally cannot be quenched to glasses, which are usually employed for structural investigations. This thesis investigates the influence of CO2 on the local environments of trace elements contained in silicate glasses with variable CO2 concentrations as well as in silicate and carbonate melts. The compositions studied include sodium-rich peralkaline silicate melts and glasses and carbonate melts similar to those occurring naturally at Oldoinyo Lengai volcano, Tanzania.
The local environments of the three elements yttrium (Y), lanthanum (La) and strontium (Sr) were investigated in synthesized glasses and melts using X-ray absorption fine structure (XAFS) spectroscopy. In particular, extended X-ray absorption fine structure (EXAFS) spectroscopy provides element-specific information on local structure, such as bond lengths, coordination numbers and the degree of disorder. To cope with the enhanced structural disorder present in glasses and melts, the EXAFS analysis was based on fitting approaches using an asymmetric distribution function as well as a correlation model according to bond valence theory. First, silicate glasses quenched from high-pressure/high-temperature melts with up to 7.6 wt % CO2 were investigated. In strongly and extremely peralkaline glasses, the local structure of Y is unaffected by the CO2 content (with oxygen bond lengths of ~ 2.29 Å). In contrast, the bond lengths for Sr-O and La-O increase with increasing CO2 content in the strongly peralkaline glasses, from ~ 2.53 to ~ 2.57 Å and from ~ 2.52 to ~ 2.54 Å, respectively, while they remain constant in extremely peralkaline glasses (at ~ 2.55 Å and 2.54 Å, respectively). Furthermore, silicate and unquenchable carbonate melts were investigated in situ at high pressure/temperature conditions (2.2 to 2.6 GPa, 1200 to 1500 °C) using a Paris-Edinburgh press. A novel design of the pressure medium assembly for this press was developed, which features increased mechanical stability as well as enhanced transmittance at relevant energies to allow transmission EXAFS on elements at low concentrations. Compared to the glasses, the bond lengths of Y-O, La-O and Sr-O are elongated by up to + 3 % in the melt and exhibit more asymmetric pair distributions. For all investigated silicate melt compositions, Y-O bond lengths were found to be constant at ~ 2.37 Å, while in the carbonate melt the Y-O length increases slightly to 2.41 Å. The La-O bond lengths, in turn, increase systematically over the whole silicate–carbonate melt join from 2.55 to 2.60 Å. Sr-O bond lengths in melts increase from ~ 2.60 to 2.64 Å from the pure silicate to the silicate-bearing carbonate composition, with constant elevated bond lengths within the carbonate region.
For comparison and deeper insight, glass and melt structures of Y- and Sr-bearing sodium-rich silicate to carbonate compositions were simulated in an explorative ab initio molecular dynamics (MD) study. The simulations confirm the observed patterns of CO2-dependent local changes around Y and Sr and additionally provide further insights into the detailed incorporation mechanisms of the trace elements and CO2. Principal findings include that in sodium-rich silicate compositions carbon is either mainly incorporated as a free carbonate group or shares one oxygen with a network former (Si or [4]Al) to form a non-bridging carbonate. Of minor importance are bridging carbonates between two network formers; here, a clear preference for two [4]Al as adjacent network formers occurs, compared to what a statistical distribution would suggest. In C-bearing silicate melts, minor amounts of molecular CO2 are present, which is almost entirely dissolved as carbonate in the quenched glasses.
The combination of experiment and simulation provides extraordinary insights into glass and melt structures. The new data are interpreted on the basis of bond valence theory and used to deduce potential mechanisms for the structural incorporation of the investigated elements, which allow predictions of their partitioning behavior in natural melts. Furthermore, the study provides unique insights into the dissolution mechanisms of CO2 in silicate melts and into the carbonate melt structure. For the latter, a structural model is suggested that is based on planar CO3 groups linking 7- to 9-fold coordinated cation polyhedra, in accordance with structural units found in the Na-Ca carbonate nyerereite. Ultimately, the outcome of this study helps to rationalize the unique physical properties and geological phenomena related to carbonated silicate-carbonate melts.
The trace gases CO2 and CH4 are among the most relevant greenhouse gases and constitute important exchange fluxes of the global carbon (C) cycle. Their atmospheric concentrations have increased significantly since the mid-18th century as a result of intensified anthropogenic activities, especially land use and land-use change. To mitigate global climate change and ensure food security, land-use systems need to be developed that favor reduced trace gas emissions and sustainable soil carbon management. This requires the accurate and precise quantification of the influence of land use and land-use change on CO2 and CH4 emissions. A common method to determine the trace gas dynamics and the C sink or source function of a particular ecosystem is the closed chamber method. This method is often used on the assumption that accuracy and precision are high enough to detect differences in C gas emissions, e.g., between treatments or different ecosystem components.
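In its simplest form, a closed-chamber flux is obtained from the slope of the headspace concentration over time after chamber closure, converted to a molar flux via the ideal gas law. The following minimal sketch illustrates this basic calculation with hypothetical chamber dimensions and an invented concentration time series; the routines developed in the thesis go well beyond this (e.g. flux-window selection and quality checks).

import numpy as np

# Basic closed-chamber flux estimate: linear slope of the headspace CO2
# mixing ratio after closure, converted to a molar flux with the ideal gas
# law. Chamber height, conditions and the time series are hypothetical.
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])           # seconds after closure
co2 = np.array([410.0, 415.2, 420.1, 425.3, 430.0])      # ppm (micromol per mol)

slope = np.polyfit(t, co2, 1)[0]                          # ppm per second
height = 0.30                                             # chamber volume/area (m)
p, T, R = 101325.0, 283.15, 8.314                         # Pa, K, J/(mol K)

# micromol m-2 s-1 = slope [micromol/mol/s] * air molar density [mol/m3] * height [m]
flux = slope * (p / (R * T)) * height
print(f"CO2 flux: {flux:.2f} micromol m-2 s-1")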
However, the broad range of different chamber designs, related operational procedures and data-processing strategies described in the scientific literature contributes to the overall uncertainty of closed chamber-based emission estimates. The value of meta-analyses is thus limited, since these methodological differences hamper comparability between studies. A standardization of closed chamber data acquisition and processing is therefore much needed.
Within this thesis, a set of case studies was performed to: (I) develop standardized routines for unbiased data acquisition and processing, with the aim of providing traceable, reproducible and comparable closed chamber-based C emission estimates; (II) validate those routines by comparing C emissions derived using closed chambers with independent C emission estimates; and (III) reveal the processes driving the spatio-temporal dynamics of C emissions by developing (data-processing-based) flux separation approaches.
The case studies showed: (I) the importance of testing chamber designs under field conditions for appropriate sealing integrity to ensure unbiased flux measurements; compared to sealing integrity, the use of a pressure vent and fan was of minor importance, affecting mainly measurement precision; (II) that the developed standardized data-processing routines proved to be a powerful and flexible tool to estimate C gas emissions and that this tool can be applied successfully to a broad range of flux data sets from very different ecosystems; (III) that automatic chamber measurements capture the temporal dynamics of CO2 and CH4 fluxes very well and, most importantly, that they accurately detect small-scale spatial differences in the development of soil C when validated against repeated soil inventories; and (IV) that a simple algorithm to separate CH4 fluxes into ebullition and diffusion improves the identification of environmental drivers, which allows for an accurate gap-filling of measured CH4 fluxes.
Overall, the proposed standardized data acquisition and processing routines strongly improved the detection accuracy and precision of source/sink patterns of gaseous C emissions. Hence, future studies, which consider the recommended improvements, will deliver valuable new data and insights to broaden our understanding of spatio-temporal C gas dynamics, their particular environmental drivers and underlying processes.
Rowdytum – a term that in the Russian tsarist empire stood for the unruliness of the lower classes – today refers above all to the violent excesses of football fans. In the Soviet Union and the rest of the Eastern bloc, Rowdytum (hooliganism) was much more: an emblem of deviant behavior, a vaguely defined criminal offence, an arbitrary stigma that could strike anyone.
Matej Kotalík investigates hooliganism across borders and traces its geo-biography from the Soviet Union to the CS(S)R and the GDR, from 1956 to 1989. His conclusions are ambivalent. Notwithstanding Soviet impulses, the corresponding criminal offence was shaped by the two countries' own legal traditions. As an everyday phenomenon, hooliganism marked the limits of what was socially acceptable – limits that were simultaneously being renegotiated in the 1960s. In an increasingly individualized society, police violence against outsiders became more and more a point of contention between the state and its citizens.
The Italian Army’s participation in Hitler’s war against the Soviet Union has remained unrecognized and understudied. Bastian Matteo Scianna offers a wide-ranging, in-depth corrective. Mining Italian, German and Russian sources, he examines the history of the Italian campaign in the East between 1941 and 1943, as well as how the campaign was remembered and memorialized in the domestic and international arena during the Cold War. Linking operational military history with memory studies, this book revises our understanding of the Italian Army in the Second World War.
The Atlantic Meridional Overturning Circulation (AMOC) is likely the most well-known system of ocean currents on Earth, redistributing heat, nutrients and carbon over a large part of the Earth's surface and affecting global climate as a result. Due to enhanced freshwater fluxes into the subpolar North Atlantic in response to global warming, the AMOC is expected to weaken, and may have already started to do so, and these changes will likely have global impacts. It is therefore of considerable relevance to improve our understanding of past and future AMOC changes. My thesis tries to answer some of the open questions in this field by giving strong evidence that the AMOC has already weakened over the last century, by narrowing future projections of this slowdown and by studying the impacts on global surface warming.
While there have been various studies trying to reconstruct the strength of the overturning circulation in the past, often based on model simulations in combination with observations (Jackson et al., 2016, Kanzow et al., 2010) or proxies (Frajka-Williams, 2015, Latif et al., 2006), the results so far, due to a lack of direct measurements, have been inconclusive. In the first paper I build on previous work that links the anomalously low sea surface temperatures (SSTs) in the North Atlantic with the reduced meridional heat transport due to a weaker AMOC. Using the output of a high-resolution global climate model, I derive a characteristic spatial and seasonal SST fingerprint of an AMOC slowdown and an improved SST-based AMOC index. The same fingerprint is seen in the observational SSTs since the late 19th century, giving strong evidence that the AMOC has slowed down since then. In addition, the reconstruction of the historical overturning strength with the new AMOC index agrees well with and extends the results of earlier studies as well as the direct measurements from the RAPID project, and shows a strong decline of the AMOC by about 15% (3±1 Sv) since the mid-20th century (Caesar et al., 2018).
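The core of such an SST-based index is a difference between the subpolar North Atlantic SST anomaly and the global-mean SST anomaly; the sketch below illustrates that step on a stand-in anomaly field. The box limits are approximate and the random input is a placeholder; in Caesar et al. (2018) the exact fingerprint region, the seasonal selection and the regression onto AMOC strength in Sv are derived from the CM2.6 model.

import numpy as np

# Stand-in SST anomaly field on a 1-degree grid; in practice this would be
# an observational data set such as HadISST.
lat = np.linspace(-89.5, 89.5, 180)
lon = np.linspace(0.5, 359.5, 360)
sst_anom = np.random.randn(lat.size, lon.size)
weights = np.cos(np.radians(lat))[:, None] * np.ones(lon.size)

def area_mean(field, w, mask=None):
    """Area-weighted mean, optionally restricted to a boolean mask."""
    if mask is None:
        mask = np.ones(field.shape, dtype=bool)
    return (field[mask] * w[mask]).sum() / w[mask].sum()

# approximate subpolar-gyre box: 45-60 N, 65-20 W (i.e. 295-340 E)
box = ((lat[:, None] >= 45) & (lat[:, None] <= 60) &
       (lon[None, :] >= 295) & (lon[None, :] <= 340))

amoc_index_K = area_mean(sst_anom, weights, box) - area_mean(sst_anom, weights)
print(f"SST-based AMOC index: {amoc_index_K:+.2f} K")
# A model-derived regression then converts this temperature index into Sv.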
The reconstruction of the historical overturning strength with the AMOC index enables us to weight future AMOC projections based on their skill in modeling the historical AMOC, as described in the second paper of this thesis (Olson et al., 2018). Using Bayesian model averaging, we considerably narrow the projections of the CMIP5 ensemble to changes of -4.0 Sv and -6.8 Sv between the periods 1960-1999 and 2060-2099 for the RCP4.5 and RCP8.5 emission scenarios, respectively. These values are consistent with, yet at the lower end of, previously published estimates.
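A drastically simplified, purely illustrative version of such skill weighting is sketched below: each model receives a Gaussian-likelihood weight according to how closely its historical AMOC change matches the reconstruction, and the projections are then averaged with these weights. All numbers are invented, and the Bayesian model averaging framework of Olson et al. (2018) is substantially more elaborate than this shortcut.

import numpy as np

# Toy skill-weighting of model projections with Gaussian likelihoods.
# All values are invented for illustration only.
historical_obs = -1.5                                   # assumed reconstructed historical AMOC change (Sv)
model_hist = np.array([-0.5, -1.2, -1.6, -2.5, -3.0])   # hypothetical modelled historical changes (Sv)
model_proj = np.array([-2.0, -3.5, -4.5, -6.0, -7.5])   # hypothetical modelled future changes (Sv)
sigma = 1.0                                             # assumed error scale (Sv)

weights = np.exp(-0.5 * ((model_hist - historical_obs) / sigma) ** 2)
weights /= weights.sum()

print("model weights:", np.round(weights, 2))
print("weighted projection:", round(float(np.dot(weights, model_proj)), 2), "Sv")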
In the third paper I examine how the AMOC slowdown affects the global mean surface temperature (GMST), with a focus on how it will change the ocean heat uptake. Accounting for the effect of changes in the radiative forcing on the GMST, I test how AMOC variations correlate with the residual part of past surface temperature changes. I find that the correlation is positive, which fits the understanding that the deep-water formation that is important in driving the AMOC cools the deep ocean and therefore warms the surface (Caesar et al., 2019). The future weakening of the overturning circulation could therefore delay global surface warming.
Due to nonlinear behavior and scale-specific changes, it can be difficult to study the dominant processes and modes that drive climate variability. In the fourth paper we develop and test a new technique based on the wavelet multiscale correlation (WMC) similarity measure to study climate variability on different temporal and spatial scales (Agarwal et al., 2018). In a fifth contribution to my thesis, this method is applied to the observed sea surface temperatures. The results reconfirm well-known relations between SST anomalies, such as the El Niño-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO), on inter-annual and decadal timescales, respectively. They furthermore give new insights into the characteristics and origins of long-range teleconnections, for example that the teleconnection between ENSO and the Indian Ocean Dipole exists mainly between the northern part of the ENSO tongue and the equatorial Indian Ocean, and therefore provide valuable knowledge about the regions that need to be included when modeling regional climate variability at a certain scale (Agarwal et al., 2019).
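The basic idea of comparing two climate series scale by scale can be illustrated with a discrete wavelet decomposition and a per-scale correlation, as in the sketch below on two synthetic series; the published WMC measure and its network application in Agarwal et al. (2018) are more elaborate (e.g. significance testing and spatial aggregation), and the series here are only placeholders, not ENSO or PDO data.

import numpy as np
import pywt  # PyWavelets

# Simplified scale-by-scale correlation of two synthetic climate series using
# a discrete wavelet decomposition. All signals and settings are illustrative.
rng = np.random.default_rng(0)
t = np.arange(1024)                                   # e.g. monthly time steps
series_a = np.sin(2 * np.pi * t / 48) + 0.5 * rng.standard_normal(t.size)
series_b = np.sin(2 * np.pi * t / 240) + 0.5 * rng.standard_normal(t.size)

coeffs_a = pywt.wavedec(series_a, 'db4', level=5)
coeffs_b = pywt.wavedec(series_b, 'db4', level=5)

# entry 0 is the coarsest approximation; entries 1..5 are detail coefficients
# ordered from the largest to the smallest resolved scale
for k, (ca, cb) in enumerate(zip(coeffs_a[1:], coeffs_b[1:]), start=1):
    r = np.corrcoef(ca, cb)[0, 1]
    print(f"detail scale {k} (coarse to fine): correlation {r:+.2f}")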
In summary, my PhD thesis investigates past and future AMOC variability and its effects on global mean surface temperature by combining observational sea surface data with the output of historical and future climate model simulations from both the high-resolution CM2.6 model and the CMIP5 ensemble. It further includes the development and validation of a new method for studying climate variability that, applied to the observed sea surface temperatures, gives new insights into teleconnections in the Earth system. My findings provide evidence that the AMOC has already slowed down, will continue to do so in the future, and will affect the global mean temperature. Further impacts of an AMOC slowdown may include increased sea-level rise at the U.S. east coast (Ezer, 2015), heat extremes in Europe (Duchez et al., 2016) and increased storm activity in the North Atlantic region (Jackson et al., 2015), all of which have significant socio-economic implications.
Im Netz der Zeit
(2019)
This work, which grew out of the dissertation of the twins Konstantin and Kornelius Keulen in philosophy, subjects the manifold ways in which the Internet is interwoven and networked to an interpretation in terms of a philosophy of time and of events. In the authors' view, this interpretation points the way towards making the conglomerate that is the Internet, as a techno-medial human product, interpretable in its socio-cultural, political-economic and psychosocial complexities.
Language competencies play a fundamental role in students' educational success. The particular linguistic demands of educational institutions confront some children and adolescents with challenges that make a successful educational career more difficult for them. To grant all learners access to education, language competencies should be fostered as part of everyday school life and, in particular, of subject teaching. To qualify teachers for this task, effective professional development is essential. Professional development programmes are, however, not effective per se. Rather, their effectiveness depends on a variety of factors and varies by domain. To what extent the characteristics and conditions of effective professional development, as well as the overall effects known across domains and for some specific domains, also hold for teacher professional development on subject-integrated language support has so far remained unclear. For the German-speaking context, there is a lack of evaluation studies examining the effectiveness of such programmes and the conditions for their success. Internationally such studies do exist, but they are difficult to survey as a whole, so that no comprehensive statements about the success of these programmes have been possible to date. Against this background, the present dissertation addresses, in three sub-studies, the effectiveness of professional development programmes intended to qualify teachers to implement language support integrated into the school's subjects and learning areas.
In a systematic review (Study 1), the existing English-language studies evaluating such professional development programmes were analysed systematically. A total of 38 studies were included. Using qualitative content analysis, it was examined whether characteristics of effective teacher professional development known from cross-subject research are also relevant for the specific field of language support in subject teaching, or whether other characteristics play a role there. The studies suggest that all evaluated programmes were effective at least to some extent. The results showed that the programmes share many features that are important for successful teacher professional development across subjects. They also contain some features that appear to be specific to professional development on subject-integrated language support. The review supports the assumption that professional development can change teachers' cognitions and classroom practice and benefit students if certain characteristics are taken into account in design and implementation.
Building on the review, a meta-analysis (Study 2) aggregated the effects from the ten studies that could be analysed quantitatively. The overall effect of the professional development programmes was determined both on teachers' cognitions (e.g., beliefs) and on their classroom practice (e.g., the use of language-supportive strategies). It was also examined what role characteristics of the included studies and of the programmes play for the size of the effects. The analyses yielded a small effect of professional development on teachers' cognitions and a medium to large effect on their classroom practice. Study characteristics relating to methodological quality moderated the effects. Nevertheless, the results suggest that professional development on subject-integrated language support has a favourable effect on teachers' cognitions and actions.
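As an illustration of how such study effects can be pooled, the sketch below implements a standard DerSimonian-Laird random-effects aggregation; the dissertation's exact meta-analytic model and its moderator analyses are not reproduced here.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size (DerSimonian-Laird).

    effects   : per-study effect sizes (e.g., standardized mean differences)
    variances : their sampling variances
    Returns the pooled effect, its standard error, and the
    between-study variance tau^2.
    """
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2
```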
In a quasi-experimental diary study (Study 3), a professional development programme on integrated language support implemented in Germany was evaluated formatively. The central question was to what extent the trained primary school teachers reported using the language-support strategies conveyed in the programme more frequently than colleagues who had not taken part in the training. It was also examined how factors such as the reported cooperation among staff relate to the reported frequency of strategy use. Fifty-nine primary school teachers were surveyed with a standardised diary. The multilevel analysis of the data revealed no significant differences between the two groups in the reported frequency of strategy use. However, the use of some strategies was reported more frequently when cooperation among staff was rated higher. In addition, the trained teachers felt more confident in applying the strategies than the untrained teachers.
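A multilevel analysis of this kind, with daily diary reports nested within teachers, can be sketched as a linear mixed model; the variable names and the simulated data below are hypothetical and only indicate the model structure, not the study's actual variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical diary data: daily reports of strategy use nested in teachers
rng = np.random.default_rng(0)
n_teachers, n_days = 59, 10
df = pd.DataFrame({
    "teacher": np.repeat(np.arange(n_teachers), n_days),
    "trained": np.repeat(rng.integers(0, 2, n_teachers), n_days),
    "cooperation": np.repeat(rng.normal(0.0, 1.0, n_teachers), n_days),
})
df["strategy_use"] = (0.1 * df["trained"] + 0.3 * df["cooperation"]
                      + rng.normal(0.0, 1.0, len(df)))

# Two-level model: daily reports (level 1) nested within teachers (level 2)
model = smf.mixedlm("strategy_use ~ trained + cooperation",
                    df, groups=df["teacher"]).fit()
print(model.summary())
```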
Finally, the central results of this dissertation are summarised and discussed, and implications for future research, professional development practice and education policy are formulated.
STERILE APETALA (SAP) has been known as an essential regulator of flower development for over 20 years. Loss of SAP function in the model plant Arabidopsis thaliana is associated with a reduction of floral organ number, size and fertility. In accordance with the function of SAP during early flower development, its spatial expression in flowers is confined to meristematic stages and to developing ovules. However, despite extensive research, the molecular function of SAP and the regulation of its spatio-temporal expression remain elusive to date.
In this work, amino acid sequence analysis and homology modeling revealed that SAP belongs to the rare class of plant F-box proteins with C-terminal WD40 repeats. In opisthokonts, this type of F-box protein constitutes the substrate-binding subunit of SCF complexes, which catalyze the ubiquitination of proteins to initiate their proteasomal degradation. Using LC-MS/MS-based protein complex isolation, the interaction of SAP with major SCF complex subunits was confirmed. Additionally, candidate substrate proteins, such as the growth repressors PEAPOD 1 and 2 (PPD1/2), could be identified during early stages of flower development. INDOLE-3-BUTYRIC ACID RESPONSE 5 (IBR5) was also identified among the putative interactors. Genetic analyses indicated that, unlike the substrate proteins, IBR5 is required for SAP function. Protein complex isolation together with transcriptome profiling emphasized that the SCF(SAP) complex integrates multiple biological processes, such as proliferative growth, vascular development, hormonal signaling and reproduction. Phenotypic analysis of sap mutant and SAP-overexpressing plants positively correlated SAP function with plant growth during vegetative and reproductive development.
Furthermore, to elaborate on the transcriptional regulation of SAP, publicly available ChIP-seq data of key floral homeotic proteins were reanalyzed. Here, it was shown that the MADS-domain transcription factors APETALA 1 (AP1), APETALA 3 (AP3), PISTILLATA (PI), AGAMOUS (AG) and SEPALLATA 3 (SEP3) bind to the SAP locus, which indicates that SAP is expressed in a floral organ-specific manner. Reporter gene analyses in combination with CRISPR/Cas9-mediated deletion of putative regulatory regions further demonstrated that the intron contains major regulatory elements of SAP in Arabidopsis thaliana.
In conclusion, these data indicate that SAP is a pleiotropic developmental regulator that acts through tissue-specific destabilization of proteins. The presumed transcriptional regulation of SAP by the floral MADS-domain transcription factors could provide a missing link between the specification of floral organ identity and floral organ growth pathways.
Der Betrieb gewerblicher Art
(2019)
The thesis examines the foundations of the taxation of the public sector, in particular the historical development of the Betrieb gewerblicher Art (a public body's operation of a commercial nature) and the Querverbund (cross-subsidisation) rules in force since 2009. It traces the development of the individual statutory elements up to the statutory consolidation of individual Betriebe gewerblicher Art. Throughout, the logic of competition and the principle of equal tax treatment are at the centre of the discussion. A comparison of the horizontal and the vertical Querverbund follows, covering the fundamental legal consequences and permanently loss-making activities. The result shows that, with its optional horizontal Querverbund and the absence of a profit-making intention, the public sector enjoys considerable advantages over companies organised under private law and thus generates substantial tax-saving potential.
Electrosynthesis and characterization of molecularly imprinted polymers for peptides and proteins
(2019)
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present the mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits ('cores') augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate of the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L can be an adequate assumption to account for the small amount of dark matter in the center of that galaxy. I also tested the effect of a spatially variable M/L on the M_BH measurement of a second galaxy. When stellar M/L variations were considered in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how assuming a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted sphere of influence of the black hole is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for detecting the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies that show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequences of applying axisymmetric models to triaxial galaxies for the accuracy of M_BH, and the impact on the black hole-host galaxy relations, need to be examined carefully in the future.
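For reference, the sphere-of-influence criterion mentioned above follows from r_infl = G M_BH / sigma^2; the short sketch below evaluates this radius and its angular size for assumed example values.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / M_sun

def sphere_of_influence_pc(m_bh_msun, sigma_kms):
    """Sphere-of-influence radius r = G * M_BH / sigma^2, in parsec."""
    return G * m_bh_msun / sigma_kms**2

def soi_angular_size_arcsec(m_bh_msun, sigma_kms, distance_mpc):
    """Angular size of the sphere of influence for a galaxy at a given distance."""
    r_pc = sphere_of_influence_pc(m_bh_msun, sigma_kms)
    return r_pc / (distance_mpc * 1e6) * 206265.0  # radians to arcseconds

# Example values (assumed): a 10^8 M_sun black hole, sigma = 200 km/s, at 20 Mpc
print(soi_angular_size_arcsec(1e8, 200.0, 20.0))  # roughly 0.1 arcsec
```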
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. A rigorous test of the systematics associated with the different modeling methods is therefore required in the future. I caution that the effects of different tracers (and methods) should be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, adding also six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies shifts the M_BH-effective velocity dispersion relation towards a shallower slope, a change mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and to understand its physical origin.
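Schematically, the slope discussed here comes from a fit in log space, log10(M_BH) = alpha + beta * log10(sigma/200 km/s); the minimal least-squares sketch below ignores measurement errors and intrinsic scatter, which real fits of the relation must account for.

```python
import numpy as np

def fit_m_sigma(m_bh, sigma, sigma0=200.0):
    """Ordinary least-squares fit of the M_BH-sigma relation in log space.

    m_bh  : black hole masses in solar masses
    sigma : effective velocity dispersions in km/s
    Returns (alpha, beta) with log10(M_BH) = alpha + beta * log10(sigma / sigma0).
    """
    x = np.log10(np.asarray(sigma, float) / sigma0)
    y = np.log10(np.asarray(m_bh, float))
    beta, alpha = np.polyfit(x, y, 1)  # slope, intercept
    return alpha, beta
```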
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective on the human language system. In recent years a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages and aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how its unique non-concatenative morphological structure, namely the non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on the morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing a productive (class Piel, e.g., limed ‘teach’) and an unproductive (class Paal, e.g., lamad ‘learn’) verbal inflectional class. Using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects the online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between the productivity of Hebrew verbal classes and the recognition process, as shown by both priming techniques. The consonantal root was accessed only in the productive class (Piel) but not in the unproductive class (Paal). Another dissociation between the two classes emerged in the cross-modal priming, which yielded a semantic relatedness effect only for Paal but not for Piel primes. These findings are taken to reflect that Hebrew mental representations display a balance between stored, undecomposed, unstructured stems (Paal) and decomposed, structured stems (Piel), much like a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between Hebrew inflectional classes and those of Indo-European languages, particularly in the type of information they rely on in generalization to novel forms: Hebrew binyan generalization relies more on cues of argument structure and less on phonological cues.
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to the binyan information during recognition, as expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in an infinitive form. No root priming effect was obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural information cues was also found in the production of novel verbal forms, where the L2 group displayed a weaker effect of argument structure for Piel responses than the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.