Define real, Moron!
(2011)
Academic language should not be a ghetto dialect at odds with ordinary language, but rather an extension that is compatible with lay-language. To define ‘game’ with the unrealistic ambition of satisfying both lay-people and experts should not be a major concern for a game ontology, since the field it addresses is subject to cultural evolution and diachronic change. Instead of the impossible mission of turning the common word into an analytic concept, a useful task for an ontology of games is to model game differences, to show how the things we call games can be different from each other in a number of different ways.
Hepcidin-25 was identified as the main iron regulator in the human body, and it binds to the sole iron exporter ferroportin. Studies showed that the N-terminus of hepcidin is responsible for this interaction, the same N-terminus that encompasses a small copper(II) binding site known as the ATCUN (amino-terminal Cu(II)- and Ni(II)-binding) motif. Interestingly, this copper-binding property is largely ignored in most papers dealing with hepcidin-25. In this context, detailed investigations of the complex formed between hepcidin-25 and copper could reveal insight into its biological role. The present work focuses on metal-bound hepcidin-25, which can be considered the biologically active form. The first part is devoted to the reversed-phase chromatographic separation of copper-bound and copper-free hepcidin-25, achieved by applying basic mobile phases containing 0.1% ammonia. Further, mass spectrometry (tandem mass spectrometry (MS/MS), high-resolution mass spectrometry (HRMS)) and nuclear magnetic resonance (NMR) spectroscopy were employed to characterize the copper-peptide. Lastly, a three-dimensional (3D) model of hepcidin-25 with bound copper(II) is presented. The identification of metal complexes and potential isoforms and isomers, of which the latter usually remain undetected by mass spectrometry, led to the conclusion that complementary analytical methods are needed to characterize a peptide calibrant or reference material comprehensively. Quantitative nuclear magnetic resonance (qNMR), inductively coupled plasma mass spectrometry (ICP-MS), ion-mobility spectrometry (IMS) and chiral amino acid analysis (AAA) should be considered among others.
An exploration of rhythmic grouping of speech sequences by French- and German-learning infants
(2016)
Rhythm in music and speech can be characterized by a constellation of several acoustic cues. Individually, these cues have different effects on rhythmic perception: sequences of sounds alternating in duration are perceived as short-long pairs (weak-strong/iambic pattern), whereas sequences of sounds alternating in intensity or pitch are perceived as loud-soft, or high-low pairs (strong-weak/trochaic pattern). This perceptual bias, called the Iambic-Trochaic Law (ITL), has been claimed to be a universal property of the auditory system applying in both the music and the language domains. Recent studies have shown that language experience can modulate the effects of the ITL on rhythmic perception of both speech and non-speech sequences in adults, and of non-speech sequences in 7.5-month-old infants. The goal of the present study was to explore whether language experience also modulates infants' grouping of speech. To do so, we presented sequences of syllables to monolingual French- and German-learning 7.5-month-olds. Using the Headturn Preference Procedure (HPP), we examined whether they were able to perceive a rhythmic structure in sequences of syllables that alternated in duration, pitch, or intensity. Our findings show that both French- and German-learning infants perceived a rhythmic structure when it was cued by duration or pitch but not intensity. Our findings also show differences in how these infants use duration and pitch cues to group syllable sequences, suggesting that pitch cues were the easier ones to use. Moreover, performance did not differ across languages, failing to reveal early language effects on rhythmic perception. These results contribute to our understanding of the origin of rhythmic perception and perceptual mechanisms shared across music and speech, which may bootstrap language acquisition.
We report on the detection of very high energy (VHE; E > 100 GeV) gamma-ray emission from the BL Lac objects KUV 00311-1938 and PKS 1440-389 with the High Energy Stereoscopic System (H.E.S.S.). H.E.S.S. observations were accompanied or preceded by multiwavelength observations with Fermi/LAT, XRT and UVOT onboard the Swift satellite, and ATOM. Based on an extrapolation of the Fermi/LAT spectrum towards the VHE gamma-ray regime, we deduce a 95 per cent confidence level upper limit on the unknown redshift of KUV 00311-1938 of z < 0.98 and of PKS 1440-389 of z < 0.53. When combined with previous spectroscopy results, the redshift of KUV 00311-1938 is constrained to 0.51 <= z < 0.98 and of PKS 1440-389 to 0.14 <= z < 0.53.
Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects the current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks provide a venue for questions of numerical capacities and human arithmetic (Brian Butterworth), of the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), of categorizations induced by linguistic constructions (Claudia Maienborn), and of a cross-level account of the “Self as a complex system“ (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the university's Cognitive Sciences Area of Excellence in Potsdam are represented by an invited symposium on “Information Structure” by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name, of Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from a perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. 
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin. Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. 
This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
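The quantile-aggregation idea can be illustrated with a minimal NumPy sketch. This is not the thesis's actual architecture (which learns the local features end-to-end with a deep network); the function name, the choice of quantiles, and the feature dimensions here are illustrative assumptions. It shows only the core contrast with standard pooling: instead of keeping a single max or mean per feature, the layer summarizes each feature's distribution over the sequence by several empirical quantiles.

```python
import numpy as np

def quantile_layer(features, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Aggregate a sequence of local feature vectors by empirical quantiles,
    approximating each feature's distribution over the sequence rather than
    reducing it to a single max or mean as standard pooling layers do.

    features: array of shape (timesteps, n_features)
    returns:  array of shape (len(quantiles) * n_features,)
    """
    q = np.quantile(features, quantiles, axis=0)  # (n_quantiles, n_features)
    return q.ravel()

# Toy example: a sequence of 100 timesteps with 4 local features each.
rng = np.random.default_rng(0)
seq = rng.normal(size=(100, 4))
embedding = quantile_layer(seq)
print(embedding.shape)  # (20,)
```

With a single quantile of 0.5 the layer degenerates to median pooling, which matches the abstract's remark that quantile layers can converge to standard pooling layers or give a more detailed description of the pooled features.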
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on some general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy analogous requirements as the original ones, now on a corresponding higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another one at the conical exit to infinity. However, concerning the conical exit to infinity, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.
Linked Open Data (LOD) comprises very many and often large public data sets and knowledge bases. Those datasets are mostly presented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, are desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different mining configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation, a user-based approach where a user selects the entity to be amended with new facts and a data-driven approach where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
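One such mining configuration can be sketched in a few lines of plain Python. This is a toy illustration, not the thesis's implementation; the triples and the single confidence-based rule are invented for the example. It treats each subject as a transaction and its predicates as the items, so a high-confidence rule like "subjects with `type` usually also have `country`" can drive predicate auto-completion suggestions.

```python
from collections import defaultdict

# Toy RDF triples (subject, predicate, object), invented for illustration.
triples = [
    ("Berlin",  "type", "City"), ("Berlin",  "country", "Germany"),
    ("Berlin",  "population", "3.6M"),
    ("Potsdam", "type", "City"), ("Potsdam", "country", "Germany"),
    ("Paris",   "type", "City"), ("Paris",   "country", "France"),
    ("Paris",   "population", "2.1M"),
]

# Mining configuration: subjects are transactions, predicates are items.
transactions = defaultdict(set)
for s, p, o in triples:
    transactions[s].add(p)

def confidence(antecedent, consequent):
    """conf(A -> B) = support({A, B}) / support({A}), over subject transactions."""
    has_a = [t for t in transactions.values() if antecedent in t]
    if not has_a:
        return 0.0
    return sum(consequent in t for t in has_a) / len(has_a)

print(confidence("type", "country"))     # 1.0: every typed subject has a country
print(confidence("type", "population"))  # ~0.67: a candidate missing-fact hint
```

Other configurations (e.g. objects as transactions, or subject-object pairs as items) would reuse the same rule machinery over a different grouping of the triples, which is the gist of the "mining configurations" concept.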
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution HCAGORDIAN combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
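The problem itself is easy to state in code. Below is a naive breadth-first sketch with the one pruning rule that all Apriori-style approaches share (any superset of a unique combination is trivially unique, so only minimal combinations are reported); it is nowhere near the paper's optimized algorithms, and the table is an invented example.

```python
from itertools import combinations

def minimal_uccs(rows, columns):
    """Breadth-first search for minimal unique column combinations:
    the smallest column sets whose projected rows are all distinct.
    Supersets of an already-found combination are pruned, since any
    superset of a unique combination is itself unique."""
    found = []
    for size in range(1, len(columns) + 1):
        for combo in combinations(columns, size):
            # Prune non-minimal candidates.
            if any(set(u) <= set(combo) for u in found):
                continue
            projected = [tuple(row[c] for c in combo) for row in rows]
            if len(set(projected)) == len(projected):  # no duplicates
                found.append(combo)
    return found

table = [
    {"first": "Ada",  "last": "Lovelace", "year": 1815},
    {"first": "Ada",  "last": "Byron",    "year": 1815},
    {"first": "Alan", "last": "Turing",   "year": 1912},
]
print(minimal_uccs(table, ["first", "last", "year"]))  # [('last',)]
```

The exponential candidate lattice this loop enumerates is exactly why efficient candidate generation and statistics-based pruning, as in the paper, matter in practice.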
As digital media infiltrate an ever greater proportion of our lives, concern about the possibility of various forms of technology addiction has emerged. Researchers have developed a variety of self-report scales for technology addiction in areas such as the Internet, smartphones, video games, social network sites (SNS), and television. However, no uniform criteria or definition exists for technology addiction. The dimensions used to measure specific technology addiction outcomes lack a conceptual standard. As a result, linkages between the dimensions of different technology areas have not been examined broadly by the research community, which would be needed to develop a uniform technology addiction scale.
In this regard, firstly, a theoretical model was developed in order to extract common technology dimensions. Secondly, a systematic literature review in the areas of Internet, smartphone, video game, and SNS addiction was conducted in order to extract the dimensions used. To identify relevant studies, nine databases (Google Scholar, ScienceDirect, PubMed, Emerald Insight, Wiley, SpringerLink, ACM, IEEE, and JSTOR) were searched, producing 4698 results, of which 50 studies met the inclusion criteria. Thirdly, the developed theoretical model was utilized in order to determine the dimensions in each of the identified scales.
Based on analysis of the dimensional distributions, the findings suggest that there are common dimensions across areas of technology, such as “compulsive use” and “negative outcomes”, but also differences in dimensions across areas, such as “social comfort” and “mood regulation”, which are used more in the area of SNS. Moreover, new dimensions such as “cognitive absorption” or “utility and function loss” for technology addiction were extracted, which should be considered, as these have not yet been researched in a broader way. In addition, no gold standard for the conceptual criteria or definition of technology addiction has been developed yet.
Health effects attributed to environmental pollution resulting from the use of solvents such as benzene remain relatively unexplored among petroleum workers, personal users, and laboratory researchers. Solvents can cause various health problems, such as neurotoxicity, immunotoxicity, and carcinogenicity. They can be absorbed into the human body via the skin or the respiratory tract, where they interact with molecules responsible for biochemical and physiological processes of the brain.
Owing to the ever-growing demand for a solution, ionic liquids can be used as alternative solvents. Ionic liquids are salts that are in a liquid state at low temperature (below 100 °C), or even at room temperature. Ionic liquids offer a unique architectural platform that has attracted interest because of their unusual properties, which can be tuned in simple ways, such as by mixing two ionic liquids.
Ionic liquids are not only used as reaction solvents but have also become key to developing novel applications, owing to their thermal stability and electric conductivity combined with very low vapor pressure, in contrast to conventional solvents.
In this study, ionic liquids were used as solvent and reactant at the same time for the synthesis of novel nanomaterials for different applications, including solar cells, gas sensors, and water splitting.
The field of ionic liquids continues to grow and has become one of the most important branches of science. It appears to be at a point where research and industry can work together in a new way of thinking about green chemistry and sustainable production.
Sharing marketplaces emerged as the new Holy Grail of value creation by enabling exchanges between strangers. Identity disclosure, encouraged by platforms, cuts both ways: while inducing pre-transaction confidence, it is suspected of backfiring on the information senders through its discriminative potential. This study employs a discrete choice experiment to explore the role of names as signifiers of discriminative peculiarities and the importance of accompanying cues in peer choices of a ridesharing offer. We quantify users' preferences for quality signals in monetary terms and demonstrate a comparative disadvantage of male names of Middle Eastern descent for drivers and co-travelers. It translates into a lower willingness to accept and pay for an offer. Market simulations confirm the robustness of the findings. Further, we discover that females are choosier and include more signifiers of involuntary personal attributes in their decision-making. Price discounts and positive information only partly compensate for the initial disadvantage, and identity concealment is perceived negatively.
One for all, all for one
(2022)
We propose a conceptual model of acceptance of contact tracing apps based on the privacy calculus perspective. Moving beyond the duality of personal benefits and privacy risks, we theorize that users hold social considerations (i.e., social benefits and risks) that underlie their acceptance decisions. To test our propositions, we chose the context of COVID-19 contact tracing apps and conducted a qualitative pre-study and longitudinal quantitative main study with 589 participants from Germany and Switzerland. Our findings confirm the prominence of individual privacy calculus in explaining intention to use and actual behavior. While privacy risks are a significant determinant of intention to use, social risks (operationalized as fear of mass surveillance) have a notably stronger impact. Our mediation analysis suggests that social risks represent the underlying mechanism behind the observed negative link between individual privacy risks and contact tracing apps' acceptance. Furthermore, we find a substantial intention–behavior gap.
Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany as of 2015, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative method, and quantitative method. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
Since the beginning of the recent global refugee crisis, researchers have been tackling many of its associated aspects, investigating how we can help to alleviate this crisis, in particular using the capabilities of ICTs. In our research, we investigated the use of ICT solutions by refugees to foster the social inclusion process in the host community. To tackle this topic, we conducted thirteen interviews with Syrian refugees in Germany. Our findings reveal different ICT usages by refugees and how these contribute to feeling empowered. Moreover, we show the sources of empowerment for refugees that are gained by ICT use. Finally, we identify two types of social inclusion benefits that were derived from empowerment sources. Our results provide practical implications for different stakeholders and decision-makers on how ICT usage can empower refugees, how this can foster the social inclusion of refugees, and what should be considered to support them in their integration efforts.
Extract: Topics in psycholinguistics and the neurocognition of language rarely attract the attention of journalists or the general public. One topic that has done so, however, is the potential benefits of bilingualism for general cognitive functioning and development, and as a precaution against cognitive decline in old age. Sensational claims have been made in the public domain, mostly by journalists and politicians. Recently (September 4, 2014) The Guardian reported that “learning a foreign language can increase the size of your brain”, and Michael Gove, the UK's previous Education Secretary, noted in an interview with The Guardian (September 30, 2011) that “learning languages makes you smarter”. The present issue of BLC addresses these topics by providing a state-of-the-art overview of theoretical and experimental research on the role of bilingualism for cognition in children and adults.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
(2017)
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2015)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of 2 variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic 2-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
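The principle of limiting factors described above amounts to a fuzzy AND: growth follows whichever environmental response is most limiting. The sketch below is a schematic illustration, not the VSL model itself; the response windows and the logistic alternative are invented for the example, but it shows the structural contrast the paper draws between a piecewise-linear ("thresholded") growth rate function and a smoother fuzzy-logic representation.

```python
import math

def ramp(x, lo, hi):
    """Piecewise-linear growth response: 0 below lo, 1 above hi,
    linear in between (the 'bounded response window')."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def smooth(x, lo, hi):
    """A smoother logistic response over the same window, in the spirit
    of the paper's fuzzy-logic reinterpretation of limiting factors."""
    mid, width = (lo + hi) / 2, (hi - lo) / 6
    return 1.0 / (1.0 + math.exp(-(x - mid) / width))

def growth_rate(temp, moisture, response):
    """Principle of limiting factors as a fuzzy AND (min): growth tracks
    the more limiting of the temperature and moisture responses.
    The window bounds below are hypothetical, for illustration only."""
    return min(response(temp, 5.0, 18.0), response(moisture, 0.1, 0.4))

# A warm but dry year: moisture, not temperature, limits growth.
print(growth_rate(15.0, 0.15, ramp))  # moisture-limited, ~0.17
```

The "switching recording" behavior is visible here: depending on the year, the min() picks a different variable, and the kink in `ramp` at the window edges is the non-smoothness the paper links to degraded assimilation skill.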
This article considers Isabella Bird’s representation of medicine in Unbeaten Tracks in Japan (1880) and Journeys in Persia and Kurdistan (1891), the two books in which she engages most extensively with both local (Chinese/Islamic) and Western medical science and practice. I explore how Bird uses medicine to assert her narrative authority and define her travelling persona in opposition to local medical practitioners. I argue that her ambivalence and the unease she frequently expresses concerning medical practice (particularly in her later adoption of the Persian appellation “Feringhi Hakīm” [European physician] to describe her work) serve as a means for her to negotiate the colonial and gendered pressures on Victorian medicine. While in Japan this attitude works to destabilise her hierarchical understanding of science and results in some acknowledgement of Japanese medical traditions, in Persia it functions more to disguise her increasing collusion with overt British colonial ambitions.
The closer the better
(2012)
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
The title compound was prepared by the reaction of 1,4,10,13-tetraoxa-7,16-diazacyclooctadecane with 4-chloro-2-methylphenoxyacetic acid in a 1:2 ratio. The structure was confirmed by elemental analysis, IR spectroscopy, NMR (1H, 13C) spectroscopy, and X-ray diffraction analysis. Intermolecular hydrogen bonds between the azonium protons and oxygen atoms of the carboxylate groups were found. The immunoactive properties of the title compound were screened. The compound suppresses spontaneous and Con A-stimulated cell proliferation in vitro and can therefore be considered an immunodepressant.
This study investigates whether number dissimilarities on subject and object DPs facilitate the comprehension of subject- and object-extracted centre-embedded relative clauses in children with Grammatical Specific Language Impairment (G-SLI). We compared the performance of a group of English-speaking children with G-SLI (mean age: 12;11) with that of two groups of younger typically developing (TD) children, matched on grammar and receptive vocabulary, respectively. All groups were more accurate on subject-extracted relative clauses than object-extracted ones and, crucially, they all showed greater accuracy for sentences with dissimilar number features (i.e., one singular, one plural) on the head noun and the embedded DP. These findings are interpreted in the light of current psycholinguistic models of sentence comprehension in TD children and provide further insight into the linguistic nature of G-SLI.
We elicited the production of various types of relative clauses in a group of German-speaking children with specific language impairment (SLI) and typically developing controls in order to test the movement optionality account of grammatical difficulty in SLI. The results show that German-speaking children with SLI are impaired in relative clause production compared to typically developing children. The alternative structures that they produce consist of simple main clauses, as well as nominal and prepositional phrases produced in isolation, sometimes contextually appropriate, and sometimes not. Crucially for evaluating the movement optionality account, children with SLI produce very few instances of embedded clauses where the relative clause head noun is pronounced in situ; in fact, such responses are more common among the typically developing child controls. These results underscore the difficulty German-speaking children with SLI have with structures involving movement, but provide no specific support for the movement optionality account.
The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child-directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and by the structural intervention approaches co-exist and that they both influence the performance of children on transitive relative clauses, but in a manner that is modulated by age. These results reveal a sensitivity to animacy mismatch already in 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain the children's performance, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions.
Starting with 4-year-olds but especially in 5-year-olds, the number mismatch supported comprehension—a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of testing experimentally contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition.
The aim of this work was the generation of carbon materials with high surface area exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). To this end, new strategies for the synthesis of novel carbon materials with designed porosity were developed that are particularly useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a rather large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and hinder reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which is undesirable for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that provided precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work, whereby mesophase pitch proved to be advantageous when a less disordered carbon microstructure is desired. Using mesophase pitch as carbon precursor, two templating strategies were developed based on the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchically interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates.
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro- / mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and support material for supercapacitors.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in more facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, leading to uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants’ sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.
Metabolically active microbial communities are present in a wide range of subsurface environments. Techniques like enumeration of microbial cells, activity measurements with radiotracer assays, and the analysis of porewater constituents are currently being used to explore the subsurface biosphere, alongside molecular biological analyses. However, many of these techniques reach their detection limits due to low microbial activity and abundance. Direct measurements of microbial turnover not only face issues of insufficient sensitivity; they also provide information about only a single specific process, whereas in sediments many different processes can occur simultaneously. Therefore, the development of a new technique to measure total microbial activity would be a major improvement. A new tritium-based hydrogenase enzyme assay appeared to be a promising tool to quantify total living biomass, even in low-activity subsurface environments. In this PhD project, total microbial biomass and microbial activity were quantified in different subsurface sediments using established techniques (cell enumeration and pore water geochemistry) as well as the new tritium-based hydrogenase enzyme assay. Using a large database of our own cell enumeration data from equatorial Pacific and north Pacific sediments together with published data, it was shown that the global geographic distribution of subseafloor sedimentary microbes varies between sites by 5 to 6 orders of magnitude and correlates with sedimentation rate and distance from land. Based on these correlations, global subseafloor biomass was estimated at 4.1 petagrams of carbon, or ~0.6 % of Earth's total living biomass, which is significantly lower than previous estimates. Despite this massive reduction in estimated biomass, the subseafloor biosphere is still an important player in global biogeochemical cycles.
To understand the relationship between microbial activity, abundance, and organic matter flux into the sediment, an expedition to the equatorial Pacific upwelling area and the north Pacific Gyre was carried out. Oxygen respiration rates in subseafloor sediments from the north Pacific Gyre, which are deposited at sedimentation rates of 1 mm per 1000 years, showed that microbial communities can survive for millions of years without a fresh supply of organic carbon. In contrast to the north Pacific Gyre, oxygen was completely depleted within the upper few millimeters to centimeters in sediments of the equatorial upwelling region due to a higher supply of organic matter and higher metabolic activity. The occurrence and variability of electron acceptors across depths and sites thus make the subsurface a complex environment for the quantification of total microbial activity. Recent studies showed that electron acceptor processes previously thought to thermodynamically exclude each other can occur simultaneously. In many cases, therefore, a simple measure of total microbial activity would be a better and more robust solution than assays for several specific processes, for example sulfate reduction rates or methanogenesis. Enzyme or molecular assays provide a more general approach, as they target key metabolic compounds. Since hydrogenase enzymes are ubiquitous in microbes, the recently developed tritium-based hydrogenase radiotracer assay is applied to quantify hydrogenase enzyme activity as a parameter of total living cell activity. Hydrogenase enzyme activity was measured in sediments from different locations (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico). In sediment samples that contained nitrate, we found the lowest cell-specific enzyme activity, around 10^-5 nmol H2 cell^-1 d^-1.
With decreasing energy yield of the electron acceptor used, cell-specific hydrogenase activity increased, and maximum values of up to 1 nmol H2 cell^-1 d^-1 were found in samples with methane concentrations of >10 ppm. Although hydrogenase activity cannot be converted directly into a turnover rate of a specific process, cell-specific activity factors can be used to identify specific metabolisms and to quantify the metabolically active microbial population. In another study, on sediments from the Nankai Trough, microbial abundance and hydrogenase activity data show that both the habitat and the activity of subseafloor sedimentary microbial communities have been impacted by seismic activity. An increase in hydrogenase activity near the fault zone revealed that the microbial community was supplied with hydrogen as an energy source and that the microbes were specialized for hydrogen metabolism.
Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico) with different predominant electron acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product, or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in all samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies depending on the predominant terminal electron acceptor. The lowest per-cell potential rates characterize the zone of nitrate reduction, and the highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to the predominant electron acceptor include (i) the increasing importance of fermentation in successively deeper biogeochemical zones and (ii) adaptation of hydrogenases to successively higher concentrations of H2 in successively deeper zones.
Technical report
(2019)
Design and implementation of service-oriented architectures raises numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School gives each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the 'argumentative microtext corpus' [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801-815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
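The MST decoding idea, picking the highest-scoring tree over local attachment scores, can be sketched for a tiny argument graph by exhaustive search over head choices (real decoders use the Chu-Liu/Edmonds algorithm or an ILP solver; the four-unit score matrix below is invented):

```python
from itertools import product

def mst_decode(scores, root=0):
    """Exhaustive MST-style decoding over head choices (fine for tiny graphs).

    scores[h][d]: local score for attaching dependent unit d to head h.
    Returns the highest-scoring head assignment forming a tree rooted at `root`.
    """
    n = len(scores)
    nodes = [d for d in range(n) if d != root]
    best, best_heads = float("-inf"), None
    for heads in product(range(n), repeat=len(nodes)):
        assign = dict(zip(nodes, heads))
        if any(h == d for d, h in assign.items()):
            continue  # no self-attachment
        # reject cycles: every unit must reach the root via its head chain
        ok = True
        for d in nodes:
            seen, cur = set(), d
            while cur != root:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = assign[cur]
            if not ok:
                break
        if not ok:
            continue
        total = sum(scores[h][d] for d, h in assign.items())
        if total > best:
            best, best_heads = total, assign
    return best_heads

# Toy local attachment scores for 4 argumentative units (0 = central claim)
scores = [
    [0.0, 0.9, 0.2, 0.1],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.1, 0.0, 0.8],
    [0.0, 0.2, 0.3, 0.0],
]
tree = mst_decode(scores)   # {1: 0, 2: 1, 3: 2}
```

An ILP decoder would express the same tree constraints (one head per unit, no cycles) as linear inequalities, which makes it easy to add task-specific constraints on top.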
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I jointly work with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or selecting a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality by unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints when the speaker is moving while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
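The flavor of the job-allocation problem from part one can be illustrated with a simple first-fit-decreasing sketch; the job names, loads, and node capacities are hypothetical, and the thesis's actual optimization formulations, heuristics, and RL admission controller are considerably more involved:

```python
def greedy_allocate(jobs, capacities):
    """Greedy first-fit-decreasing allocation of job loads to node capacities.

    A toy stand-in for in-network job placement: `jobs` maps job name to an
    abstract compute demand, `capacities` maps node name to available supply.
    """
    placement = {}
    remaining = dict(capacities)
    # place the heaviest jobs first to reduce fragmentation
    for job, load in sorted(jobs.items(), key=lambda kv: -kv[1]):
        # pick the feasible node with the most remaining capacity
        candidates = [n for n, cap in remaining.items() if cap >= load]
        if not candidates:
            return None  # an admission controller would reject this application
        node = max(candidates, key=remaining.get)
        placement[job] = node
        remaining[node] -= load
    return placement

# Hypothetical acoustic-processing jobs spread over two wireless nodes
jobs = {"fft": 3, "beamform": 5, "encode": 2}
nodes = {"n1": 6, "n2": 5}
plan = greedy_allocate(jobs, nodes)
```

A real allocator would additionally weigh wireless interference, routing, and latency, which is exactly what makes the joint problem in the thesis hard.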
Education in the knowledge society faces many problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learning and learner satisfaction (Prammanee, 2005), since “to teach is to communicate, to communicate is to interact, to interact is to learn” (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcome and learner satisfaction on the other, some basic problems regarding a new learning culture using social networking software are discussed. Most educational institutions pay considerable attention to equipment and emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments, expecting that doing so will improve the outcome of the learning process. Despite this, the learning outcome reported in most studies is very limited, because the expectations of self-directed learning are much higher than the reality. Findings from an empirical study (investigating the role of teacher-learner interaction through the new digital medium wiki in higher education, learning outcome, and learner satisfaction) are presented, along with recommendations about the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the necessity of significant changes in the approach of vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning, and ICT wikis as a tool for knowledge construction instead of a tool for gathering information.
The climate is a complex dynamical system involving interactions and feedbacks among different processes at multiple temporal and spatial scales. Although numerous studies have attempted to understand the climate system, studies investigating its multi-scale characteristics are scarce. Further, present techniques are limited in their ability to unravel the multi-scale variability of the climate system. It is entirely plausible that extreme events and abrupt transitions, which are of great interest to the climate community, result from interactions among processes operating at multiple scales. For instance, storms, weather patterns, seasonal irregularities such as El Niño, floods and droughts, and decades-long climate variations can be better understood and even predicted by quantifying their multi-scale dynamics. This makes a strong argument for unravelling the interactions and patterns of climatic processes at different scales. Against this background, the thesis aims at developing measures to understand and quantify multi-scale interactions within the climate system.
In the first part of the thesis, I proposed two new methods, namely multi-scale event synchronization (MSES) and wavelet multi-scale correlation (WMC), to capture the scale-specific features present in climatic processes. The proposed methods were tested on various synthetic and real-world time series in order to check their applicability and replicability. The results indicate that both methods (WMC and MSES) are able to capture scale-specific associations between processes at different time scales in more detail than their traditional single-scale counterparts.
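The single-scale building block underlying event synchronization can be sketched as follows. This is a simplified variant of Quiroga-style event synchronization (the multi-scale MSES would apply such a measure per timescale after filtering the series), and the event times and tolerance below are invented:

```python
import math

def event_sync(events_a, events_b, tau):
    """Simplified event synchronization: fraction of events in one series
    that are followed within `tau` time units by an event in the other,
    counted in both directions and normalized by the geometric mean of the
    event counts. Not Quiroga's exact estimator, just the core idea.
    """
    def count(x, y):
        # events in x that have a partner in y within (0, tau]
        return sum(1 for tx in x if any(0 < ty - tx <= tau for ty in y))

    if not events_a or not events_b:
        return 0.0
    c_ab = count(events_a, events_b)
    c_ba = count(events_b, events_a)
    return (c_ab + c_ba) / math.sqrt(len(events_a) * len(events_b))

# Toy event trains (e.g. extreme-precipitation dates at two locations)
a = [1, 5, 9, 13]
b = [2, 6, 10, 30]
q = event_sync(a, b, tau=2)   # three of four events in a find a partner: 0.75
```

Running the same measure on versions of the series filtered to different timescales yields the scale-resolved association that MSES is built to capture.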
In the second part of the thesis, the proposed multi-scale similarity measures were used in constructing climate networks to investigate the evolution of spatial connections within climatic processes at multiple timescales. The proposed methods WMC and MSES, together with complex network were applied to two different datasets.
In the first application, climate networks based on WMC were constructed for univariate global sea surface temperature (SST) data to identify and visualize SST patterns that develop very similarly over time, and to distinguish them from those with long-range teleconnections to other ocean regions. Further investigation of the climate networks at different timescales revealed (i) various regions of high variability and co-variability, and (ii) short- and long-range teleconnection regions of varying spatial distance. The outcomes of the study not only re-confirmed existing knowledge on the link between SST patterns such as the El Niño-Southern Oscillation and the Pacific Decadal Oscillation, but also suggested new insights into the characteristics and origins of long-range teleconnections.
In the second application, I used the developed non-linear MSES similarity measure to quantify the multivariate teleconnections between extreme Indian precipitation and the climatic patterns of highest relevance for the Indian subcontinent. The results confirmed significant non-linear influences that were not well captured by traditional methods. Further, the strength and nature of the teleconnections varied substantially across India and across time scales.
Overall, the results of the investigations conducted in this thesis strongly highlight the need to consider the multi-scale aspects of climatic processes, and the proposed methods provide a robust framework for quantifying these multi-scale characteristics.
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño-Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically occur at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between SST time series across a range of scales and thereby provides deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions whose SSTs evolve similarly at a certain timescale and to distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
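The idea of scale-wise correlation underlying such an analysis can be sketched minimally as follows. For simplicity, this sketch uses Haar-style coarse-graining (block averaging) in place of a full wavelet transform, so it illustrates the principle rather than the WMC method itself; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length 2**scale (Haar-style smoothing)."""
    w = 2 ** scale
    n = (len(x) // w) * w                      # trim to a multiple of the window
    return x[:n].reshape(-1, w).mean(axis=1)

def multiscale_correlation(x, y, max_scale=4):
    """Pearson correlation between two series after coarse-graining at each scale."""
    return {s: float(np.corrcoef(coarse_grain(x, s), coarse_grain(y, s))[0, 1])
            for s in range(max_scale + 1)}

rng = np.random.default_rng(0)
t = np.arange(512)
slow = np.sin(2 * np.pi * t / 64)              # shared slow oscillation
x = slow + rng.normal(0, 1.0, t.size)          # plus independent fast noise
y = slow + rng.normal(0, 1.0, t.size)
corr = multiscale_correlation(x, y)
# the correlation grows with scale: the shared signal lives at coarse scales
```

The point of the example: at fine scales the independent noise dominates and the correlation is weak, while at coarse scales the shared slow oscillation emerges, exactly the kind of scale-specific association a single-scale correlation would blur together.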
The temporal dynamics of climate processes are spread across different timescales and, as such, studying these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-)processes. To capture the non-linear interactions between climatic events, the method of event synchronization has recently attracted increasing attention. The main drawback of the present estimation of event synchronization is its restriction to analysing the time series at one reference timescale only. Studying event synchronization at multiple scales would be of great interest for comprehending the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform with event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
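The single-scale event synchronization that MSES extends can be sketched as follows. This is a minimal version of the Quiroga-style pair counting with a fixed coincidence window `tau`; in MSES this measure would be applied to events extracted at each wavelet scale, which is omitted here, and the names are illustrative.

```python
import math

def event_sync(ex, ey, tau):
    """Event synchronization between two sorted lists of event times.

    Counts events in x occurring within tau after an event in y (and vice
    versa); simultaneous events count 1/2 toward each direction. Returns a
    symmetric strength Q in [0, 1].
    """
    c_xy = c_yx = 0.0
    for tx in ex:
        for ty in ey:
            d = tx - ty
            if 0 < d <= tau:
                c_xy += 1
            elif -tau <= d < 0:
                c_yx += 1
            elif d == 0:
                c_xy += 0.5
                c_yx += 0.5
    return (c_xy + c_yx) / math.sqrt(len(ex) * len(ey))

q_same = event_sync([2, 5, 9], [2, 5, 9], tau=1)    # identical series: Q = 1
q_diff = event_sync([2, 5, 9], [20, 30, 40], tau=1) # far-apart events: Q = 0
```

Note that the basic measure ignores at which timescale events emerge; the wavelet decomposition in MSES is what restores that information.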
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible, as accurately as possible, while remaining cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis (a network being a collection of nodes interconnected by links) to propose a new measure that identifies the critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field and has gained significant momentum over the last few years in areas as diverse as brain networks, social networks, technological networks and climate networks; the identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure – the weighted degree–betweenness (WDB) measure – to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
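As an illustration of node-ranking measures of this kind, the following sketch mixes normalized degree with betweenness centrality computed by Brandes' algorithm. This is a hypothetical stand-in: the paper's actual WDB measure has its own specific definition, and the function names, the mixing weight `alpha` and the example graph are illustrative only.

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted undirected graph
    given as {node: set_of_neighbours}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1      # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                        # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                    # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2.0                                    # undirected: pairs counted twice
    return bc

def degree_betweenness_score(adj, alpha=0.5):
    """Hypothetical stand-in for a degree/betweenness node-ranking measure:
    a plain mix of the two normalized centralities."""
    n = len(adj)
    deg = {v: len(adj[v]) / (n - 1) for v in adj}
    norm = (n - 1) * (n - 2) / 2 or 1
    bc = betweenness(adj)
    return {v: alpha * deg[v] + (1 - alpha) * bc[v] / norm for v in adj}

# star graph with hub 0: the hub lies on every shortest path and has max degree
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
score = degree_betweenness_score(star)
```

In a gauge-network setting, nodes with high scores of this kind are candidates for "critical" stations whose removal most degrades the network, which is the intuition behind evaluating a ranking via the decline rate of network efficiency.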
Simultaneous Barcode Sequencing of Diverse Museum Collection Specimens Using a Mixed RNA Bait Set
(2022)
A growing number of publications presenting results from sequencing natural history collection specimens reflect the importance of DNA sequence information from such samples. Ancient DNA extraction and library preparation methods, in combination with target gene capture, are a way of unlocking archival DNA, including from formalin-fixed wet-collection material. Here we report on an experiment in which we used an RNA bait set containing baits from a wide taxonomic range of species for DNA hybridisation capture of nuclear and mitochondrial targets, in order to analyse natural history collection specimens. The bait set used consists of 2,492 mitochondrial and 530 nuclear RNA baits and comprises specific barcode loci of diverse animal groups, including both invertebrates and vertebrates. The baits allowed us to capture DNA sequence information for target barcode loci from 84% of the 37 samples tested, with nuclear markers being captured more frequently, and their consensus sequences being more complete, than mitochondrial markers. Samples from dry material had a higher success rate than wet-collection specimens, although target sequence information could be captured from 50% of formalin-fixed samples. Our study illustrates how efforts to obtain barcode sequence information from natural history collection specimens may be combined, offering a way of implementing barcoding inventories of scientific collection material.
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct to this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes. First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived, which are confirmed by results of numerical simulations. Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations. Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, and directed percolation).
Development and application of novel genetic transformation technologies in maize (Zea mays L.)
(2007)
Plant genetic engineering approaches are of pivotal importance to both basic and applied research. However, the rapid commercialization of genetically engineered crops, especially maize, raises several ecological and environmental concerns, largely related to transgene flow via pollination. In most crops, the plastid genome is inherited uniparentally, in a maternal manner. Consequently, a trait introduced into the plastid genome would not be transferred to the crop's sexually compatible relatives via pollination. Thus, besides its several other advantages, plastid transformation provides transgene containment and is therefore an environmentally friendly approach to the genetic engineering of crop plants. Reliable in vitro regeneration systems allowing repeated rounds of regeneration are of utmost importance for the development of plastid transformation technologies in higher plants. While being the world's major food crops, cereals are among the plants most difficult to handle in tissue culture, which severely limits genetic engineering approaches. In maize, immature zygotic embryos provide the predominantly used material for establishing regeneration-competent cell or callus cultures for genetic transformation experiments. The procedures involved are demanding, laborious and time-consuming, and depend on greenhouse facilities. In one part of this work, a novel tissue culture and plant regeneration system was developed that uses maize leaf tissue and is thus independent of zygotic embryos and greenhouse facilities. Protocols were also established for (i) the efficient induction of regeneration-competent callus from maize leaves in the dark, (ii) inducing highly regenerable callus in the light, and (iii) the use of leaf-derived callus for the generation of stably transformed maize plants. Furthermore, several selection methods were tested for developing a plastid transformation system in maize; however, stable plastid-transformed maize plants could not yet be recovered.
Possible explanations, as well as suggestions for future attempts towards developing plastid transformation in maize, are discussed. Nevertheless, these results represent a first essential step towards developing chloroplast transformation technology for maize, a method that requires multiple rounds of plant regeneration and selection to obtain genetically stable transgenic plants. In order to apply the newly developed transformation system to the metabolic engineering of carotenoid biosynthesis, the daffodil phytoene synthase (PSY) gene was integrated into the maize genome. The results illustrate that expression of a recombinant PSY significantly increases carotenoid levels in leaves. The beta-carotene (pro-vitamin A) amounts in leaves of transgenic plants were increased by ~21% in comparison to the wild type. These results provide evidence that maize has significant potential to accumulate higher amounts of carotenoids, especially beta-carotene, through transgenic expression of phytoene synthases. Finally, progress was made towards developing transformation technologies in Peperomia (Piperaceae) by establishing an efficient leaf-based regeneration system. Factors determining plastid size and number in Peperomia, whose species display great interspecific variation in chloroplast size and number per cell, were also investigated. The results suggest that organelle size and number are regulated in a tissue-specific manner rather than in dependence on the plastid type. When plastid morphology was investigated in Peperomia species with giant chloroplasts, plasmatic connections between chloroplasts (stromules) were observed under the light microscope, in the absence of tissue fixation or GFP overexpression, demonstrating the relevance of these structures in vivo.
Furthermore, bacteria-like microorganisms were discovered within Peperomia cells, suggesting that this genus provides an interesting model not only for studying plastid biology but also for investigating plant-microbe interactions.
Vienna
(2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation, and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State; Vienna is an excellent example of this. In more recent years, however, cities have been pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration, which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at the outcomes of context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, the authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and that they play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which find various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices; here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, which describes, for example, coupled optical waveguides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method for solving the traveling wave equation, which yields a quasi-exact solution (up to numerical errors) – the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, in which the lattice is described by a continuous medium. Here, compactons are found analytically and are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic: a small part of the energy remains at the location of the collision. In finite lattices, this remaining part then triggers a multiple scattering process resulting in a chaotic state.
In the last decade, the number and dimensions of catastrophic flooding events in the Niger River Basin (NRB) have markedly increased. Despite the devastating impact of the floods on the population and the mainly agriculturally based economy of the riverine nations, awareness of the hazards in policy and science is still low. The urgency of this topic and the existing research deficits are the motivation for the present dissertation.
The thesis is an initial detailed assessment of the increasing flood risk in the NRB. The research strategy is based on four questions regarding (1) features of the change in flood risk, (2) reasons for the change in the flood regime, (3) expected changes of the flood regime given climate and land use changes, and (4) recommendations from previous analysis for reducing the flood risk in the NRB.
The question examining the features of change in the flood regime is answered by means of statistical analysis. Trend, correlation, changepoint, and variance analyses show that, in addition to the factors exposure and vulnerability, the hazard itself has also increased significantly in the NRB, in accordance with the decadal climate pattern of West Africa. The northern arid and semi-arid parts of the NRB are those most affected by the changes.
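As a hedged illustration of the trend analyses mentioned above, the following sketch computes the Mann-Kendall S statistic, a non-parametric trend indicator widely used in hydrology. The abstract does not specify which trend test was used, so this is an assumed example; the significance test (variance of S with tie correction) is omitted for brevity.

```python
def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of signs of all pairwise differences.

    S > 0 indicates an increasing trend, S < 0 a decreasing one; a z-score
    built from the variance of S (not shown) gives the significance.
    """
    n = len(x)
    return sum((x[j] > x[i]) - (x[j] < x[i])
               for i in range(n - 1) for j in range(i + 1, n))

s_up = mann_kendall_s([1, 2, 3, 5, 4, 6])   # mostly increasing series
s_flat = mann_kendall_s([3, 3, 3, 3])       # no trend
```

Because S depends only on the ordering of values, it is robust to the skewed, heavy-tailed distributions typical of annual flood peak series.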
Climate and land use changes are attributed as potential causes of the increase in flood magnitudes by means of a hypothesis-testing framework. Two different approaches, based on either data analysis or simulation, lead to similar results, showing that the influence of climatic changes is generally larger than that of land use changes. Only in the dry areas of the NRB is the influence of land use changes comparable to that of climatic alterations.
Future changes of the flood regime are evaluated using modelling results. First, ensembles of statistically and dynamically downscaled climate models based on different emission scenarios are analyzed. The models agree on a distinct increase in temperature; the precipitation signal, however, is not coherent. The climate scenarios are used to drive an eco-hydrological model. The influence of climatic changes on the flood regime is uncertain due to the unclear precipitation signal, but in general higher flood peaks are expected. In a next step, the effects of land use changes are integrated into the model. Different scenarios show that regreening might help to reduce flood peaks, whereas an expansion of agriculture might enhance the flood peaks in the NRB. As with the analysis of observed changes in the flood regime, the impacts of climate and land use changes in the future scenarios are most severe in the dry areas of the NRB.
In order to answer the final research question, the results of the above analyses are integrated into a range of recommendations for science and policy on how to reduce flood risk in the NRB. The main recommendations include a stronger consideration of the enormous natural climate variability in the NRB, a focus on so-called "no-regret" adaptation strategies which account for high uncertainty, and a stronger consideration of regional differences. Regarding the prevention and mitigation of catastrophic flooding, the most vulnerable and sensitive areas in the basin, the arid and semi-arid Sahelian and Sudano-Sahelian regions, should be prioritized. Finally, an active, science-based and science-guided flood policy is recommended. The enormous population growth in the NRB, in connection with the expected deterioration of environmental and climatic conditions, is likely to increase the region's vulnerability to flooding. A smart and sustainable flood policy can help mitigate these negative impacts of flooding on the development of riverine societies in West Africa.
Climate or land use?
(2017)
This study intends to contribute to the ongoing discussion on whether land use and land cover changes (LULC) or climate trends have the greater influence on the observed increase of flood magnitudes in the Sahel. A simulation-based approach is used to attribute the observed trends to the postulated drivers. For this purpose, the ecohydrological model SWIM (Soil and Water Integrated Model), with a new dynamic LULC module, was set up for the Sahelian part of the Niger River up to Niamey, including the main tributaries Sirba and Goroul. The model was driven with observed, reanalyzed climate and LULC data for the years 1950–2009. In order to quantify the shares of influence, one simulation was carried out with constant land cover as of 1950, and one including LULC. As a quantitative measure, the gradients of the simulated trends were compared to the observed trend. The modeling studies showed that for the Sirba River only the simulation including LULC was able to reproduce the observed trend; the simulation without LULC showed a positive trend for flood magnitudes but underestimated the trend significantly. For the Goroul River and the local flood of the Niger River at Niamey, the simulations were only partly able to reproduce the observed trend. In conclusion, the new LULC module enabled some first quantitative insights into the relative influence of LULC and climatic changes. For the Sirba catchment, the results imply that LULC and climatic changes contribute in roughly equal shares to the observed increase in flooding. For the other subcatchments, the results are less clear but show that climatic changes and LULC are both drivers of the flood increase; their shares, however, cannot be quantified. Based on these modeling results, we argue for a two-pillar adaptation strategy to reduce current and future flood risk: flood mitigation for reducing the LULC-induced flood increase, and flood adaptation for a general reduction of flood vulnerability.
The Tibetan Plateau is the largest elevated landmass in the world and profoundly influences atmospheric circulation patterns such as the Asian monsoon system. This area has therefore come increasingly into the focus of palaeoenvironmental studies. This thesis evaluates the applicability of organic biomarkers for palaeolimnological purposes on the Tibetan Plateau, with a focus on biomarkers derived from aquatic macrophytes. Submerged aquatic macrophytes must be considered a significant influence on sediment organic matter, owing to their high abundance in many Tibetan lakes. Because of their carbon metabolism, they can show highly 13C-enriched biomass, and it is therefore crucial for the interpretation of δ13C values in sediment cores to understand to what extent aquatic macrophytes contribute to the isotopic signal of the sediments in Tibetan lakes and how variations can be explained in a palaeolimnological context. Additionally, the high abundance of macrophytes makes them interesting as potential recorders of lake water δD. Hydrogen isotope analysis of biomarkers is a rapidly evolving field for reconstructing past hydrological conditions and is therefore of special relevance on the Tibetan Plateau, given the direct linkage between variations in monsoon intensity and changes in regional precipitation/evaporation balances. A set of surface sediment and aquatic macrophyte samples from the central and eastern Tibetan Plateau was analysed for the composition as well as the carbon and hydrogen isotopes of n-alkanes. It was shown how variable the δ13C values of bulk organic matter and leaf lipids can be in submerged macrophytes, even within a single species, and how strongly these parameters in the corresponding sediments are affected by them. Using a binary isotopic mixing model, the estimated contribution of the macrophytes was calculated to be up to 60% (mean: 40%) of total organic carbon and up to 100% (mean: 66%) of mid-chain n-alkanes.
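The binary isotopic mixing model mentioned above can be written down in a few lines: it is the standard two-endmember mass balance. The δ13C end-member values below are hypothetical round numbers chosen for illustration, not values from the thesis.

```python
def endmember_fraction(delta_sample, delta_a, delta_b):
    """Two-endmember (binary) isotope mixing model.

    Solves f_A * delta_a + (1 - f_A) * delta_b = delta_sample for the
    fraction f_A of source A in the sample (delta values in per mil).
    """
    return (delta_sample - delta_b) / (delta_a - delta_b)

# hypothetical per-mil values: a 13C-enriched macrophyte end-member (-15),
# a terrestrial/phytoplankton end-member (-30), and a sediment sample (-21)
f_macro = endmember_fraction(-21.0, -15.0, -30.0)   # fraction of macrophyte carbon
```

With these illustrative numbers the sample plots 60% of the way toward the macrophyte end-member; in practice the uncertainty of both end-member values propagates directly into the estimated fraction.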
Hydrogen isotopes of n-alkanes turned out to record the δD of meteoric water from summer precipitation. The apparent enrichment factor between water and n-alkanes was in the range of previously reported values (≈ -130‰) at the most humid sites, but smaller (average: -86‰) at sites with a negative moisture budget. This indicates an influence of evaporation and evapotranspiration on the δD of source water for aquatic and terrestrial plants. The offset between the δD of mid- and long-chain n-alkanes was close to zero in most of the samples, suggesting that lake water as well as soil and leaf water are affected to a similar extent by these effects. To apply biomarkers in a palaeolimnological context, the aliphatic biomarker fraction of a sediment core from Lake Koucha (34.0° N, 97.2° E; eastern Tibetan Plateau) was analysed for compound concentrations, δ13C and δD values. Before ca. 8 cal ka BP, the lake was dominated by aquatic macrophyte-derived mid-chain n-alkanes, while after 6 cal ka BP high concentrations of a C20 highly branched isoprenoid compound indicate a predominance of phytoplankton. These two principally different states of the lake were linked by a transition period with high abundances of microbial biomarkers. δ13C values were relatively constant for long-chain n-alkanes, while mid-chain n-alkanes showed variations between -23.5 and -12.6‰. The highest values were observed for the assumed period of maximum macrophyte growth during the late glacial and for the phytoplankton maximum during the middle and late Holocene. The enriched values were therefore interpreted as being caused by carbon limitation, which in turn was induced by high macrophyte and primary productivity, respectively. Hydrogen isotope signatures of mid-chain n-alkanes were shown to track a previously deduced episode of reduced moisture availability between ca. 10 and 7 cal ka BP, indicated by a 20‰ shift towards higher δD values.
Indications for cooler episodes at 6.0, 3.1 and 1.8 cal ka BP were obtained from drops in biomarker concentrations, especially of microbial-derived hopanoids, and from coincident shifts towards lower δ13C values. These episodes correspond well with cool events reported from other locations on the Tibetan Plateau as well as in the Northern Hemisphere. To conclude, the study of recent sediments and plants improved the understanding of the factors affecting the composition and isotopic signatures of aliphatic biomarkers in sediments. The concentrations and isotopic signatures of the biomarkers in Lake Koucha could be interpreted in a palaeolimnological context and contribute to the knowledge of the lake's history. Aquatic macrophyte-derived mid-chain n-alkanes were especially useful, due to their high abundance in many Tibetan lakes and their ability to record major changes in lake productivity and palaeo-hydrological conditions. They therefore have the potential to contribute to a fuller understanding of past climate variability in this key region for atmospheric circulation systems.
Central Asia is located at the confluence of large-scale atmospheric circulation systems. It is thus likely to be highly susceptible to changes in the dynamics of those systems; however, little is still known about the regional paleoclimate history. Here we present carbon and hydrogen isotopic compositions of n-alkanoic acids from a late Holocene sediment core from Lake Karakuli (eastern Pamir, Xinjiang Province, China). Instrumental evidence and isotope-enabled climate model experiments with the Laboratoire de Meteorologie Dynamique Zoom model version 4 (LMDZ4) demonstrate that δD values of precipitation in the region are influenced by both temperature and precipitation amount. We find that these parameters are inversely correlated on an annual scale, i.e., the climate has varied between relatively cool and wet and more warm and dry conditions over the last 50 years. Since the isotopic signals of these changes act in the same direction and are therefore additive, isotopes in precipitation are sensitive recorders of climatic change in the region. Additionally, we infer that plants use year-round precipitation (including snowmelt), and thus leaf wax δD values must also respond to shifts in the proportion of moisture derived from westerly storms during late winter and early spring. Downcore results give evidence for a gradual shift to a cooler and wetter climate between 3.5 and 2.5 cal kyr BP, interrupted by a warm and dry episode between 3.0 and 2.7 kyr BP. Further cool and wet episodes occur between 1.9 and 1.5 and between 0.6 and 0.1 kyr BP, the latter coeval with the Little Ice Age. Warm and dry episodes from 2.5 to 1.9 and from 1.5 to 0.6 kyr BP coincide with the Roman Warm Period and the Medieval Climate Anomaly, respectively. Finally, we find a drying trend in recent decades.
Regional comparisons lead us to infer that the strength and position of the westerlies, and wider northern hemispheric climate dynamics, control climatic shifts in arid Central Asia, leading to complex local responses. Our new archive from Lake Karakuli provides a detailed record of the local signatures of these climate transitions in the eastern Pamir.
Berlin has a unique club landscape in amateur and semi-professional football, in which clubs once founded by Turkish migrants hold a firm place. Football offers a social space for young people of different cultural, ethnic and religious backgrounds, in which groups form in order to compete against each other. Football likewise gives the individual the opportunity to subject the validity and relevance of prejudices and of common stereotypes about other groups to constant testing in everyday play. Football players can move between multicultural and mono-ethnic group constellations, and in some cases also in transnational constellations, thereby contributing substantially to the meaning-making of their own social belonging, which emerges from the tension between patterns of self-perception and perception by others. As a consequence, mechanisms of recognition are constituted in this space.
This dissertation deals with the everyday life of amateur and semi-professional football players of Turkish origin (delikanli), as well as with other social actors of the Turkish football world, such as "older" football players (agbi) and football coaches (hoca). The main aim of the work was to reconstruct the collective patterns of perception, interpretation and action of members of Turkish football clubs in general, and their self-presentation as well as their perception of the "others" in particular. The study sought to establish whether and to what extent traditional social behaviour patterns of the chosen group are reflected in this technically regulated and strongly competition-oriented space of action, and how they regulate the reciprocal relations between the "self" and the "others". In doing so, the relevance of origin-based stereotypes and prejudices in the collective constitution of self-perception and the understanding of others was reconstructed within the particular social field (Bourdieu, 2001) of football.
In dieser Arbeit wurde darüber hinaus beleuchtet, welche Rolle türkische Fußballvereine auf der einen Seite bei der Entstehung sozialer Raumzugehörigkeit zu den Stadtquartieren in Berlin einnehmen und welche Art von Mechanismen der sozialen Integration sie in diesen Vereinen herstellen. Auf der anderen Seite wurde hinterfragt, inwiefern sie zur sozialen Kohäsion zwischen diversen Kulturen beitragen. Daher wurde geprüft, ob und inwiefern die negativ konnotierte ethnozentrische Wahrnehmung von „Differenz“ (Bielefeld, 1998), die als soziales Konstrukt zwischen autochthonen und allochthonen Gruppen hergestellt wird, durch das Engagement der Vereinsakteure einen konstruktiven Wandel erfährt.
Übergeordnetes Ziel all dieser Forschungsfragen war es, ein fundiertes Verständnis über die Rolle von türkischen Fußballvereinen als soziale Mechanismen zu erlangen und deren Funktionsweise bei der Konstitution von Anpassungsstrategien in diesem sozialen Feld untersuchen. Detailliert wurde diese Rolle unter der Konzeptualisierung von sozialen Positionierungsmuster betrachtet, die als Gefüge von Deutungen des Alltäglichen verstanden werden, das individuelle und kollektive Handlungsmuster und implizit Muster des Fremdverstehens sowie des othering im Migrationskontext reguliert. Eine Rekonstruktion der sozialen Positionierungsmuster bietet eine eingehende soziologische Untersuchung dieser Teilnehmergruppe, die zudem Aufschluss über die Bedeutung und das Verständnis von ethnischer Zugehörigkeit für letztere gibt.
Neben umfangreicher Feldbeobachtung wurden in dieser qualitativen Studie mit Spielern verschiedener Vereine insgesamt zehn Gruppendiskussionen (Bohnsack, 2004) innerhalb ihrer Mannschaften zu gemeinsamen alltäglichen Erlebnissen und Erfahrungen durchgeführt, aufgezeichnet und mittels sozialwissenschaftlichem hermeneutischem Verfahren (Soeffner, 2004) interpretiert. Auch mit anderen Vereinsmitgliedern, d. h. mit Trainern bzw. hoca, Vorsitzenden, Managern und Sponsoren wurden jeweils zehn narrative und sieben biographische Einzelinterviews sowie sieben Experteninterviews durchgeführt. Deren Analyse erlaubt es, die Rolle dieser Mitglieder sowie wirkende Autoritätsmechanismen und kollektiv konstituierte Verhaltensmuster innerhalb der gesamten Vereinsgruppe zu rekonstruieren. Dabei wurde bezweckt, die Gesamtheit des sozialen Netzwerkes bzw. die Beziehungsschemata innerhalb der türkischen Fußballvereine Berlins zu verdeutlichen.
In der Arbeit werden zwei Standpunkte der theoretischen Auseinandersetzung verwendet. Auf der einen Seite wird die Lebensweltanalyse (Schütz und Luckmann, 1979, 1990) angewendet, um das soziale Erbe der in der Vergangenheit gesellschaftlich konstituierten Titulierung „Menschen mit Migrationshintergrund“ zu rekonstruieren, bzw. den Einfluss dieser sozialen Reproduktion auf die Wahrnehmungs-, Deutungs- und Handlungsmuster der Akteure zu untersuchen. Auf der anderen Seite wird die soziale Wirkung der tatsächlichen, alltäglichen Erfahrungsschemata im sozialen Feld des Fußballs auf die Selbstpositionierungen der Akteure mittels Goffmanscher Rahmenanalyse (Goffman, 1980) herausgearbeitet.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.”[p.23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove helpful for improving the performance of automatic coreference resolution, which is essential for good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts formed by conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, their widespread use in the digital sphere makes them highly relevant for applications that seek to extract information or sentiments from users' messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. For this purpose, we first created a corpus of such conversations and annotated it for coreference. We are interested not only in the coreference patterns but also in the overall discourse behavior of Twitter conversations. To address this, we additionally annotated coherence relations on the compiled corpus. The corpus is available online in a newly developed form that allows the tweets to be separated from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first provide a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, the differences between spoken and written modes have been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] and examine whether ACR on Twitter conversations benefits from mode-based separation of the out-of-domain training data.
This paper reopens the discussion on focus marking in Akan (Kwa, Niger-Congo) by examining the semantics of the so-called focus marker in the language. It is shown that the so-called focus marker expresses exhaustivity when it occurs in a sentence with narrow focus. The study employs four standard tests for exhaustivity proposed in the literature to examine the semantics of Akan focus constructions (Szabolcsi 1981, 1994; É. Kiss 1998; Hartmann and Zimmermann 2007). It is shown that although a focused entity with the so-called focus marker nà is interpreted to mean ‘only X and nothing/nobody else,’ this meaning appears to be pragmatic.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: First, there is a vertical gap in the translation of higher-level policies into local strategies and regulations. Second, there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used along with the obtained results and proposed benefits. Moreover, we reflect on the methodology applied and finally derive recommendations for future academic bridge policies.
Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes.
Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming, especially for users who are not trained in software development. Furthermore, the multi-objective nature of environmental modeling for the assessment of climate change impacts and the variety of geospatial data types (e.g., formats, scales, and georeferencing systems) add to these complexity challenges.
Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a solution to overcome these challenges and are increasingly demanded, especially by end users who are not IT experts. In this light, the major contributions of this thesis are:
(i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm.
(ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies.
(iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm.
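The constraint-driven composition sketched in (iii) can be pictured as a search over type-annotated services: starting from the data types the user already has, any service whose annotated inputs are satisfied may be applied, until the goal type becomes derivable. The minimal Python sketch below illustrates this idea with a hypothetical sea-level-rise service catalogue; the service names and types are invented for illustration and are not the thesis's actual domain model or synthesis algorithm.

```python
from collections import deque

# Hypothetical service catalogue: each service consumes and produces data types.
SERVICES = {
    "LoadDEM":          ({"Region"}, {"ElevationGrid"}),
    "SeaLevelScenario": ({"Region"}, {"WaterLevel"}),
    "FloodMask":        ({"ElevationGrid", "WaterLevel"}, {"InundationMap"}),
    "ImpactReport":     ({"InundationMap"}, {"Report"}),
}

def compose(available, goal):
    """Forward-chain over service types: find a shortest service sequence
    that derives `goal` from the initially `available` types (BFS)."""
    queue = deque([(frozenset(available), [])])
    seen = {frozenset(available)}
    while queue:
        types, plan = queue.popleft()
        if goal in types:
            return plan
        for name, (ins, outs) in SERVICES.items():
            if ins <= types:                    # all inputs satisfied
                nxt = frozenset(types | outs)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None                                 # goal not derivable

print(compose({"Region"}, "Report"))
```

In the thesis's setting, declarative constraints (e.g. "never use service X") additionally restrict the search; in this sketch they would correspond to filtering the candidate services inside the loop.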
Sinkholes and depressions are typical landforms of karst regions. They pose a considerable natural hazard to infrastructure, agriculture, the economy and human life in affected areas worldwide. The physico-chemical processes of sinkhole and depression formation are manifold, ranging from dissolution and material erosion in the subsurface to mechanical subsidence/failure of the overburden. This thesis addresses the mechanisms leading to the development of sinkholes and depressions by using complementary methods: remote sensing, distinct element modelling and near-surface geophysics.
In the first part, detailed information about the (hydro-)geological background, ground structures, morphologies and spatio-temporal development of sinkholes and depressions at a very active karst area at the Dead Sea is derived from satellite image analysis, photogrammetry and geologic field surveys. There, clusters with an increasing number of sinkholes have been developing since the 1980s within large-scale depressions and are distributed over different kinds of surface materials: clayey mud, sandy-gravel alluvium and lacustrine evaporites (salt). The morphology of the sinkholes differs depending on the material in which they form: sinkholes in sandy-gravel alluvium and salt are generally deeper and narrower than sinkholes in the interbedded evaporite and mud deposits. From repeated aerial surveys, collapse-precursory features such as small-scale subsidence, individual holes and cracks are identified in all materials. The analysis sheds light on the ongoing hazardous subsidence process, which is driven by the base-level fall of the Dead Sea and by the dynamic formation of subsurface water channels.
In the second part of this thesis, a novel 2D distinct element geomechanical modelling approach for simulating individual and multiple cavity growth and the development of sinkholes and large-scale depressions, implemented with the software PFC2D-V5, is presented. The approach involves a stepwise material removal technique in void spaces of arbitrarily shaped geometries and is benchmarked against analytical and boundary element method solutions for circular cavities. Simulated compression and tension tests are used to calibrate model parameters against bulk rock properties for the materials of the field site. The simulations show that cavity and sinkhole evolution is controlled by the material strength of both the overburden and the cavity host material, the depth and relative speed of the cavity growth, and the stress pattern developed in the subsurface. The major findings are: (1) A progressively deepening differential subrosion with variable growth speed yields a more fragmented stress pattern with stress interaction between the cavities. It favours multiple sinkhole collapses and nesting within large-scale depressions. (2) Low-strength materials do not support large cavities in the material removal zone, and subsidence is mainly characterised by gradual sagging into the material removal zone with synclinal bending. (3) High-strength materials support the formation of large cavities, leading to sinkhole formation by sudden collapse of the overburden. (4) Large-scale depressions form either by coalescence of collapsing holes, block-wise brittle failure, or gradual sagging and lateral widening.
The distinct-element-based approach is compared to results from remote sensing and geophysics at the field site. The numerical simulation outcomes are generally in good agreement with the derived morphometrics, documented surface and subsurface structures, and seismic velocities. Complementary findings on the subrosion process are provided by electrical and seismic measurements in the area.
Based on the novel combination of methods in this thesis, a generic model of karst landform evolution with focus on sinkhole and depression formation is developed. A deepening subrosion system related to preferential flow paths evolves and creates void spaces and subsurface conduits. This subsequently leads to hazardous subsidence, and the formation of sinkholes within large-scale depressions. Finally, a monitoring system for shallow natural hazard phenomena consisting of geodetic and geophysical observations is proposed for similarly affected areas.
Mechanical and/or chemical removal of material from the subsurface may generate large subsurface cavities, the destabilisation of which can lead to ground collapse and the formation of sinkholes. Numerical simulation of the interaction of cavity growth, host material deformation and overburden collapse is desirable to better understand the sinkhole hazard but is a challenging task due to the involved high strains and material discontinuities. Here, we present 2-D distinct element method numerical simulations of cavity growth and sinkhole development. Firstly, we simulate cavity formation by quasi-static, stepwise removal of material in a single growing zone of an arbitrary geometry and depth. We benchmark this approach against analytical and boundary element method models of a deep void space in a linear elastic material. Secondly, we explore the effects of properties of different uniform materials on cavity stability and sinkhole development. We perform simulated biaxial tests to calibrate macroscopic geotechnical parameters of three model materials representative of those in which sinkholes develop at the Dead Sea shoreline: mud, alluvium and salt. We show that weak materials do not support large cavities, leading to gradual sagging or suffusion-style subsidence. Strong materials support quasi-stable to stable cavities, the overburdens of which may fail suddenly in a caprock or bedrock collapse style. Thirdly, we examine the consequences of layered arrangements of weak and strong materials. We find that these are more susceptible to sinkhole collapse than uniform materials not only due to a lower integrated strength of the overburden but also due to an inhibition of stabilising stress arching. Finally, we compare our model sinkhole geometries to observations at the Ghor Al-Haditha sinkhole site in Jordan. Sinkhole depth ∕ diameter ratios of 0.15 in mud, 0.37 in alluvium and 0.33 in salt are reproduced successfully in the calibrated model materials. 
The model results suggest that the observed distribution of sinkhole depth ∕ diameter values in each material type may partly reflect sinkhole growth trends.
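The benchmarking against analytical models mentioned above typically rests on the classical Kirsch solution for the stress field around a circular hole in an infinite linear elastic plate under uniaxial far-field stress. The following Python sketch evaluates that closed-form solution (it is an illustration only, not the thesis code; PFC2D itself is a commercial distinct element package):

```python
import math

def kirsch_stress(S, a, r, theta):
    """Kirsch solution: stresses around a circular hole of radius a in an
    infinite elastic plate under uniaxial far-field stress S along x.
    Returns (sigma_rr, sigma_tt, sigma_rt) at polar position (r, theta)."""
    k = (a / r) ** 2                       # (a/r)^2
    k2 = k * k                             # (a/r)^4
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    srr = S / 2 * (1 - k) + S / 2 * (1 - 4 * k + 3 * k2) * c2
    stt = S / 2 * (1 + k) - S / 2 * (1 + 3 * k2) * c2
    srt = -S / 2 * (1 + 2 * k - 3 * k2) * s2
    return srr, stt, srt

# Stress concentration at the cavity wall (r = a, theta = 90 deg):
S = 1.0
print(kirsch_stress(S, 1.0, 1.0, math.pi / 2)[1])   # hoop stress = 3*S
```

The factor-of-three hoop-stress concentration at the cavity wall (and the tensile stress of -S at the crown) is what a distinct element model of a deep circular void must reproduce to pass such a benchmark.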
This opinion article describes recent approaches that use the "biorefinery" concept to lower the carbon footprint of typical mass polymers by replacing part of the fossil monomers with similar, or even the same, monomers made from regrowing dendritic biomass. Herein, new and green catalytic synthetic routes are presented for lactic acid (LA), isosorbide (IS), 2,5-furandicarboxylic acid (FDCA), and p-xylene (pXL). Furthermore, the syntheses of two unconventional monomers derivable from lignocellulosic biomass, i.e., alpha-methylene-gamma-valerolactone (MeGVL) and levoglucosenol (LG), are presented. All of these have the potential to enter the mass market in a cost-effective way and thereby recover lost areas for polymer materials. The different catalytic unit operations of the biorefinery are also discussed, along with the challenges that must be addressed along the synthesis path of each monomer.
Using biochemical and biotechnological approaches, the aim of this work was to understand the mechanism of protein-glucan interactions in the regulation and control of starch degradation. Although starch degradation starts with a phosphorylation process, the mechanisms by which this process controls and adjusts starch degradation are not yet fully understood. Phosphorylation is performed by two dikinase enzymes, α-glucan, water dikinase (GWD) and phosphoglucan, water dikinase (PWD). GWD and PWD phosphorylate the starch granule surface and thereby stimulate starch degradation by hydrolytic enzymes. Despite these important roles of GWD and PWD, the biochemical processes by which these enzymes regulate and adjust the rate of phosphate incorporation into starch during degradation have so far not been understood. Recently, some proteins were found to be associated with the starch granule. Two of these proteins are named Early Starvation Protein 1 (ESV1) and its homologue Like-Early Starvation Protein 1 (LESV). Both were supposed to be involved in the control of starch degradation, but their function has not been clearly established until now. To understand how ESV1- and LESV-glucan interactions are regulated and affect starch breakdown, the influence of the ESV1 and LESV proteins on the phosphorylating enzymes GWD and PWD and on the hydrolysing enzymes ISA, BAM, and AMY was analyzed. The analysis also determined the location of LESV and ESV1 in the chloroplast stroma of Arabidopsis. Mass spectrometry data identified the ESV1 and LESV proteins as products of the At1g42430 and At3g55760 genes, with predicted masses of ~50 kDa and ~66 kDa, respectively. The ChloroP program predicted that ESV1 lacks a chloroplast transit peptide, but predicted the first 56 N-terminal amino acids of LESV as a chloroplast transit peptide. Usually, the transit peptide is processed during transport of the proteins into plastids.
Given that this processing is critical, two forms each of ESV1 and LESV were generated and purified: a full-length form and a truncated form lacking the transit peptide, namely (ESV1 and tESV1) and (LESV and tLESV), respectively. Both protein forms were included in the analysis assays, but only slight differences in glucan binding and protein action between ESV1 and tESV1 were observed, while no differences in glucan binding or in the effect on GWD and PWD action were observed between LESV and tLESV. The results revealed that the presence of the N-terminus does not massively alter the action of ESV1 or LESV. Therefore, only the data for the ESV1 and tLESV forms were used to explain the function of both proteins.
The analysis revealed that the LESV and ESV1 proteins bind strongly to the starch granule surface. Furthermore, not all of either protein was released after incubation with starches when the granules were washed with 2% [w/v] SDS, indicating binding to deeper layers of the granule surface. This finding is supported by the binding of both proteins to starches after the free glucan chains had been removed from the surface by the action of ISA and BAM. Although both proteins are capable of binding to the starch structure, only LESV showed binding to amylose, while for ESV1 no such binding was observed. The alteration of glucan structures at the starch granule surface is essential for the incorporation of phosphate into the starch granule, as the phosphorylation of starch by GWD and PWD increased after removal of the free glucan chains by ISA. Furthermore, PWD was shown to be able to phosphorylate starch without prior phosphorylation by GWD.
Biochemical studies of the protein-glucan interactions of LESV and ESV1 with different types of starch revealed a potentially important mechanism for regulating and adjusting the phosphorylation process: the binding of LESV and ESV1 alters the glucan structures of starches and thereby renders the action of the dikinase enzymes (GWD and PWD) better able to control the rate of starch degradation. While ESV1 showed an antagonistic effect on PWD action, in that PWD action decreased without prephosphorylation by GWD and increased after prephosphorylation by GWD (Chapter 4), PWD showed a significant reduction in its action, with or without prephosphorylation by GWD, in the presence of ESV1, whether alone or together with LESV (Chapter 5). The presence of LESV and ESV1 together had the same effect on the phosphorylation process as each protein alone; it is therefore difficult to distinguish their specific functions. Moreover, no interactions were detected between LESV and ESV1, between either of them and GWD or PWD, or between GWD and PWD, indicating that these proteins work independently. It was also observed that the alteration of starch structure by LESV and ESV1 plays a role in adjusting starch degradation rates not only by affecting the dikinases but also by affecting some of the hydrolysing enzymes, since the presence of LESV and ESV1 was found to reduce, but not abolish, the action of BAM.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students studying computer science who intend to gain a bachelor's degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to help reduce the dropout/shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students across study years. Students are admitted to university via a centrally controlled admission procedure that depends mainly on their final school grades. This leads to a high percentage of students studying subjects they did not want. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. According to statistics from previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), with opposite gender majorities.
The second investigation took place at the department of computer science of the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between the students of these fields, who usually take some joint courses, is important for identifying which changes in teaching methods are necessary to address these different students. It was a two-stage study using two questionnaires: the main one is based on the Index of Learning Styles questionnaire by B. A. Solomon and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in learning-style preferences between male and female students of the different study fields, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely into the difficulties, issues, apprehensions and expectations of freshman students of computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing their field of study, or changing university. The research followed the same sample of students (with business informatics students in the majority) through more than three semesters. Difficulties and issues during the studies were documented, as well as the students' attitudes, apprehensions, and expectations. Opinions of some professors and lecturers, and their solutions to some of the students' problems, were also documented. Many participants had apprehensions and difficulties, especially with informatics subjects. Some business informatics participants began to think about changing university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most participants had continued their studies (either the studies they started with or the new studies they changed to) without leaving the higher education system.
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to produce detectable GeV-scale cascade emission, which is missing in the observations. The suppression of this secondary cascade implies either deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam due to the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to significantly deflect the pair beams, reducing the flux of the secondary cascade below the observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam losing energy to the instability before inverse Compton cooling. This dissertation consists of two studies of the role of the instability in the evolution of blazar-induced beams.
Firstly, we investigated the effect of sub-fG IGMFs on the beam energy loss by the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we know of no viable process that could explain the observed absence of GeV-scale cascade emission, and hence this range of field strengths can be excluded.
Secondly, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angle significantly without any significant energy loss, confirming a recent feedback study on a simplified one-dimensional beam distribution. However, narrowing diffusion feedback on beam particles with Lorentz factors below 10^6 might become relevant even though it is initially negligible. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state, in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This new intrinsic scattering of the cascade can result in time delays of around ten years, thus potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse Compton cooling and simulating the beam-plasma system at different points in the IGM.
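For orientation, the characteristic scale of the electrostatic instability can be illustrated with the textbook non-relativistic cold-beam reactive growth rate, gamma_max = (sqrt(3)/2^(4/3)) (n_b/n_p)^(1/3) omega_p. The sketch below uses hypothetical order-of-magnitude densities and is meant only to show the (n_b/n_p)^(1/3) scaling; the relativistic, kinetic growth rates computed in the dissertation differ substantially.

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19
EPS0 = 8.8541878128e-12
M_E = 9.1093837015e-31

def plasma_frequency(n_p):
    """Electron plasma frequency omega_p [rad/s] for density n_p [m^-3]."""
    return math.sqrt(n_p * E_CHARGE**2 / (EPS0 * M_E))

def reactive_growth_rate(n_b, n_p):
    """Textbook non-relativistic cold-beam reactive growth rate:
    gamma_max = (sqrt(3)/2**(4/3)) * (n_b/n_p)**(1/3) * omega_p.
    Shown only for the (n_b/n_p)^(1/3) scaling; the relativistic,
    kinetic rates relevant for blazar beams are much smaller."""
    return math.sqrt(3) / 2**(4 / 3) * (n_b / n_p)**(1 / 3) * plasma_frequency(n_p)

# Hypothetical order-of-magnitude values: IGM density ~1e-7 cm^-3 and a
# beam-to-plasma density ratio ~1e-18 (both illustrative assumptions).
n_p = 1e-7 * 1e6                       # m^-3
rate = reactive_growth_rate(1e-18 * n_p, n_p)
print(f"growth rate ~ {rate:.2e} 1/s, e-folding time ~ {1/rate:.2e} s")
```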
Since their discovery in 1610 by Galileo Galilei, Saturn's rings continue to fascinate both experts and amateurs. Countless numbers of icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance to Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data, provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion resumes about origin and evolution of planetary rings, and growth processes in tidal environments. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and masses of the collision partners and yield a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes as there are coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and furthermore concentrating on coagulation alone, a coagulation equation, including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is reproduced from basic principles and denotes a limit case to the derived coagulation equation. Qualitative and quantitative analysis of the relevance of adhesion to force-free granular gases and to those under the influence of Keplerian shear is investigated. Capture probability, agglomerate stability, and the mass spectrum evolution are investigated in the context of adhesive interactions. A size dependent radial limit distance from the central planet is obtained refining the Roche criterion. Furthermore, capture probability in the presence of adhesion is generally different compared to the case of pure gravitational capture. 
In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation revealed that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of centimeters.
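For reference, the phenomenological Smoluchowski equation that emerges as a limit case can be written in its standard continuous form (here K is the coagulation kernel and n(m,t) the mass distribution; the equation derived in the thesis additionally carries the restricted sticking probability):

```latex
\frac{\partial n(m,t)}{\partial t}
  = \frac{1}{2}\int_0^m K(m',\,m-m')\,n(m',t)\,n(m-m',t)\,\mathrm{d}m'
  \;-\; n(m,t)\int_0^\infty K(m,m')\,n(m',t)\,\mathrm{d}m'
```

The first term counts gains by merging of two smaller bodies into mass m; the second counts losses of bodies of mass m to further coagulation.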
Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in Machine Learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, Azure Kinect, compared to its predecessor Kinect v2 for treadmill walking, using a gold standard Vicon multi-camera motion capturing system and the 39-marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold standard system. Our results showed that the improved hardware and the motion tracking algorithm of the Azure Kinect camera led to a significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor the progress during gait rehabilitation in older people.
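The spatio-temporal parameters compared against the gold standard can be illustrated with a small sketch (an illustrative reconstruction, not the study's actual code); heel-strike event times and heel positions are assumed to have already been extracted from the skeleton data:

```python
# Illustrative sketch of spatio-temporal gait parameters from per-foot
# heel-strike events. Event times are in seconds, heel positions in metres
# along the walking axis. Not the study's actual pipeline.

def step_times(left_strikes, right_strikes):
    """Step time: interval between consecutive heel strikes of opposite feet."""
    events = sorted([(t, "L") for t in left_strikes] + [(t, "R") for t in right_strikes])
    return [t2 - t1 for (t1, f1), (t2, f2) in zip(events, events[1:]) if f1 != f2]

def stride_times(strikes):
    """Stride time: interval between consecutive heel strikes of the same foot."""
    return [b - a for a, b in zip(strikes, strikes[1:])]

def step_length(lead_heel, trail_heel):
    """Step length on a treadmill: anterior-posterior distance between the
    two heel positions (along the walking axis) at the instant of heel strike."""
    return abs(lead_heel - trail_heel)
```

Comparing these quantities, computed once from Kinect skeletons and once from Vicon markers, is what yields the agreement statistics reported above.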
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
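A toy version of the token-wise expansion step may help illustrate the idea (the actual approach is recommender-like and context-aware; the abbreviation dictionary below is invented for illustration, apart from the example given in the text):

```python
# Toy sketch of schema decryption: expansions learned from attribute pairs
# already mapped in existing ETL workflows are applied token by token to a
# cryptic label. The LEARNED dictionary here is invented for illustration.

LEARNED = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "CUST": "CUSTOMER",  # hypothetical entry
    "NO": "NUMBER",      # hypothetical entry
}

def decrypt(label, expansions=LEARNED):
    """Expand each '_'-separated token of a cryptic attribute label;
    unknown tokens are kept as-is."""
    return "_".join(expansions.get(tok, tok) for tok in label.split("_"))

print(decrypt("UNP_PEN_INT"))  # → UNPAID_PENALTY_INTEREST
```

The real system must additionally rank competing expansions for ambiguous tokens, which is where the large corpus of mapped attribute labels comes in.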
Insight by de—sign
(2023)
The calculus of design is a diagrammatic approach towards the relationship between design and insight. The thesis I am evolving is that insights are not discovered, gained, explored, revealed, or mined, but are operatively de—signed. The de in design neglects the contingency of the space towards the sign. The — is the drawing of a distinction within the operation. Space collapses through the negativity of the sign; the command draws a distinction that neglects the space for the form's sake. The operation to de—sign is counterintuitively not the creation of signs, but their removal, the exclusion of possible sign propositions of space. De—sign is thus an act of exclusion; the possibilities of space are crossed into form.
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the vision of organic photovoltaics becoming competitive with inorganic solar cells, including the realization of low-cost and large-area organic photovoltaics. For the best performing organic materials systems, the yield of charge generation can be very efficient. However, a detailed understanding of the free charge carrier generation mechanisms at the donor-acceptor interface and the energy loss associated with it needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells into commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on a comprehensive understanding of charge generation, recombination, and extraction in organic bulk heterojunction solar cells, summarized in six chapters on the cumulative basis of seven individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, enhancing domain purity and size. The quantitative investigation of the free charge formation was realized by utilizing and improving the time delayed collection field technique. Interestingly, a pronounced field dependence of the free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest to be caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by the threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, is designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field-dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to enlarged polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at open-circuit conditions. The size of the polymer domains correlates well with the field-dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT causes the PCE to increase from 3.6 to 6.1% due to an enhanced fill factor, short circuit current, and open circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
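The three quantities named above combine into the power conversion efficiency in the standard way (P_in is the incident light power density):

```latex
\mathrm{PCE} \;=\; \frac{J_\mathrm{sc}\, V_\mathrm{oc}\, \mathrm{FF}}{P_\mathrm{in}}
```

The reported jump from 3.6 to 6.1% thus follows directly from the simultaneous gains in fill factor, short circuit current, and open circuit voltage.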
Interestingly, the double-fluorinated version 2F-PCPDTBT exhibited a poorer fill factor (FF) despite a further reduction of geminate and non-geminate recombination losses. To further analyze this finding, a new technique is developed that measures the effective extraction mobility under charge carrier densities and electrical fields comparable to solar cell operating conditions. This method builds on the bias-enhanced charge extraction technique. With the knowledge of the carrier density under different electrical field and illumination conditions, a conclusive picture of the changes in charge carrier dynamics leading to differences in the fill factor upon fluorination of PCPDTBT is attained. The more efficient charge generation and reduced recombination upon fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and an efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends have only moderate fill factors of 54%, caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field-dependence of free charge generation using the time delayed collection field technique in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field-dependence of free charge generation are found to be unaffected by the excitation energy, including direct charge transfer excitation below the optical band gap. To access the non-detectable absorption at energies of the relaxed charge transfer emission, the absorption was reconstructed from the CT emission, induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at energies of charge transfer emission was identical to that for excitations with energies well above the optical band gap. Thus, generation proceeds via the split-up of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies, obtained by using different fullerene derivatives. A direct correlation between the efficiency of free carrier generation and the energy difference of the relaxed charge transfer state relative to the energy of the charge-separated state is found. These findings open up new guidelines for future material design: new high-efficiency materials require a minimum energetic offset between the charge transfer and the charge-separated state while keeping the HOMO level (and LUMO level) difference between donor and acceptor as small as possible.
The theory of atomic Boson-Fermion mixtures in the dilute limit beyond mean-field is considered in this thesis. Extending the formalism of quantum field theory, we derived expressions for the quasi-particle excitation spectra, the ground state energy, and related quantities for a homogeneous system to first order in the dilute gas parameter. In the framework of density functional theory, we could carry over the previous results to inhomogeneous systems. We then determined the density distributions for various parameter values and identified three different phase regions: (i) a stable mixed regime, (ii) a phase-separated regime, and (iii) a collapsed regime. We found a significant contribution of exchange-correlation effects in the latter case. Next, we determined the shift of the Bose-Einstein condensation temperature caused by Boson-Fermion interactions in a harmonic trap due to the redistribution of the density profiles. We then considered Boson-Fermion mixtures in optical lattices. We calculated the criterion for stability against phase separation and identified the Mott-insulating and superfluid regimes both analytically, within a mean-field calculation, and numerically, by virtue of a Gutzwiller ansatz. We also found new frustrated ground states in the limit of very strong lattices. Note: The author is the recipient of the Carl Ramsauer Prize 2004, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin, and Universität Potsdam.
A form-function mismatch?
(2019)
Planets outside our solar system, so-called "exoplanets", can be detected with different methods, and currently more than 5000 exoplanets have been confirmed, according to the NASA Exoplanet Archive. One major highlight of the studies on exoplanets in the past twenty years is the characterization of their atmospheres using transmission spectroscopy as the exoplanet transits. However, this characterization is a challenging process, and sometimes there are reported discrepancies in the literature regarding the atmosphere of the same exoplanet. One potential reason for the observed atmospheric inconsistencies is the so-called impact parameter degeneracy, which is strongly driven by the limb darkening effect of the host star. A brief introduction to those topics is presented in chapter 1, while the motivation and objectives of this work are described in chapter 2. The first goal is to clarify the origin of the transmission spectrum, which is an indicator of an exoplanet's atmosphere: whether it is real or influenced by the impact parameter degeneracy. A second goal is to determine whether photometry from space using the Transiting Exoplanet Survey Satellite (TESS) could improve on the major parameters of known exoplanetary systems, which are responsible for the aforementioned degeneracy. Three individual projects were conducted in order to address those goals. The three manuscripts are briefly presented in the manuscript overview in chapter 3. More specifically, chapter 4 presents the first manuscript, an extended investigation of the impact parameter degeneracy and its application to synthetic transmission spectra. Evidently, the limb darkening of the host star is an important driver of this effect. It keeps the degeneracy persisting through different groups of exoplanets, based on the uncertainty of their impact parameter and on the type of their host star. The second goal was addressed in the second and third manuscripts (chapter 5 and chapter 6, respectively).
Using observations from the TESS mission, two samples of exoplanets were studied: 10 transiting inflated hot Jupiters and 43 transiting grazing systems. Potentially, the refinement or confirmation of their major system parameters can assist in solving current or future discrepancies regarding their atmospheric characterization. In chapter 7, the conclusions of this work are discussed, while chapter 8 proposes how TESS measurements can discern between erroneous interpretations of transmission spectra, especially in systems where the impact parameter degeneracy is likely not applicable.
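For reference, the limb darkening effect driving the degeneracy is commonly parameterized by, e.g., the quadratic law (whether this particular law is the one used in the manuscripts is not stated here); μ = cos θ is the cosine of the angle between the line of sight and the local surface normal of the star:

```latex
\frac{I(\mu)}{I(1)} \;=\; 1 - u_1\,(1-\mu) - u_2\,(1-\mu)^2
```

Because the stellar intensity profile I(μ) and the impact parameter jointly shape the transit light curve, uncertainties in the coefficients u_1, u_2 and in the impact parameter can trade off against each other, which is the degeneracy at issue.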
Characterization of altered inflorescence architecture in Arabidopsis thaliana BG-5 x Kro-0 hybrid
(2018)
A reciprocal cross between two A. thaliana accessions, Kro-0 (Krotzenburg, Germany) and BG-5 (Seattle, USA), displays purple rosette leaves and a dwarf bushy phenotype in F1 hybrids when grown at 17 °C and a parental-like phenotype when grown at 21 °C. This temperature-dependent F1 dwarf-bushy phenotype is characterized by reduced growth of the primary stem together with an increased number of branches. The reduced stem growth was strongest at the first internode. In addition, we found that a temperature switch from 21 °C to 17 °C induced the phenotype only before the formation of the first internode of the stem. Similarly, the F1 dwarf-bushy phenotype could not be reversed when plants were shifted from 17 °C to 21 °C after the first internode was formed. Metabolic analysis showed that the F1 phenotype was associated with a significant upregulation of anthocyanin(s), kaempferol(s), salicylic acid, jasmonic acid, and abscisic acid. As it has been previously shown that the dwarf-bushy phenotype is linked to two loci, one on chromosome 2 from Kro-0 and one on chromosome 3 from BG-5, an artificial micro-RNA approach was used to investigate the necessary genes in these intervals. From the results obtained, it was found that two genes on chromosome 2, AT2G14120, which encodes DYNAMIN RELATED PROTEIN3B, and AT2G14100, which encodes a member of the cytochrome P450 family, CYP705A13, were necessary for the appearance of the F1 phenotype. It was also discovered that AT3G61035, which encodes another cytochrome P450 family protein, CYP705A13, and AT3G60840, which encodes MICROTUBULE-ASSOCIATED PROTEIN65-4, on chromosome 3 were both necessary for the induction of the F1 phenotype. To prove the causality of these genes, genomic constructs of the Kro-0 candidate genes on chromosome 2 were transferred to BG-5 and genomic constructs of the chromosome 3 candidate genes from BG-5 were transferred to Kro-0.
The T1 lines showed that these genes alone are not sufficient to induce the phenotype. In addition to the F1 phenotype, more severe phenotypes were observed in the F2 generations, which were grouped into five different phenotypic classes. Whilst seed yield was comparable between F1 hybrids and parental lines, three phenotypic classes in the F2 generation exhibited hybrid breakdown in the form of reproductive failure. This F2 hybrid breakdown was less sensitive to temperature and showed a dose-dependent effect of the loci involved in the F1 phenotype. The most severe class of hybrid breakdown phenotypes was observed only in the backcross population with the parent Kro-0, which indicates a stronger contribution of the BG-5 allele compared to the Kro-0 allele to the hybrid breakdown phenotypes. Overall, the findings of my thesis provide a further understanding of the genetic and metabolic factors underlying altered shoot architecture in hybrid dysfunction.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments, and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators to increase user engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets, and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and the graph structure to predict user engagement. The proposed approach achieves an accuracy of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
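The aggregation idea behind such a GNN layer can be sketched without any deep learning library (an illustrative toy, not the thesis' model, which additionally uses pre-trained text embeddings and learned weights):

```python
# Toy sketch of one message-passing step in a GNN: each node updates its
# embedding by mixing its own features with the mean of its neighbours'
# features. Weights, features, and the graph here are invented for
# illustration; real models learn the weights and stack several layers.

def gnn_layer(features, adjacency, w_self=0.5, w_neigh=0.5):
    """One mean-aggregation step: h_v' = w_self*h_v + w_neigh*mean(h_u, u in N(v))."""
    out = {}
    for v, h in features.items():
        neigh = adjacency.get(v, [])
        if neigh:
            mean = [sum(features[u][i] for u in neigh) / len(neigh)
                    for i in range(len(h))]
        else:
            mean = [0.0] * len(h)  # isolated node: no neighbour signal
        out[v] = [w_self * h[i] + w_neigh * mean[i] for i in range(len(h))]
    return out
```

This is also the core of the bot-detection setup described below: a node's own features are combined with an aggregation over its neighborhood before classification.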
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of a node's own features and an aggregation of the features of the node's neighborhood to classify social bot accounts. Our results indicate a 6% improvement in the area-under-the-curve score of the final predictions through the use of GNNs.
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
Bacteria are one of the most widespread kinds of microorganisms and play essential roles in many biological and ecological processes. Bacteria live either as independent individuals or in organized communities. At the level of single cells, interactions between bacteria, their neighbors, and the surrounding physical and chemical environment are the foundations of microbial processes. Modern microscopy imaging techniques provide attractive and promising means to study the impact of these interactions on the dynamics of bacteria. The aim of this dissertation is to deepen our understanding of four fundamental bacterial processes – single-cell motility, chemotaxis, bacterial interactions with environmental constraints, and their communication with neighbors – through a live cell imaging technique. By exploring these processes, we expanded our knowledge of so far unexplained mechanisms of bacterial interactions.
Firstly, we studied the motility of the soil bacterium Pseudomonas putida (P. putida), which swims through flagella propulsion, and has a complex, multi-mode swimming tactic. It was recently reported that P. putida exhibits several distinct swimming modes – the flagella can push and pull the cell body or wrap around it. Using a new combined phase-contrast and fluorescence imaging set-up, the swimming mode (push, pull, or wrapped) of each run phase was automatically recorded, which provided the full swimming statistics of the multi-mode swimmer. Furthermore, the investigation of cell interactions with a solid boundary illustrated an asymmetry for the different swimming modes; in contrast to the push and pull modes, the curvature of runs in wrapped mode was not affected by the solid boundary. This finding suggested that having a multi-mode swimming strategy may provide further versatility to react to environmental constraints.
We then determined how P. putida navigates toward chemoattractants, i.e. its chemotaxis strategies. We found that individual run modes show distinct chemotactic responses in nutrient gradients. In particular, P. putida cells exhibited an asymmetry in their chemotactic responsiveness; the wrapped mode (slow swimming mode) was affected by the chemoattractant, whereas the push mode (fast swimming mode) was not. These results can be seen as a starting point for understanding the more complex chemotaxis strategies of multi-mode swimmers, going beyond the well-known paradigm of Escherichia coli, which exhibits only one swimming mode.
Finally, we considered the cell dynamics in a dense population. Besides physical interactions with their neighbors, cells communicate their activities and orchestrate their population behaviors via quorum sensing. Molecules that are secreted into the surroundings by the bacterial cells act as signals and regulate the behaviour of the cell population. We studied P. putida's motility in a dense population by exposing the cells to environments with different concentrations of chemical signals. We found that higher amounts of chemical signals in the surroundings influenced the single-cell behaviour, suggesting that cell-cell communication may also affect the flagellar dynamics.
In summary, this dissertation studies the dynamics of a bacterium with a multi-mode swimming tactic and how it is affected by the surrounding environment using microscopy imaging. The detailed description of the bacterial motility in fundamental bacterial processes can provide new insights into the ecology of microorganisms.
Bacterial chemotaxis, a fundamental example of directional navigation in the living world, is key to many biological processes, including the spreading of bacterial infections. Many bacterial species were recently reported to exhibit several distinct swimming modes: the flagella may, for example, push the cell body or wrap around it. How do the different run modes shape the chemotaxis strategy of a multimode swimmer? Here, we investigate the chemotactic motion of the soil bacterium Pseudomonas putida as a model organism. By simultaneously tracking the position of the cell body and the configuration of its flagella, we demonstrate that individual run modes show different chemotactic responses in nutrient gradients and, thus, constitute distinct behavioral states. On the basis of an active particle model, we demonstrate that switching between multiple run states that differ in their speed and responsiveness provides the basis for robust and efficient chemotaxis in complex natural habitats.
Zinc is an essential trace element, making it crucial to have a reliable biomarker for evaluating an individual’s zinc status. The total serum zinc concentration, which is presently the most commonly used biomarker, is not ideal for this purpose, but a superior alternative is still missing. The free zinc concentration, which describes the fraction of zinc that is only loosely bound and easily exchangeable, has been proposed for this purpose, as it reflects the highly bioavailable part of serum zinc. This report presents a fluorescence-based method for determining the free zinc concentration in human serum samples, using the fluorescent probe Zinpyr-1. The assay has been applied on 154 commercially obtained human serum samples. Measured free zinc concentrations ranged from 0.09 to 0.42 nM with a mean of 0.22 ± 0.05 nM. It did not correlate with age or the total serum concentrations of zinc, manganese, iron or selenium. A negative correlation between the concentration of free zinc and total copper has been seen for sera from females. In addition, the free zinc concentration in sera from females (0.21 ± 0.05 nM) was significantly lower than in males (0.23 ± 0.06 nM). The assay uses a sample volume of less than 10 µL, is rapid and cost-effective and allows us to address questions regarding factors influencing the free serum zinc concentration, its connection with the body’s zinc status, and its suitability as a future biomarker for an individual’s zinc status.
The topic of synchronization forms a link between nonlinear dynamics and neuroscience. On the one hand, neurobiological research has shown that the synchronization of neuronal activity is an essential aspect of the working principle of the brain. On the other hand, recent advances in physical theory have led to the discovery of the phenomenon of phase synchronization. A method of data analysis motivated by this finding, phase synchronization analysis, has already been successfully applied to empirical data. The present doctoral thesis ties in with these converging lines of research. Its subject is methodical contributions to the further development of phase synchronization analysis, as well as its application to event-related potentials, a form of EEG data that is especially important in the cognitive sciences. The methodical contributions of this work consist, firstly, of a number of specialized statistical tests for a difference in synchronization strength between two different states of a system of two oscillators. Secondly, in view of the many-channel character of EEG data, an approach to multivariate phase synchronization analysis is presented. For the empirical investigation of neuronal synchronization, a classic experiment on language processing was replicated, comparing the effect of a semantic violation in a sentence context with that of a manipulation of physical stimulus properties (font color). Here, phase synchronization analysis detects a decrease of global synchronization for the semantic violation as well as an increase for the physical manipulation. In the latter case, by means of the multivariate analysis, the global synchronization effect can be traced back to an interaction of symmetrically located brain areas. The findings presented show that the physically motivated method of phase synchronization analysis can provide a relevant contribution to the investigation of event-related potentials in the cognitive sciences.
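The core quantity of bivariate phase synchronization analysis can be illustrated with a short sketch (an illustrative mean phase coherence index; whether this exact index is the one used in the thesis is not stated here):

```python
# Illustrative bivariate phase synchronization index (mean phase coherence):
# given instantaneous phases phi1, phi2 of two oscillators,
# R = |<exp(i*(phi1 - phi2))>| equals 1 for perfect phase locking and
# approaches 0 for independent, uniformly distributed phase differences.

import cmath

def phase_sync_index(phi1, phi2):
    """Mean phase coherence R in [0, 1] of two phase time series."""
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phi1, phi2))
    return abs(s) / len(phi1)
```

In practice the instantaneous phases would first be extracted from the EEG channels, e.g. via the analytic signal; a test for a difference in synchronization strength between two conditions then compares such indices statistically.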
Modeling random crawling, membrane deformation and intracellular polarity of motile amoeboid cells
(2018)
Amoeboid movement is one of the most widespread forms of cell motility that plays a key role in numerous biological contexts. While many aspects of this process are well investigated, the large cell-to-cell variability in the motile characteristics of an otherwise uniform population remains an open question that was largely ignored by previous models. In this article, we present a mathematical model of amoeboid motility that combines noisy bistable kinetics with a dynamic phase field for the cell shape. To capture cell-to-cell variability, we introduce a single parameter for tuning the balance between polarity formation and intracellular noise. We compare numerical simulations of our model to experiments with the social amoeba Dictyostelium discoideum. Despite the simple structure of our model, we found close agreement with the experimental results for the center-of-mass motion as well as for the evolution of the cell shape and the overall intracellular patterns. We thus conjecture that the building blocks of our model capture essential features of amoeboid motility and may serve as a starting point for more detailed descriptions of cell motion in chemical gradients and confined environments.
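The noisy bistable kinetics at the heart of the model can be illustrated with a minimal one-variable sketch (an illustrative reduction, not the authors' full phase-field model; the cubic nonlinearity and all parameter values are invented for illustration):

```python
# Minimal sketch of noisy bistable kinetics: a scalar variable c obeys
# dc/dt = c*(1-c)*(c-a) + sigma*xi(t), with two stable states c=0 and c=1.
# The noise amplitude sigma plays the role of a tuning knob between
# deterministic polarity formation and intracellular fluctuations.
# Euler-Maruyama integration; parameters are illustrative only.

import random

def simulate(c0=0.0, a=0.4, sigma=0.15, dt=0.01, steps=20000, seed=1):
    random.seed(seed)
    c = c0
    for _ in range(steps):
        drift = c * (1 - c) * (c - a)
        c += drift * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)
    return c
```

With sigma = 0 the dynamics relax deterministically to the nearest stable state; with noise, transitions between the two states occur, which is the kind of variability the cell-shape model couples to the phase field.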
The purpose of this study was to examine the test-retest reliability, and convergent and discriminative validity of a new taekwondo-specific change-of-direction (COD) speed test with striking techniques (TST) in elite taekwondo athletes. Twenty (10 males and 10 females) elite (athletes who compete at national level) and top-elite (athletes who compete at national and international level) taekwondo athletes with an average training background of 8.9 ± 1.3 years of systematic taekwondo training participated in this study. During the two-week test-retest period, various generic performance tests measuring COD speed, balance, speed, and jump performance were carried out during the first week and as a retest during the second week. Three TST trials were conducted with each athlete and the best trial was used for further analyses. The relevant performance measure derived from the TST was the time with striking penalty (TST-TSP). TST-TSP performances amounted to 10.57 ± 1.08 s for males and 11.74 ± 1.34 s for females. The reliability analysis of the TST performance was conducted after logarithmic transformation, in order to address the problem of heteroscedasticity. In both groups, the TST demonstrated a high relative test-retest reliability (intraclass correlation coefficients and 90% compatibility limits were 0.80 and 0.47 to 0.93, respectively). For absolute reliability, the TST’s typical error of measurement (TEM), 90% compatibility limits, and magnitudes were 4.6%, 3.4 to 7.7, for males, and 5.4%, 3.9 to 9.0, for females. The homogeneous sample of taekwondo athletes meant that the TST’s TEM exceeded the usual smallest important change (SIC) with 0.2 effect size in the two groups. 
The new test showed mostly very large correlations with linear sprint speed (r = 0.71 to 0.85) and dynamic balance (r = −0.71 and −0.74), large correlations with COD speed (r = 0.57 to 0.60) and vertical jump performance (r = −0.50 to −0.65), and moderate correlations with horizontal jump performance (r = −0.34 to −0.45) and static balance (r = −0.39 to −0.44). Top-elite athletes showed better TST performances than their elite counterparts. Receiver operating characteristic analysis indicated that the TST effectively discriminated between top-elite and elite taekwondo athletes. In conclusion, the TST is a valid and sensitive test to evaluate COD speed with taekwondo-specific skills, and reliable when considering the ICC and TEM. Although the usefulness of the TST for detecting small performance changes in the present population is questionable, the TST can detect moderate changes in taekwondo-specific COD speed.
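The absolute-reliability statistic used above can be sketched as follows (an illustrative reconstruction of a common TEM computation on log-transformed times, consistent with the logarithmic transformation mentioned in the text; the example data are invented):

```python
# Illustrative typical error of measurement (TEM) from test-retest pairs:
# on 100*ln(time), the SD of within-athlete differences divided by sqrt(2)
# approximates a percent error, which is robust to heteroscedasticity.
# Example values are invented, not the study's data.

import math

def tem_percent(test, retest):
    """TEM (approx. %) = SD of within-athlete log-differences / sqrt(2)."""
    diffs = [100 * (math.log(b) - math.log(a)) for a, b in zip(test, retest)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return sd / math.sqrt(2)
```

Comparing such a TEM with the smallest important change (here 0.2 of the between-athlete SD) is what determines whether the test can resolve small performance changes.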
We develop the method of Fischer-Riesz equations for general boundary value problems that are elliptic in the sense of Douglis-Nirenberg. To this end we reduce them to a boundary problem for a (possibly overdetermined) first-order system whose classical symbol has a left inverse. For such a problem there is a uniquely determined boundary value problem which is adjoint to the given one with respect to the Green formula. Using a well-elaborated theory of approximation by solutions of the adjoint problem, we find the Cauchy data of solutions of our problem.
We define weak boundary values of solutions to those nonlinear differential equations which appear as Euler-Lagrange equations of variational problems. As a result we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the current concept of mapping degree applies to the study of Lagrangian problems.
We elaborate a boundary Fourier method for studying an analogue of the Hilbert problem for analytic functions within the framework of generalised Cauchy-Riemann equations. The boundary value problem need not satisfy the Shapiro-Lopatinskij condition and so it fails to be Fredholm in Sobolev spaces. We derive a solvability condition for the Hilbert problem, which looks like those for ill-posed problems, and construct an explicit formula for approximate solutions.
The controlled dosage of substances from a device to its environment, such as a tissue or an organ in medical applications, or a reactor, room, machine, or ecosystem in technical ones, should ideally match the requirements of the application, e.g. in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and the contribution that geometrical design may make in realizing such features. The goals of this work included the design, fabrication, characterization, and experimental proof-of-concept of geometry-assisted triggerable dosing effects (a) with a sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular and morphological levels and, with particular attention, on the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
Background
Riociguat is the first of a new class of drugs, the soluble guanylate cyclase (sGC) stimulators. Riociguat has a dual mode of action: it sensitizes sGC to the body's own NO and can also increase sGC activity in the absence of NO. The NO-sGC pathway is impaired in many cardiovascular diseases such as heart failure, pulmonary hypertension, and diabetic nephropathy (DN). DN leads to high cardiovascular morbidity and mortality, and there is still a high unmet medical need. The urinary albumin excretion rate is a predictive biomarker for these clinical events. We therefore investigated the effect of riociguat, alone and in combination with the angiotensin II receptor antagonist (ARB) telmisartan, on the progression of DN in diabetic eNOS knockout mice, a new model closely resembling human pathology.
Methods
Seventy-six male eNOS knockout C57BL/6J mice were divided into four groups after receiving intraperitoneal high-dose streptozotocin: telmisartan (1 mg/kg), riociguat (3 mg/kg), riociguat + telmisartan (3 and 1 mg/kg), and vehicle. Fourteen mice served as non-diabetic controls. After 12 weeks, urine and blood were obtained and blood pressure was measured. Glucose concentrations were highly increased and similar in all diabetic groups.
Results
Riociguat, alone (105.2 ± 2.5 mmHg; mean ± SEM; n = 14) and in combination with telmisartan (105.0 ± 3.2 mmHg; n = 12), significantly reduced blood pressure versus diabetic controls (117.1 ± 2.2 mmHg; n = 14; p = 0.002 and p = 0.004, respectively), whereas telmisartan alone (111.2 ± 2.6 mmHg) showed only a modest blood-pressure-lowering trend (p = 0.071; n = 14). Single treatment with either riociguat (97.1 ± 15.7 µg/d; n = 13) or telmisartan (97.8 ± 26.4 µg/d; n = 14) did not significantly lower albumin excretion (p = 0.067 and p = 0.101, respectively). However, the combined treatment led to significantly lower urinary albumin excretion (47.3 ± 9.6 µg/d; n = 12) compared to diabetic controls (170.8 ± 34.2 µg/d; n = 13; p = 0.004), reaching levels similar to those of non-diabetic controls (31.4 ± 10.1 µg/d; n = 12).
Conclusion
Riociguat significantly reduced urinary albumin excretion in diabetic eNOS knockout mice that were refractory to treatment with ARBs alone. Patients with diabetic nephropathy refractory to ARB treatment have the worst prognosis among all patients with diabetic nephropathy. Our data indicate that additional stimulation of sGC on top of standard treatment with ARBs may offer a new therapeutic approach for patients with diabetic nephropathy resistant to ARB treatment.
Climate impacts on transocean dispersal and habitat in gray whales from the Pleistocene to 2100
(2015)
Arctic animals face dramatic habitat alteration due to ongoing climate change. Understanding how such species have responded to past glacial cycles can help us forecast their response to today's changing climate. Gray whales are among those marine species likely to be strongly affected by Arctic climate change, but a thorough analysis of past climate impacts on this species has been complicated by lack of information about an extinct population in the Atlantic. While little is known about the history of Atlantic gray whales or their relationship to the extant Pacific population, the extirpation of the Atlantic population during historical times has been attributed to whaling. We used a combination of ancient and modern DNA, radiocarbon dating and predictive habitat modelling to better understand the distribution of gray whales during the Pleistocene and Holocene. Our results reveal that dispersal between the Pacific and Atlantic was climate dependent and occurred both during the Pleistocene prior to the last glacial period and the early Holocene immediately following the opening of the Bering Strait. Genetic diversity in the Atlantic declined over an extended interval that predates the period of intensive commercial whaling, indicating this decline may have been precipitated by Holocene climate or other ecological causes. These first genetic data for Atlantic gray whales, particularly when combined with predictive habitat models for the year 2100, suggest that two recent sightings of gray whales in the Atlantic may represent the beginning of the expansion of this species' habitat beyond its currently realized range.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories such as in questions, relativization and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view. The constructions are built on two basic fronting operations: relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis adds to the growing body of tonal reflexes of A-bar movement reported for a few African languages. The thesis also provides an insight into the complementizer domain (C-domain) of Igbo.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often, more intensively, and to begin earlier with exam preparation. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time, rather they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias but also with poor early course behavior.
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is the black hole. Massive, dark objects are found in the majority of galaxies, and our own Galactic Centre contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and to look for deviations from real observations. Nuclear star clusters harbouring a massive, dark object with a mass of up to ~ten million solar masses are good testbeds for probing the event horizon of the potential SMBH with stars. The channels for interactions between stars and the central massive black hole are (a) that compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, known as an "Extreme Mass Ratio Inspiral" (EMRI), and (b) that stars can produce gas which is accreted by the SMBH, either through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
Genetic studies of the Eurasian brown bear (Ursus arctos) have so far focused on populations from Europe and North America, although the largest distribution area of brown bears is in Asia. In this study, we reveal population genetic parameters for the brown bear population inhabiting the Grand Kaçkar Mountains (GKM) in the northeast of Turkey, western Lesser Caucasus. Using both hair (N = 147) and tissue samples (N = 7) collected between 2008 and 2014, we found substantial levels of genetic variation at 10 microsatellite loci. Bear hair samples taken from rubbing trees worked better for genotyping than those from power poles, regardless of the year of collection. Genotyping also revealed that bears moved between habitat patches, despite ongoing massive habitat alterations and the creation of large water reservoirs. This population has the potential to serve as a genetic reserve for future reintroductions in the Middle East. Due to the importance of the GKM population for on-going and future conservation actions, the impacts of habitat alterations in the region ought to be minimized, e.g., by establishing green bridges or corridors over reservoirs and major roads to maintain habitat connectivity and gene flow among populations in the Lesser Caucasus.
The Indian summer monsoon (ISM) is one of the largest climate systems on Earth and impacts the livelihood of nearly 40% of the world's population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long-term high-resolution records, the spatial inhomogeneity of monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections, e.g., the El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long-term high-resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domains. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long-term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (transitional westerlies and ISM domain) in the Spiti valley and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate "snapshots" of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only a first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the active tectonic orogen of the Himalayas, it was essential to understand the role of regional tectonics in order to make valid interpretations of catchment erosion and detrital influx into the lake.
My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations on the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx, indicating extreme hydrological events in the past. Regional comparison for this time slice indicates a possible extended "break-monsoon-like" mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas, and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well-dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally dry conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations on these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
Under an ecological speciation scenario, the radiation of African weakly electric fish (genus Campylomormyrus) is caused by adaptation to different food sources, associated with diversification of the electric organ discharge (EOD). This study experimentally investigates a phenotype-environment correlation to further support this scenario. Our behavioural experiments showed that three sympatric Campylomormyrus species with significantly divergent snout morphology react differently to variation in substrate structure. While the short-snouted species (C. tamandua) exhibits a preference for sandy substrate, the long-snouted species (C. rhynchophorus) significantly prefers a stone substrate for feeding. A third species with intermediate snout size (C. compressirostris) does not exhibit any substrate preference. This preference matches the observation that long-snouted specimens probe deeper into the stone substrate, presumably enabling them to reach prey more distant from the substrate surface. These findings suggest that the diverse feeding apparatus in the genus Campylomormyrus may have evolved in adaptation to specific microhabitats, i.e., the substrate structures where these fish forage. Whether the parallel divergence in EOD is functionally related to this adaptation or solely serves as a prezygotic isolation mechanism remains to be elucidated.
In the era of social networks, the internet of things, and location-based services, many online services produce huge amounts of data that contain valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, pose the challenge of discovering geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in spatial, temporal, and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge: geospatial clustering. The first step of this challenge is to design a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case that we show in the thesis.
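The fixed distance-and-time-radius idea can be sketched as a DBSCAN-style pass over (x, y, t) points. This is a minimal illustration under assumed semantics, not the thesis's actual algorithm, and all names are illustrative:

```python
from math import hypot

def st_neighbors(points, i, eps_space, eps_time):
    """Indices of points within both the spatial and temporal radius of point i."""
    x, y, t = points[i]
    return [j for j, (px, py, pt) in enumerate(points)
            if j != i and hypot(px - x, py - y) <= eps_space and abs(pt - t) <= eps_time]

def st_dbscan(points, eps_space, eps_time, min_pts):
    """Density-based clustering with a fixed distance and time radius (DBSCAN-style)."""
    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = st_neighbors(points, i, eps_space, eps_time)
        if len(seeds) + 1 < min_pts:
            labels[i] = -1          # noise (may later become a border point)
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                was_noise = labels[j] == -1
                labels[j] = cluster
                if not was_noise:
                    nbrs = st_neighbors(points, j, eps_space, eps_time)
                    if len(nbrs) + 1 >= min_pts:   # j is a core point: expand
                        queue.extend(k for k in nbrs if labels[k] in (None, -1))
        cluster += 1
    return labels

# Two event groups close in space but far apart in time form separate clusters.
pts = [(0.0, 0.0, 0), (0.1, 0.0, 1), (0.0, 0.1, 2),
       (0.1, 0.1, 100), (0.0, 0.2, 101), (0.2, 0.1, 102)]
print(st_dbscan(pts, eps_space=0.5, eps_time=5, min_pts=2))  # → [0, 0, 0, 1, 1, 1]
```

The key design point is that a point must fall inside both radii at once, so spatially coincident events separated by a long time gap never merge.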
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of sequential rule mining. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e. data that contain geographic coordinates, time, and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in geospatiotemporal space. As an applicable use case study, we propose spatiotemporal crime analytics: discovering spatiotemporal patterns of crime in publicly available crime data.
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose use case studies for the analysis of georeferenced data in terms of situational and public safety awareness.
High growth firms (HGFs) are important for job creation and considered to be precursors of economic growth. We investigate how formal institutions, like product- and labor-market regulations, as well as the quality of regional governments that implement these regulations, affect HGF development across European regions. Using data from Eurostat, OECD, WEF, and Gothenburg University, we show that both regulatory stringency and the quality of the regional government influence the regional shares of HGFs. More importantly, we find that the effect of labor- and product-market regulations ultimately depends on the quality of regional governments: in regions with high quality of government, the share of HGFs is neither affected by the level of product market regulation, nor by more or less flexibility in hiring and firing practices. Our findings contribute to the debate on the effects of regulations by showing that regulations are not, per se, “good, bad, and ugly”, rather their impact depends on the efficiency of regional governments. Our paper offers important building blocks to develop tailored policy measures that may influence the development of HGFs in a region.
The study of outcrop modeling lies at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. In recent years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution along a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. Here, three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), were evaluated for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using a single algorithm. The operating mode of each algorithm involves capabilities as well as drawbacks, and no single algorithm can match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analyses performed on the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, emphasize similar results.
At the basin scale (× 1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. By contrast, at the bedding scale (× 0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase in spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. These scale-dependent types of carbonate heterogeneity are linked with the evaluation of the algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve the ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate the mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of a single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the essential computing flexibility to model carbonate systems. Closer feedback between advances in the fields of Sedimentology and Computing Geoscience should be promoted in future outcrop simulations to enhance 3-D geological models.
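As a toy illustration of why truncated-Gaussian-style simulation preserves ordered facies transitions, one can threshold a continuous random field. The smoothing scheme and thresholds below are arbitrary assumptions for the sketch, not parameters from this study, and a real TGSim uses a variogram-based conditional simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in 1-D Gaussian random field: moving-average smoothing of white noise
# (an assumed shortcut; a real TGSim would honour a variogram and hard data).
n = 200
noise = rng.standard_normal(n + 20)
field = np.convolve(noise, np.ones(21) / 21, mode="valid")
field = (field - field.mean()) / field.std()

# The TGSim idea: thresholds carve the continuous field into ordered facies
# (0, 1, 2), so transitions between depositional domains stay ordered.
facies = np.digitize(field, bins=[-0.5, 0.5])

# Because the underlying field is continuous, direct 0 -> 2 jumps are rare;
# SISim, by contrast, imposes no such ordering and yields a mosaic.
jumps = np.abs(np.diff(facies))
print("cells per facies:", np.bincount(facies), "max jump:", jumps.max())
```

This ordering property is exactly why the workflow above assigns TGSim to facies associations (ordered trends) and SISim to the lithofacies scale (mosaic-like bodies).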
This text contributes to research on the worldwide success of evangelical Christianity and offers a new perspective on the relationship between late modern capitalism and evangelicalism. To this end, the utilization of affect and emotion in evangelicalism to mobilize its members will be examined in order to identify similarities to their employment in late modern capitalism. Different examples from within the evangelical spectrum will be analyzed as affective economies in order to elaborate how affective mobilization is crucial for evangelicalism's worldwide success. The pivotal point of this text is the exploration of how evangelicalism is able to activate the voluntary commitment of its members, financiers, and missionaries. Gathered here are examples where the two spheres, evangelicalism and late modern capitalism, overlap and reciprocate, followed by a theoretical exploration of how the findings presented support a view of evangelicalism as an inner-worldly narcissism that contributes to an assumed re-enchantment of the world.
Galaxies are among the most complex systems that can currently be modelled with a computer. A realistic simulation must take into account cosmology and gravitation as well as effects of plasma, nuclear, and particle physics that occur on very different time, length, and energy scales. The Milky Way is the ideal test bench for such simulations, because we can observe millions of its individual stars whose kinematics and chemical composition are records of the evolution of our Galaxy. Thanks to the advent of multi-object spectroscopic surveys, we can systematically study stellar populations in a much larger volume of the Milky Way. While the wealth of new data will certainly revolutionise our picture of the formation and evolution of our Galaxy and galaxies in general, the big-data era of Galactic astronomy also confronts us with new observational, theoretical, and computational challenges.
This thesis aims at finding new observational constraints to test Milky-Way models, primarily based on infra-red spectroscopy from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and asteroseismic data from the CoRoT mission. We compare our findings with chemical-evolution models and more sophisticated chemodynamical simulations. In particular we use the new powerful technique of combining asteroseismic and spectroscopic observations that allows us to test the time dimension of such models for the first time. With CoRoT and APOGEE (CoRoGEE) we can infer much more precise ages for distant field red-giant stars, opening up a new window for Galactic archaeology.
Another important aspect of this work is the forward-simulation approach that we pursued when interpreting these complex datasets and comparing them to chemodynamical models.
The first part of the thesis contains the first chemodynamical study conducted with the APOGEE survey. Our sample comprises more than 20,000 red-giant stars located within 6 kpc from the Sun, and thus greatly enlarges the Galactic volume covered with high-resolution spectroscopic observations. Because APOGEE is much less affected by interstellar dust extinction, the sample covers the disc regions very close to the Galactic plane that are typically avoided by optical surveys. This allows us to investigate the chemo-kinematic properties of the Milky Way's thin disc outside the solar vicinity. We measure, for the first time with high-resolution data, the radial metallicity gradient of the disc as a function of distance from the Galactic plane, demonstrating that the gradient flattens and even changes its sign for mid-plane distances greater than 1 kpc.
Furthermore, we detect a gap between the high- and low-[$\alpha$/Fe] sequences in the chemical-abundance diagram (associated with the thin and thick disc) that unlike in previous surveys can hardly be explained by selection effects. Using 6D kinematic information, we also present chemical-abundance diagrams cleaned from stars on kinematically hot orbits. The data allow us to confirm without doubt that the scale length of the (chemically-defined) thick disc is significantly shorter than that of the thin disc.
In the second part, we present the results of the first combination of asteroseismic and spectroscopic data in the context of Galactic archaeology. We analyse APOGEE follow-up observations of 606 solar-like oscillating red giants in two CoRoT fields close to the Galactic plane. These stars cover a large radial range of the Galactic disc (4.5 kpc $\lesssim R_{\rm Gal}\lesssim15$ kpc) and a large age baseline (0.5 Gyr $\lesssim \tau\lesssim$ 13 Gyr), allowing us to study the age- and radius-dependence of the [$\alpha$/Fe] vs. [Fe/H] distributions. We find that the age distribution of the high-[$\alpha$/Fe] sequence appears to be broader than expected from a monolithically-formed old thick disc that stopped forming stars 10 Gyr ago. In particular, we discover a significant population of apparently young, [$\alpha$/Fe]-rich stars in the CoRoGEE data whose existence cannot be explained by standard chemical-evolution models. These peculiar stars are much more abundant in the inner CoRoT field LRc01 than in the outer-disc field LRa01, suggesting that at least part of this population has a chemical-evolution rather than a stellar-evolution origin, possibly due to a peculiar chemical-enrichment history of the inner disc. We also find that strong radial migration is needed to explain the abundance of super-metal-rich stars in the outer disc.
Finally, we use the CoRoGEE sample to study the time evolution of the radial metallicity gradient in the thin disc, an observable that has been the subject of observational and theoretical debate for more than 20 years. By dividing the CoRoGEE dataset into six age bins, performing a careful statistical analysis of the radial [Fe/H], [O/H], and [Mg/Fe] distributions, and accounting for the biases introduced by the observation strategy, we obtain reliable gradient measurements. The slope of the radial [Fe/H] gradient of the young red-giant population ($-0.058\pm0.008$ [stat.] $\pm0.003$ [syst.] dex/kpc) is consistent with recent Cepheid data. For the age range of $1-4$ Gyr, the gradient steepens slightly ($-0.066\pm0.007\pm0.002$ dex/kpc), before flattening again to reach a value of $\sim-0.03$ dex/kpc for stars with ages between 6 and 10 Gyr. This age dependence of the [Fe/H] gradient can be explained by a nearly constant negative [Fe/H] gradient of $\sim-0.07$ dex/kpc in the interstellar medium over the past 10 Gyr, together with stellar heating and migration. Radial migration also offers a new explanation for the puzzling observation that intermediate-age open clusters in the solar vicinity (unlike field stars) tend to have higher metallicities than their younger counterparts. We suggest that non-migrating clusters are more likely to be kinematically disrupted, which creates a bias towards high-metallicity migrators from the inner disc and may even steepen the intermediate-age cluster abundance gradient.
Adsorption layers of soluble surfactants enable and govern a variety of phenomena in surface and colloid science, such as foams. The ability of a surfactant solution to form wet foam lamellae is governed by the surface dilatational rheology: only systems having a non-vanishing imaginary part in their surface dilatational modulus, E, are able to form wet foams. The aim of this thesis is to illuminate the dissipative processes that give rise to the imaginary part of the modulus. Two controversial models are discussed in the literature. The reorientation model assumes that the surfactants adsorb in two distinct states differing in their orientation. This model is able to describe the frequency dependence of the modulus E; however, it assumes reorientation dynamics on the millisecond time scale. In order to assess this model, we designed an SHG pump-probe experiment that addresses the orientation dynamics. The results reveal that the orientation dynamics occur on the picosecond time scale, in strong contradiction with the two-state model. The second model regards the interface as an interphase: the adsorption layer consists of a topmost monolayer and an adjacent sublayer, and the dissipative process is due to the molecular exchange between the two layers. Assessing this model required the design of an experiment that discriminates between the surface compositional term and the sublayer contribution. Such an experiment was successfully designed, and results on elastic and viscoelastic surfactants provided evidence for the correctness of the model. Because of its inherent surface specificity, surface SHG is a powerful analytical tool that can be used to gain information on the molecular dynamics and reorganization of soluble surfactants, and it is a central element of both experiments. However, it imposes several structural requirements on the model system. During the course of this thesis, a proper model system was identified and characterized.
The combination of several linear and nonlinear optical techniques allowed for a detailed picture of the interfacial architecture of these surfactants.
With accelerating climate cooling in the late Cenozoic, glacial and periglacial erosion became more widespread on the surface of the Earth. The resultant shift in erosion patterns significantly changed the large-scale morphology of many mountain ranges worldwide. Whereas the glacial fingerprint is easily distinguished by its characteristic fjords and U-shaped valleys, the periglacial fingerprint is more subtle but potentially prevails in some mid- to high-latitude landscapes. Previous models have advocated a frost-driven control on debris production at steep headwalls and glacial valley sides. Here we investigate the important role that periglacial processes also play in less steep parts of mountain landscapes. Understanding the influence of frost-driven processes in low-relief areas requires a focus on the consequences of an accreting soil mantle, which characterises such surfaces. We present a new model that quantifies two key physical processes, frost cracking and frost creep, as functions of both temperature and sediment thickness. Our results yield new insights into how climate and sediment transport properties combine to scale the intensity of periglacial processes. The thickness of the soil mantle strongly modulates the relation between climate and the intensity of mechanical weathering and sediment flux. Our results also point to an offset between the conditions that promote frost cracking and those that promote frost creep, indicating that a stable climate can provide optimal conditions for only one of those processes at a time. Finally, quantifying these relations opens up the possibility of including periglacial processes in large-scale, long-term landscape evolution models, as demonstrated in a companion paper.