This paper addresses the semantic/pragmatic variability of tag questions in German and makes three main contributions. First, we document the prevalence and variety of question tags in German across three different types of conversational corpora. Second, by annotating question tags according to their syntactic and semantic context, discourse function, and pragmatic effect, we demonstrate the overlap and differences between the individual tag variants. Finally, we distinguish several groups of question tags by identifying the factors that influence speakers’ choices of tags in the conversational context, such as clause type, function, speaker/hearer knowledge, as well as conversation type and medium. These factors delimit variability by preventing certain German question tags from occurring in specific contexts or with particular functions.
Similar Yet Different
(2020)
The importance of intrinsically disordered late embryogenesis abundant (LEA) proteins in the tolerance to abiotic stresses involving cellular dehydration is undisputed. While structural transitions of LEA proteins in response to changes in water availability are commonly observed and several molecular functions have been suggested, a systematic, comprehensive and comparative study of possible underlying sequence-structure-function relationships is still lacking. We performed molecular dynamics (MD) simulations as well as spectroscopic and light scattering experiments to characterize six members of two distinct clades of LEA_4 family proteins from Arabidopsis thaliana that share only low sequence homology. We compared structural and functional characteristics to elucidate to what degree structure and function are encoded in LEA protein sequences and complemented these findings with physicochemical properties identified in a systematic bioinformatics study of the entire Arabidopsis thaliana LEA_4 family. Our results demonstrate that although the six experimentally characterized LEA_4 proteins have similar structural and functional characteristics, differences concerning their folding propensity and membrane stabilization capacity during a freeze/thaw cycle are evident. These differences cannot be easily attributed to sequence conservation, simple physicochemical characteristics or the abundance of sequence motifs. Moreover, the folding propensity does not appear to be correlated with membrane stabilization capacity. Therefore, the refinement of LEA_4 structural and functional properties is likely encoded in specific patterns of their physicochemical characteristics.
Radical reactions have found many applications in carbohydrate chemistry, especially in the construction of carbon–carbon bonds. The formation of carbon–heteroatom bonds has been less intensively studied. This mini-review summarizes the efforts to add heteroatom radicals to unsaturated carbohydrates such as endo-glycals. Starting from early examples, developed more than 50 years ago, the importance of such reactions for carbohydrate chemistry and recent applications will be discussed. After a short introduction, the mini-review is divided into sub-chapters according to the heteroatoms halogen, nitrogen, phosphorus, and sulfur. The mechanisms of radical generation by chemical or photochemical processes and the subsequent reactions of the radicals at the 1-position will be discussed. This mini-review cannot cover all aspects of heteroatom-centered radicals in carbohydrate chemistry, but should provide an overview of the various strategies and future perspectives.
Plants located adjacent to agricultural fields are important for maintaining biodiversity in semi-natural landscapes. To avoid undesired impacts on these plants due to herbicide application on the arable fields, regulatory risk assessments are conducted prior to registration to ensure proposed uses of plant protection products do not present an unacceptable risk. The current risk assessment approach for these non-target terrestrial plants (NTTPs) examines impacts at the individual level as a surrogate for protecting the plant community, owing to the inherent difficulties of directly assessing population- or community-level impacts. However, modelling approaches are suitable higher-tier tools to upscale individual-level effects to the community level. IBC-grass is a sophisticated plant community model, which has already been applied in several studies. However, as it is console application software, it was not deemed sufficiently user-friendly for risk managers and assessors to operate conveniently without prior expertise in ecological models. Here, we present a user-friendly and open-source graphical user interface (GUI) for the application of IBC-grass in regulatory herbicide risk assessment. It facilitates the use of the plant community model for predicting long-term impacts of herbicide applications on NTTP communities. The GUI offers two options to integrate herbicide impacts: (1) dose–response data based on current standard experiments (according to testing guidelines) and (2) specific effect intensities. Both options represent suitable higher-tier approaches for future risk assessments of NTTPs as well as for research on the ecological relevance of effects.
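The dose–response option mentioned above can be illustrated with a three-parameter log-logistic curve, a form commonly fitted to guideline plant-toxicity experiments. This is a minimal illustrative sketch — the function name and parameters are hypothetical and not taken from the GUI itself:

```python
def log_logistic_response(dose, ed50, slope):
    """Fraction of unaffected growth as a function of herbicide dose,
    following a log-logistic dose-response curve.
    ed50:  dose causing a 50% effect (hypothetical parameter name)
    slope: steepness of the curve around ed50"""
    if dose <= 0:
        return 1.0  # no exposure, no effect
    return 1.0 / (1.0 + (dose / ed50) ** slope)
```

Such a curve, fitted per species and endpoint, is what would map an applied field rate (or drift fraction of it) to an effect intensity inside a community model.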
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the use of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference from the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated whether the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving-average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
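The quality-weighted bias estimate described above reduces to a weighted mean of matched reflectivity differences. The following is an illustrative reconstruction of that idea — function and variable names are hypothetical, not taken from the authors' code:

```python
def quality_weighted_bias(gr_dbz, sr_dbz, quality):
    """Estimate the GR calibration bias (in dB) as the quality-weighted
    mean of matched GR-SR reflectivity differences. Matches with zero
    quality (e.g. fully blocked or strongly attenuated bins) drop out."""
    num = sum(q * (g - s) for g, s, q in zip(gr_dbz, sr_dbz, quality))
    den = sum(quality)
    if den == 0:
        raise ValueError("all match qualities are zero")
    return num / den
```

With uniform weights this collapses to the plain mean difference; down-weighting low-quality matches (beam blockage, PIA) is what makes the estimate robust.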
German orthography systematically marks all nouns (including nominalized words of other classes) by capitalizing their first letter. It is often claimed that readers benefit from the syntactic and semantic information conveyed by the uppercase letter, which makes the processing of sentences easier (e.g., Bock et al., 1985, 1989). In order to test this hypothesis, we asked 54 German readers to read single sentences systematically manipulated by a target word (N). In the experimental condition (EXP), we used semantic priming (in the following example: sick → cold) in order to build up a strong expectation of a noun, which was actually an attribute for the following noun (N+1) (translated to English e.g., “The sick writer had a cold (N) nose (N+1) …”). The sentences in the control condition were built analogously, but word N was purposefully altered (keeping word length and frequency constant) to make its interpretation as a noun extremely unlikely (e.g., “The sick writer had a blue (N) nose (N+1) …”). In both conditions, the sentences were presented either following German standard orthography (Cap) or in lowercase spelling (NoCap). The capitalized nouns in the EXP/Cap condition should then prevent garden-path parsing, as capital letters can be recognized parafoveally. However, in the EXP/NoCap condition, we expected a garden-path effect on word N+1 affecting first-pass fixations and the number of regressions, as the reader realizes that word N is instead an adjective. As the control condition does not include a garden-path, we expected to find (small) effects of the violation of the orthographic rule in the CON/NoCap condition, but no garden-path effect. As a global result, it can be stated that reading sentences in which nouns are not marked by a majuscule slows a native German reader down significantly, but from an absolute point of view, the effect is small.
Compared with other manipulations (e.g., transpositions or substitutions), a lowercase letter still represents the correct allograph in the correct position without affecting phonology. Furthermore, most German readers do have experience with other alphabetic writing systems that lack consistent noun capitalization, and in (private) digital communication lowercase nouns are quite common. Although our garden-path sentences did not show the desired effect, we found an indication of grammatical pre-processing enabled by the majuscule in the regularly spelled sentences: In the case of high noun frequency, we located parafovea-on-fovea effects post hoc, i.e., longer fixation durations, on the attributive adjective (word N). These benefits of capitalization could only be detected under specific circumstances. In other cases, we conclude that longer reading durations are mainly the result of a disturbance of readers' habituation when the expected capitalization is missing.
Commentary
(2020)
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. At the same time, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.
Background
Multi-component cardiac rehabilitation (CR) is performed to achieve an improved prognosis, superior health-related quality of life (HRQL) and occupational resumption through the management of cardiovascular risk factors, as well as improvement of physical performance and patients’ subjective health. Out of a multitude of variables gathered at CR admission and discharge, we aimed to identify predictors of returning to work (RTW) and HRQL 6 months after CR.
Design
Prospective observational multi-centre study, enrolment in CR between 05/2017 and 05/2018.
Method
Besides general data (e.g. age, sex, diagnoses), parameters of risk factor management (e.g. smoking, hypertension), physical performance (e.g. maximum exercise capacity, endurance training load, 6-min walking distance) and patient-reported outcome measures (e.g. depression, anxiety, HRQL, subjective well-being, somatic and mental health, pain, lifestyle change motivation, general self-efficacy, pension desire and self-assessment of the occupational prognosis using several questionnaires) were documented at CR admission and discharge. These variables (at both measurement times and as changes during CR) were analysed using multiple linear regression models regarding their predictive value for RTW status and HRQL (SF-12) six months after CR.
Results
Out of 1262 patients (54±7 years, 77% men), 864 patients (69%) returned to work. Predictors of failed RTW were primarily the desire to receive pension (OR = 0.33, 95% CI: 0.22–0.50) and negative self-assessed occupational prognosis (OR = 0.34, 95% CI: 0.24–0.48) at CR discharge, acute coronary syndrome (OR = 0.64, 95% CI: 0.47–0.88) and comorbid heart failure (OR = 0.51, 95% CI: 0.30–0.87). High educational level, stress at work and physical and mental HRQL were associated with successful RTW. HRQL was determined predominantly by patient-reported outcome measures (e.g. pension desire, self-assessed health prognosis, anxiety, physical/mental HRQL/health, stress, well-being and self-efficacy) rather than by clinical parameters or physical performance.
Conclusion
Patient-reported outcome measures predominantly influenced return to work and HRQL in patients with heart disease. Therefore, the multi-component CR approach focussing on psychosocial support is crucial for subjective health prognosis and occupational resumption.
Soils in Germany are commonly low in selenium; consequently, a sufficient dietary supply is not always ensured. The adequacy of supply is estimated by means of the optimal effect range of biomarkers, which often reflects the physiological requirement. Preceding epidemiological studies indicate that low selenium serum concentrations could be related to cardiovascular diseases. Inter alia, risk factors for cardiovascular diseases are physical inactivity, overweight, as well as disadvantageous eating habits. In order to assess whether these risk factors can be modulated, a cardio-protective diet comprising fixed menu plans combined with physical exercise was applied in the German MoKaRi (modulation of cardiovascular risk factors) intervention study. We analyzed serum samples of the MoKaRi cohort (51 participants) for total selenium, GPx activity, and selenoprotein P at different timepoints of the study (0, 10, 20, 40 weeks) to explore the suitability of these selenium-associated markers as indicators of selenium status. Overall, the time-dependent fluctuations in serum selenium concentration suggest a successful change in nutritional and lifestyle behavior. Compared to baseline, a pronounced increase in GPx activity and selenoprotein P was observed, while serum selenium decreased in participants with initially adequate serum selenium content. SELENOP concentration showed a moderate positive monotonic correlation (r = 0.467, p < 0.0001) with total Se concentration, while only a weak linear relationship was observed for GPx activity versus total Se concentration (r = 0.186, p = 0.021). Evidently, other factors apart from the available Se pool must have an impact on GPx activity, leading to the conclusion that, without having identified these factors, GPx activity should not be used as a status marker for Se.
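The contrast drawn above between a monotonic correlation and a linear relationship is usually captured by Spearman's versus Pearson's coefficients (the abstract does not name the tests, so treat this as an illustrative assumption). Spearman's coefficient is simply Pearson's coefficient computed on ranks, which is why it detects monotonic but non-linear association; tie handling is omitted in this sketch for brevity:

```python
def pearson(x, y):
    """Pearson's linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman's rank correlation: Pearson on ranks (no tie handling)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

For a strictly increasing but curved relationship (e.g. exponential growth), Spearman's coefficient is exactly 1 while Pearson's falls below 1 — the same asymmetry reported for SELENOP versus GPx activity above.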
Purpose: Psychosocial variables are known risk factors for the development and chronification of low back pain (LBP). Psychosocial stress is one of these risk factors. Therefore, this study aims to identify the most important types of stress predicting LBP. Self-efficacy was included as a potential protective factor related to both stress and pain.
Participants and Methods: This prospective observational study assessed n = 1071 subjects with low back pain over 2 years. Psychosocial stress was evaluated in a broad manner using instruments assessing perceived stress, stress experiences in work and social contexts, vital exhaustion and life-event stress. Further, self-efficacy and pain (characteristic pain intensity and disability) were assessed. Using least absolute shrinkage and selection operator (LASSO) regression, important predictors of characteristic pain intensity and pain-related disability at 1-year and 2-year follow-up were analyzed.
Results: The final sample for the statistical procedure consisted of 588 subjects (age: 39.2 (± 13.4) years; baseline pain intensity: 27.8 (± 18.4); disability: 14.3 (± 17.9)). In the 1-year follow-up, the stress types “tendency to worry”, “social isolation”, “work discontent” as well as vital exhaustion and negative life events were identified as risk factors for both pain intensity and pain-related disability. Within the 2-year follow-up, LASSO models identified the stress types “tendency to worry”, “social isolation”, “social conflicts”, and “perceived long-term stress” as potential risk factors for both pain intensity and disability. Furthermore, “self-efficacy” (“internality”, “self-concept”) and “social externality” play a role in reducing pain-related disability.
Conclusion: Stress experiences in social and work-related contexts were identified as important risk factors for LBP 1 or 2 years in the future, even in subjects with low initial pain levels. Self-efficacy turned out to be a protective factor for pain development, especially in the long-term follow-up. Results suggest a differentiation of stress types in addressing psychosocial factors in research, prevention and therapy approaches.
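The property exploited above — LASSO's ability to shrink coefficients of weak predictors exactly to zero, leaving only the important risk factors — can be sketched with a minimal cyclic coordinate-descent solver. This is an illustrative toy implementation, not the software used in the study:

```python
def soft_threshold(a, t):
    """Soft-thresholding operator: shrink a toward zero by t."""
    if a > t:
        return a - t
    if a < -t:
        return a + t
    return 0.0

def lasso_cd(X, y, alpha, n_sweeps=100):
    """Cyclic coordinate descent for the LASSO objective
    min_w (1/2n) * sum_i (y_i - X_i . w)^2 + alpha * sum_j |w_j|
    (no intercept; columns of X assumed roughly standardised)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, alpha) / z
    return w
```

On data where the outcome depends on only one of two features, a moderate penalty drives the irrelevant coefficient exactly to zero while merely shrinking the relevant one — the mechanism behind the predictor selection reported in the Results.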
As an essential trace element, copper plays a pivotal role in physiological body functions. In fact, dysregulated copper homeostasis has been clearly linked to neurological disorders including Wilson and Alzheimer’s disease. Such neurodegenerative diseases are associated with progressive loss of neurons and thus impaired brain functions. However, the underlying mechanisms are not fully understood. Characterization of the element species and their subcellular localization is of great importance to uncover cellular mechanisms. Recent research activities focus on the question of how copper contributes to the pathological findings. Cellular bioimaging of copper is an essential key to accomplish this objective. Besides information on the spatial distribution and chemical properties of copper, other essential trace elements can be localized in parallel. Highly sensitive and high spatial resolution techniques such as LA-ICP-MS, TEM-EDS, S-XRF and NanoSIMS are required for elemental mapping on subcellular level. This review summarizes state-of-the-art techniques in the field of bioimaging. Their strengths and limitations will be discussed with particular focus on potential applications for the elucidation of copper-related diseases. Based on such investigations, further information on cellular processes and mechanisms can be derived under physiological and pathological conditions. Bioimaging studies might enable the clarification of the role of copper in the context of neurodegenerative diseases and provide an important basis to develop therapeutic strategies for reduction or even prevention of copper-related disorders and their pathological consequences.
With rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system - especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms, and their implementation, application scenarios, or verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
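The inductive principle underlying the approach can be illustrated on an explicit finite transition system. This toy sketch abstracts away graphs, rules, and symbolic reasoning entirely; it only demonstrates why a property can be k-inductive for some k > 1 while failing plain 1-induction, mirroring the role of accumulated context described above:

```python
def is_k_inductive(states, init, trans, prop, k):
    """Check whether `prop` is a k-inductive invariant of an explicitly
    given transition system (toy model, not graph transformation).
    Base case: prop holds on the first k states of every execution.
    Step case: every path of k consecutive prop-satisfying states
    (reachable or not) can only be extended to prop-satisfying states."""
    # Base case: breadth-first layers 0 .. k-1 from the initial states.
    layer = set(init)
    for _ in range(k):
        if not all(prop(s) for s in layer):
            return False
        layer = {t for s in layer for t in trans(s)}
    # Step case: enumerate all prop-satisfying paths of length k
    # and check every successor of each path's last state.
    paths = [(s,) for s in states if prop(s)]
    for _ in range(k - 1):
        paths = [p + (t,) for p in paths for t in trans(p[-1]) if prop(t)]
    return all(prop(t) for p in paths for t in trans(p[-1]))
```

For instance, with states {0, 1, 2}, transitions 0→0, 1→2, 2→2, initial state 0, and the property "s ≠ 2": 1-induction fails because the unreachable state 1 satisfies the property while its successor does not, yet 2-induction succeeds, since no property-satisfying path of length two ever passes through state 1 — the longer path supplies the missing context.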
The Jewish family has been the subject of much admiration and analysis, criticism and myth-making, not only but especially in modern times. As a field of inquiry, its place is at the intersection – or in the shadow – of the great topics in Jewish Studies and its contributing disciplines. Among them are the modernization and privatization of Judaism and Jewish life; integration and distinctiveness of Jews as individuals and as a group; gender roles and education. These and related questions have been the focus of modern Jewish family research, which took shape as a discipline in the 1910s.
This issue of PaRDeS traces the origins of academic Jewish family research and takes stock of its development over a century, with its ruptures that have added to the importance of familial roots and continuities. A special section retrieves the founder of the field, Arthur Czellitzer (1871–1943), his biography and work from oblivion and places him in the context of early 20th-century science and Jewish life.
The articles on current questions of Jewish family history reflect the topic’s potential for shedding new light on key questions in Jewish Studies past and present. Their thematic range – from 13th-century Yiddish Arthurian romances via family-based business practices in 19th-century Hungary and Germany, to concepts of Jewish parenthood in Imperial Russia – illustrates the broad interest in Jewish family research as a paradigm for early modern and modern Jewish Studies.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were called into question. The authors initiated a discussion of a subject that had long been considered settled, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis tries to put a focus on certain aspects of liquid wetting which so far have been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has not been of further interest. Analogously, the limits of these models have not been described and exploited. Thus, the question arises: what happens when the size of surface structures is reduced to that of the contacting liquid molecules themselves? Are common methods still valid, or can deviations from macroscopic behavior be observed?
This thesis aims to provide a starting point for these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with alkyl chains of differing lengths. Surface topographies are created whose features differ in size by only a few Ångströms and which exhibit surprising wetting behavior depending on the choice of the individual precursor system. Thus, contact angles are experimentally detected which deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models and confirm that sub-nm surface topographies affect wetting. Moreover, experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. Such behavior has already been described for scattering experiments taking into account temperature-induced capillary waves on the liquid surface and had been predicted earlier by theoretical calculations.
However, the investigation of model surfaces requires suitable precursor molecules, which are not commercially available; this opens a door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. For this, general protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase deposition process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. Thus, the influence of varying deposition conditions on the properties of the final coating is examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. Experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact grade of hydrolysis. However, at increased amounts of species which are able to form cross-linked molecules due to condensation reactions, film quality deteriorates. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
During the last decade, intracellular actin waves have attracted much attention due to their essential role in various cellular functions, ranging from motility to cytokinesis. Experimental methods have advanced significantly and can capture the dynamics of actin waves over a large range of spatio-temporal scales. However, the corresponding coarse-grained theory mostly avoids the full complexity of this multi-scale phenomenon. In this perspective, we focus on a minimal continuum model of activator–inhibitor type and highlight the qualitative role of mass conservation, which is typically overlooked. Specifically, our interest is to connect between the mathematical mechanisms of pattern formation in the presence of a large-scale mode, due to mass conservation, and distinct behaviors of actin waves.
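A minimal example of the mass-conserving activator–inhibitor structure invoked above — an illustrative sketch rather than the specific model analysed in the paper — couples a slowly diffusing field u (e.g. membrane-bound) and a fast-diffusing field v (e.g. cytosolic) through a single exchange term:

```latex
\partial_t u = D_u \nabla^2 u + f(u, v), \qquad
\partial_t v = D_v \nabla^2 v - f(u, v), \qquad D_v \gg D_u .
```

Because the kinetics only transfer material between the two fields, the total mass \(\int (u + v)\,\mathrm{d}x\) is constant under no-flux or periodic boundary conditions; this conservation law contributes the large-scale (zero) mode whose qualitative role in pattern formation and actin-wave behavior the perspective highlights.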
The Ornstein–Uhlenbeck process is a stationary and ergodic Gaussian process that is fully determined by its covariance function and mean. We show here that the generic definitions of the ensemble- and time-averaged mean squared displacements fail to capture these properties consistently, leading to a spurious ergodicity breaking. We propose to remedy this failure by redefining the mean squared displacements such that they reflect unambiguously the statistical properties of any stochastic process. In particular, we study the effect of the initial condition in the Ornstein–Uhlenbeck process and its fractional extension. For the fractional Ornstein–Uhlenbeck process, representing typical experimental situations in crowded environments such as living biological cells, we show that the stationarity of the process delicately depends on the initial condition.
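The quantities discussed above can be made concrete with a short simulation sketch: an exactly discretised Ornstein–Uhlenbeck path and the conventional time-averaged mean squared displacement. Definitions follow standard usage and names are illustrative; this is not the authors' redefined estimator:

```python
import math
import random

def simulate_ou(lam, sigma, dt, n_steps, x0, rng):
    """Exact-discretisation sample path of dX = -lam * X dt + sigma dW.
    The stationary variance is sigma**2 / (2 * lam)."""
    a = math.exp(-lam * dt)
    b = sigma * math.sqrt((1.0 - a * a) / (2.0 * lam))
    path = [x0]
    for _ in range(n_steps):
        path.append(a * path[-1] + b * rng.gauss(0.0, 1.0))
    return path

def ta_msd(path, lag):
    """Conventional time-averaged mean squared displacement at one lag."""
    n = len(path) - lag
    return sum((path[i + lag] - path[i]) ** 2 for i in range(n)) / n

# For a long trajectory, the TA-MSD at lag tau approaches the stationary
# ensemble value 2 * (sigma^2 / (2*lam)) * (1 - exp(-lam * tau)); for
# short trajectories it retains a dependence on the initial condition.
path = simulate_ou(1.0, 1.0, 0.1, 100000, 0.0, random.Random(7))
```

Comparing this time average against the ensemble average over many short trajectories started from a fixed (non-stationary) point is precisely the setting in which the generic definitions disagree.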
The decision to exercise is not only bound to rational considerations but also to automatic affective processes. The affective–reflective theory of physical inactivity and exercise (ART) proposes a theoretical framework for explaining how the automatic affective process (type‑1 process) influences exercise behavior, i.e., through the automatic activation of exercise-related associations and a subsequent affective valuation of exercise. This study aimed to empirically test this assumption of the ART with data from 69 study participants. A single-measurement study, including within-subject experimental variation, was conducted. Automatic associations with exercise were first measured with a single-target implicit association test. The somato-affective core of the participants’ automatic valuation of exercise-related pictures was then assessed via heart rate variability (HRV) analysis, and the affective valence of the valuation was tested with a facial expression (FE; smile and frown) task. Exercise behavior was assessed via self-report. Multiple regression (path) analysis revealed that automatic associations predicted HRV reactivity (β = −0.24, p = .044); the sign of the correlation between automatic associations and the smile FE score was in the expected direction, but the effect remained nonsignificant (β = −0.21, p = .078). HRV reactivity predicted self-reported exercise behavior (β = −0.28, p = .013); the same pattern of results was obtained for the frown FE score. The HRV-related results illustrate the potential role of automatic negative affective reactions to the thought of exercise as a restraining force in exercise motivation. For a better empirical distinction between the two ART type‑1 process components, automatic associations and the affective valuation should perhaps be measured separately in the future. The results support the notion that automatic and affective processes should be regarded as essential aspects of the motivation to exercise.
Sedimentary ancient DNA has been proposed as a key methodology for reconstructing biodiversity over time. Yet, despite the concentration of Earth’s biodiversity in the tropics, this method has rarely been applied in this region. Moreover, the taphonomy of sedimentary DNA, especially in tropical environments, is poorly understood. This study elucidates challenges and opportunities of sedimentary ancient DNA approaches for reconstructing tropical biodiversity. We present shotgun-sequenced metagenomic profiles and DNA degradation patterns from multiple sediment cores from Mubwindi Swamp, located in Bwindi Impenetrable Forest (Uganda), one of the most diverse forests in Africa. We describe the taxonomic composition of the sediments covering the past 2200 years and compare the sedimentary DNA data with a comprehensive set of environmental and sedimentological parameters to unravel the conditions of DNA degradation. Consistent with the preservation of authentic ancient DNA in tropical swamp sediments, DNA concentration and mean fragment length declined exponentially with age and depth, while terminal deamination increased with age. DNA preservation patterns cannot be explained by any environmental parameter alone, but age seems to be the primary driver of DNA degradation in the swamp. Besides degradation, the presence of living microbial communities in the sediment also affects DNA quantity. Critically, 92.3% of our metagenomic data (81.8 million unique merged reads in total) could not be taxonomically identified due to the absence of genomic references in public databases. Of the remaining 7.7%, most of the data (93.0%) derive from Bacteria and Archaea, whereas only 0–5.8% are from Metazoa and 0–6.9% from Viridiplantae, in part due to unbalanced taxa representation in the reference data. The plant DNA record at ordinal level agrees well with local pollen data but resolves less diversity.
Our animal DNA record reveals the presence of 41 native taxa (16 orders), including Afrotheria, Carnivora, and Ruminantia, at Bwindi during the past 2200 years. Overall, we observe no decline in taxonomic richness with increasing age, suggesting that several-thousand-year-old information on past biodiversity can be retrieved from tropical sediments. However, comprehensive genomic surveys of tropical biota must be prioritized if sedimentary DNA is to become a viable methodology for future tropical biodiversity studies.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest: the aim is to compute the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements, using Bayes’ rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they require certain strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups so as to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation and treating one part of it with an EnKF and the other part with a particle filter.
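The likelihood-splitting idea can be sketched for a scalar toy problem: write the Gaussian likelihood as L(y|x) = L^a · L^(1−a), handle the tempered factor L^a (a Gaussian with inflated variance R/a) with a stochastic EnKF update, and absorb the remaining factor L^(1−a) (variance R/(1−a)) with particle weights and resampling. This is a minimal illustration of the general bridging mechanism, not the thesis's specific hybrid filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar setup: state x, observation y = x + noise, obs variance R.
# Split the likelihood L(y|x) = L^a * L^(1-a): the factor L^a is a
# Gaussian with variance R/a (EnKF update), the factor L^(1-a) has
# variance R/(1-a) (particle weighting). Parameters are illustrative.
R, a, n_ens = 0.5, 0.6, 2000

def enkf_update(ens, y, r):
    # stochastic EnKF with perturbed observations, obs variance r
    P = np.var(ens, ddof=1)
    K = P / (P + r)                          # Kalman gain
    y_pert = y + rng.normal(scale=np.sqrt(r), size=ens.shape)
    return ens + K * (y_pert - ens)

def pf_update(ens, y, r):
    # importance weighting + multinomial resampling
    logw = -0.5 * (y - ens) ** 2 / r
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return ens[rng.choice(len(ens), size=len(ens), p=w)]

# prior ensemble N(0, 1) and a single observation
prior = rng.normal(loc=0.0, scale=1.0, size=n_ens)
y_obs = 1.2

posterior = pf_update(enkf_update(prior, y_obs, R / a), y_obs, R / (1 - a))
```

In this linear-Gaussian case the exact posterior is N(0.8, 1/3) regardless of the split parameter a, so the hybrid chain can be checked against it; the practical gain of the split appears in the non-Gaussian settings the thesis targets.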
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for Data Assimilation techniques is predicting the development of the oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of this kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem is solved that penalises the imbalance. This method is tested on four different models: two Hamiltonian systems and two spatially extended models, which pose additional difficulties.
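The post-processing step can be sketched for the simplest case of a linear balance relation B x = 0: minimise ||x − x_a||² + λ||B x||² over x, given the analysis x_a, so that the imbalance is damped while the state stays close to the analysis. The balance operator and parameters below are illustrative stand-ins, not the thesis's models.

```python
import numpy as np

# Toy post-processing step after assimilation: given the analysis x_a,
# solve  min_x ||x - x_a||^2 + lam * ||B x||^2 , where B x = 0 encodes a
# (here linear, illustrative) balance relation between state variables.
# For linear B the minimiser is  x = (I + lam * B^T B)^{-1} x_a.
rng = np.random.default_rng(3)
n = 6
B = np.zeros((1, n))
B[0, 0], B[0, 1] = 1.0, -1.0       # toy balance relation: x0 = x1
x_a = rng.standard_normal(n)       # stand-in for an (imbalanced) analysis
lam = 100.0                        # penalty weight on the imbalance

x_post = np.linalg.solve(np.eye(n) + lam * B.T @ B, x_a)
```

For this quadratic penalty the imbalance B x is reduced by the factor 1/(1 + 2λ) while components not involved in the balance relation are left untouched, which mirrors the intended behaviour of the post-processing step: restore balance without discarding the assimilation increment.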