Children’s physical fitness development and related moderating effects of age and sex are well documented, especially boys’ and girls’ divergence during puberty. The situation might be different during prepuberty. As girls mature approximately two years earlier than boys, we tested a possible convergence of performance with five tests representing four components of physical fitness in a large sample of 108,295 eight-year-old third-graders. Within this single prepubertal year of life and irrespective of the test, performance increased linearly with chronological age, and boys outperformed girls to a larger extent in tests requiring muscle mass for successful performance. Tests differed in the magnitude of age effects (gains), but there was no evidence for an interaction between age and sex. Moreover, the “physical fitness” of schools correlated at r = 0.48 with their age effect, which might imply that “fit schools” promote larger gains; expected secular trends from 2011 to 2019 were replicated.
Differentiation hypotheses concern changes in the structural organization of cognitive abilities that depend on the level of general intelligence (ability differentiation) or age (developmental differentiation). Part 1 of this article presents a review of the literature on ability and developmental differentiation effects in children, revealing the need for studies that examine both effects simultaneously in this age group with appropriate statistical methods. Part 2 presents an empirical study in which nonlinear factor analytic models were applied to the standardization sample (N = 2,619 German elementary schoolchildren; 48% female; age: M = 8.8 years, SD = 1.2, range 6-12 years) of the THINK 1-4 intelligence test to investigate ability differentiation, developmental differentiation, and their interaction. The sample was nationally representative regarding age, gender, urbanization, and geographic location of residence but not regarding parents' education and migration background (overrepresentation of children with more educated parents, underrepresentation of children with migration background). The results showed no consistent evidence for the presence of differentiation effects or their interaction. Instead, different patterns were observed for figural, numerical, and verbal reasoning. Implications for the construction of intelligence tests, the assessment of intelligence in children, and for theories of cognitive development are discussed.
Afghanistan-Krieg
(2021)
Afghanistan und Zentralasien
(2021)
Simultaneously speculative and inspired by everyday experiences, this volume develops an aesthetics of metabolism that offers a new perspective on the human-environment relation, one that is processual, relational, and not dependent on conscious thought. In art installations, design prototypes, and research-creation projects that utilize air, light, or temperature to impact subjective experience, the author finds aesthetic milieus that shift our awareness to the role of different sense modalities in aesthetic experience. Metabolic and atmospheric processes allow for an aesthetics besides and beyond the usually dominant visual sense.
Advocating the inclusion of older adults in digital language learning technology and research
(2021)
This paper explores the interaction between aspect and lexical means, in this case temporal adverbials, in the bounding of representations of situations. First, the theoretical basis is outlined, followed by the results of a corpus analysis of co-occurrences with adverbs that limit situations. The term situation encompasses all representable processes, states, events, or actions. Finally, some theoretical conclusions are drawn concerning the cognitive category of bounding, using the example of aspectuality. The imperfective verb forms maintain their aspectuality in delimiting connections with adverbs, resulting in a complex, multi-dimensional aspectuality. With non-grammaticalized forms, such as lexical markers, the speaker is free to choose between temporal localization and an aspectual perspective. Lexical expressions can mark temporality and aspect even more precisely and clearly than tenses can. They can also limit or extend situations and thus express aspect. Aspectuality thus presents itself as a compositional category, in which external bounding and the internal representation of a course of action or development can interact.
Precipitation forecasting has an important place in everyday life – over the course of a day we may have several casual conversations about the likelihood that it will rain this evening or at the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
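The tracking-and-extrapolation scheme behind such benchmark models can be sketched in a few lines of NumPy. The sketch below is a deliberately minimal stand-in, not the actual rainymotion implementation (which uses dense optical flow and proper image warping rather than a brute-force global shift); all function names are illustrative.

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame, max_shift=5):
    """Tracking step (toy version): brute-force the single integer
    displacement (dy, dx) that best maps prev_frame onto curr_frame."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(prev_frame, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - curr_frame) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def nowcast(curr_frame, shift, n_steps):
    """Extrapolation step (Lagrangian persistence): advect the latest
    field with a frozen motion vector, one shift per lead-time step."""
    dy, dx = shift
    return [np.roll(curr_frame, (dy * k, dx * k), axis=(0, 1))
            for k in range(1, n_steps + 1)]
```

In this simplification the precipitation field moves rigidly and wraps around the domain edges; real benchmark models estimate a dense motion field and warp the image accordingly, but the two-stage structure (track, then advect) is the same.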
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in atmospheric sciences is also driven by, and concurrent with, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
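Once observed and predicted feature positions are available, the location error at each lead time reduces to a Euclidean distance between tracks. A minimal sketch follows (function names are illustrative; the actual framework obtains the observed track by detecting and tracking corners in radar images rather than taking coordinates as given):

```python
import numpy as np

def location_error(observed_track, predicted_track):
    """Euclidean distance between observed and predicted feature
    positions at each lead time; tracks are arrays of (x, y), e.g. in km."""
    obs = np.asarray(observed_track, dtype=float)
    pred = np.asarray(predicted_track, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)

def linear_extrapolation(track, n_steps):
    """Simplest competing model: persist the last observed displacement
    of a tracked feature for n_steps future positions."""
    track = np.asarray(track, dtype=float)
    step = track[-1] - track[-2]
    return np.array([track[-1] + step * k for k in range(1, n_steps + 1)])
```

For example, a feature observed at (0, 0) and (1, 0) that then curves to (2, 0.5) and (3, 1.0) is linearly extrapolated to (2, 0) and (3, 0), giving location errors of 0.5 and 1.0 at the two lead times.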
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of location errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
In recent decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons why these solvers are so fast is that they exploit structural properties of instances. This thesis deals with the well-studied structural property treewidth, which measures how close an instance is to being a tree. In fact, many problems are solvable in polynomial time in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), which allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower-bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
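To illustrate the general idea of treewidth-based tractability (this is a textbook-style example, not the solver developed in the thesis): on graphs of treewidth 1, i.e., trees, a canonical counting problem such as counting independent sets is solvable in linear time by dynamic programming over the tree structure, whereas it is #P-hard on general graphs.

```python
def count_independent_sets(adj, root=0):
    """Count independent sets of a tree given as an adjacency dict.
    For each vertex we track two counts over its subtree: independent
    sets with the vertex excluded (out) or included (inc)."""
    def dfs(v, parent):
        out, inc = 1, 1
        for u in adj[v]:
            if u == parent:
                continue
            c_out, c_inc = dfs(u, v)
            out *= c_out + c_inc  # child may be in or out
            inc *= c_out          # child of an included vertex must be out
        return out, inc
    out, inc = dfs(root, -1)
    return out + inc
```

For a path on three vertices this returns 5 ({}, {0}, {1}, {2}, {0, 2}); treewidth-based solvers generalize exactly this subtree-table idea from trees to tree decompositions of bounded width.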
Background
Nicotine consumption during pregnancy and advanced maternal age are well-known independent risk factors for poor pregnancy outcome and therefore constitute serious public health problems.
Objectives
Considering the ongoing trend of delaying childbirth in our society, this study investigates potential additive effects of nicotine consumption during pregnancy and advanced maternal age on foetal growth.
Sample and Methods
In a medical record-based study, we analysed the impact of maternal age and smoking behaviour before and during pregnancy on newborn size among 4142 singleton births that took place in Vienna, Austria between 1990 and 1995.
Results
Birth weight (H=82.176, p<0.001), birth length (H=91.525, p<0.001) and head circumference (H=42.097, p<0.001) differed significantly according to maternal smoking behaviour. For birth weight, the adjusted mean difference between smokers and non-smokers increased from 101.8 g for mothers younger than 18 years to 254.8 g for those older than 35 years; the respective values were 0.6 cm to 0.7 cm for birth length and 0.3 cm to 0.6 cm for head circumference.
Conclusion
Increasing maternal age amplified the negative effects of smoking during pregnancy on newborn parameters. Our findings identify older smoking mothers as a high-risk group which should be of special interest for public health systems.
Background
Advanced glycation end-products are proteins that become glycated after contact with sugars and are implicated in endothelial dysfunction and arterial stiffening. We aimed to investigate the relationships between advanced glycation end-products, measured as skin autofluorescence, and vascular stiffness in various glycemic strata.
Methods
We performed a cross-sectional analysis within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort, comprising n = 3535 participants (median age 67 years, 60% women). Advanced glycation end-products were measured as skin autofluorescence with the AGE Reader™; vascular stiffness was measured as pulse wave velocity, augmentation index and ankle-brachial index with the Vascular Explorer™. A subset of 1348 participants underwent an oral glucose tolerance test. Participants were sub-phenotyped into normoglycemic, prediabetes and diabetes groups. Associations between skin autofluorescence and various indices of vascular stiffness were assessed by multivariable regression analyses and were adjusted for age, sex, measures of adiposity and lifestyle, blood pressure, prevalent conditions, medication use and blood biomarkers.
Results
Skin autofluorescence was associated with pulse wave velocity, augmentation index and ankle-brachial index; adjusted beta coefficients (95% CI) per unit skin autofluorescence increase: 0.38 (0.21; 0.55) for carotid-femoral pulse wave velocity, 0.25 (0.14; 0.37) for aortic pulse wave velocity, 1.00 (0.29; 1.70) for aortic augmentation index, 4.12 (2.24; 6.00) for brachial augmentation index and -0.04 (-0.05; -0.02) for ankle-brachial index. The associations were strongest in men and younger individuals and were consistent across all glycemic strata: for carotid-femoral pulse wave velocity 0.36 (0.12; 0.60) in the normoglycemic, 0.33 (-0.01; 0.67) in the prediabetes and 0.45 (0.09; 0.80) in the diabetes group, with similar estimates for aortic pulse wave velocity. Augmentation index was associated with skin autofluorescence only in the normoglycemic and diabetes groups. Ankle-brachial index was inversely associated with skin autofluorescence across all sex, age and glycemic strata.
Conclusions
Our findings indicate that advanced glycation end-products measured as skin autofluorescence might be involved in vascular stiffening, independent of age and other cardiometabolic risk factors, not only in individuals with diabetes but also in normoglycemic and prediabetic conditions. Skin autofluorescence might prove a rapid and non-invasive method for assessing macrovascular disease progression across all glycemic strata.
As competition over peer status becomes intense during adolescence, some adolescents develop insecure feelings regarding their social standing among their peers (i.e., social status insecurity). These adolescents sometimes use aggression to defend or promote their status. The aim of this study was to examine the relationships among social status insecurity, callous-unemotional (CU) traits, and popularity-motivated aggression and prosocial behaviors among adolescents, while controlling for gender. Another purpose was to examine the potential moderating role of CU traits in these relationships. Participants were 1,047 adolescents (49.2% girls; Mage = 12.44 years; age range from 11 to 14 years) in the 7th or 8th grade from a large Midwestern city. They completed questionnaires on social status insecurity, CU traits, and popularity-motivated relational aggression, physical aggression, cyberaggression, and prosocial behaviors. A structural regression model was conducted, with gender as a covariate. The model had adequate fit. Social status insecurity was associated positively with callousness, unemotional traits, and popularity-motivated aggression and related negatively to popularity-motivated prosocial behaviors. High social status insecurity was related to greater popularity-motivated aggression when adolescents had high callousness traits. The findings have implications for understanding the individual characteristics associated with social status insecurity.
The chapter analyses recent reforms in the multilevel system of the Länder, specifically territorial, functional and structural reforms, which represent three of the most crucial and closely interconnected reform trajectories at the subnational level. It sheds light on the variety of reform approaches pursued in the different Länder and also highlights some factors that account for these differences. The transfer of state functions to local governments is addressed as well as the restructuring of Länder administrations (e.g. abolishment of the meso level of the Länder administration and of single-purpose state agencies) and the rescaling of territorial boundaries at county and municipal levels, including a brief review of the recently failed (territorial) reforms in Eastern Germany.
The link between emotions and motor control has been discussed for years. The measurement of the Adaptive Force (AF) makes it possible to gain insights into the adaptive control of the neuromuscular system in reaction to external forces. It was hypothesized that the holding isometric AF is especially vulnerable to disturbing inputs. Here, the behavior of the AF under the influence of positive (tasty) vs. negative (disgusting) food imaginations was investigated. The AF was examined in n = 12 cases using an objectified manual muscle test of the hip flexors, elbow flexors or pectoralis major muscle, performed by one of two experienced testers while the participants imagined their most tasty or most disgusting food. The reaction force and the limb position were measured by a handheld device. While the slope of the force rise and the maximal AF did not differ significantly between tasty and disgusting imaginations (p > 0.05), the maximal isometric AF was significantly lower and the AF at the onset of oscillations was significantly higher under disgusting vs. tasty imaginations (both p = 0.001). Proper length-tension control of muscles seems to be a crucial functional parameter of the neuromuscular system which can be impaired instantaneously by emotionally related negative imaginations. This might be a potential approach to evaluate somatic reactions to emotions.
Populations adapt to novel environmental conditions by genetic changes or phenotypic plasticity. Plastic responses are generally faster and can buffer fitness losses under variable conditions. Plasticity is typically modeled as random noise and linear reaction norms that assume simple one-to-one genotype–phenotype maps and no limits to the phenotypic response. Most studies on plasticity have focused on its effect on population viability. However, it is not clear whether the advantage of plasticity depends solely on environmental fluctuations or also on the genetic and demographic properties (life histories) of populations. Here we present an individual-based model and study the relative importance of adaptive and nonadaptive plasticity for populations of sexual species with different life histories experiencing directional stochastic climate change. Environmental fluctuations were simulated using differentially autocorrelated climatic stochasticity or noise color, and scenarios of directional climate change. Nonadaptive plasticity was simulated as a random environmental effect on trait development, while adaptive plasticity was simulated as a linear, saturating, or sinusoidal reaction norm. The last two imposed limits to the plastic response and emphasized flexible interactions of the genotype with the environment. Interestingly, this assumption led to (a) smaller phenotypic than genotypic variance in the population (many-to-one genotype–phenotype map) and the coexistence of polymorphisms, and (b) the maintenance of higher genetic variation—compared to linear reaction norms and genetic determinism—even when the population was exposed to a constant environment for several generations. Limits to plasticity led to genetic accommodation, when costs were negligible, and to the appearance of cryptic variation when limits were exceeded. We found that adaptive plasticity promoted population persistence under red environmental noise and was particularly important for life histories with low fecundity. Populations producing more offspring could cope with environmental fluctuations solely by genetic changes or random plasticity, unless environmental change was too fast.
Background: High-intensity muscle actions have the potential to temporarily improve performance, an effect denoted as postactivation performance enhancement.
Objectives: This study determined the acute effects of different stretch-shortening (fast vs. slow) and strength (dynamic vs. isometric) exercises executed during one training session on subsequent balance performance in youth weightlifters.
Materials and Methods: Sixteen male and female young weightlifters, aged 11.3 ± 0.6 years, performed four strength exercise conditions in randomized order, including dynamic strength (DYN; 3 sets of 3 repetitions at 10 RM) and isometric strength exercises (ISOM; 3 sets of 3-s holds at the 10-RM back-squat load), as well as fast (FSSC; 3 sets of 3 repetitions of 20-cm drop jumps) and slow (SSSC; 3 sets of 3 hurdle jumps over a 20-cm obstacle) stretch-shortening cycle protocols. Balance performance was tested before and after each of the four exercise conditions in bipedal stance on an unstable surface (i.e., a BOSU ball with the flat side facing up) using two dependent variables, i.e., center of pressure surface area (CoP SA) and velocity (CoP V).
Results: There was a significant effect of time on CoP SA and CoP V [F(1,60)=54.37, d=1.88, p<0.0001; F(1,60)=9.07, d=0.77, p=0.003]. In addition, a statistically significant effect of condition on CoP SA and CoP V [F(3,60)=11.81, d=1.53, p<0.0001; F(3,60)=7.36, d=1.21, p=0.0003] was observed. Statistically significant condition-by-time interactions were found for the balance parameters CoP SA (p<0.003, d=0.54) and CoP V (p<0.002, d=0.70). Specific to contrast analysis, all specified hypotheses were tested and demonstrated that FSSC yielded significantly greater improvements than all other conditions in CoP SA and CoP V [p<0.0001 (d=1.55); p=0.0004 (d=1.19), respectively]. In addition, FSSC yielded significantly greater improvements compared with the two conditions for both balance parameters [p<0.0001 (d=2.03); p<0.0001 (d=1.45)].
Conclusion: Fast stretch-shortening cycle exercises appear to be more effective at improving short-term balance performance in young weightlifters. Given the importance of balance for overall competitive achievement in weightlifting, it is recommended that young weightlifters implement dynamic plyometric exercises in the fast stretch-shortening cycle during the warm-up to improve their balance performance.
Background: High-intensity muscle actions have the potential to temporarily improve the performance which has been denoted as postactivation performance enhancement.
Objectives: This study determined the acute effects of different stretch-shortening (fast vs. low) and strength (dynamic vs. isometric) exercises executed during one training session on subsequent balance performance in youth weightlifters.
Materials and Methods: Sixteen male and female young weightlifters, aged 11.3±0.6years, performed four strength exercise conditions in randomized order, including dynamic strength (DYN; 3 sets of 3 repetitions of 10 RM) and isometric strength exercises (ISOM; 3 sets of maintaining 3s of 10 RM of back-squat), as well as fast (FSSC; 3 sets of 3 repetitions of 20-cm drop-jumps) and slow (SSSC; 3 sets of 3 hurdle jumps over a 20-cm obstacle) stretch-shortening cycle protocols. Balance performance was tested before and after each of the four exercise conditions in bipedal stance on an unstable surface (i.e., BOSU ball with flat side facing up) using two dependent variables, i.e., center of pressure surface area (CoP SA) and velocity (CoP V).
Results: There was a significant main effect of time on CoP SA and CoP V [F(1,60) = 54.37, d = 1.88, p < 0.0001; F(1,60) = 9.07, d = 0.77, p = 0.003]. In addition, a statistically significant main effect of condition on CoP SA and CoP V [F(3,60) = 11.81, d = 1.53, p < 0.0001; F(3,60) = 7.36, d = 1.21, p = 0.0003] was observed. Statistically significant condition-by-time interactions were found for the balance parameters CoP SA (p < 0.003, d = 0.54) and CoP V (p < 0.002, d = 0.70). Contrast analysis of the specified hypotheses demonstrated that FSSC yielded significantly greater improvements than all other conditions in CoP SA and CoP V [p < 0.0001 (d = 1.55); p = 0.0004 (d = 1.19), respectively]. In addition, FSSC yielded significantly greater improvements compared with the two conditions for both balance parameters [p < 0.0001 (d = 2.03); p < 0.0001 (d = 1.45)].
Conclusion: Fast stretch-shortening cycle exercises appear to be more effective at improving short-term balance performance in young weightlifters. Given the importance of balance for overall competitive achievement in weightlifting, it is recommended that young weightlifters implement dynamic plyometric exercises in the fast stretch-shortening cycle during the warm-up to improve their balance performance.
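The group differences above are quantified with Cohen's d effect sizes. As a minimal illustration of how such a standardized mean difference is computed (pooled-SD convention assumed; the values below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups (pooled SD)."""
    na, nb = len(group_a), len(group_b)
    # Pool the sample variances weighted by their degrees of freedom.
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical pre/post CoP surface-area values (mm^2), illustration only.
pre = [220, 240, 210, 250, 230]
post = [190, 200, 185, 210, 195]
d = cohens_d(pre, post)  # larger |d| = larger standardized difference
```

Conventionally, |d| of roughly 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, which is how magnitudes such as d = 1.88 above are interpreted.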
Background
Earlier studies have shown that balance training (BT) has the potential to induce performance enhancements in selected components of physical fitness (i.e., balance, muscle strength, power, speed). While there is ample evidence on the long-term effects of BT on components of physical fitness in youth, less is known about the short-term or acute effects of single BT sessions on selected measures of physical fitness.
Objective
To examine the acute effects of different balance exercise types on balance, change-of-direction (CoD) speed, and jump performance in youth female volleyball players.
Methods
Eleven female players aged 14 years participated in this study. Three types of balance exercises (i.e., anterior, posterolateral, rotational) were conducted in randomized order. For each exercise, 3 sets of 5 repetitions were performed. Before and after the balance exercises, participants were tested for static balance (center of pressure surface area [CoP SA] and velocity [CoP V]) on foam and firm surfaces, CoD speed (T-Half test), and vertical jump height (countermovement jump [CMJ] height). A 3 (condition: anterior, posterolateral, rotational balance exercise type) × 2 (time: pre, post) analysis of variance with repeated measures on time was computed.
Results
Findings showed no significant condition × time interactions for any outcome measure (p > 0.05). However, there were small main effects of time for CoP SA on firm and foam surfaces (both d = 0.38; all p < 0.05), with no effect for CoP V on either surface condition (p > 0.05). For CoD speed, findings showed a large main effect of time (d = 0.91; p < 0.001). However, for CMJ height, no main effect of time was observed (p > 0.05).
Conclusions
Overall, our results indicated small-to-large changes in balance and CoD speed performance, but not in CMJ height, in youth female volleyball players, regardless of the balance exercise type. Accordingly, it is recommended to regularly integrate balance exercises before sport-specific training to optimize performance development in youth female volleyball players.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, geometry, and age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), with a focus on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
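The time-averaged rate described above combines fault throws, fault dip, and the age of the faulted units. A minimal sketch of that kind of calculation, assuming planar faults so that horizontal heave = throw / tan(dip); all numbers are hypothetical, not the thesis's measurements:

```python
import math

def extension_rate(throws_m, dip_deg, age_myr):
    """Time-averaged horizontal extension rate (mm/yr) from summed fault throws.

    The horizontal heave of each fault is throw / tan(dip); summing heaves
    across a transect and dividing by the age of the offset surface yields
    a time-averaged rate.
    """
    heave_m = sum(t / math.tan(math.radians(dip_deg)) for t in throws_m)
    return heave_m * 1000 / (age_myr * 1e6)  # m -> mm, Myr -> yr

# Hypothetical transect: five fault throws (m), 60 deg dip, 0.5-Myr-old surface.
rate = extension_rate([35, 50, 20, 80, 45], dip_deg=60, age_myr=0.5)
```

Because eroded or buried scarps are not counted, a rate estimated this way is a minimum value, which is why the thesis reports minimum extension rates.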
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera persistent magmatic activity and magmatic resurgence overprint these young structures significantly. The caldera is located at the center of an actively extending rift segment; this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and hydrology of the intra-rift basins at a scale of less than 100 km. At present, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated, because this combination of faulting and magmatic activity has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic phases in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when the climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift; in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains the isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research analyzing a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information about how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands, and they allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, and the development of the Laser Interferometer Space Antenna (LISA) all demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, models for sources seen by both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals' parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future-detectors’ data, since biases might hinder the extraction of important astrophysical and cosmological information from future detectors’ data. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak field, small velocities) approximation that is most natural for the bound orbits routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM; weak field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate for binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to "cover" regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation is recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR, and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are when compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third subleading PN dynamics for the spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show in detail how this is done without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources and demonstrate its validity against Monte Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
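The EOB mapping referred to above can be summarized by its standard energy map, in which the two-body dynamics is recast as an effective particle of mass μ in a deformed Schwarzschild (or Kerr) background; shown here for orientation:

```latex
% Standard EOB energy map: the real two-body Hamiltonian in terms of the
% effective Hamiltonian H_eff of a particle of mass \mu.
H_{\mathrm{EOB}} = M\sqrt{1 + 2\nu\left(\frac{H_{\mathrm{eff}}}{\mu} - 1\right)},
\qquad
M = m_1 + m_2, \quad \mu = \frac{m_1 m_2}{M}, \quad \nu = \frac{\mu}{M},
```

where \(H_{\mathrm{eff}}\) is the effective Hamiltonian into which information from the PN, PM, and SMR approximations can be resummed.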
Despite evidence that acculturation hassles (such as discrimination and language hassles) relate to poorer adjustment for adolescents of immigrant descent, we know less about the psychological processes underlying these associations. In this study, we test whether reduced psychological need satisfaction, in terms of a lower sense of belonging, autonomy, and competence, mediates the associations of acculturation hassles with psychological distress and academic adjustment. Our sample included 439 seventh graders from 15 schools in Germany (51% female; mean age = 12.4 years, SD = 0.73). Results revealed that adolescents who experienced greater discrimination and language hassles showed a lower sense of belonging with classmates and, subsequently, greater psychological distress. Those who experienced greater language hassles also exhibited a lower sense of perceived competence and, ultimately, poorer academic adjustment. We conclude that self-determination theory (SDT) provides an important framework for explaining key processes underlying the links of acculturation hassles with psychological distress and academic (mal-)adjustment. Strengthening belonging and competence among adolescents of immigrant descent may enhance their well-being in the face of acculturation hassles.
3A 1954+319 has long been classified as a symbiotic X-ray binary, hosting a slowly rotating neutron star and an aged M red giant. Recently, this classification has been revised thanks to the discovery that the donor star is an M supergiant. This makes 3A 1954+319 a rare type of high-mass X-ray binary consisting of a neutron star and a red supergiant donor. In this paper, we analyse two archival and still unpublished XMM-Newton and NuSTAR observations of the source. We perform a detailed hardness-ratio-resolved spectral analysis to search for spectral variability that could help investigate the structures of the inhomogeneous M supergiant wind from which the neutron star is accreting. We discuss our results in the context of wind-fed supergiant X-ray binaries and show that the newest findings on 3A 1954+319 reinforce the hypothesis that the neutron star in this system is endowed with a magnetar-like magnetic field strength (≳10¹⁴ G).
Centroid moment tensor (CMT) parameters can be estimated from seismic waveforms. Since these data observe the deformation process only indirectly, CMTs are inferred as solutions to inverse problems which are generally underdetermined and require significant assumptions, including assumptions about data noise. Broadly speaking, we consider noise to include both theory and measurement errors, where theory errors are due to assumptions in the inverse problem and measurement errors are caused by the measurement process. While data errors are routinely included in parameter estimation for full CMTs, less attention has been paid to theory errors related to velocity-model uncertainties and how these affect the resulting moment-tensor (MT) uncertainties. Rigorous uncertainty quantification for CMTs may therefore require theory-error estimation, which becomes a problem of specifying noise models. Various noise models have been proposed, and these rely on several assumptions. All approaches quantify theory errors by estimating the covariance matrix of data residuals. However, this estimation can be based on explicit modelling or empirical estimation, and can ignore or include covariances. We quantitatively compare several approaches by presenting parameter and uncertainty estimates in nonlinear full CMT estimation for several simulated data sets and for regional field data of the ML 4.4, 2015 June 13 Fox Creek, Canada, event. While our main focus is at regional distances, the tested approaches are general and implemented for arbitrary source-model choices. These include known or unknown centroid locations, full MTs, deviatoric MTs, and double-couple MTs. We demonstrate that velocity-model uncertainties can profoundly affect parameter estimation and that their inclusion leads to more realistic parameter uncertainty quantification. However, not all approaches perform equally well.
Including theory errors by estimating non-stationary (non-Toeplitz) error covariance matrices via iterative schemes during Monte Carlo sampling performs best and is computationally most efficient. In general, including velocity-model uncertainties is most important in cases where velocity structure is poorly known.
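A minimal pure-Python sketch of the empirical (non-stationary, i.e., non-Toeplitz) estimate of a residual covariance matrix, assuming an ensemble of residual realizations is available, e.g., waveform residuals computed with randomly perturbed velocity models (illustrative only, not the paper's implementation):

```python
def empirical_covariance(residuals):
    """Sample covariance matrix of data residuals.

    `residuals` is a list of realizations (e.g. residuals obtained with
    different perturbed velocity models); each realization is a list of the
    same length N. Returns the full N x N sample covariance matrix, so the
    covariance of samples i and j is NOT assumed to depend only on the lag
    |i - j| as a stationary (Toeplitz) model would.
    """
    n_real = len(residuals)
    n = len(residuals[0])
    means = [sum(r[i] for r in residuals) / n_real for i in range(n)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in residuals)
             / (n_real - 1) for j in range(n)] for i in range(n)]

# Three hypothetical residual realizations of length 2 (illustration only).
cov = empirical_covariance([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```

A stationary (Toeplitz) model would instead average entries along the diagonals of this matrix, which is the structural assumption the non-Toeplitz schemes above relax.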
Classroom noise impairs students' cognition and learning. At first glance, it seems useful to prevent the negative effects of noise on academic learning by wearing noise-cancelling (NC) headphones during class. The literature and guidelines emphasize the academic benefits of wearing NC headphones (decreased auditory distraction, increased concentration, improved learning, and decreased distress). These benefits are particularly expected for students with special needs. However, none of the recommendations to wear NC headphones during class refer to any empirical studies, indicating a potential research gap and lack of evidence. Therefore, the question arises: is there any empirical evidence supporting academic benefits of wearing NC headphones during class for typically developing students or students with special needs? A total of 13 empirical studies (quantitative and qualitative) were identified through a systematic scoping review of the existing literature. A wide range of outcomes (cognition, learning, academic performance, behaviour, and emotions) were reported in relation to the use of NC headphones. Most of the studies refer to specific groups of students with special needs (learning disabilities, autism, ADHD, etc.). In view of the limited number of studies, small sample sizes, and lack of replication studies, all studies give the impression of being pilot studies on the academic benefits of wearing NC headphones. The practice of wearing NC headphones during class is an understudied topic, and the current body of evidence does not meet the standards for evidence-based practice in either general or special education. Implications for educational practice and future research are discussed.
In this paper, we propose a consistent mechanism of protein microcapsule formation upon ultrasound treatment. Aqueous suspensions of bovine serum albumin (BSA) microcapsules filled with toluene are prepared using high-intensity ultrasound following a reported method. Stabilization of the oil-in-water emulsion by the adsorption of protein molecules at the interface of the emulsion droplets is accompanied by the creation of the cross-linked capsule shell through the formation of intermolecular disulfide bonds, caused by highly reactive species, such as superoxide radicals, generated sonochemically. The evidence for this mechanism, which until now remained elusive and had not been properly proven, is presented based on experimental data from SDS-PAGE, Raman spectroscopy, and dynamic light scattering.
Iron-sulfur (Fe-S) clusters are important biological cofactors present in proteins with crucial biological functions, from photosynthesis to DNA repair, gene expression, and bioenergetic processes. For the insertion of Fe-S clusters into proteins, A-type carrier proteins have been identified. So far, three of them have been characterized in detail in Escherichia coli, namely, IscA, SufA, and ErpA, which were shown to partially replace each other in their roles in [4Fe-4S] cluster insertion into specific target proteins. To further expand the knowledge of [4Fe-4S] cluster insertion into proteins, we analyzed the complex Fe-S cluster-dependent network for the synthesis of the molybdenum cofactor (Moco) and the expression of genes encoding nitrate reductase in E. coli. Our studies include the identification of the A-type carrier proteins ErpA and IscA, involved in [4Fe-4S] cluster insertion into the radical S-adenosyl-methionine (SAM) enzyme MoaA. We show that ErpA and IscA can partially replace each other in their role to provide [4Fe-4S] clusters for MoaA. Since most genes expressing molybdoenzymes are regulated by the transcriptional regulator for fumarate and nitrate reduction (FNR) under anaerobic conditions, we also identified the proteins that are crucial to obtain an active FNR under conditions of nitrate respiration. We show that ErpA is essential for the FNR-dependent expression of the narGHJI operon, a role that cannot be compensated by IscA under the growth conditions tested. SufA does not appear to have a role in Fe-S cluster insertion into MoaA or FNR under anaerobic growth employing nitrate respiration, based on the low level of gene expression.

IMPORTANCE: Understanding the assembly of iron-sulfur (Fe-S) proteins is relevant to many fields, including nitrogen fixation, photosynthesis, bioenergetics, and gene regulation.
Remaining critical gaps in our knowledge include how Fe-S clusters are transferred to their target proteins and how specificity in this process is achieved, since different forms of Fe-S clusters need to be delivered to structurally highly diverse target proteins. Numerous Fe-S carrier proteins have been identified in prokaryotes like Escherichia coli, including ErpA, IscA, SufA, and NfuA. In addition, the diverse Fe-S cluster delivery proteins and their target proteins are subject to a complex regulatory network of expression, which ensures that both proteins are synthesized under particular growth conditions.
Iron-sulfur clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication, and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, which is a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein to produce cyclic pyranopterin monophosphate (cPMP) from 5′-GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, where oxygen is depleted. Under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, these proteins have been identified as A-type carrier proteins (ATCs) through phylogenomic and genetic studies. So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role of providing [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, whose expression is regulated by the transcriptional regulator fumarate and nitrate reduction (FNR). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of Fe-S clusters into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are sufficiently produced under the conditions tested. This observation suggests that ErpA indirectly regulates nitrate reductase expression by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit, multi-cofactor enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that consists of a flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster, and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into E. coli periplasmic aldehyde oxidoreductase (PaoABC). The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters into PaoABC, respectively, under aerobic respiratory conditions.
Purpose of review
The zebrafish embryo has emerged as a powerful model organism to investigate the mechanisms by which biophysical forces regulate vascular and cardiac cell biology during development and disease. A versatile arsenal of methods and tools is available to manipulate and analyze biomechanical signaling. This review aims to provide an overview of the experimental strategies and tools that have been utilized to study biomechanical signaling in cardiovascular developmental processes and different vascular disease models in the zebrafish embryo. Within the scope of this review, we focus on work published during the last two years.
Recent findings
Genetic and pharmacological tools for the manipulation of cardiac function allow alterations of hemodynamic flow patterns in the zebrafish embryo and various types of transgenic lines are available to report endothelial cell responses to biophysical forces. These tools have not only revealed the impact of biophysical forces on cardiovascular development but also helped to establish more accurate models for cardiovascular diseases including cerebral cavernous malformations, hereditary hemorrhagic telangiectasias, arteriovenous malformations, and lymphangiopathies.
Summary
The zebrafish embryo is a valuable vertebrate model in which in-vivo manipulations of biophysical forces due to cardiac contractility and blood flow can be performed. These analyses give important insights into biomechanical signaling pathways that control endothelial and endocardial cell behaviors. The technical advances using this vertebrate model will advance our understanding of the impact of biophysical forces in cardiovascular pathologies.
We introduce a thermofield-based formulation of the multilayer multiconfigurational time-dependent Hartree (MCTDH) method to study finite temperature effects on non-adiabatic quantum dynamics from a non-stochastic, wave function perspective. Our approach is based on the formal equivalence of bosonic many-body theory at zero temperature with a doubled number of degrees of freedom and the thermal quasi-particle representation of bosonic thermofield dynamics (TFD). This equivalence allows for a transfer of bosonic many-body MCTDH as introduced by Wang and Thoss to the finite temperature framework of thermal quasi-particle TFD. As an application, we study temperature effects on the ultrafast internal conversion dynamics in pyrazine. We show that finite temperature effects can be efficiently accounted for in the construction of multilayer expansions of thermofield states in the framework presented herein. Furthermore, we find our results to agree well with existing studies on the pyrazine model based on the pMCTDH method.
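The thermal quasi-particle construction referred to above rests on the standard TFD thermal vacuum, in which doubling the degrees of freedom with a fictitious "tilde" system turns thermal averages into pure-state expectation values; shown here for orientation:

```latex
% Thermal vacuum of thermofield dynamics (TFD): |~n> are states of the
% fictitious tilde system that doubles the degrees of freedom.
|0(\beta)\rangle = Z(\beta)^{-1/2} \sum_n e^{-\beta E_n/2}\, |n\rangle \otimes |\tilde{n}\rangle,
\qquad
\langle 0(\beta)|\hat{A}|0(\beta)\rangle
  = \frac{1}{Z(\beta)}\,\mathrm{Tr}\!\left(e^{-\beta \hat{H}}\hat{A}\right).
```

It is this doubled, pure-state representation that allows finite-temperature dynamics to be propagated with wave-function methods such as multilayer MCTDH.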
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between the EASM and EAWM, as well as the mechanisms driving their variability during the last 10,000 years, by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional, proxy-independent signal for both monsoonal systems. The respective signal was subsequently analysed using a linear regression model. We find that the phase relationship between the EASM and EAWM is not constant in time and depends significantly on changes in orbital configuration. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, the El Niño-Southern Oscillation, and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climatic archives toward moisture-source changes and/or seasonality.
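As an illustration of the regression step, a minimal ordinary-least-squares fit in pure Python (the stacked-signal values below are hypothetical placeholders, not the study's data):

```python
def ols_fit(x, y):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope is the covariance of x and y over the variance of x.
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical stacked monsoon proxy signal (z-scores) against age (kyr BP).
age_kyr = [0, 2, 4, 6, 8, 10]
signal = [-0.4, -0.2, 0.1, 0.3, 0.6, 0.7]
intercept, slope = ols_fit(age_kyr, signal)
```

The fitted slope then summarizes the long-term trend of one stacked monsoon signal against the other variable of interest (here, age).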
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of data and require new solutions to software and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease -- the primary cause of subsequent heart failure.
With the aim to ascertain a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis and as a prerequisite for collaborative research. Building on this platform, this thesis addresses, on the one hand, the adaptation of established Systems Biology methods for use in the Systems Medicine context and, on the other hand, the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general software requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as comparisons of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
Eatomics continued to help drive the collaborative analysis and interpretation of the proteomic profile of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. We find that AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism. Both effects are stronger in AS. In muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also find relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection of human tissue from in vivo collected biopsies to date. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Beyond the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load but not under volume load in the left ventricle, and we also provide the basis for a newly developed metabolic model of the heart.
In clinical practice, only a few reliable measurement instruments are available for monitoring knee joint rehabilitation. Advances to replace motion capturing with sensor data measurement have been made in the last years. Thus, a systematic review of the literature was performed, focusing on the implementation, diagnostic accuracy, and facilitators and barriers of integrating wearable sensor technology in clinical practices based on a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. For critical appraisal, the COSMIN Risk of Bias tool for reliability and measurement of error was used. PUBMED, Prospero, Cochrane database, and EMBASE were searched for eligible studies. Six studies reporting reliability aspects in using wearable sensor technology at any point after knee surgery in humans were included. All studies reported excellent results with high reliability coefficients, high limits of agreement, or a few detectable errors. They used different or partly inappropriate methods for estimating reliability or missed reporting essential information. Therefore, a moderate risk of bias must be considered. Further quality criterion studies in clinical settings are needed to synthesize the evidence for providing transparent recommendations for the clinical use of wearable movement sensors in knee joint rehabilitation.
Both ground- and satellite-based airglow imaging have significantly contributed to understanding the low-latitude ionosphere, especially the morphology and dynamics of the equatorial ionization anomaly (EIA). The NASA Global-scale Observations of the Limb and Disk (GOLD) mission focuses on far-ultraviolet airglow images from a geostationary orbit at 47.5° W. This region is of particular interest at low magnetic latitudes because of the high magnetic declination (i.e., about -20°) and the proximity of the South Atlantic magnetic anomaly. In this study, we characterize an exciting feature of the nighttime EIA using GOLD observations from October 5, 2018 to June 30, 2020. It consists of a wavelike structure of a few thousand kilometers seen as poleward and equatorward displacements of the EIA crests. Initial analyses show that the synoptic-scale structure is symmetric about the dip equator and appears nearly stationary over the night. In quasi-dipole coordinates, maximum poleward displacements of the EIA crests are seen at about ±12° latitude and around 20° and 60° longitude (i.e., in geographic longitude at the dip equator, about 53° W and 14° W). The wavelike structure presents typical zonal wavelengths of about 6.7 × 10³ km and 3.3 × 10³ km. The structure's occurrence and wavelength are highly variable on a day-to-day basis with no apparent dependence on geomagnetic activity. In addition, a cluster or quasi-periodic wave train of equatorial plasma depletions (EPDs) is often detected within the synoptic-scale structure. We further outline the difference in observing these EPDs from FUV images and in situ measurements during a GOLD and Swarm mission conjunction.
Ambitious climate policies, as well as economic development, education, technological progress and less resource-intensive lifestyles, are crucial elements for progress towards the UN Sustainable Development Goals (SDGs). However, using an integrated modelling framework covering 56 indicators or proxies across all 17 SDGs, we show that they are insufficient to reach the targets. An additional sustainable development package, including international climate finance, progressive redistribution of carbon pricing revenues, sufficient and healthy nutrition and improved access to modern energy, enables a more comprehensive sustainable development pathway. We quantify climate and SDG outcomes, showing that these interventions substantially boost progress towards many aspects of the UN Agenda 2030 and simultaneously facilitate reaching ambitious climate targets. Nonetheless, several important gaps remain; for example, with respect to the eradication of extreme poverty (180 million people remaining in 2030). These gaps can be closed by 2050 for many SDGs while also respecting the 1.5 °C target and several other planetary boundaries.
Patent document collections are an immense source of knowledge for research and innovation communities worldwide. The rapid growth of the number of patent documents poses an enormous challenge for retrieving and analyzing information from this source in an effective manner. Based on deep learning methods for natural language processing, novel approaches have been developed in the field of patent analysis. The goal of these approaches is to reduce costs by automating tasks that previously only domain experts could solve. In this article, we provide a comprehensive survey of the application of deep learning for patent analysis. We summarize the state-of-the-art techniques and describe how they are applied to various tasks in the patent domain. In a detailed discussion, we categorize 40 papers based on the dataset, the representation, and the deep learning architecture that were used, as well as the patent analysis task that was targeted. With our survey, we aim to foster future research at the intersection of patent analysis and deep learning and we conclude by listing promising paths for future work.
Large-scale biochemical models are of increasing sizes due to the consideration of interacting organisms and tissues. Model reduction approaches that preserve the flux phenotypes can simplify the analysis and prediction of steady-state metabolic phenotypes. However, existing approaches either restrict the functionality of reduced models or do not lead to significant decreases in the number of modelled metabolites. Here, we introduce an approach for model reduction based on the structural property of balancing of complexes that preserves the steady-state fluxes supported by the network and can be determined efficiently at genome scale. Using two large-scale mass-action kinetic models of Escherichia coli, we show that our approach results in a substantial reduction of 99% of metabolites. Applications to genome-scale metabolic models across kingdoms of life result in up to 55% and 85% reductions in the number of metabolites when arbitrary and mass-action kinetics are assumed, respectively. We also show that predictions of the specific growth rate from the reduced models match those based on the original models. Since the steady-state flux phenotypes of the original model are preserved in the reduced one, the approach paves the way for analysing other metabolic phenotypes in large-scale biochemical networks.
Mineral resource exploration and mining is an essential part of today's high-tech industry. Elements such as rare-earth elements (REEs) and copper are, therefore, in high demand. Modern exploration techniques from multiple platforms (e.g., spaceborne and airborne), to detect and map the spectral characteristics of the materials of interest, require spectral libraries as an essential reference. They include field and laboratory spectral information in combination with geochemical analyses for validation. Here, we present a collection of REE- and copper-related hyperspectral spectra with associated geochemical information. The libraries contain reflectance spectra from rare-earth element oxides, REE-bearing minerals, copper-bearing minerals and mine surface samples from the Apliki copper-gold-pyrite mine in the Republic of Cyprus. The samples were measured with the HySpex imaging spectrometers in the visible and near infrared (VNIR) and shortwave infrared (SWIR) range (400-2500 nm). The geochemical validation of each sample is provided with the reflectance spectra. The spectral libraries are openly available to assist future mineral mapping campaigns and laboratory spectroscopic analyses. The spectral libraries and corresponding geochemistry are published via GFZ Data Services with the following DOIs: https://doi.org/10.5880/GFZ.1.4.2019.004 (13 REE-bearing minerals and 16 oxide powders, Koerting et al., 2019a), https://doi.org/10.5880/GFZ.1.4.2019.003 (20 copper-bearing minerals, Koellner et al., 2019), and https://doi.org/10.5880/GFZ.1.4.2019.005 (37 copper-bearing surface material samples from the Apliki copper-gold-pyrite mine in Cyprus, Koerting et al., 2019b). All spectral libraries are united and comparable by the internally consistent method of hyperspectral data acquisition in the laboratory.
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
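The UMDA analysed above can be sketched in a few lines. The following is a generic textbook implementation of the algorithm on LeadingOnes, not the authors' analysed variant; the population sizes, the selection rate, and the 1/n frequency margins (the usual guard against genetic drift fixing a bit) are illustrative choices:

```python
import random

def leading_ones(x):
    """LeadingOnes benchmark: number of consecutive ones from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def umda(n, lam, mu, max_iters=10000, seed=0):
    """Univariate marginal distribution algorithm: sample lam bitstrings
    from independent per-bit frequencies, select the mu best, and set
    the new frequencies to the marginal means of the selected
    individuals, clamped to [1/n, 1 - 1/n] so no bit is ever fixed."""
    rng = random.Random(seed)
    p = [0.5] * n  # per-bit frequencies, initially unbiased
    for it in range(max_iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        best = pop[0]
        if leading_ones(best) == n:
            return best, it
        selected = pop[:mu]
        for i in range(n):
            freq = sum(ind[i] for ind in selected) / mu
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return best, max_iters
```

For small instances (say n = 20, lam = 100, mu = 25) the optimum is typically sampled within a few hundred iterations; the quasilinear population size in the guarantee concerns the asymptotic low-genetic-drift regime, not these toy parameters.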
Both ice sheets in Greenland and Antarctica are discharging ice into the ocean. In many regions along the coast of the ice sheets, the icebergs calve into a bay. If the addition of icebergs through calving is faster than their transport out of the embayment, the icebergs will be frozen into a melange with surrounding sea ice in winter. In this case, the buttressing effect of the ice melange can be considerably stronger than any buttressing by mere sea ice would be. This in turn stabilizes the glacier terminus and leads to a reduction in calving rates. Here we propose a simple parametrization of ice melange buttressing which leads to an upper bound on calving rates and can be used in numerical and analytical modelling.
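The capping logic described above can be illustrated with a toy function. This is a hypothetical form for illustration only, not the parametrization proposed in the paper; the `buttressing` knob and the linear interpolation are assumptions:

```python
def effective_calving_rate(c_free, export_rate, buttressing=1.0):
    """Toy illustration of melange-limited calving: if icebergs are added
    by calving (c_free) faster than the melange exports them out of the
    embayment (export_rate), the melange thickens, buttresses the
    terminus, and throttles calving back toward the export rate.
    'buttressing' in [0, 1] scales how strongly back-stress is
    transmitted; it is a hypothetical knob, not a parameter from the
    paper. Rates are in consistent units, e.g. m/yr."""
    if c_free <= export_rate:
        return c_free  # icebergs leave fast enough; no melange builds up
    # interpolate between free calving (no buttressing) and the
    # export-limited upper bound (full buttressing)
    return c_free - buttressing * (c_free - export_rate)
```

With full buttressing the export rate acts as a hard upper bound on calving, which is the qualitative behaviour the abstract describes.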
Human size changes over time with worldwide secular trends in height, weight, and body mass index (BMI). There is general agreement to relate the state of nutrition to height and weight, and to ratios of weight-to-height. The BMI is a ratio. It is commonly used to classify underweight, overweight and obesity in adults. Yet, the BMI is inappropriate to provide any immediate information on body composition.
It is accepted that the BMI is "a simple index to classify underweight, overweight and obesity in adults". It is stated that policies, programmes and investments need to be "nutrition-sensitive", meaning they must have positive impacts on nutrition. It is also stated that there is "a need for policies that address all forms of malnutrition by making healthy foods accessible and affordable, while restricting unhealthy foods through fiscal and regulatory restrictions". But these statements are warranted neither by arithmetic considerations nor by historic evidence.
Measuring the BMI is an appropriate screening tool for detecting an unusual weight-to-height ratio, but the BMI is an inappropriate tool for estimating body composition, or suggesting medical and health policy decisions.
The investigation of stresses, faults, structure and seismic hazards requires a good understanding and mapping of earthquake rupture and slip. Constraining the finite source of earthquakes from seismic and geodetic waveforms is challenging because the directional effects of the rupture itself are small and dynamic numerical solutions often include a large number of free parameters. The computational effort is large and therefore difficult to use in an exploratory forward modelling or inversion approach. Here, we use a simplified self-similar fracture model with only a few parameters, where the propagation of the fracture front is decoupled from the calculation of the slip. The approximative method is flexible and computationally efficient. We discuss the strengths and limitations of the model with real-case examples of well-studied earthquakes. These include the Mw 8.3 2015 Illapel, Chile, megathrust earthquake at the plate interface of a subduction zone and examples of continental intraplate strike-slip earthquakes like the Mw 7.1 2016 Kumamoto, Japan, multisegment variable-slip event or the Mw 7.5 2018 Palu, Indonesia, supershear earthquake. Despite the simplicity of the model, a large number of observational features, ranging from different rupture-front isochrones and slip distributions to directional waveform effects or high-slip patches, are easy to model. The temporal evolution of slip rate and the rise time are derived from the incremental growth of the rupture and the stress drop without imposing other constraints. The new model is fast and implemented in the open-source Python seismology toolbox Pyrocko, ready to study the physics of rupture and to be used in finite source inversions.
A Secular Tradition
(2021)
This article focuses on the social philosopher Horace Kallen and the revisions he made to the concept of cultural pluralism that he first developed in the early 20th century, applying it to postwar America and the young State of Israel. It shows how he opposed the assumption that the United States’ social order was based on a “Judeo-Christian tradition.” By constructing pluralism as a civil religion and carving out space for secular self-understandings in midcentury America, Kallen attempted to preserve the integrity of his earlier political visions, developed during World War I, of pluralist societies in the United States and Palestine within an internationalist global order. While his perspective on the State of Israel was largely shaped by his American experiences, he revised his approach to politically functionalizing religious traditions as he tested his American understanding of a secular, pluralist society against the political theology effective in the State of Israel. The trajectory of Kallen’s thought points to fundamental questions about the compatibility of American and Israeli understandings of religion’s function in society and its relation to political belonging, especially in light of their transnational connection through American Jewish support for the recently established state.
AAA+ proteins (ATPases associated with various cellular activities) catalyze the energy-dependent movement or rearrangement of macromolecules. A new study addresses the important question of how to design a selective chemical inhibitor for specific proteins in this diverse superfamily. The powerful chemical genetics approach adds to a growing toolbox of applications that allow dissection of the functions of distinct AAA+ proteins in vivo, facilitating the first steps toward effective drug development.
The adsorption of protonated L-cysteine onto the Au(111) surface was studied via the molecular dynamics method. A detailed examination of the trajectories reveals that only a couple of picoseconds are needed for L-cysteine to become strongly adsorbed at the gold surface via its sulfur and oxygen atoms. The average distances of the adsorbed sulfur and oxygen atoms from the gold plane are ~2.7 Å and ~3.2 Å, respectively. We found that the adsorption of L-cysteine takes place preferentially at the bridge site with a probability of ~82%. Discussing the conformational features of protonated L-cysteine, we consider the most stable conformation to be the "reverse boat" position, in which the sulfur and oxygen point down to the gold surface while the amino group is far from it.
Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes in the field of adolescents with chronic conditions. Therefore, the primary aim of the study is to acquire a deeper understanding of the dynamic process of resilience factors, coping strategies and psychosocial adjustment of adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – “inconspicuous” (PHQ-9 and GAD-7 < 7) vs. “conspicuous” (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and 12-month follow-up. At two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor Covid-19 pandemic effects, participants are also invited to take part in an intermediate online survey.
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
Language portraits are useful instruments to elicit speakers' reflections on the languages in their repertoires. In this study, we implement a "portrait-corpus approach" (Peters and Coetzee-Van Rooy 2020) to investigate the conceptualisations of the languages Afrikaans and English in 105 language portraits. In this approach, we use participants' reflections about their placement of the two languages on a human silhouette as a linguistic corpus. Relying on quantitative and qualitative analyses using WordSmith, Statistica and Atlas.ti, our study shows that Afrikaans is mainly conceptualised as a language that is located in more peripheral areas of the body (for example, the hands and feet) and, hence, is perceived as less important in participants' repertoires. The central location of English in the head reveals its status as an important language in the participants' multilingual repertoires. We argue that these conceptualisations of Afrikaans and English provide additional insight into the attitudes towards these languages in South Africa.
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there lead to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example are freshwater fish of the genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne, relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
Remote sensing plays an increasingly key role in the determination of soil organic carbon (SOC) stored in agriculturally managed topsoils at the regional and field scales. Contemporary Unmanned Aerial Systems (UAS) carrying low-cost and lightweight multispectral sensors provide high spatial resolution imagery (<10 cm). These capabilities allow the integration of UAS-derived soil data and maps into digitalized workflows for sustainable agriculture. However, the common situation of scarce soil data at field scale might be an obstacle for accurate digital soil mapping. In our case study we tested a fixed-wing UAS equipped with visible and near infrared (VIS-NIR) sensors to estimate topsoil SOC distribution at two fields under the constraint of limited sampling points, which were selected by pedological knowledge. They represent all relevant soil types along an erosion-deposition gradient; hence, the full feature space in terms of topsoils' SOC status. We included the Topographic Position Index (TPI) as a covariate for SOC prediction. Our study was performed in a soil landscape of hummocky ground moraines, which represent a significant share of global arable land. Herein, small-scale soil variability is mainly driven by tillage erosion which, in turn, is strongly dependent on topography. Relationships between SOC, TPI and spectral information were tested by Multiple Linear Regression (MLR) using: (i) single field data (local approach) and (ii) data from both fields (pooled approach). The highest prediction performance determined by a leave-one-out cross-validation (LOOCV) was obtained for the models using the reflectance at 570 nm in conjunction with the TPI as explanatory variables for the local approach (coefficient of determination (R²) = 0.91; root mean square error (RMSE) = 0.11% and R² = 0.48; RMSE = 0.33, respectively).
The local MLR models developed with both reflectance and TPI using values from all points showed high correlations and low prediction errors for SOC content (R² = 0.88, RMSE = 0.07%; R² = 0.79, RMSE = 0.06%, respectively). The comparison with an enlarged dataset consisting of all points from both fields (pooled approach) showed no improvement of the prediction accuracy but yielded decreased prediction errors. Lastly, the local MLR models were applied to the data of the respective other field to evaluate the cross-field prediction ability. The spatial SOC pattern generally remains unaffected on both fields; differences, however, occur with respect to the predicted SOC level. Our results indicate a high potential of combining UAS-based remote sensing and environmental covariates, such as terrain attributes, for the prediction of topsoil SOC content at the field scale. The temporal flexibility of UAS offers the opportunity to optimize flight conditions, including weather and soil surface status (plant cover or residuals, moisture and roughness), which might otherwise obscure the relationship between spectral data and SOC content. A pedologically targeted selection of soil samples for model development appears to be the key to an efficient and effective prediction even with a small dataset.
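The study's evaluation scheme, an MLR with reflectance and TPI as predictors scored by leave-one-out cross-validation, can be sketched on synthetic data. The variable names and the generating relation below are stand-ins, not the study's data:

```python
import numpy as np

def loocv_mlr(X, y):
    """Leave-one-out cross-validation for multiple linear regression:
    hold out each sample in turn, fit ordinary least squares (with
    intercept) on the rest, and predict the held-out value."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ coef
    return preds

# Synthetic stand-ins for the study's predictors: reflectance at 570 nm
# and the Topographic Position Index (TPI); SOC content (%) is generated
# from an assumed linear relation plus noise.
rng = np.random.default_rng(0)
n = 25
refl_570 = rng.uniform(0.05, 0.25, n)
tpi = rng.normal(0.0, 1.0, n)
soc = 2.0 - 5.0 * refl_570 + 0.3 * tpi + rng.normal(0.0, 0.05, n)

X = np.column_stack([refl_570, tpi])
pred = loocv_mlr(X, soc)
rmse = float(np.sqrt(np.mean((soc - pred) ** 2)))
r2 = float(1.0 - np.sum((soc - pred) ** 2) / np.sum((soc - soc.mean()) ** 2))
```

Because each prediction comes from a model that never saw the held-out sample, the LOOCV R² and RMSE estimate out-of-sample performance, which matters for small, pedologically targeted datasets like the one in the study.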
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, a lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives, makes reconstructing past vegetation and land cover of the savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia, where despite being the region most affected by shrub invasion, particularly since the 21st century, little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was run to quantitatively reconstruct past vegetation cover. For this, we first made pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur over a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of Combretum open woodlands, making them more susceptible to change to an encroached state by stochastic events such as consecutive years of precipitation and drought, and by high concentrations of pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
The precise and accurate assessment of carbon dioxide (CO2) exchange is crucial for identifying terrestrial carbon (C) sources and sinks and for evaluating their role within the global C budget. The substantial uncertainty in disentangling the impacts of management and soil on measured CO2 fluxes is largely ignored, especially in croplands. The reason for this lies in the limitations of the widely used eddy covariance as well as manual and automatic chamber systems, which account either for short-term temporal variability or for small-scale spatial heterogeneity, but barely for both. To address this issue, we developed a novel robotic chamber system allowing for dozens of spatial measurement repetitions, thus enabling CO2 exchange measurements at a sufficient temporal and a high small-scale spatial resolution. The system was tested from 8 July to 9 September 2019 at a heterogeneous field (100 m x 16 m) located within the hummocky ground moraine landscape of northeastern Germany (CarboZALF-D). The field, covered with spring barley, is foreseen for a longer-term block trial manipulation experiment extending over three erosion-induced soil types. Measured fluxes of nighttime ecosystem respiration (R-eco) and daytime net ecosystem exchange (NEE) showed distinct temporal patterns influenced by crop phenology, weather conditions and management practices. Similarly, we found clear small-scale spatial differences in cumulated (gap-filled) R-eco, gross primary productivity (GPP) and NEE fluxes affected by the three distinct soil types. Additionally, spatial patterns induced by former management practices and characterized by differences in soil pH and nutrient status (P and K) were also revealed between plots within each of the three soil types, which can be compensated for prior to the foreseen block trial manipulation experiment.
The results underline the great potential of the novel robotic chamber system, which not only detects short-term temporal CO2 flux dynamics but also reflects the impact of small-scale spatial heterogeneity.
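The flux partitioning underlying such chamber campaigns can be sketched under the common micrometeorological sign convention NEE = R_eco − GPP (uptake negative). The Q10 temperature-response model and all parameter values below are illustrative assumptions, not the study's actual gap-filling calibration:

```python
def q10_reco(t_air, r_ref, q10=2.0, t_ref=15.0):
    """Simple Q10 model for ecosystem respiration, as commonly used to
    extrapolate daytime R_eco from nighttime chamber fluxes.
    All parameter values here are hypothetical."""
    return r_ref * q10 ** ((t_air - t_ref) / 10.0)

def partition(nee, reco):
    """With the sign convention NEE = R_eco - GPP (uptake negative),
    gross primary productivity follows as GPP = R_eco - NEE."""
    return reco - nee

# Daytime respiration estimated from a nighttime reference flux of
# 3.0 umol CO2 m-2 s-1 at 15 degC, then GPP derived from measured NEE.
reco_day = q10_reco(t_air=25.0, r_ref=3.0)
gpp = partition(nee=-10.0, reco=reco_day)
```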
In this article we prove upper bounds for the Laplace eigenvalues λ_k below the essential spectrum for strictly negatively curved Cartan-Hadamard manifolds. Our bound is given in terms of k² and specific geometric data of the manifold. This applies also to the particular case of non-compact manifolds whose sectional curvature tends to −∞, where no essential spectrum is present due to a theorem of Donnelly/Li. The result stands in clear contrast to Laplacians on graphs, where such a bound fails to be true in general.
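In schematic form (the precise constant and the geometric data it depends on are specified in the article), the bound reads:

```latex
% For eigenvalues below the essential spectrum of a strictly
% negatively curved Cartan-Hadamard manifold M:
\lambda_k(M) \;\le\; C(M)\, k^{2},
% where C(M) depends only on specific geometric data of M.
```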
Monoclonal antibodies are used worldwide as highly potent and efficient detection reagents for research and diagnostic applications. Nevertheless, the specific targeting of complex antigens such as whole microorganisms remains a challenge. To provide a comprehensive workflow, we combined bioinformatic analyses with novel immunization and selection tools to design monoclonal antibodies for the detection of whole microorganisms. In our initial study, we used the human pathogenic strain E. coli O157:H7 as a model target and identified 53 potential protein candidates by using reverse vaccinology methodology. Five different peptide epitopes were selected for immunization using epitope-engineered viral proteins. The identification of antibody-producing hybridomas was performed by using a novel screening technology based on transgenic fusion cell lines. Using an artificial cell surface receptor expressed by all hybridomas, the desired antigen-specific cells can be sorted fast and efficiently out of the fusion cell pool. Selected antibody candidates were characterized and showed strong binding to the target strain E. coli O157:H7 with minor or no cross-reactivity to other relevant microorganisms such as Legionella pneumophila and Bacillus ssp. This approach could be useful as a highly efficient workflow for the generation of antibodies against microorganisms.
Almost one third of global drylands are open forests and savannas, which are typically shaped by frequent natural disturbances such as wildfire and herbivory. Studies on ecosystem functions and services of woody vegetation require robust estimates of aboveground biomass (AGB). However, most methods have been developed for comparatively undisturbed forest ecosystems. As they are not tailored to accurately quantify AGB of small and irregular growth forms, their application on these growth forms may lead to unreliable or even biased AGB estimates in disturbance-prone dryland ecosystems. Moreover, these methods cannot quantify AGB losses caused by disturbance agents. Here we propose a methodology to estimate individual- and stand-level woody AGB in disturbance-prone ecosystems. It consists of flexible field sampling routines and estimation workflows for six growth classes, delineated by size and damage criteria. It also comprises a detailed damage assessment, harnessing the ecological archive of woody growth for past disturbances.
Based on large inventories collected along steep gradients of elephant disturbances in African dryland ecosystems, we compared the AGB estimates generated with our proposed method against estimates from a less adapted forest inventory method. We evaluated the necessary stepwise procedures of method adaptation and analyzed each step's effect on stand-level AGB estimation. We further explored additional advantages of our proposed method with regard to disturbance impact quantification. Results indicate that a majority of growth forms and individuals in savanna vegetation could only be assessed if methods of AGB estimation were adapted to the conditions of a disturbance-prone ecosystem. Furthermore, our damage assessment demonstrated that one third to half of all woody AGB was lost to disturbances. Consequently, less adapted methods may be insufficient and are likely to render inaccurate AGB estimations.
Our proposed method has the potential to accurately quantify woody AGB in disturbance-prone ecosystems, as well as AGB losses. Our method is more time consuming than conventional allometric approaches, yet it can cover sufficient areas within reasonable timespans, and can also be easily adapted to alternative sampling schemes.
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood–Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Despite advanced seismological techniques, automatic source characterization for microseismic earthquakes remains difficult and challenging since current inversion and modelling of high-frequency signals are complex and time consuming. For real-time applications such as induced seismicity monitoring, the application of standard methods is often not fast enough for true complete real-time information on seismic sources. In this paper, we present an alternative approach based on recent advances in deep learning for rapid source-parameter estimation of microseismic earthquakes. The seismic inversion is represented in compact form by two convolutional neural networks, with individual feature extraction, and a fully connected neural network, for feature aggregation, to simultaneously obtain full moment tensor and spatial location of microseismic sources. Specifically, a multibranch neural network algorithm is trained to encapsulate the information about the relationship between seismic waveforms and underlying point-source mechanisms and locations. The learning-based model allows rapid inversion (within a fraction of a second) once input data are available. A key advantage of the algorithm is that it can be trained using synthetic seismic data only, so it is directly applicable to scenarios where there are insufficient real data for training. Moreover, we find that the method is robust with respect to perturbations such as observational noise and data incompleteness (missing stations). We apply the new approach to synthetic data and to recorded small-magnitude (M <= 1.6) earthquakes at the Hellisheiði geothermal field in the Hengill area, Iceland. For the examined events, the model achieves excellent performance and shows very good agreement with the inverted solutions determined through standard methodology.
In this study, we seek to demonstrate that this approach is viable for real-time estimation of microseismic source parameters and can be integrated into advanced decision-support tools for controlling induced seismicity.
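The multibranch design, two convolutional feature extractors feeding a shared fully connected head, can be sketched as a toy forward pass. Everything below is an illustrative, untrained stand-in with made-up shapes, not the authors' architecture; in their setting the kernels and weights would be learned from synthetic seismograms:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(waveform, kernels):
    """Toy convolutional feature extractor: convolve each kernel with
    the waveform, apply ReLU, then global max-pool to one scalar."""
    feats = []
    for k in kernels:
        c = np.convolve(waveform, k, mode="valid")
        feats.append(np.maximum(c, 0.0).max())
    return np.array(feats)

# Two branches with their own (randomly initialised) kernels.
kernels_a = [rng.standard_normal(16) for _ in range(8)]
kernels_b = [rng.standard_normal(16) for _ in range(8)]

# Fully connected head mapping the 16 aggregated features to
# 6 moment-tensor components plus 3 location coordinates.
W = rng.standard_normal((9, 16))
b = np.zeros(9)

def predict(wave_a, wave_b):
    feats = np.concatenate([conv_branch(wave_a, kernels_a),
                            conv_branch(wave_b, kernels_b)])
    return W @ feats + b

out = predict(rng.standard_normal(512), rng.standard_normal(512))
```

The forward pass is a handful of matrix operations, which is why inference takes a fraction of a second once trained.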
In an attempt to pave the way for more extensive Computer Science Education (CSE) coverage in K-12, this research developed and made a preliminary evaluation of a blended-learning Introduction to CS program based on an academic MOOC. Using an academic MOOC that is pedagogically effective and engaging, such a program may provide teachers with disciplinary scaffolds and allow them to focus their attention on enhancing students’ learning experience and nurturing critical 21st-century skills such as self-regulated learning. As we demonstrate, this enabled us to introduce an academic-level course to middle-school students. In this research, we developed the principles and initial version of such a program, targeting ninth-graders in science-track classes who learn CS as part of their standard curriculum. We found that the middle-schoolers who participated in the program achieved academic results on par with undergraduate students taking this MOOC for academic credit. Participating students also developed a more accurate perception of the essence of CS as a scientific discipline. The unplanned school closure due to the COVID-19 pandemic outbreak challenged the research but underlined the advantages of such a MOOC-based blended-learning program over classic pedagogy in times of global or local crises that lead to school closure. While most of the science-track classes seemed to stop learning CS almost entirely, and the end-of-year MoE exam was discarded, the program’s classes smoothly moved to remote-learning mode, and students continued to study at a pace similar to that experienced before the school shut down.
We consider a sequential cascade of molecular first-reaction events towards a terminal reaction centre in which each reaction step is controlled by diffusive motion of the particles. The model studied here represents a typical reaction setting encountered in diverse molecular biology systems, in which, e.g. a signal transduction proceeds via a series of consecutive 'messengers': the first messenger has to find its respective immobile target site triggering a launch of the second messenger, the second messenger seeks its own target site and provokes a launch of the third messenger and so on, resembling a relay race in human competitions. For such a molecular relay race taking place in infinite one-, two- and three-dimensional systems, we find exact expressions for the probability density function of the time instant of the terminal reaction event, conditioned on preceding successful reaction events on an ordered array of target sites. The obtained expressions pertain to the most general conditions: number of intermediate stages and the corresponding diffusion coefficients, the sizes of the target sites, the distances between them, as well as their reactivities are arbitrary.
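For perfectly absorbing targets, the terminal-reaction-time density of such a sequential cascade is the convolution of the stage first-passage densities. A numerical sketch for the 1-D case (stage distances and diffusivities below are made up; the paper's exact expressions additionally cover arbitrary target sizes and reactivities):

```python
import numpy as np

def fpt_density_1d(t, L, D):
    """First-passage-time density of 1-D diffusion to a perfectly
    absorbing target at distance L (Levy-Smirnov density)."""
    return L / np.sqrt(4 * np.pi * D * t**3) * np.exp(-L**2 / (4 * D * t))

dt = 0.01
t = np.arange(dt, 50.0, dt)

# Three hypothetical relay stages; the density of the total relay time
# is the convolution of the individual stage densities.
stages = [fpt_density_1d(t, L, D)
          for L, D in [(1.0, 1.0), (0.5, 2.0), (0.8, 1.5)]]
total = stages[0]
for p in stages[1:]:
    total = np.convolve(total, p)[: len(t)] * dt
```

Because each 1-D stage density has a heavy ~t^(-3/2) tail, the mass captured in any finite time window stays below one.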
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays from L-band radar to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promising results to measure vertically integrated water vapour at high temporal resolutions. Previous works also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine Continuous Satellite Monitoring Network, Red Argentina de Monitoreo Satelital Continuo (RAMSAC), from 1999 to 2013. CAPE is derived from version 2.0 of the ECMWF’s (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution functions (CCDF). Second, we separately derive the relations of rainfall to CAPE and to GNSS-IWV. Based on our distribution fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean value of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables in the presence of each other to extreme rainfall during the austral summer season.
We found that rainfall can be characterised with a higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile, based on a goodness-of-fit criterion for quantile regression. We observed different contributions of CAPE and GNSS-IWV to rainfall for each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6 h time steps. We observed an increase before the rainfall event and at the time of peak rainfall—both for GNSS-integrated water vapour and CAPE. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (99th and 95th percentiles) compared to the 90th percentile at a 6-h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that both GNSS-IWV and CAPE had comparable magnitudes, and we argue to consider both climatic variables when investigating their effect on rainfall extremes.
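Both reported functional forms (exponential in IWV, power law in CAPE) become linear fits after a log transform. A sketch on synthetic, noiseless data (the coefficients below are invented, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic predictors in plausible ranges (values are made up).
iwv = rng.uniform(10, 60, 500)       # integrated water vapour, mm
cape = rng.uniform(50, 3000, 500)    # CAPE, J/kg

# Generate rainfall with the two reported functional forms.
rain_iwv = 0.2 * np.exp(0.08 * iwv)      # exponential in IWV
rain_cape = 0.5 * cape ** 0.45           # power law in CAPE

# Exponential fit: log(rain) is linear in IWV.
b_iwv, log_a = np.polyfit(iwv, np.log(rain_iwv), 1)

# Power-law fit: log(rain) is linear in log(CAPE).
k_cape, log_c = np.polyfit(np.log(cape), np.log(rain_cape), 1)
```

With noise-free data both regressions recover the generating exponents exactly; on real data the same transforms give the distribution-fitting comparison described above.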
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation.
Since the standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for the Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for the SPICE-based standard cell characterization with the reduced number of simulations, improved SET models and optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on characterization results, the fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to the state-of-the-art characterization methodologies which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way the amount of characterization data in the SET sensitivity database is reduced significantly.
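The idea of storing fitting coefficients instead of dense look-up tables can be sketched with a hypothetical separable model of critical charge; the model form, units and coefficient values below are illustrative assumptions, not the thesis' actual fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical superposition-style model: Qcrit(CL, VDD) = a + b*CL + c*VDD.
cl = rng.uniform(1.0, 10.0, 200)     # load capacitance, fF (made up)
vdd = rng.uniform(0.8, 1.2, 200)     # supply voltage, V (made up)
qcrit = 0.5 + 1.2 * cl + 3.0 * vdd   # synthetic "SPICE" characterization data

# Least-squares fit of the three model coefficients.
A = np.column_stack([np.ones_like(cl), cl, vdd])
coeff, *_ = np.linalg.lstsq(A, qcrit, rcond=None)

# Three stored coefficients now replace a dense (CL, VDD) look-up table,
# and Qcrit can be evaluated at any operating point in the fitted range.
```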
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For the SET mitigation in standard cells, it is essential to employ the techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By insertion of two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed to quantify their impact on the SET robustness improvement, as well as introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selection of the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at the high radiation levels. To enable such a functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing the pulse stretching inverters connected in parallel enables to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires a purely digital processing logic, in contrast to the standard detectors which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency and power consumption, and immunity to error accumulation.
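The purely digital post-processing this enables can be sketched as follows; the statistics computed below (count-based flux, mean pulse width as an LET proxy) are an illustrative placeholder, since the actual width-to-LET calibration depends on the detector design:

```python
def analyse_window(pulse_widths_ns, window_s):
    """Summarise SETs captured by the pulse-stretching inverters in one
    monitoring window: particle flux from the event count, and the mean
    stretched pulse width as a stand-in for LET-distribution monitoring."""
    flux = len(pulse_widths_ns) / window_s  # detected SETs per second
    if pulse_widths_ns:
        mean_w = sum(pulse_widths_ns) / len(pulse_widths_ns)
    else:
        mean_w = 0.0
    return flux, mean_w

# Hypothetical window: four detected SETs in 10 s.
flux, mean_width = analyse_window([1.2, 0.8, 2.5, 1.9], window_s=10.0)
```

A rising flux or a shift of the width distribution toward larger values could then trigger the power-greedy N-modular-redundant configuration.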
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and of a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The following step will be to evaluate the achieved results with irradiation experiments.
The large literature that aims to find evidence of climate migration delivers mixed findings. This meta-regression analysis i) summarizes direct links between adverse climatic events and migration, ii) maps patterns of climate migration, and iii) explains the variation in outcomes. Using a set of limited dependent variable models, we meta-analyze the most comprehensive sample to date of 3,625 estimates from 116 original studies and produce novel insights on climate migration. We find that extremely high temperatures and drying conditions increase migration. We do not find a significant effect of sudden-onset events. Climate migration is most likely to emerge due to contemporaneous events, to originate in rural areas, and to take place internally, towards cities, in middle-income countries. The likelihood of becoming trapped in affected areas is higher for women and in low-income countries, particularly in Africa. We uniquely quantify how pitfalls typical of the broader empirical climate impact literature affect climate migration findings. We also find evidence of different publication biases.
A matter of concern
(2021)
Neurons are post-mitotic cells in the brain, and their integrity is of central importance to avoid neurodegeneration. Yet, because post-mitotic cells cannot replenish themselves, neurons need to withstand challenges from numerous stressors during life. Neurons are exposed to oxidative stress due to high oxygen consumption during metabolic activity in the brain. Accordingly, DNA damage can occur and accumulate, resulting in genome instability. In this context, imbalances in brain trace element homeostasis are a matter of concern, especially regarding iron, copper, manganese, zinc, and selenium. Although trace elements are essential for brain physiology, both excess and deficiency are considered to impair neuronal maintenance. Besides increasing oxidative stress, the DNA damage response and the repair of oxidative DNA damage are affected by trace elements. Hence, a balanced trace element homeostasis is of particular importance to safeguard neuronal genome integrity and prevent neuronal loss. This review summarises the current state of knowledge on the impact of deficient as well as excessive iron, copper, manganese, zinc, and selenium levels on neuronal genome stability.
Aging in speech production is a multidimensional process. Biological, cognitive, social, and communicative factors can change over time, stay relatively stable, or may even compensate for each other. In this longitudinal work, we focus on stability and change at the laryngeal and supralaryngeal levels in the discourse particle euh produced by 10 older French-speaking females at two times, 10 years apart. Recognizing the multiple discourse roles of euh, we divided the occurrences according to utterance position. We quantified the frequency of euh, and evaluated acoustic changes in formants, fundamental frequency, and voice quality across time and utterance position. Results showed that euh frequency was stable with age. The only acoustic measure that revealed an age effect was harmonics-to-noise ratio, showing less noise at older ages. Other measures mostly varied with utterance position, sometimes in interaction with age. Some voice quality changes could reflect laryngeal adjustments that provide for airflow conservation utterance-finally. The data suggest that aging effects may be evident in some prosodic positions (e.g., utterance-final position), but not others (utterance-initial position). Thus, it is essential to consider the interactions among these factors in future work and not assume that vocal aging is evident throughout the signal.
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a graph repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows restricting the generation of graph repairs to delta-preserving graph repairs, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes whether and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independent of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach using a case study from the graph database domain.
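As a toy illustration of least-changing, state-based repair (not AutoGraph's actual algorithm or data structures), consider a single condition of the form "every A-labelled node has an edge to a B-labelled node"; for each violating node, the single-change repairs are deleting that node or adding one edge to some B-node:

```python
# Toy state-based graph repair for one fixed condition on a labelled graph.
# The graph encoding and repair representation are illustrative choices.

def violations(nodes, edges):
    # nodes: {id: label}; edges: set of (src, dst) pairs
    return [n for n, lab in nodes.items() if lab == "A"
            and not any(s == n and nodes[d] == "B" for s, d in edges)]

def least_changing_repairs(nodes, edges):
    """Enumerate single-change repairs for each violating A-node:
    either delete the node or add one edge to some B-labelled node."""
    repairs = []
    for n in violations(nodes, edges):
        repairs.append(("delete_node", n))
        for b, lab in nodes.items():
            if lab == "B":
                repairs.append(("add_edge", (n, b)))
    return repairs

nodes = {1: "A", 2: "B", 3: "A"}   # node 3 violates the condition
edges = {(1, 2)}
rs = least_changing_repairs(nodes, edges)
```

The incremental approach in the paper avoids such from-scratch re-evaluation by maintaining satisfaction trees across updates; the sketch only conveys what a complete set of least-changing repairs for one violation looks like.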
A few months before his death, A. v. Humboldt attended the celebration in honor of the 127th birthday of George Washington at the US legation in Berlin. A letter to the American Envoy, Joseph A. Wright (1810 – 1867), underlines Humboldt’s admiration for the first president of the United States. At the same time Humboldt asked the diplomat to mail a letter to the German-American Bernard Moses (1832 – 1897) in Clinton, Louisiana, who had named his son Alexander Humboldt Moses (grave on the Hebrew Rest Cemetery #2 in New Orleans, burial plot A, 12, 5). It appears to be possible that the Moses family still owns Humboldt’s letter.
Context. The spectroscopic class of subdwarf A-type (sdA) stars has come into focus in recent years because of their possible link to extremely low-mass white dwarfs, a rare class of objects resulting from binary evolution. Although most sdA stars are consistent with metal-poor halo main-sequence stars, the formation and evolution of a fraction of these stars are still matters of debate. Aims. The identification of photometric variability can help to put further constraints on the evolutionary status of sdA stars, in particular through the analysis of pulsations. Moreover, the binary ratio, which can be deduced from eclipsing binaries and ellipsoidal variables, is important as input for stellar models. In order to search for variability due to either binarity or pulsations in objects of the spectroscopic sdA class, we have extracted all available high precision light curves from the Kepler K2 mission.
Methods. We have performed a thorough time series analysis on all available light curves, employing three different methods. Frequencies with a signal-to-noise ratio higher than four have been used for further analysis.
Results. From the 25 targets, 13 turned out to be variables of different kinds (i.e., classical pulsating stars, ellipsoidal and cataclysmic variables, eclipsing binaries, and rotationally induced variables). For the remaining 12 objects, a variability threshold was determined.
We present a new autoclave that enables in situ characterization of hydrothermal fluids at high pressures and high temperatures at synchrotron x-ray radiation sources. The autoclave has been specifically designed to enable x-ray absorption spectroscopy in fluids, with applications to mineral solubility and element speciation analysis in hydrothermal fluids of complex compositions. However, other applications in high-pressure fluids, such as Raman spectroscopy, are also possible with the autoclave. First experiments were run at pressures between 100 and 600 bars and at temperatures between 25 degrees C and 550 degrees C, and preliminary results on scheelite dissolution in fluids of different compositions show that the autoclave is well suited to study the behavior of ore-forming metals at P-T conditions relevant to the Earth's crust.
We prove a homology vanishing theorem for graphs with positive Bakry-Émery curvature, analogous to a classic result of Bochner on manifolds [3]. Specifically, we prove that if a graph has positive curvature at every vertex, then its first homology group is trivial, where the notion of homology that we use for graphs is the path homology developed by Grigor'yan, Lin, Muranov, and Yau [11]. We moreover prove that the fundamental group is finite for graphs with positive Bakry-Émery curvature, analogous to a classic result of Myers on manifolds [22]. The proofs draw on several separate areas of graph theory, including graph coverings, gain graphs, and cycle spaces, in addition to the Bakry-Émery curvature, path homology, and graph homotopy. The main results follow as a consequence of several different relationships developed among these different areas. Specifically, we show that a graph with positive curvature cannot have a non-trivial infinite cover preserving 3-cycles and 4-cycles, and give a combinatorial interpretation of the first path homology in terms of the cycle space of a graph. Furthermore, we relate gain graphs to graph homotopy and the fundamental group developed by Grigor'yan, Lin, Muranov, and Yau [12], and obtain an alternative proof of their result that the abelianization of the fundamental group of a graph is isomorphic to the first path homology over the integers.
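The combinatorial object behind the interpretation mentioned above is the cycle space of the graph, whose dimension for an undirected graph is |E| - |V| + c, where c is the number of connected components. A minimal sketch of that computation (using a union-find structure to count components):

```python
def cycle_space_dimension(num_vertices, edges):
    """dim of the cycle space of an undirected graph:
    |E| - |V| + (number of connected components)."""
    parent = list(range(num_vertices))

    def find(x):
        # path-halving union-find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return len(edges) - num_vertices + components

# A 4-cycle carries exactly one independent cycle.
dim = cycle_space_dimension(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

This rank is the classical "number of independent cycles"; the paper's contribution is relating it to the first path homology, which the sketch does not attempt to reproduce.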
A grammar of authority?
(2021)
Directive Speech Acts (dsas) are a major feature of historical pragmatics, specifically in research on historical (im)politeness. However, for Classical French, there is a lack of research on related phenomena. In our contribution, we present two recently constructed corpora covering the period of Classical French, sermo and apwcf. We present these corpora in terms of their genre characteristics on a communicative-functional and socio-pragmatic level. Based on the observation that, both in sermo and apwcf, dsas frequently occur together with terms of address, we analyse and manually code a sample based on this co-occurrence, and we compare the results with regard to special features in the individual corpora. The emerging patterns show a clear correspondence between socio-pragmatic factors and the linguistic means used to realise dsas. We propose that these results can be interpreted as signs of an underlying "grammar of authority".
Image feature detection is a key task in computer vision. Scale Invariant Feature Transform (SIFT) is a prevalent and well known algorithm for robust feature detection. However, it is computationally demanding, and software implementations cannot achieve real-time performance. In this paper, a versatile, pipelined hardware implementation is proposed that is capable of computing keypoints and rotation-invariant descriptors on-chip. All computations are performed in single precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. Full high definition images can be processed at 84 fps, and ultra-high definition images at 21 fps.
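At the heart of SIFT's keypoint stage is finding extrema of a difference of Gaussians (DoG) across scales. A minimal 1D illustration of that core computation — not the paper's hardware pipeline, which operates on 2D scale-space octaves in floating point:

```python
import math

def gaussian_kernel(sigma, radius):
    # normalized 1D Gaussian weights on [-radius, radius]
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_extrema(signal, sigma1=1.0, sigma2=1.6):
    """Candidate keypoints are local extrema of the difference of Gaussians."""
    g1 = convolve(signal, gaussian_kernel(sigma1, 4))
    g2 = convolve(signal, gaussian_kernel(sigma2, 4))
    dog = [a - b for a, b in zip(g1, g2)]
    return [i for i in range(1, len(dog) - 1)
            if (dog[i] > dog[i - 1] and dog[i] > dog[i + 1])
            or (dog[i] < dog[i - 1] and dog[i] < dog[i + 1])]

peaks = dog_extrema([0, 0, 0, 1, 5, 1, 0, 0, 0])  # blob centered at index 4
```

Because the Gaussian kernels are separable and the DoG comparison is local, this stage maps naturally onto the pipelined, on-chip dataflow the paper describes.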
We present a new numerical algorithm to solve the recently derived equations of two-moment cosmic ray hydrodynamics (CRHD). The algorithm is implemented as a module in the moving mesh AREPO code. Therein, the anisotropic transport of cosmic rays (CRs) along magnetic field lines is discretized using a path-conservative finite volume method on the unstructured time-dependent Voronoi mesh of AREPO. The interaction of CRs and gyroresonant Alfven waves is described by short time-scale source terms in the CRHD equations. We employ a custom-made semi-implicit adaptive time stepping source term integrator to accurately integrate this interaction on the small light-crossing time of the anisotropic transport step. Both the transport and the source term integration step are separated from the evolution of the magnetohydrodynamical equations using an operator split approach. The new algorithm is tested with a variety of test problems, including shock tubes, a perpendicular magnetized discontinuity, the hydrodynamic response to a CR overpressure, CR acceleration of a warm cloud, and a CR blast wave, which demonstrate that the coupling between CR and magnetohydrodynamics is robust and accurate. We demonstrate the numerical convergence of the presented scheme using new linear and non-linear analytic solutions.
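The semi-implicit source-term idea — sub-cycling a stiff relaxation term with backward Euler steps inside one hydrodynamic time step — can be illustrated on a scalar model equation dy/dt = -(y - y_eq)/tau. This is a schematic sketch under that model assumption, not the actual CRHD source-term integrator in AREPO:

```python
def semi_implicit_relax(y, y_eq, tau, dt, max_sub=1000):
    """Integrate dy/dt = -(y - y_eq)/tau over one split step dt with
    backward Euler, sub-cycling when dt greatly exceeds the stiff
    time scale tau (a crude stand-in for adaptive sub-stepping)."""
    n_sub = min(max(1, int(dt / tau)), max_sub)
    h = dt / n_sub
    for _ in range(n_sub):
        # backward Euler: y_new = (y + h*y_eq/tau) / (1 + h/tau),
        # unconditionally stable for any h > 0
        y = (y + h * y_eq / tau) / (1.0 + h / tau)
    return y

# Relaxation toward y_eq = 2 over a thousand stiff time scales.
y_end = semi_implicit_relax(10.0, 2.0, tau=1e-3, dt=1.0)
```

The implicit update remains stable even when h is much larger than tau, which is why such source terms can be operator-split from the explicit transport step without restricting the global time step.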
A expulsão do Éden
(2021)
The theme of migration has been intimately linked to human history ever since the biblical narrative of the expulsion from paradise. Human beings have not only employed ever more sophisticated techniques of violence, but have also transmitted, across the centuries, techniques for preserving and using their knowledge of living together. In this mobile sense of history, and in line with the literatures of the world in their diverse origins, one can say that a “Homo migrans” has existed for as long as “Homo sapiens” has. It can thus be argued that territorial or territorializing ideas of historical-spatial provenance allow one, from time to time, to recognize their efforts to filter out and isolate the mobile, vectorial dimension of history as narrative, in an attempt to construct, with the help of static ideas, new places of promise or loss, of abundance or of the fall.
A drop of immunity
(2021)
The manuscript describes the phytochemical investigation of the roots, leaves and stem bark of Millettia lasiantha, resulting in the isolation of twelve compounds, including two new isomeric isoflavones, lascoumestan and lascoumaronochromone. The structures of the new compounds were determined using different spectroscopic techniques.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
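A simple per-visit Monte Carlo estimator of the forward committor — the probability of hitting B before A starting from a given cell — can be computed directly from binned trajectory data, in the spirit of the method described above. This toy version uses fixed 1D bins in place of a Voronoi tessellation and omits the error bounds:

```python
def committor_estimate(trajectory, cell_of, in_A, in_B, n_cells):
    """Estimate the forward committor q+(cell) = P(reach B before A | cell)
    from one long trajectory, binned into cells of state space."""
    hits = [0] * n_cells
    visits = [0] * n_cells
    pending = []  # cells visited since the trajectory last left A or B
    for x in trajectory:
        if in_A(x) or in_B(x):
            outcome = 1 if in_B(x) else 0  # did this excursion end in B?
            for c in pending:
                visits[c] += 1
                hits[c] += outcome
            pending = []
        else:
            pending.append(cell_of(x))
    return [h / v if v else None for h, v in zip(hits, visits)]

# 1D toy: A = {x <= 0}, B = {x >= 3}, two transition cells for x in {1, 2}.
traj = [0, 1, 2, 1, 0, 1, 2, 3, 0]
q = committor_estimate(traj, cell_of=lambda x: x - 1,
                       in_A=lambda x: x <= 0, in_B=lambda x: x >= 3,
                       n_cells=2)
```

As expected, the estimated committor increases toward B: cells closer to B are more likely to see the excursion end there.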
Exendin-4 is a pharmaceutical peptide used in the control of insulin secretion. Structural information on exendin-4 and related peptides especially on the level of quaternary structure is scarce. We present the first published association equilibria of exendin-4 directly measured by static and dynamic light scattering. We show that exendin-4 oligomerization is pH dependent and that these oligomers are of low compactness. We relate our experimental results to a structural hypothesis to describe molecular details of exendin-4 oligomers. Discussion of the validity of this hypothesis is based on NMR, circular dichroism and fluorescence spectroscopy, and light scattering data on exendin-4 and a set of exendin-4 derived peptides. The essential forces driving oligomerization of exendin-4 are helix–helix interactions and interactions of a conserved hydrophobic moiety. Our structural hypothesis suggests that key interactions of exendin-4 monomers in the experimentally supported trimer take place between a defined helical segment and a hydrophobic triangle constituted by the Phe22 residues of the three monomeric subunits. Our data rationalize that Val19 might function as an anchor in the N-terminus of the interacting helix-region and that Trp25 is partially shielded in the oligomer by C-terminal amino acids of the same monomer. Our structural hypothesis suggests that the Trp25 residues do not interact with each other, but with C-terminal Pro residues of their own monomers.
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections with the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability to different cases. Therefore, this paper aims to present a concept for a distributed, interchangeable knowledge base in cyber-physical production systems (CPPS). The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
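The knowledge-exchange idea can be sketched with two nodes that merge their local knowledge bases. The merge rule here (averaging confidences on shared patterns, adopting unknown ones) is an illustrative assumption, not the paper's concrete mechanism:

```python
class Node:
    """CPPS edge node with a local knowledge base of pattern -> confidence.
    Both the representation and the merge rule are illustrative choices."""

    def __init__(self, name):
        self.name = name
        self.kb = {}

    def learn(self, pattern, confidence):
        self.kb[pattern] = confidence

    def merge_from(self, other):
        # Adopt unknown patterns and average confidences on shared ones,
        # softening each node's local specialization ("forgetting").
        for p, c in other.kb.items():
            self.kb[p] = (self.kb[p] + c) / 2 if p in self.kb else c

a, b = Node("edge-a"), Node("edge-b")
a.learn("vibration-anomaly", 0.9)   # node a has overlearned this pattern
b.learn("vibration-anomaly", 0.5)
b.learn("thermal-drift", 0.8)       # pattern unknown to node a
a.merge_from(b)
```

After the exchange, node a's extreme confidence is moderated and it gains a pattern it never observed locally, which is the adaptivity effect the paper targets.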
Over the past decades, natural hazards, many of which are aggravated by climate change and reveal an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out the first comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, improvements of flood risk mitigation and the facilitated monitoring of progress and achievements while implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start with the event impacts based on a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, within the frequency analysis of the flood-triggering circulation pattern types, it was discovered that several heavy precipitation events were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
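The hierarchical clustering step used to group events by their dominant influencing factor can be illustrated with a naive single-linkage agglomeration on a single normalized feature. The thesis works with multiple event descriptors (circulation patterns, precipitation, catchment properties); this is a minimal sketch:

```python
def single_linkage_clusters(points, threshold):
    """Naive agglomerative clustering: repeatedly merge the two closest
    clusters until the smallest inter-cluster distance exceeds threshold."""
    clusters = [[p] for p in points]

    def dist(c1, c2):
        # single linkage on a 1D feature: closest pair between clusters
        return min(abs(a - b) for a in c1 for b in c2)

    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        if dist(clusters[i], clusters[j]) > threshold:
            break
        clusters[i].extend(clusters.pop(j))
    return clusters

# e.g. five events described by one normalized precipitation feature
groups = single_linkage_clusters([0.1, 0.15, 0.9, 1.0, 0.2], threshold=0.3)
```

The events separate into a low-precipitation and a high-precipitation group, mimicking how the cluster analysis isolates the dominant flood-producing mechanism per event.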
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method for the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that floods should therefore receive more attention on the national scale. The clustering results of the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) present crucial information regarding source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units) can be recommended in order to provide first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
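The structure of such a direct loss estimate — physically damaged units multiplied by unit replacement costs and calibrated damage ratios, summed over asset classes — can be sketched as follows. All numbers are hypothetical placeholders, not the thesis's calibrated parameters for Turkey:

```python
def direct_economic_loss(damaged_units, unit_costs, damage_ratios):
    """Direct loss as the sum over asset classes of
    physically damaged units x unit replacement cost x damage ratio.
    Asset classes and all values below are illustrative assumptions."""
    return sum(damaged_units[k] * unit_costs[k] * damage_ratios[k]
               for k in damaged_units)

loss = direct_economic_loss(
    damaged_units={"houses": 120, "workplaces": 15},   # from event documentation
    unit_costs={"houses": 50_000.0, "workplaces": 200_000.0},
    damage_ratios={"houses": 0.4, "workplaces": 0.25},  # calibrated per class
)
```

Because the only event-specific input is the count of damaged units, such an estimate can be produced shortly after an event, which is the rapid-response use case highlighted above.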
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.