Recognizing, understanding, and responding to quantities are important skills for human beings. We can easily communicate quantities, and we are extremely efficient at adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols such as digits in number-related tasks. To solve tasks involving digits, we must rely on previously learned internal number representations.
This thesis examines the process of number comparison based on noisy mental representations of numbers, the interaction of number and size representations, and the strategic use of mental number representations. To this end, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger and responded with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and variance of response time for the full matrix of 72 digit pairs. In addition, the random walk model used here predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, departing from standard assumptions often made in random walk models, this account required that the step-size distributions of the induced random walks be asymmetric in order to capture this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework to investigate the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. Comparing the fits of models with linear and logarithmic mappings favors the logarithmic mapping.
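As an illustration of this model class, the following is a minimal sketch, with invented parameter values rather than the fitted ones from the study, simulating a random walk comparison with a logarithmically compressed drift and a skewed step distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def compare_digits(a, b, threshold=30.0, drift_scale=1.0,
                   asym=0.3, noise=1.0, max_steps=10_000):
    """Simulate one random-walk comparison of digits a and b.

    The drift is proportional to the difference of logarithmically
    compressed magnitudes (the log mapping favored by the thesis).
    `asym` skews the step distribution, the kind of asymmetry the
    first study needed to reproduce fast errors. All parameter
    values are illustrative, not fitted.
    """
    drift = drift_scale * (np.log(a) - np.log(b))
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        step = drift + noise * rng.normal()
        step += asym * (rng.exponential() - 1.0)  # skewed, mean-zero term
        x += step
        t += 1
    return ("a" if x > 0 else "b"), t  # response and response time (steps)

# Mean response time should shrink with numerical distance:
for pair in [(8, 7), (8, 2)]:
    times = [compare_digits(*pair)[1] for _ in range(2000)]
    print(pair, "mean RT:", round(float(np.mean(times)), 1))
```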
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, the saccadic overshoot effect: participants responded with saccadic eye movements, and the amplitude of these saccadic responses decreased with numerical distance.
For the second study, an experimental design was developed that allows us to apply signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. An open question is whether the benefit in congruent conditions (numerical magnitude vs. physical size) reflects better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases induced by number magnitude. Signal detection theory is well suited to distinguish between these two alternatives: it separates two parameters, sensitivity and response bias. Changes in sensitivity reflect actual task performance due to real differences in perceptual processes, whereas changes in response bias merely reflect strategic influences, such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the number-size congruency effect.
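For reference, the two signal detection parameters can be computed from a 2x2 outcome table in a few lines; the trial counts below are hypothetical, not data from the study:

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response bias c under the standard
    equal-variance Gaussian SDT model; separating these two
    parameters is what lets the study tell genuine perceptual
    gains apart from response biases."""
    # Log-linear style correction guards against rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts for congruent vs. incongruent trials:
print(sdt_parameters(80, 20, 15, 85))   # congruent
print(sdt_parameters(70, 30, 25, 75))   # incongruent
```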
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied, while the local transition probability of the relevant attribute (parity) and the global occurrence probability of each stimulus were kept constant. Participants were quite sensitive in recognizing the underlying local transition probability of the irrelevant attribute: a performance gain was observed for repetitions of the irrelevant attribute relative to changes of the irrelevant attribute in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
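A minimal sketch of this manipulation, generating a digit sequence with a controlled repetition probability of the irrelevant magnitude attribute (the published trial-balancing procedure is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(6)

def snarc_sequence(n_trials, p_repeat_magnitude):
    """Generate a digit sequence whose local transition probability of
    the irrelevant attribute (small vs. large magnitude) is controlled.
    Parity transitions and exact global stimulus frequencies are left
    unbalanced here; this is a simplified sketch of the manipulation,
    not the published trial-balancing procedure."""
    small, large = [1, 2, 3, 4], [6, 7, 8, 9]
    side = int(rng.integers(0, 2))  # 0 = small, 1 = large
    sequence = []
    for _ in range(n_trials):
        if rng.random() > p_repeat_magnitude:
            side = 1 - side  # magnitude switches
        sequence.append(int(rng.choice(small if side == 0 else large)))
    return sequence

print(snarc_sequence(20, p_repeat_magnitude=0.8))  # high-repetition condition
```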
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
A macro-tidal freshwater ecosystem recovering from hypereutrophication: the Schelde case study
(2009)
We report a 40-year record of eutrophication and hypoxia in an estuarine ecosystem and its recovery from hypereutrophication. After decades of high inorganic nutrient concentrations and recurring anoxia and hypoxia, we observe a paradoxical increase in chlorophyll-a concentrations with decreasing nutrient inputs. We hypothesise that algal growth was inhibited by hypereutrophication, either through elevated ammonium concentrations, severe hypoxia, or the production of harmful substances in such a reduced environment. We study the dynamics of a simple but realistic mathematical model incorporating the assumption of algal growth inhibition. It shows a high-algal-biomass, net-oxygen-production equilibrium at low ammonia inputs and a low-algal-biomass, net-oxygen-consumption equilibrium at high ammonia inputs. At intermediate ammonia inputs it displays two alternative stable states. Although not intentional, the numerical output of this model corresponds to observations, lending extra support to the assumption of algal growth inhibition. Because of potential algal growth inhibition, the recovery of hypereutrophied systems towards a classical eutrophied state will require reducing waste loads below certain thresholds and will be accompanied by large fluctuations in oxygen concentrations. We conclude that even flow-through systems, which are heavily influenced by external forcings that partly mask internal system dynamics, can display multiple stable states.
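The flavour of such a growth-inhibition model can be conveyed by a toy two-variable system; this is an illustrative sketch with invented parameters, not the published model, showing that algae-rich and algae-poor initial states can settle on different equilibria at the same intermediate input:

```python
import numpy as np
from scipy.integrate import solve_ivp

def schelde_like(t, y, I, K=1.0, Ki=2.0, h=4, mu=1.0, m=0.1, loss=0.05):
    """Toy model: ammonia N fuels algal growth (Monod term) but
    inhibits it above a threshold (Hill term). All parameters are
    invented for illustration."""
    N, A = y
    growth = mu * N / (N + K) / (1 + (N / Ki) ** h)
    dN = I - growth * A - loss * N
    dA = (growth - m) * A
    return [dN, dA]

I = 0.6  # intermediate ammonia input (arbitrary units)
for y0 in ([0.5, 2.0], [8.0, 0.01]):  # algae-rich vs. algae-poor start
    sol = solve_ivp(schelde_like, (0, 2000), y0, args=(I,), rtol=1e-8)
    print(y0, "->", np.round(sol.y[:, -1], 2))  # two alternative stable states
```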
The large literature that aims to find evidence of climate migration delivers mixed findings. This meta-regression analysis i) summarizes direct links between adverse climatic events and migration, ii) maps patterns of climate migration, and iii) explains the variation in outcomes. Using a set of limited dependent variable models, we meta-analyze the most comprehensive sample to date, 3,625 estimates from 116 original studies, and produce novel insights on climate migration. We find that extremely high temperatures and drying conditions increase migration. We do not find a significant effect of sudden-onset events. Climate migration is most likely to emerge from contemporaneous events, to originate in rural areas, and to take place in middle-income countries, internally, towards cities. The likelihood of becoming trapped in affected areas is higher for women and in low-income countries, particularly in Africa. We uniquely quantify how pitfalls typical of the broader empirical climate impact literature affect climate migration findings. We also find evidence of different publication biases.
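To illustrate the type of limited dependent variable model used, here is a sketch of a probit meta-regression on synthetic data; all variable names, coefficients, and counts are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # synthetic stand-in for the collected estimates

# Hypothetical moderators describing each reported estimate:
df = pd.DataFrame({
    "temperature_shock": rng.integers(0, 2, n),  # extreme-heat estimate?
    "sudden_onset":      rng.integers(0, 2, n),  # flood/storm estimate?
    "middle_income":     rng.integers(0, 2, n),
})
# Outcome: does the estimate indicate significantly increased migration?
latent = (0.8 * df.temperature_shock - 0.1 * df.sudden_onset
          + 0.4 * df.middle_income)
df["positive_significant"] = (latent + rng.normal(size=n) > 0.6).astype(int)

probit = sm.Probit(
    df["positive_significant"],
    sm.add_constant(df[["temperature_shock", "sudden_onset",
                        "middle_income"]])).fit(disp=0)
print(probit.params)
```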
It is well documented that strength training (ST) improves measures of muscle strength in young athletes. Less is known about transfer effects of ST on proxies of muscle power and the underlying dose-response relationships. The objectives of this meta-analysis were to quantify the effects of ST on lower-limb muscle power in young athletes and to provide dose-response relationships for ST modalities such as frequency, intensity, and volume. A systematic literature search of electronic databases identified 895 records. Studies were eligible for inclusion if (i) healthy trained children (girls aged 6–11 y, boys aged 6–13 y) or adolescents (girls aged 12–18 y, boys aged 14–18 y) were examined, (ii) ST was compared with an active control, and (iii) at least one proxy of muscle power [squat jump (SJ) and countermovement jump (CMJ) height] was reported. Weighted mean standardized mean differences (SMDwm) between subjects were calculated. Based on the findings from 15 statistically aggregated studies, ST produced significant but small effects on CMJ height (SMDwm = 0.65; 95% CI 0.34–0.96) and moderate effects on SJ height (SMDwm = 0.80; 95% CI 0.23–1.37). The sub-analyses revealed that the moderating variable expertise level (CMJ height: p = 0.06; SJ height: N/A) did not significantly influence ST-related effects on proxies of muscle power. "Age" and "sex" moderated ST effects on SJ (p = 0.005) and CMJ height (p = 0.03), respectively. With regard to the dose-response relationships, findings from the meta-regression showed that none of the included training modalities predicted ST effects on CMJ height. For SJ height, the meta-regression indicated that the training modality "training duration" significantly predicted the observed gains (p = 0.02), with longer training durations (>8 weeks) showing larger improvements. This meta-analysis clearly demonstrated the general effectiveness of ST for lower-limb muscle power in young athletes, irrespective of the moderating variables. Dose-response analyses revealed that longer training durations (>8 weeks) are more effective in improving SJ height; no such training modality was found for CMJ height. Thus, there appear to be training modalities beyond those included in our analyses that affect SJ and particularly CMJ height. ST monitoring through ratings of perceived exertion, movement velocity, or force-velocity profiles could be a promising tool for monitoring lower-limb muscle power development in young athletes.
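The effect-size computation underlying SMDwm reduces to a between-subject standardized mean difference per study plus inverse-variance weighting; below is a minimal sketch of the per-study effect size with the small-sample (Hedges) correction, using made-up jump-height numbers:

```python
import numpy as np

def smd_between(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Between-subject standardized mean difference with the usual
    small-sample (Hedges) correction; inverse-variance weighting of
    these values across studies yields an SMDwm-style summary."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample correction factor
    return j * d

# Hypothetical CMJ heights (cm) for one ST vs. active-control comparison:
print(round(smd_between(31.5, 4.0, 20, 29.0, 4.2, 19), 2))
```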
The author's recently published monograph on Alexander von Humboldt[1] describes the multiple images of this great cultural icon. The book is a metabiographical study showing how, from the middle of the nineteenth century to the present day, Humboldt has served as a nucleus of crystallisation for a variety of successive socio-political ideologies, each producing its own distinctive representation of him. The historiographical implications of this biographical diversity are profound and support current attempts to understand historical scholarship in terms of memory cultures.
The paper aims to lay out a framework for evaluating value shifts in the international legal order for the purposes of a forthcoming book. In view of current contestations, it asks whether we are observing yet another period of norm change (Wandel) or a more fundamental transformation of international law, a metamorphosis (Verwandlung). For this purpose, it proposes examining the mechanisms of norm change from the perspectives of legal and political science and approximating a reference point at which change turns into metamorphosis. It submits that such a point may be reached where specific legally protected values are indeed changing (change of legal values) or where the very idea of protecting certain values through law is renounced (delegalization of values). The paper discusses the benefits of such an interdisciplinary exchange and seeks to identify differences and commonalities between the two disciplinary perspectives.
The MOOChub is a joint web-based catalog of all relevant German and Austrian MOOC platforms that lists well over 750 Massive Open Online Courses (MOOCs). Automatically building such a catalog requires that all partners describe and publicly offer the metadata of their courses in the same way. This paper presents the genesis of the idea to establish a common metadata standard and the story of its subsequent development. The result of this effort is, first, an openly licensed de facto standard, based on existing, commonly used standards, and second, a first prototypical platform that uses this standard: the MOOChub, which lists all courses of the involved partners. This catalog is searchable and provides a comprehensive overview of essentially all MOOCs offered by German and Austrian MOOC platforms. Finally, upcoming developments to further optimize the catalog and the metadata standard are reported.
With the downscaling of CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should combine static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should synergistically address three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures remain critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models, and an optimized SET sensitivity database. It is shown that the inherent similarities in the SET response of logic cells for different input levels can be exploited to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width, and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition and explicitly express the dependence of the SET sensitivity of individual combinational cells on design, operating, and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes using LUTs to store the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
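The idea of storing fitting coefficients instead of raw LUT entries can be sketched as follows; the fitting form and all data points are invented stand-ins, not the superposition-based models of the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical characterization data for one cell and input state:
let = np.array([1, 2, 5, 10, 20, 40, 60.0])      # LET, MeV*cm^2/mg
pulse_width = 120 * np.log1p(0.08 * let) + 5     # generated width, ps

def set_width_model(let, a, b, c):
    """Illustrative fitting form for SET pulse width vs. LET; the
    thesis derives its own superposition-based expressions."""
    return a * np.log1p(b * let) + c

coeffs, _ = curve_fit(set_width_model, let, pulse_width, p0=(100, 0.1, 0))
# Store 3 coefficients per (cell, input state) instead of a full LUT row:
print("fitted coefficients:", np.round(coeffs, 3))
print("width at LET=30:", round(set_width_model(30, *coeffs), 1), "ps")
```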
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques, with which a significant improvement of the overall SER can be achieved at minimal area, delay, and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. Compared to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as no single gate-level mitigation technique offers optimal performance, a combination of multiple techniques is required. This work therefore introduces a comprehensive characterization of gate-level mitigation techniques that quantifies their impact on SET robustness as well as the area, delay, and power overhead introduced per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the effort required in the subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can serve as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows power-greedy fault-tolerant configurations based on N-modular redundancy to be activated only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse-stretching inverters is proposed. Employing pulse-stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption, and immunity to error accumulation.
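A toy model of such a detector readout, with invented geometry, cross-section, and pulse-width statistics, illustrates how the SET count tracks flux while the width distribution carries LET information:

```python
import numpy as np

rng = np.random.default_rng(2)

def detector_readout(flux, mean_width_ps, n_inverters=64, window_s=1.0,
                     cross_section=1e-3):
    """Toy readout of a parallel bank of pulse-stretching inverters:
    the SET count tracks particle flux, while the distribution of the
    stretched pulse widths carries the LET information. All numbers
    (bank size, cross-section, width statistics) are invented."""
    n_hits = rng.poisson(flux * cross_section * n_inverters * window_s)
    widths = rng.gamma(shape=4.0, scale=mean_width_ps / 4.0, size=n_hits)
    return n_hits, widths

for flux, mean_width in [(1e4, 300.0), (1e5, 600.0)]:  # low vs. high level
    n, widths = detector_readout(flux, mean_width)
    print(f"flux={flux:.0e}/cm^2/s: {n} SETs,"
          f" mean stretched width {widths.mean():.0f} ps")
```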
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to validate the achieved results with irradiation experiments.
For the elucidation of the dynamics of signal transduction processes induced by cellular interactions, defined events along the signal transduction cascade and subsequent activation steps have to be analyzed and correlated with each other. This cannot be achieved by ensemble measurements, because averaging biological data ignores the variability in timing and response patterns of individual cells and leads to highly blurred results. Only a multi-parameter analysis at the single-cell level can exploit the information that is crucially needed for deducing the signaling pathways involved. The aim of this work was to develop a process line that allows the initiation of cell-cell or cell-particle interactions while the induced cellular reactions are analyzed at various stages along the signal transduction cascade and correlated with each other. As this approach requires the gentle handling of individually addressable cells, a dielectrophoresis (DEP)-based microfluidic system was employed that allows the manipulation of microscale objects with very high spatiotemporal precision and without contacting the cell membrane. The system offers a high potential for automation and parallelization, which is essential for achieving the robustness and reproducibility required to qualify this approach for biomedical applications. T cell activation was chosen as an example process for intercellular communication. The activation of single T cells was triggered by contacting them individually with microbeads coated with antibodies directed against specific cell surface proteins, such as the T cell receptor-associated kinase CD3 and the costimulatory molecule CD28 (CD: cluster of differentiation). Stimulation of the cells with the functionalized beads led to a rapid rise of their cytosolic Ca2+ concentration, which was analyzed by dual-wavelength ratiometric fluorescence measurements of the Ca2+-sensitive dye Fura-2. After Ca2+ imaging, the cells were isolated individually from the microfluidic system and cultivated further. Cell division and expression of the marker molecule CD69, a late activation event of great significance, were analyzed the following day and correlated with the previously recorded Ca2+ traces of each individual cell. It turned out that the temporal profiles of the Ca2+ traces differed significantly between activated and non-activated cells as well as between dividing and non-dividing cells. This shows that the pattern of Ca2+ signals in T cells can provide early information about a later reaction of the cell. As isolated cells are highly delicate objects, a precondition for these experiments was the successful adaptation of the system to maintain the vitality of single cells during and after manipulation. In this context, the influences of the microfluidic environment and of the applied electric fields on cell vitality and on the cytosolic Ca2+ concentration, as crucially important physiological parameters, were thoroughly investigated. While short-term DEP manipulation did not affect the vitality of the cells, they showed irregular Ca2+ transients upon exposure to the DEP field alone. The rate and strength of these Ca2+ signals depended on exposure time, electric field strength, and field frequency. By minimizing their occurrence rate, experimental conditions were identified that caused the least interference with the physiology of the cell.
The possibility to precisely control the exact time point of stimulus application, to simultaneously analyze short-term reactions and to correlate them with later events of the signal transduction cascade on the level of individual cells makes this approach unique among previously described applications and offers new possibilities to unravel the mechanisms underlying intercellular communication.
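The dual-wavelength ratiometric Fura-2 readout used above is conventionally converted into a Ca2+ concentration via the Grynkiewicz relation; the following sketch uses placeholder calibration constants, not values from this system:

```python
def fura2_ca_concentration(f340, f380, r_min=0.3, r_max=8.0,
                           sf380_free=1.0, sf380_bound=0.15, kd_nm=224.0):
    """Convert background-corrected Fura-2 intensities (excitation at
    340 nm and 380 nm) into [Ca2+] in nM via the Grynkiewicz equation.
    All calibration values (r_min, r_max, the 380 nm scaling factors,
    and Kd) are hypothetical placeholders."""
    r = f340 / f380
    return kd_nm * (sf380_free / sf380_bound) * (r - r_min) / (r_max - r)

# Hypothetical intensity pair from one cell:
print(round(fura2_ca_concentration(1200.0, 900.0), 1), "nM")
```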
An important goal in biotechnology and (bio-)medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy, and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for the growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, appearance is a parameter that cannot be addressed, so new methods for the identification and isolation of target cells are required. Consequently, a variety of new flow-based methods that use 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Background: Medical training is very demanding and associated with a high prevalence of psychological distress. Compared to the general population, medical students are at a greater risk of developing a psychological disorder. Various attempts at stress management training in medical school have achieved positive results in minimizing psychological distress; however, these studies often have limitations. Therefore, a rigorous scientific method is needed. The present study protocol describes a randomized controlled trial to examine the effectiveness of a specifically developed mindfulness-based stress prevention training for medical students that includes selected elements of cognitive behavioral strategies (MediMind).
Methods/Design: This study protocol presents a prospective randomized controlled trial involving four assessment time points: baseline, post-intervention, one-year follow-up, and five-year follow-up. The aims include evaluating the effect on stress, coping, psychological morbidity, and personality traits with validated measures. Participants are allocated randomly to one of three conditions: MediMind, Autogenic Training, or control group. Eligible participants are medical or dental students in the second or eighth semester of a German university, a population of approximately 420 students in each academic term. A final total sample size of 126 (at five-year follow-up) is targeted. The trainings (MediMind and Autogenic Training) comprise five weekly sessions lasting 90 minutes each. MediMind will be offered to participants of the control group once the five-year follow-up is completed. Allocation is randomized and stratified by course of study, semester, and gender. After evaluation of descriptive statistics, inferential statistical analysis will be carried out with a repeated-measures ANOVA design with interactions between time and group. Effect sizes will be calculated using partial η-square values.
Discussion: Potential limitations of this study are voluntary participation and the risk of attrition, especially concerning participants that are allocated to the control group. Strengths are the study design, namely random allocation, follow-up assessment, the use of control groups and inclusion of participants at different stages of medical training with the possibility of differential analysis.
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail the appearance and stability properties of this state in possibly the simplest setup, a biharmonic Kuramoto-Daido phase model, and also demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of a splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables, such as the Kuramoto order parameter, allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
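A minimal simulation of the biharmonic Kuramoto-Daido model reads as follows; the coupling parameters are illustrative, and whether self-consistent partial synchrony actually emerges depends on their values:

```python
import numpy as np

# Biharmonic Kuramoto-Daido model with identical units; a1, a2 and the
# second-harmonic phase lag gamma are illustrative choices, not the
# parameter values analyzed in the paper.
N, dt, steps = 500, 0.01, 20_000
a1, a2, gamma = 1.0, 0.8, 1.5
rng = np.random.default_rng(3)
phi = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    z1 = np.mean(np.exp(1j * phi))   # Kuramoto order parameter
    z2 = np.mean(np.exp(2j * phi))   # second-harmonic order parameter
    coupling = (a1 * np.imag(z1 * np.exp(-1j * phi))
                + a2 * np.imag(z2 * np.exp(-2j * phi - 1j * gamma)))
    phi = (phi + dt * coupling) % (2 * np.pi)

# In self-consistent partial synchrony |z1| oscillates in time and the
# single-unit frequency differs from that of the mean field; here we
# simply report the final order parameter magnitude.
print("final |z1| =", round(abs(np.mean(np.exp(1j * phi))), 3))
```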
The unusual mix of morphological traits displayed by extinct South American native ungulates (SANUs) confounded both Charles Darwin, who first discovered them, and Richard Owen, who tried to resolve their relationships. Here we report an almost complete mitochondrial genome for the litoptern Macrauchenia. Our dated phylogenetic tree places Macrauchenia as sister to Perissodactyla, but close to the radiation of major lineages within Laurasiatheria. This position is consistent with a divergence estimate of ~66 Ma (95% credibility interval, 56.64-77.83 Ma) obtained for the split between Macrauchenia and other Panperissodactyla. Combined with their morphological distinctiveness, this evidence supports the positioning of Litopterna (possibly in company with other SANU groups) as a separate order within Laurasiatheria. We also show that, when using strict criteria, extinct taxa marked by deep divergence times and a lack of close living relatives may still be amenable to palaeogenomic analysis through iterative mapping against more distant relatives.
A model analysis of mechanisms for radial microtubular patterns at root hair initiation sites
(2016)
Plant cells have two main modes of growth for generating anisotropic structures: diffuse growth, in which whole cell walls extend in specific directions, guided by anisotropically positioned cellulose fibers; and tip growth, in which new cell wall material is added inhomogeneously at the tip of the structure. Cells are known to regulate these processes via molecular signals and the cytoskeleton. Mechanical stress has been proposed to provide an input to the positioning of the cellulose fibers via cortical microtubules in diffuse growth. In particular, a stress feedback model predicts a circumferential pattern of fibers surrounding apical tissues and growing primordia, guided by the anisotropic curvature of such tissues. In contrast, during the initiation of tip-growing root hairs, a star-like radial pattern has recently been observed. Here, we use detailed finite element models to analyze how a change in mechanical properties at the root hair initiation site can lead to star-like stress patterns, in order to understand whether a stress-based feedback model can also explain the microtubule patterns seen during root hair initiation. We show that two independent mechanisms, individually or combined, can be sufficient to generate radial patterns. In the first, new material is added locally at the position of the root hair. In the second, increased tension in the initiation area provides the mechanism. Finally, we describe how a molecular model of Rho-of-plant (ROP) GTPase activation driven by auxin can position a patch of activated ROP protein basally along a 2D root epidermal cell plasma membrane, paving the way for models in which mechanical and molecular mechanisms cooperate in the initial placement and outgrowth of root hairs.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses, the cell activates stress-specific molecules such as sigma factors. These reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes the equilibrium binding between sigma factors and core RNAP, transcription, non-specific binding to DNA, and the modulation of the availability of the molecular components.
Association of the housekeeping sigma factor with RNAP is generally favored by its abundance and its higher binding affinity to the core. To promote transcription by alternative sigma subunits, the bacterial cell reversibly modulates the transcriptional efficiency through several strategies, such as anti-sigma factors, 6S RNA, and transcriptional regulators in general (e.g., activators or inhibitors). By shifting the outcome of the sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, for overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant their in vitro measurement is in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of the increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible but also displays hypersensitivity based on the sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during this global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
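Stripped of transcription, regulators, and non-specific DNA binding, the core of such a competition model is a mass balance for free core RNAP; the following sketch uses invented dissociation constants and concentrations:

```python
from scipy.optimize import brentq

# Illustrative dissociation constants (nM) and total concentrations;
# the real E. coli numbers differ, and the thesis model additionally
# includes transcription, anti-sigma factors, 6S RNA and non-specific
# DNA binding, all omitted here.
K = {"sigma70": 1.0, "sigma38": 4.0, "sigma32": 1.0}
sigma_tot = {"sigma70": 700.0, "sigma38": 100.0, "sigma32": 30.0}
E_tot = 800.0  # core RNAP available for holoenzyme formation

def residual(E):
    """Mass balance for free core RNAP at equilibrium:
    E_tot = E + sum_i E * sigma_i_tot / (K_i + E)."""
    bound = sum(E * s / (K[name] + E) for name, s in sigma_tot.items())
    return E + bound - E_tot

E_free = brentq(residual, 1e-9, E_tot)
holoenzymes = {name: E_free * s / (K[name] + E_free)
               for name, s in sigma_tot.items()}
print({name: round(c, 1) for name, c in holoenzymes.items()})
# The housekeeping sigma wins through abundance and affinity; lowering
# E_tot or sequestering sigma70 shifts the split towards the
# alternative holoenzymes.
```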
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays from L-band radar to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promising results for measuring vertically integrated water vapour at high temporal resolution. Previous work has also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine continuous satellite monitoring network Red Argentina de Monitoreo Satelital Continuo (RAMSAC) from 1999 to 2013. CAPE is derived from version 2.0 of the ECMWF's (European Centre for Medium-Range Weather Forecasts) reanalysis (ERA-Interim) and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution functions (CCDF). Second, we separately derive the relation of rainfall to CAPE and to GNSS-IWV. Based on our distribution fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean value of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables, in the presence of each other, to extreme rainfall during the austral summer season. We find that rainfall can be characterised with higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile, based on the goodness-of-fit criterion for quantile regression. We observe different contributions of CAPE and GNSS-IWV to rainfall at each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6 h time steps. We observe an increase in both GNSS-integrated water vapour and CAPE before the rainfall event and at the time of peak rainfall. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (99th and 95th percentiles) compared to the 90th percentile at a 6-h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that GNSS-IWV and CAPE had comparable magnitudes, and we argue that both climatic variables should be considered when investigating their effect on rainfall extremes.
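The joint quantile-regression step can be sketched on synthetic data as follows; the variable ranges and the log-linear/power-law form mirror the reported relations but are not fitted to the actual RAMSAC/TRMM records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000  # synthetic 6-h records, standing in for the station data

iwv = rng.gamma(9, 4, n)      # GNSS-IWV (mm), invented distribution
cape = rng.gamma(2, 300, n)   # CAPE (J/kg), invented distribution
rain = np.exp(0.04 * iwv) * (cape / 300) ** 0.4 * rng.lognormal(0, 0.6, n)
df = pd.DataFrame({"rain": rain, "iwv": iwv, "cape": cape})

# Joint model for the 0.9 rainfall quantile: log-linear in IWV
# (exponential relation) and power-law in CAPE.
fit = smf.quantreg("np.log(rain) ~ iwv + np.log(cape)", df).fit(q=0.9)
print(fit.params)
```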
In this paper, we investigate the continuous version of modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time T is a solution of ‖F(x^δ(T)) − y^δ‖ = τδ₊ for some δ₊ > δ, and an appropriate source condition. We obtain the optimal rate of convergence.
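The role of the discrepancy principle as a stopping rule can be illustrated on a toy scalar problem; the iteration below is a simple Landweber-type flow standing in for the Runge-Kutta-type methods of the paper, with all numbers invented:

```python
# Toy nonlinear ill-posed problem F(x) = x**3 with noisy data. We run
# an explicit Euler discretization of a Landweber flow (a stand-in for
# the paper's Runge-Kutta-type methods) and stop by the discrepancy
# principle ||F(x) - y_delta|| <= tau * delta.
F = lambda x: x**3
dF = lambda x: 3 * x**2

x_true, delta, tau = 2.0, 1e-2, 1.2
y_delta = F(x_true) + delta   # noisy data with noise level delta

x, h = 1.0, 0.01              # initial guess and step size
steps = 0
while abs(F(x) - y_delta) > tau * delta:
    x -= h * dF(x) * (F(x) - y_delta)   # gradient (Landweber) step
    steps += 1
print(f"stopped after {steps} steps at x = {x:.4f}")
```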
A novel atomic beam splitter, based on the reflection of atoms from an evanescent light wave, is investigated theoretically. The intensity or frequency of the light is modulated in order to create sidebands on the reflected de Broglie wave. The weights and phases of the various sidebands are calculated using three different approaches: the Born approximation, a semiclassical path integral approach, and a numerical solution of the time-dependent Schrödinger equation. We show how this modulated mirror could be used to build practical atomic interferometers.
We consider a sequential cascade of molecular first-reaction events towards a terminal reaction centre, in which each reaction step is controlled by the diffusive motion of the particles. The model studied here represents a typical reaction setting encountered in diverse molecular biology systems, in which, e.g., signal transduction proceeds via a series of consecutive 'messengers': the first messenger has to find its respective immobile target site, triggering the launch of the second messenger; the second messenger seeks its own target site and provokes the launch of the third messenger, and so on, resembling a relay race in human competitions. For such a molecular relay race taking place in infinite one-, two- and three-dimensional systems, we find exact expressions for the probability density function of the time instant of the terminal reaction event, conditioned on preceding successful reaction events on an ordered array of target sites. The obtained expressions pertain to the most general conditions: the number of intermediate stages, the corresponding diffusion coefficients, the sizes of the target sites, the distances between them, and their reactivities are all arbitrary.
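For the one-dimensional case the relay-race setting is easy to simulate, since the first-passage time of a Brownian particle to a point target at distance L can be sampled exactly; the sketch below ignores finite target sizes and partial reactivities, and all distances are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def relay_first_reaction_time(distances, D=1.0):
    """Monte Carlo sketch of a 1d molecular relay race: each messenger
    diffuses (diffusivity D, convention <x^2> = 2*D*t) from its launch
    point to a point target a distance L away, and the stage times add.
    In 1d the first-passage time to a point at distance L is
    L**2 / (2*D*Z**2) with Z ~ N(0, 1). The paper's exact densities
    additionally cover 2d/3d, finite targets, and arbitrary
    reactivities, which this sketch omits."""
    z = rng.normal(size=len(distances))
    return float(np.sum(np.asarray(distances) ** 2 / (2 * D * z ** 2)))

# Three-stage relay with hypothetical inter-target distances:
samples = [relay_first_reaction_time([1.0, 2.0, 0.5]) for _ in range(10_000)]
# The mean first-passage time diverges in 1d, so report the median:
print("median terminal-reaction time:", round(float(np.median(samples)), 2))
```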
In an attempt to pave the way for more extensive Computer Science Education (CSE) coverage in K-12, this research developed and made a preliminary evaluation of a blended-learning Introduction to CS program based on an academic MOOC. Using an academic MOOC that is pedagogically effective and engaging, such a program may provide teachers with disciplinary scaffolds and allow them to focus their attention on enhancing students' learning experience and nurturing critical 21st-century skills such as self-regulated learning. As we demonstrate, this enabled us to introduce an academic-level course to middle-school students. In this research, we developed the principles and initial version of such a program, targeting ninth-graders in science-track classes who learn CS as part of their standard curriculum. We found that the middle-schoolers who participated in the program achieved academic results on par with undergraduate students taking this MOOC for academic credit. Participating students also developed a more accurate perception of the essence of CS as a scientific discipline. The unplanned school closure due to the COVID-19 pandemic outbreak challenged the research but underlined the advantages of such a MOOC-based blended learning program over classic pedagogy in times of global or local crises that lead to school closure. While most of the science-track classes seemed to stop learning CS almost entirely, and the end-of-year MoE exam was discarded, the program's classes moved smoothly to remote learning mode, and students continued to study at a pace similar to that experienced before the school shutdown.
Investigation of the processes that contribute to the maintenance of genomic stability is crucial for understanding the mechanisms that facilitate ageing. The DNA damage response (DDR) and DNA repair mechanisms safeguard the integrity of DNA and prevent the accumulation of persistent DNA damage. Among them, base excision repair (BER) plays a decisive role: BER is the major repair pathway for small oxidative base modifications and apurinic/apyrimidinic (AP) sites. We established a highly sensitive non-radioactive assay to measure BER incision activity in murine liver samples. Incision activity can be assessed towards the three DNA lesions 8-oxo-2'-deoxyguanosine (8-oxodG), 5-hydroxy-2'-deoxyuracil (5-OHdU), and an AP site analogue. We applied the established assay to livers of adult and old mice of both sexes. Furthermore, poly(ADP-ribosyl)ation (PARylation), an important determinant in DDR and BER, was assessed. Additionally, DNA damage levels were measured to examine overall damage levels. No impact of ageing on the investigated endpoints in liver tissue was found. However, animal sex appears to be a significant factor, as evident from sex-dependent alterations in all endpoints investigated. Moreover, our results revealed interrelationships between the investigated endpoints, indicative of the synergistic mode of action of the cellular DNA-integrity-maintaining machinery.
Advanced mechatronic systems have to integrate existing technologies from mechanical, electrical, and software engineering. They must be able to adapt their structure and behavior at runtime by reconfiguration in order to react flexibly to changes in the environment. Therefore, a tight integration of the structural and behavioral models of the different domains is required. This integration results in complex reconfigurable hybrid systems, the execution logic of which cannot be addressed directly with existing standard modeling, simulation, and code-generation techniques. We present in this paper how our component-based approach for reconfigurable mechatronic systems, MechatronicUML, efficiently handles the complex interplay of discrete and continuous behavior in a modular manner. In addition, its extension to even more flexible reconfiguration cases is presented.
A multi-reference study of the byproduct formation for a ring-closed dithienylethene photoswitch
(2015)
Photodriven molecular switches are sometimes hindered in their performance by the formation of byproducts, which act as dead ends in sequences of switching cycles and lead to rapid fatigue. Understanding the reaction pathways to unwanted byproducts is a prerequisite for preventing them. This article presents a study of the photochemical reaction pathways for byproduct formation in the photochromic switch 1,2-bis-(3-thienyl)-ethene. Specifically, using single- and multi-reference methods, the post-deexcitation reaction towards the byproduct in the electronic ground state S0, starting from the S1-S0 conical intersection (CoIn), is considered in detail. We find an unusual low-energy pathway that offers the possibility of forming a dyotropic byproduct. Several high-energy pathways can be excluded with high probability.
Knowledge of the contemporary in situ stress state is a key issue for safe and sustainable subsurface engineering. However, information on the orientation and magnitudes of the stress state is limited and often not available for the areas of interest. Therefore, 3-D geomechanical-numerical modelling is used to estimate the in situ stress state and the distance of faults from failure for applications in subsurface engineering. The main challenge in this approach is to bridge the gap in scale between the widely scattered data used for calibration of the model and the high resolution required in the target area for the application. We present a multi-stage 3-D geomechanical-numerical approach that provides a state-of-the-art model of the stress field for a reservoir-scale area from widely scattered data records. We first use a large-scale regional model that is calibrated by available stress data and provides the full 3-D stress tensor at discrete points in the entire model volume. The modelled stress state is then used for the calibration of a smaller-scale model located within the large-scale model in an area without any observed stress data records. We exemplify this approach with two stages for the area around Munich in the German Molasse Basin. As an example of application, we estimate the scalar values of slip tendency and fracture potential from the model results as measures of the criticality of fault reactivation in the reservoir-scale model. The modelling results show that variations due to uncertainties in the input data are mainly introduced by the uncertain material properties and by the missing SHmax magnitude estimates needed for a more reliable model calibration. This leads to the conclusion that, at this stage, the model's reliability depends only on the amount and quality of available stress information rather than on the modelling technique itself or on local details of the model geometry. Any improvement in model reliability can only be achieved using more high-quality data for calibration.
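Slip tendency, one of the two criticality measures mentioned, is the ratio of resolved shear to normal stress on a fault plane and follows directly from the stress tensor; the numbers below are hypothetical:

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency T_s = tau / sigma_n on a fault plane.

    `stress` is the 3x3 stress tensor (compression positive) and
    `normal` the unit normal of the fault plane; the traction vector
    is split into its normal and shear components."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    traction = stress @ n
    sigma_n = traction @ n                         # normal stress
    tau = np.linalg.norm(traction - sigma_n * n)   # resolved shear stress
    return tau / sigma_n

# Hypothetical principal stress state (MPa) and a steeply dipping fault:
S = np.diag([60.0, 45.0, 30.0])
n = np.array([np.sin(np.radians(70)), 0.0, np.cos(np.radians(70))])
print("slip tendency:", round(slip_tendency(S, n), 2))
```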
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, this study develops an analytical framework that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework focuses on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models of participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives, and content. This paper aims to make a valuable contribution to the opening up of learning and teaching and expands the discourse around possibilities for interpreting Open Educational Practices.
Anthropogenic changes in climate, land use, and disturbance regimes, as well as introductions of non-native species, can lead to the transformation of many ecosystems. The resulting novel ecosystems are usually characterized by species assemblages that have not occurred previously in a given area. Quantifying the ecological novelty of communities (i.e., biotic novelty) would enhance our understanding of environmental change. However, quantification remains challenging, since current novelty metrics, such as the number and/or proportion of non-native species in a community, fall short of considering both functional and evolutionary aspects of biotic novelty. Here, we propose the Biotic Novelty Index (BNI), an intuitive and flexible multidimensional measure that combines (a) functional differences between native and introduced species with (b) the temporal dynamics of species introductions. We show that the BNI is an additive partition of Rao's quadratic entropy, capturing the novel-interaction component of the community's functional diversity. Simulations show that the index varies predictably with the relative amount of functional novelty added by recently arrived species, and they illustrate the need for an additional standardized version of the index. We present detailed R code and two applications of the BNI: (a) measuring changes in the biotic novelty of dry grassland plant communities along an urbanization gradient in a metropolitan region, and (b) determining the biotic novelty of plant species assemblages at a national scale. The results illustrate the applicability of the index across scales and its flexibility in the use of data of different quality. Both case studies revealed strong connections between biotic novelty and increasing urbanization, a measure of abiotic novelty. We conclude that the BNI framework may help build a basis for a better understanding of the ecological and evolutionary consequences of global change.
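The quantities involved can be sketched in a few lines (the paper itself provides R code; this Python sketch with an invented toy community only illustrates Rao's quadratic entropy and its novel-interaction part, not the full time-weighted BNI):

```python
import numpy as np

def rao_q(p, d):
    """Rao's quadratic entropy Q = sum_ij p_i p_j d_ij for relative
    abundances p and a pairwise functional dissimilarity matrix d."""
    p = np.asarray(p, float)
    p = p / p.sum()
    return float(p @ np.asarray(d) @ p)

# Toy community: species 0-2 long-established, species 3-4 recently
# introduced; abundances and dissimilarities are invented.
p = np.array([0.4, 0.2, 0.2, 0.1, 0.1])
d = np.array([[0.00, 0.20, 0.30, 0.80, 0.70],
              [0.20, 0.00, 0.25, 0.75, 0.80],
              [0.30, 0.25, 0.00, 0.70, 0.60],
              [0.80, 0.75, 0.70, 0.00, 0.30],
              [0.70, 0.80, 0.60, 0.30, 0.00]])
recent = np.array([False, False, False, True, True])

q_total = rao_q(p, d)
# Novel-interaction component: only pairs linking an established
# species with a recent arrival contribute, which is the additive
# part of Q the BNI builds on.
novel_pairs = np.logical_xor.outer(recent, recent)
q_novel = float(p @ (d * novel_pairs) @ p)
print(round(q_total, 3), round(q_novel, 3), round(q_novel / q_total, 3))
```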
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood–Salsburg equations, the Dobrushin contraction principle, and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed-point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure is especially attractive for target substances that are costly or available only in small amounts. The AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be used for experiments in air or in liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and with the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes can be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile, and cost-efficient frontloading and cleaning procedure for the repeated use of a single AFM micropipette is beneficial for various applications, from specific surface modifications to the local manipulation of living cells, and provides simplified and faster handling for established fluid force microscopy experiments.
A feasible approach to constructing multilayer films of sulfonated polyanilines, PMSA1 and PABMSA1, containing different ratios of aniline, 2-methoxyaniline-5-sulfonic acid (MAS), and 3-aminobenzoic acid (AB), with the entrapped redox enzyme pyrroloquinoline quinone-dependent glucose dehydrogenase (PQQ-GDH), on Au and ITO electrode surfaces is described. The formation of the layers has been followed and confirmed by electrochemical impedance spectroscopy (EIS), which demonstrates that the multilayer assembly can be achieved in a progressive and uniform manner. The gold and ITO electrodes modified with PMSA1:PQQ-GDH and PABMSA1 films were studied by cyclic voltammetry (CV) and UV-Vis spectroscopy, which show a significant direct bioelectrocatalytic response to the oxidation of the substrate glucose without any additional mediator. This response correlates linearly with the number of deposited layers. Furthermore, the constructed polymer/enzyme multilayer system exhibits rather good long-term stability, since the catalytic current response retains more than 60% of its initial value even after two weeks of storage. This verifies that the productive interaction of the enzyme embedded in films of substituted polyaniline can be used as a basis for the construction of bioelectronic units, which are useful as indicators for glucose-liberating processes and allow both optical and electrochemical transduction.
This study explores sociometric status group differences in psychosocial adjustment and academic performance across various domains, using multiple sources of information (teacher, peer, and self-ratings, plus achievement data) and two age groups (elementary and secondary school students) in a different educational and cultural context. Gender differences in the profiles of the sociometric groups were also examined. The sample consisted of 1,041 elementary school (mean age = 11.4 years) and 862 secondary school (mean age = 14.3 years) students in public schools in Greece. The findings extended previous descriptions of the rejected, neglected, and controversial groups based on the perceptions of all raters. Gender and age differences were found in the profiles of the rejected and controversial groups, which were markedly distinguished from the other groups across all data sets. Neglected children at both age levels were differentiated to a weaker degree.
Additive Manufacturing (AM) by laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and therefore enables the production of lattice structures. Such lattice structures are intended for various industrial applications (e.g. gas turbines), for reasons of material savings or for cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by L-PBF was investigated non-destructively using a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the building blocks of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, revealing their complementary information but also the need for AM-specific surface parameters.
The mechanical behaviour of the lattice structure was investigated by in-situ CT under compression with subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated; the lattice therefore folds layer by layer at the unit-cell level.
The residual stress in such lattice structures was determined experimentally for the first time. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significantly uniaxial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven measurement directions were needed to obtain reliable principal stress directions.
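For context, the standard route from measured lattice spacings to residual stresses (textbook relations, not specific to this work) is via the diffraction strain and the generalized Hooke's law for the principal components:

\[
\varepsilon_i = \frac{d_i - d_0}{d_0}, \qquad \sigma_i = \frac{E}{(1+\nu)(1-2\nu)} \big[ (1-\nu)\,\varepsilon_i + \nu\,(\varepsilon_j + \varepsilon_k) \big],
\]

where d_0 is the stress-free lattice spacing, E is Young's modulus and ν is the Poisson ratio. Since the full stress tensor has six independent components, more than six measured directions are needed for a redundant, reliable fit of the principal directions, consistent with the seven directions reported above.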
Background: Cysteine is a component of organic compounds, including glutathione, that have been implicated in the adaptation of plants to stresses. O-acetylserine (thiol) lyase (OAS-TL) catalyses the final step of cysteine biosynthesis. OAS-TL enzyme isoforms are localised in the cytoplasm, the plastids and the mitochondria, but the contribution of the individual OAS-TL isoforms to plant sulphur metabolism has not yet been fully clarified.
Results: The seedling-lethal phenotype of the Arabidopsis onset of leaf death3-1 (old3-1) mutant is due to a point mutation in the OAS-A1 gene, encoding the cytosolic OAS-TL. The mutation causes a single amino acid substitution from Gly(162) to Glu(162), abolishing old3-1 OAS-TL activity in vitro. The old3-1 mutation segregates as a monogenic semi-dominant trait when backcrossed to its wild-type accession Landsberg erecta (Ler-0) and to the Di-2 accession. Consistent with its semi-dominant behaviour, wild-type Ler-0 plants transformed with the mutated old3-1 gene displayed the early leaf death phenotype. However, the old3-1 mutation segregates in an 11:4:1 (wild type : semi-dominant : mutant) ratio when backcrossed to the Columbia-0 and Wassilewskija accessions. Thus, the early leaf death phenotype depends on two semi-dominant loci. The second locus that determines the old3-1 early leaf death phenotype is referred to as odd-ler (for old3 determinant in the Ler accession) and is located on chromosome 3. The early leaf death phenotype is temperature dependent and is associated with increased expression of defence-response and oxidative-stress marker genes. Independently of the presence of the odd-ler gene, OAS-A1 is involved in maintaining sulphur and thiol levels and is required for resistance against cadmium stress.
Conclusions: The cytosolic OAS-TL is involved in maintaining organic sulphur levels. The old3-1 mutation causes genome-dependent and genome-independent phenotypes and uncovers a novel function for the mutated OAS-TL in cell death regulation.
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant fraction of this research concerns oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The scarcity of related publications does not reflect their potential but rather the difficulties of synthesising them as dense and defect-free structures, a fundamental prerequisite for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier subsequent processing. Despite their simplicity and reliability, the designed synthetic processes allow the production of functional materials with the desired size and morphology.
This goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control the chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), the size (from 5 to 50 nm) and the morphology (from mere spherical nanoparticles to rod-like shapes, fibers, layers, and mesoporous and hierarchical structures). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and also allowing dispersion in liquid media. Control over composition, size and morphology is achieved by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal-ion-N/C precursor network, which converts, upon heat treatment, into an intermediate “glass”. This glass stabilizes the nascent nanoparticles during their nucleation and hinders their uncontrolled growth during the heat treatment (Scheme 1). In this way, one of the main problems in the synthesis of MN/MC, i.e. the need for very high temperatures, could also be overcome (from up to 2000°C for classical syntheses down to 700°C in the present cases). The synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields: in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%); as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, similar to the value expected for the bulk material); as filters and for the degradation of organic dyes (outmatching the performance of carbon); and as catalysts (both as the active phase and as an active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is made straightforward, any further modification, combination or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids).
The polar and subtropical jet streams are strong upper-level winds with a crucial influence on weather throughout the Northern Hemisphere midlatitudes. In particular, the polar jet is located between cold arctic air to the north and warmer subtropical air to the south. Strongly meandering states therefore often lead to extreme surface weather.
Several algorithms exist that can detect the 2-D (latitude and longitude) jet cores around the hemisphere, but all of them use a minimum wind-speed threshold to identify the subtropical and polar jet streams. This is particularly problematic for the polar jet stream, whose wind velocities can change rapidly from very weak to very high values and vice versa.
We develop a network-based scheme using Dijkstra's shortest-path algorithm to detect the polar and subtropical jet stream cores. This algorithm not only considers the commonly used wind strength for core detection but also takes wind direction and climatological latitudinal position into account. Furthermore, it distinguishes between the polar and the subtropical jet, and between separate and merged jet states.
The parameter values of the detection scheme are optimized using simulated annealing and a skill function that accounts for the zonal-mean jet stream position (Rikus, 2015). After the successful optimization process, we apply our scheme to reanalysis data covering 1979-2015 and calculate seasonal-mean probabilistic maps and trends in wind strength and position of jet streams.
We present longitudinally resolved probability distributions of the positions of both jets for all Northern Hemisphere seasons. These show that winter is characterized by two well-separated jets over Europe and Asia (ca. 20 degrees W to 140 degrees E). In contrast, summer normally has a single merged jet over the western hemisphere but can show both merged and separated jet states in the eastern hemisphere.
With this algorithm it is possible to track the position of the jet cores around the hemisphere; it is therefore well suited to analyzing jet stream patterns in observations and models, enabling more advanced model validation.
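To make the core-detection idea concrete, here is a minimal sketch in Python (illustrative only: the cost weights, grid handling and function name are assumptions, and the published scheme additionally separates the polar from the subtropical jet and handles merged states):

import heapq
import numpy as np

def jet_core_path(u, v, lats, clim_lat, w_speed=1.0, w_dir=1.0, w_lat=1.0):
    """Trace a jet core west-to-east across a (nlat, nlon) wind field.
    u, v are the zonal/meridional wind components, lats the grid latitudes,
    clim_lat a climatological jet latitude used as a soft prior. The w_*
    weights are free parameters; in the study such parameters are tuned by
    simulated annealing, here they are just illustrative defaults."""
    nlat, nlon = u.shape
    speed = np.hypot(u, v)
    # Cost per grid point: weak winds, deviation from westerly flow and
    # distance from the climatological latitude are all penalized.
    cost = (w_speed * (speed.max() - speed) / speed.max()
            + w_dir * np.abs(np.arctan2(v, u))
            + w_lat * np.abs(lats[:, None] - clim_lat) / 90.0)
    dist = np.full((nlat, nlon), np.inf)
    prev = {}
    pq = [(cost[i, 0], (i, 0)) for i in range(nlat)]  # start on the west edge
    for c, node in pq:
        dist[node] = c
    heapq.heapify(pq)
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue  # stale queue entry
        if j == nlon - 1:  # reached the east edge: reconstruct the path
            path = [(i, j)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        for di in (-1, 0, 1):  # step one longitude east, at most one latitude
            ni, nj = i + di, j + 1
            if 0 <= ni < nlat and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                prev[(ni, nj)] = (i, j)
                heapq.heappush(pq, (dist[ni, nj], (ni, nj)))
    return None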
Quantified Boolean formulas (QBFs) play an important role in theoretical computer science. QBF extends propositional logic in such a way that many advanced forms of reasoning can be easily formulated and evaluated. In this dissertation we present ZQSAT, an algorithm for evaluating quantified Boolean formulas. ZQSAT is based on ZBDDs (Zero-Suppressed Binary Decision Diagrams), a variant of BDDs, and an adapted version of the DPLL algorithm. It has been implemented in C using the CUDD (Colorado University Decision Diagram) package. The capability of ZBDDs to store sets of subsets efficiently enabled us to store the clauses of a QBF very compactly and to embed memoization into the DPLL algorithm. This let us implement the search so that the results of all previously solved subformulas are stored and reused with little overhead. ZQSAT can solve some sets of standard QBF benchmark problems (known to be hard for DPLL-based algorithms) faster than the best existing solvers. In addition to prenex-CNF, ZQSAT accepts prenex-NNF formulas; we show and prove how this capability can be exponentially beneficial.
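The memoization idea can be illustrated with a toy DPLL-style evaluator for prenex-CNF QBFs (a sketch only: in ZQSAT the clause sets live in a ZBDD, which makes caching of simplified subformulas nearly free, whereas here Python's lru_cache stands in for it):

from functools import lru_cache

def solve_qbf(prefix, clauses):
    """prefix: tuple of (quantifier, variable) pairs, e.g. (('a', 1), ('e', 2))
    with 'a' = forall and 'e' = exists; clauses: set of frozensets of
    integer literals (negative = negated). Returns True iff the QBF holds."""
    def assign(cls, lit):
        # Drop satisfied clauses, shorten clauses containing the negation.
        out = set()
        for c in cls:
            if lit in c:
                continue
            if -lit in c:
                c = c - {-lit}
                if not c:
                    return None  # empty clause: this branch is false
            out.add(c)
        return frozenset(out)

    @lru_cache(maxsize=None)  # memoize results of solved subformulas
    def rec(depth, cls):
        if cls is None:
            return False
        if not cls:
            return True
        if depth == len(prefix):
            return False  # only reachable for non-closed formulas
        q, var = prefix[depth]
        branches = (rec(depth + 1, assign(cls, var)),
                    rec(depth + 1, assign(cls, -var)))
        return all(branches) if q == 'a' else any(branches)

    return rec(0, frozenset(clauses))

For instance, solve_qbf((('a', 1), ('e', 2)), {frozenset({1, 2}), frozenset({-1, -2})}) evaluates ∀x∃y (x ∨ y) ∧ (¬x ∨ ¬y) and returns True.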
Wheat is one of the most consumed foods in the world and unfortunately causes allergic reactions with important health effects. The α-amylase/trypsin inhibitors (ATIs) have been identified as potentially allergenic components of wheat. Due to a lack of data on the optimization of ATI extraction, a new wheat ATI extraction approach combining solvent extraction and selective precipitation is proposed in this work. Two wheat cultivars (Triticum aestivum L.), Julius and Ponticus, were used, and parameters such as solvent type, extraction time, temperature, stirring speed, salt type, salt concentration, buffer pH and centrifugation speed were analyzed using the Plackett-Burman design. Salt concentration, extraction time and pH appeared to have significant effects on the recovery of ATIs (p < 0.01). In both wheat cultivars, Julius and Ponticus, ammonium sulfate substantially reduced protein concentration and inhibition of amylase activity (IAA) compared to sodium chloride. The optimal conditions, with desirability levels of 0.94 and 0.91 according to the Doehlert design, were salt concentrations of 1.67 and 1.22 M, extraction times of 53 and 118 min, and pH values of 7.1 and 7.9 for Julius and Ponticus, respectively. The corresponding responses were protein concentrations of 0.31 and 0.35 mg and IAAs of 91.6 and 83.3%. Electrophoresis and MALDI-TOF/MS analysis showed that the extracted ATIs had masses between 10 and 20 kDa. Based on the initial LC-MS/MS analysis, up to 10 individual ATIs were identified in the proteins extracted under the optimal conditions. The positive implication of the present study lies in the quick assessment of ATI content in different varieties, especially in view of their allergenic potential.
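To illustrate the screening step, here is a minimal sketch of a Plackett-Burman main-effect analysis (hedged: the generator row is the standard 12-run design, but the factor assignment, responses and function names are illustrative, and the study's actual statistical analysis may differ):

import numpy as np

# 12-run Plackett-Burman design: 11 cyclic shifts of the standard
# generator row plus a final all-minus run; columns are factors.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
design = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, int)])

def main_effects(y, n_factors=8):
    """Estimate factor main effects from the responses y of the 12 runs,
    e.g. protein concentration or inhibition of amylase activity (IAA)."""
    X = design[:, :n_factors]
    y = np.asarray(y, float)
    return np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                     for j in range(n_factors)])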
This paper presents a new methodology for examining the phenomenon of subitizing. Subjects were presented with a standard numerosity-detection task but for a range of presentation times to allow Task-Accuracy Functions to be computed for individual subjects. The data appear to show a continuous change in processing for numerosities from 2 to 5 when the data are aggregated across subjects. At the level of individual subjects, there appear to be qualitative shifts in enumeration processing after 3 or 4 objects. The approach used in this experiment may be used to test the claim that subitizing is a distinct enumeration process that can be used for small numbers of objects.
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters with material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley equation commonly used to describe current-voltage (JV) curves, as it assumes a high electrical conductivity of the charge-transporting material. Here, an analytical expression for the JV curves of organic solar cells is derived based on a previously published analytical model. This expression, with a functional dependence similar to that of the Shockley equation, delivers a new figure of merit α expressing the balance between free-charge recombination and extraction in low-mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
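For reference, the classical Shockley equation mentioned above reads, in its ideal form,

\[
J(V) = J_0 \left[ \exp\!\left( \frac{qV}{n k_B T} \right) - 1 \right] - J_{\mathrm{ph}},
\]

with dark saturation current density J_0, ideality factor n and photocurrent density J_ph. Its derivation presumes that the photoactive material is conductive enough that the photogenerated carriers do not disturb the internal field, which is precisely the assumption that fails in low-mobility organic absorbers; the explicit form of the new figure of merit α is given in the article itself.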
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
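For a single event duration, the two indices can be sketched as follows (a schematic, not the published definition: the square-root area weighting, the normalisation and the function name are assumptions, and the full method evaluates several durations before maximising or integrating across both dimensions):

import numpy as np

def wei_xwei(return_periods, cell_area_km2):
    """return_periods: per-cell return periods (years) of the event,
    already estimated from extreme value fits. Extremeness at spatial
    scale n is taken as the mean log return period of the n most extreme
    cells, weighted by the square root of the covered area; WEI takes
    the maximum over scales, xWEI integrates over them."""
    rt = np.sort(np.asarray(return_periods, float))[::-1]  # most extreme first
    n = np.arange(1, rt.size + 1)
    extremeness = np.cumsum(np.log(rt)) / n * np.sqrt(n * cell_area_km2)
    wei = extremeness.max()                    # most extreme single scale
    xwei = np.trapz(extremeness, n) / rt.size  # extremeness across all scales
    return wei, xwei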
A New Kind of Jew
(2018)
The article examines Allen Ginsberg’s spiritual path and places his interest in Asian religions within larger cultural agendas and life choices. While identifying as a Jew, Ginsberg wished to transcend his parents’ orbit and actively sought to create an inclusive, tolerant, and permissive society in which persons such as himself could live and create at ease. He chose elements from the Christian, Jewish, Native-American, Hindu, and Buddhist traditions, weaving them together into an ever-growing cultural and spiritual quilt. The poet never underwent a conversion experience or restricted his choices and freedoms. In Ginsberg’s understanding, Buddhism was a universal, non-theistic religion that meshed well with an individualist outlook and worked toward personal solace and mindfulness. He and other Jews saw no contradiction between enchantment with Buddhism and their Jewish identity.
Metagenomic sequencing has revolutionised our knowledge of virus diversity, with new virus sequences being reported faster than ever before. However, virus discovery from metagenomic sequencing usually depends on detectable homology: without a sufficiently close relative, so-called ‘dark’ virus sequences remain unrecognisable. An alternative approach is to use virus-identification methods that do not depend on detecting homology, such as virus recognition by host antiviral immunity. For example, virus-derived small RNAs have previously been used to propose ‘dark’ virus sequences associated with the Drosophilidae (Diptera). Here, we combine published Drosophila data with a comprehensive search of transcriptomic sequences and selected meta-transcriptomic datasets to identify a completely new lineage of segmented positive-sense single-stranded RNA viruses that we provisionally refer to as the Quenyaviruses. Each of the five segments contains a single open reading frame, with most encoding proteins showing no detectable similarity to characterised viruses, and one sharing a small number of residues with the RNA-dependent RNA polymerases of single- and double-stranded RNA viruses. Using these sequences, we identify close relatives in approximately 20 arthropods, including insects, crustaceans, spiders, and a myriapod. Using a more conserved sequence from the putative polymerase, we further identify relatives in meta-transcriptomic datasets from gut, gill, and lung tissues of vertebrates, reflecting infections of vertebrates or of their associated parasites. Our data illustrate the utility of small RNAs to detect viruses with limited sequence conservation, and provide robust evidence for a new deeply divergent and phylogenetically distinct RNA virus lineage.
We suggest several ideas which when combined could lead to a new mechanism for long-term pulsations of very hot and luminous stars. These involve the interplay between convection, radiation, atmospheric clumping and winds, which collectively feed back to stellar expansion and contraction. We discuss these ideas and point out the future work required in order to fill in the blanks.
The DNA origami technique has great potential for the development of brighter and more sensitive reporters for fluorescence-based detection schemes, such as microbead-based assays in diagnostic applications. The nanostructures can be programmed to include multiple dye molecules to enhance the measured signal, as well as multiple probe strands to increase the binding strength of the target oligonucleotide to these nanostructures. Here we present a proof-of-concept study for quantifying short oligonucleotides with a novel DNA origami based reporter system combined with planar microbead assays. Analysis of the assays using the VideoScan digital imaging platform showed DNA origami to be a more suitable reporter candidate for quantification of the target oligonucleotides at lower concentrations than a conventional reporter consisting of one dye molecule attached to a single-stranded DNA. Efforts have been made to conduct multiplexed analysis of different targets as well as to enhance the fluorescence signals obtained from the reporters. We therefore believe that short oligonucleotides present in low copy numbers are better quantified with DNA origami nanostructures as reporters.
Monoclonal antibodies are used worldwide as highly potent and efficient detection reagents for research and diagnostic applications. Nevertheless, the specific targeting of complex antigens such as whole microorganisms remains a challenge. To provide a comprehensive workflow, we combined bioinformatic analyses with novel immunization and selection tools to design monoclonal antibodies for the detection of whole microorganisms. In our initial study, we used the human pathogenic strain E. coli O157:H7 as a model target and identified 53 potential protein candidates using reverse vaccinology methodology. Five different peptide epitopes were selected for immunization using epitope-engineered viral proteins. The identification of antibody-producing hybridomas was performed using a novel screening technology based on transgenic fusion cell lines: via an artificial cell surface receptor expressed by all hybridomas, the desired antigen-specific cells can be sorted quickly and efficiently out of the fusion cell pool. Selected antibody candidates were characterized and showed strong binding to the target strain E. coli O157:H7 with minor or no cross-reactivity to other relevant microorganisms such as Legionella pneumophila and Bacillus spp. This approach could serve as a highly efficient workflow for the generation of antibodies against microorganisms.
We consider a Cauchy problem for the heat equation in a cylinder X x (0,T) over a domain X in n-dimensional space, with data on a strip lying on the lateral surface. The strip is of the form S x (0,T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S we derive an explicit formula for solutions of this problem.
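In formulas, the problem is of the following lateral Cauchy type (writing u for the temperature; prescribing both Dirichlet and Neumann data on the strip is the usual formulation of such Cauchy problems, with ∂_ν the outward normal derivative):

\[
\partial_t u - \Delta u = 0 \ \text{ in } X \times (0,T), \qquad u = f, \quad \partial_\nu u = g \ \text{ on } S \times (0,T).
\]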
We present the first physical characterization of the young open cluster VVVCL041. We spectroscopically observed the cluster's main-sequence stellar population and a very massive star candidate, WR62-2. CMFGEN modelling of our near-infrared spectra indicates that WR62-2 is a very luminous (10^(6.4±0.2) L⊙) and massive (∼80 M⊙) star.
We consider the problem of testing whether the density of a multivariate random variable can be expressed by a prespecified copula function and the marginal densities. The proposed test procedure is based on the asymptotic normality of the properly standardized integrated squared distance between a multivariate kernel density estimator and an estimator of its expectation under the hypothesis. The test of independence is a special case of this approach.
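In symbols, with H_0: f(x) = c(F_1(x_1), \dots, F_d(x_d)) \prod_j f_j(x_j) for the prespecified copula density c, the statistic is of the form

\[
T_n = \int \Big( \hat f_n(x) - \widehat{\mathbb{E}}_{H_0} \hat f_n(x) \Big)^2 \, dx,
\]

where \hat f_n is a multivariate kernel density estimator and the expectation under H_0 is estimated by plugging kernel estimates of the marginals into c; after proper centering and scaling, T_n is asymptotically normal. (This restates the construction above in generic notation; the exact standardization is given in the paper.)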
The paper presents a method that determines, by standard numerical means, the type of mutual relation of fold and flip bifurcations (configured as a so-called communication area) of a map. Equation systems are developed for the computation of points where a transition between areas of different types occurs. Furthermore, it is shown that saddle-area <-> spring-area transitions can exist which have not yet been considered in the literature. Analytical conditions for this transition are derived.
In the past, floods were managed essentially by flood control mechanisms. The focus was on the reduction of the flood hazard; the potential consequences were of minor interest. Nowadays river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important this approach is of limited use: assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Moreover, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability when deriving flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities can be considered and flood risk can be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain appropriate for the large scale and based on long-term, continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, in which the synthetic discharge time series is used as input to flood impact models and flood risk is derived directly from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible with the observed data available for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of ‘derived flood risk based on continuous simulations’. For this, RFM was driven by long-term synthetic meteorological input data generated by a weather generator: a virtual climate time series of 100 x 100 years was generated and served as input to RFM, providing 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and the expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analysis, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk in which the spatial variability of occurrence probability is respected and data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes like storage effects, flood attenuation or channel-floodplain interactions and the related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and the consequently large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as in traditional approaches, are thus bypassed.
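The final step, deriving risk statements directly from the synthetic damage series, can be sketched as follows (illustrative: the plotting-position estimator and the function name are assumptions, not necessarily what the thesis uses):

import numpy as np

def risk_curve(annual_damage):
    """Empirical flood risk curve and expected annual damage (EAD) from a
    long synthetic series of annual damages, e.g. the 100 x 100 years
    produced by the RFM simulations described above."""
    dmg = np.sort(np.asarray(annual_damage, float))[::-1]  # largest first
    m = dmg.size
    exceedance_prob = np.arange(1, m + 1) / (m + 1.0)  # Weibull plotting positions
    return_period = 1.0 / exceedance_prob
    ead = dmg.mean()  # expected annual damage
    return return_period, dmg, ead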
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, reinsurance purposes and other questions where spatially consistent, large-scale assessments are required.
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients; the two sides are closely related and affect each other from the small scale to larger scales. Soil structures such as aggregates, roots, layers and macropores enhance the patchiness of these distributions. At the same time it is difficult to measure the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil, as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient, non-destructive, in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil: a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen-consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modelling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.
The use of monoclonal antibodies is ubiquitous in science and biomedicine, but the generation and validation of antibodies is nevertheless complicated and time-consuming. To address these issues we developed a novel selection technology based on an artificial cell surface construct, by which secreted antibodies are connected to the corresponding hybridoma cell when they possess the desired antigen specificity. Furthermore, the system enables the selection of desired isotypes and the screening for potential cross-reactivities in the same context. For the design of the construct we combined the transmembrane domain of the EGF receptor with a hemagglutinin epitope and a biotin acceptor peptide and performed a transposon-mediated transfection of myeloma cell lines. The stably transfected myeloma cell line was used for the generation of hybridoma cells, and an antigen- and isotype-specific screening method was established. The system has been validated for globular protein antigens as well as for haptens and enables a fast, early-stage selection and validation of monoclonal antibodies in one step.
This thesis provides a novel view on the early stage of crystallization, using calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance: it is a major contributor to water hardness, the most abundant biomineral, and forms huge amounts of geological sediments, thus binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. The time-dependent measurement of the Ca2+ potential and concurrent pH = constant titration facilitate the calculation of the amount of calcium and carbonate ions bound in pre-nucleation-stage clusters, which had never been detected experimentally before, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation-stage clusters and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of altogether 70 calcium and carbonate ions. Both experiments show that pre-nucleation-stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster formation equilibrium is characterized physico-chemically by a multiple-binding equilibrium of calcium ions to a ‘lattice’ of carbonate ions. The evaluation gives the standard Gibbs energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximal value of approximately 17.2 kJ mol^-1 at pH = 9.75, corresponding to a minimal binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline. At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength in clusters, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are additionally obtained. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low and at high binding strength in clusters (ACC I 3.1 x 10^-8 M^2, ACC II 3.8 x 10^-8 M^2), indicating the precipitation of at least two different ACC species, with the clusters providing the precursor species of ACC. It is plausible that ACC I relates to calcitic ACC, i.e. ACC exhibiting short-range order similar to the long-range order of calcite, and that ACC II relates to vateritic ACC, each subsequently transforming into the corresponding crystalline polymorph as discussed in the literature. Detailed analysis of nucleated particles forming at minimal binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite with traces of calcite forms. The crystalline particles at early stages are composed of nano-crystallites of approximately 5 to 10 nm size, which are aligned in high mutual order as in mesocrystals. The analysis of precipitation at pH = 9.75 in the presence of additives (polyacrylic acid, pAA, as a model compound for scale inhibitors, and peptides exhibiting calcium carbonate binding affinity as model compounds for crystal modifiers) shows that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution in favour of crystals that form from ACC I, and exclusively calcite is finally obtained. The peptide additives analogously inhibit the formation of calcite, and in the case of one of the peptide additives exclusively vaterite is finally obtained. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate: the metastable system is stabilized remarkably by cluster formation, and the clusters forming by means of equilibrium thermodynamics, not the ions, are the nucleation-relevant species. Most likely, cluster formation is a common phenomenon in the precipitation of poorly soluble compounds, as qualitatively shown for calcium oxalate and calcium phosphate. This finding is important for the fundamental understanding of crystallization, of nucleation inhibition and of modification by additives, with impact on materials of great scientific and industrial importance, as well as for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and also of scale inhibition, need to be considered on the basis of the newly reported reaction channel.
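In generic notation, the multiple-binding picture treats cluster formation as repeated ion-pair association steps, so that the fitted equilibrium constant K yields the quoted standard Gibbs energy (sign convention as usual; the text above quotes the magnitude):

\[
\mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} \rightleftharpoons (\mathrm{CaCO_3})_{\text{cluster}}, \qquad \Delta G^\circ = -RT \ln K,
\]

with |ΔG°| ≈ 17.2 kJ mol^-1 at pH = 9.75, the least negative (i.e. weakest-binding) value across the studied pH range.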
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia where, despite it being the region most affected by shrub invasion, particularly since the beginning of the 21st century, little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was performed to quantitatively reconstruct past vegetation cover. For this, we first derived pollen productivity estimates (PPEs) for the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (a Gaussian plume model and a Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated against remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. The derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of open Combretum woodlands, making them more susceptible to change to an encroached state through stochastic events, such as consecutive years of precipitation and drought, and through elevated pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
We propose a paraconsistent declarative semantics of possibly inconsistent generalized logic programs which allows for arbitrary formulas in the body and in the head of a rule (i.e. does not depend on the presence of any specific connective, such as negation(-as-failure), nor on any specific syntax of rules). For consistent generalized logic programs this semantics coincides with the stable generated models introduced in [HW97], and for normal logic programs it yields the stable models in the sense of [GL88].
We reconsider the fundamental work of Fichtner [2] and exhibit the permanental structure of the ideal Bose gas again, using another approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [5, 6] and Mecke [8, 9]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [4]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain path integrals. This representation can be considered a point process analogue of the old idea of Symanzik [11] that local times and self-crossings of Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Levy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of higher-order moments of point processes is notoriously complicated; see for instance Krickeberg's calculations for the Poisson or the Cox process in [4].
Matching participants (as suggested by Hope, 2015) may be one promising option for research on a potential bilingual advantage in executive functions (EF). In this study we first compared performance on three EF tasks in a naturally heterogeneous sample of monolingual (n = 69, age = 9.0 y) and multilingual children (n = 57, age = 9.3 y). Secondly, we meticulously matched participants pairwise to obtain two highly homogeneous groups, reran our analysis and investigated a potential bilingual advantage. The initially disadvantaged multilinguals (regarding socioeconomic status and German lexicon size) performed worse in updating and response inhibition, but similarly in interference inhibition, indicating that superior EF compensated for the detrimental effects of the background variables. After matching children pairwise on age, gender, intelligence, socioeconomic status and German lexicon size, performance became similar except for interference inhibition, where an advantage for multilinguals in the form of globally reduced reaction times emerged, indicating a bilingual executive processing advantage.
A phagocyte-specific Irf8 gene enhancer establishes early conventional dendritic cell commitment
(2011)
Haematopoietic development is a complex process that is strictly hierarchically organized. The phagocyte lineages form a very heterogeneous cell compartment with specialized functions in innate immunity and in the induction of adaptive immune responses; their generation from a common precursor must be tightly controlled. Interference with lineage formation programs, for example by mutation or by changes in the expression levels of transcription factors (TFs), can cause leukaemia. However, the molecular mechanisms driving specification into distinct phagocytes remain poorly understood. In the present study I identify the transcription factor Interferon Regulatory Factor 8 (IRF8) as the specification factor of dendritic cell (DC) commitment in early phagocyte precursors. Employing an IRF8 reporter mouse, I showed distinct Irf8 expression during haematopoietic lineage diversification and isolated a novel bone marrow-resident progenitor which selectively differentiates into CD8α+ conventional dendritic cells (cDCs) in vivo. This progenitor strictly depends on Irf8 expression to properly establish its transcriptional DC program while suppressing a lineage-inappropriate neutrophil program. Moreover, I demonstrated that Irf8 expression during this cDC commitment step depends on a newly discovered myeloid-specific cis-enhancer which is controlled by the haematopoietic transcription factors PU.1 and RUNX1. Interference with their binding abrogates Irf8 expression and subsequently disturbs cell fate decisions, demonstrating the importance of these factors for proper phagocyte development. Collectively, these data delineate a transcriptional program establishing cDC fate choice with IRF8 at its center.
This thesis focuses on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. This comparison demonstrates that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, yield too small mass renormalizations and scattering rates, thus demanding more refined many-body theories that include nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that graphene on Ni(111) is, after intercalation of Au, spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented. It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. A comparison to micromagnetic calculations demonstrates that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there have led to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example is the freshwater fish genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
Background: For omics experiments, detailed characterisation of the experimental material with respect to its genetic features, cultivation history and treatment history is a requirement for analyses with bioinformatics tools and for publication. Furthermore, meta-analysis of several experiments in systems-biology approaches makes it necessary to store this information in a standardised manner, preferably in relational databases. With the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System combined with web-based user interfaces for data entry and retrieval to collect this information in an academic environment.
Results: The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows such as genetic modification (transformation) and vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate ACCESS database of the plant cultivation unit. For all cultures, and thus every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner-based system. Thus, for each plant object, all site-related parameters, e.g. automatically logged climate data, are available. These life history data and the genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses controlled vocabulary for organs and treatments. Unique names generated by the system and barcode labels facilitate identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web-based search tools are the basis for the joint use of the material by all researchers of the institute.
Conclusion: The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thereby facilitates collaborative work and allows efficient queries in data analysis for systems biology research.
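As an illustration of the data model described above, here is a stripped-down object sketch (hypothetical names and fields; the actual system is a relational LIMS with web front-ends, not Python objects):

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PlantLine:
    line_id: str                     # unique, system-generated name
    parent_ids: list = field(default_factory=list)  # pedigree links from reproduction workflows
    origin_workflow: str = "import"  # import / transformation / reproduction

@dataclass
class PlantObject:
    object_id: str                   # barcode label on the individual plant
    line: PlantLine
    culture_id: str                  # cultivation conditions attach to the culture
    site_log: list = field(default_factory=list)  # (site, arrival time) pairs from scanner events

@dataclass
class SampleComponent:
    sample_id: str
    plant_object: PlantObject        # links analytics back to the full life history
    organ: str                       # controlled vocabulary
    treatment: str                   # controlled vocabulary
    taken_at: datetime = None        # sampling time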
In his “Essay on the Fluctuations in the Supplies of Gold” (1838) Humboldt presents a global history of the flow of precious metals from antiquity to the 19th century. This paper traces Humboldt’s economic thinking within his natural and historical research, starting with an outline of his educational background which incorporated late mercantilist and early liberal influences. It then discusses a world map and four charts drawn by Humboldt, which combine historical and contemporary statistical data into a cartographical vision of a global economic circuit. In a next step, the article explores Humboldt’s application of natural and historical research methods in the field of political economy, using the example of his 1838 essay. Finally, the article addresses Humboldt’s discussion of platinum, a precious metal whose limited natural distribution contradicted the idea of free global exchange.
A polymer analogous reaction for the formation of imidazolium and NHC based porous polymer networks
(2013)
A polymer analogous reaction was carried out to generate a porous polymeric network with N-heterocyclic carbenes (NHCs) in the polymer backbone. Using a stepwise approach, first a polyimine network is formed by polymerization of the tetrafunctional amine tetrakis(4-aminophenyl)methane. This polyimine network is converted in a second step into a polyimidazolium chloride and finally into a polyNHC network. Furthermore, a porous Cu(II)-coordinated polyNHC network can be generated. Supercritical drying yields polymer networks with high permanent surface areas and porosities, which can be applied in different catalytic reactions. The catalytic properties were demonstrated, for example, in the activation of CO2 and in the deoxygenation of sulfoxides to the corresponding sulfides.
The German Sonderweg thesis has been discarded in most research fields. Yet with regard to the military, things differ: all conflicts before the Second World War are interpreted as a prelude to the war of extermination of 1939-1945. This article looks specifically at the Franco-Prussian War of 1870-71 and German behaviour vis-à-vis regular combatants, civilians and irregular guerrilla fighters, the so-called francs-tireurs. The author argues that the counter-measures were not exceptional for nineteenth-century warfare and also shows how selective reading of the existing secondary literature has distorted our view of the war.
Extreme weather events are likely to occur more often under climate change, and the resulting effects on ecosystems could lead to a further acceleration of climate change. But not all extreme weather events lead to extreme ecosystem responses. Here, we focus on hazardous ecosystem behaviour and identify the coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Following risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the SPEI to describe the climate conditions. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. The site in northern Germany, by contrast, is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is found in the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortages. These first model-based applications indicate the conceptual advantages of the proposed method by focusing on the identification of critical weather conditions under which we observe hazardous ecosystem behaviour in the analysed data set. Applying the method to empirical time series and to future climate would be important next steps to test the approach.
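The vulnerability estimate described above amounts to comparing conditional expectations of the climate indicator. A minimal sketch, assuming simple time series inputs rather than the study's LPJmL/ERA-Interim pipeline:

```python
import numpy as np

def drought_vulnerability(spei, hazard):
    """Compare the expected climate state (SPEI) conditional on hazardous
    vs. non-hazardous ecosystem behaviour; the site counts as vulnerable
    to drought if hazardous behaviour coincides with drier conditions.
    `hazard` is a boolean series, e.g. negative 12-month net biome
    productivity."""
    spei = np.asarray(spei, dtype=float)
    hazard = np.asarray(hazard, dtype=bool)
    shift = spei[hazard].mean() - spei[~hazard].mean()
    return shift < 0.0, shift   # (vulnerable?, conditional-expectation shift)

# Example: a shift of -0.4 corresponds to the Spanish site described above.
```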
Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes among adolescents with chronic conditions. The primary aim of this study is therefore to acquire a deeper understanding of the dynamic interplay of resilience factors, coping strategies, and psychosocial adjustment in adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – "inconspicuous" (PHQ-9 and GAD-7 < 7) vs. "conspicuous" (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and at 12-month follow-up. At the two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety, and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor effects of the Covid-19 pandemic, participants are also invited to take part in an intermediate online survey.
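The screening rule can be stated compactly; a minimal sketch of the group assignment, with the cutoff of 7 taken from the protocol above:

```python
def screening_group(phq9: int, gad7: int, cutoff: int = 7) -> str:
    """Group assignment per the screening rule above: 'inconspicuous'
    only if both scores fall below the cutoff."""
    return "inconspicuous" if phq9 < cutoff and gad7 < cutoff else "conspicuous"

assert screening_group(3, 5) == "inconspicuous"
assert screening_group(3, 7) == "conspicuous"   # one elevated score suffices
```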
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
This contribution presents a quantitative evaluation procedure for Information Retrieval models and the results of this procedure applied to the enhanced Topic-based Vector Space Model (eTVSM). Since the eTVSM is an ontology-based model, its effectiveness heavily depends on the quality of the underlying ontology. The model has therefore been tested with different ontologies to evaluate their impact on its effectiveness. At the highest level of abstraction, the following results were observed during our evaluation: First, the theoretically deduced statement that the eTVSM achieves an effectiveness similar to the classic Vector Space Model if a trivial ontology is used (every term is a concept and is independent of all other concepts) was confirmed. Second, we were able to show that the effectiveness of the eTVSM increases if an ontology is used that only resolves synonyms. We were able to derive such an ontology automatically from the WordNet ontology. Third, we observed that more powerful ontologies automatically derived from WordNet dropped the effectiveness of the eTVSM dramatically, even clearly below the effectiveness level of the Vector Space Model. Fourth, we were able to show that a manually created and optimized ontology raises the effectiveness of the eTVSM to a level clearly above the best effectiveness levels we have found in the literature for the Latent Semantic Indexing model with comparable document sets.
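For readers unfamiliar with the baseline, here is a minimal sketch of the classic Vector Space Model similarity with binary weights. Under a trivial ontology (every term its own independent concept) the eTVSM reduces to this behaviour, whereas a synonym-resolving ontology would first merge synonymous terms onto one concept axis. The function below is an illustration, not the evaluated implementation:

```python
import numpy as np

def vsm_cosine(query_terms, doc_terms, vocab):
    """Cosine similarity of binary term vectors: each vocabulary term is
    one independent axis, i.e. the trivial-ontology case."""
    q = np.array([t in query_terms for t in vocab], dtype=float)
    d = np.array([t in doc_terms for t in vocab], dtype=float)
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0

vocab = ["car", "automobile", "engine"]
print(vsm_cosine({"car"}, {"automobile", "engine"}, vocab))  # 0.0: synonymy missed
```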
Dynamic earthquake rupture modeling provides information on the rupture physics, such as the rupture velocity or the frictions and tractions acting during the rupture process. However, because it is typically based on spatially gridded preset geometries, dynamic modeling depends on many free parameters, leading to both highly non-unique results and long computation times. This limits the feasibility of full Bayesian error analysis.
To address these problems, we developed the quasi-dynamic rupture model presented in this work. It combines the kinematic Eikonal rupture model with a boundary element method for quasi-static slip calculation.
The orientation of the modeled rupture plane is defined by a previously performed moment tensor inversion. The simultaneously inverted scalar seismic moment allows estimating the extent of the rupture. The modeled rupture plane is discretized by a set of rectangular boundary elements. For each boundary element, an applied traction vector is defined as the boundary value.
To gain insight into the dynamic rupture behaviour, the rupture front propagation is calculated for incremental time steps based on the 2D Eikonal equation. The required location-dependent rupture velocity field is assumed to scale linearly with a layered shear wave velocity field.
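A minimal sketch of this front-propagation step, assuming a fast-marching Eikonal solver (scikit-fmm is my tool choice here, not necessarily the one used in this work) and an illustrative scaling factor of 0.8:

```python
import numpy as np
import skfmm  # scikit-fmm fast-marching Eikonal solver; an assumed choice

# Layered shear-wave velocity field on a 2D along-strike/depth grid [m/s];
# the rupture velocity scales linearly with it (the factor is a free
# parameter in the model; 0.8 is illustrative).
nx, nz, dx = 200, 100, 100.0                 # grid size and spacing [m]
shallow = np.arange(nz)[:, None] < 40        # shallow layer: first 40 rows
vs = np.where(shallow, 3000.0, 3500.0) * np.ones((nz, nx))
v_rupture = 0.8 * vs

# The zero contour of phi marks the nucleation point; travel_time solves
# |grad T| = 1 / v_r for the rupture-front arrival time T(x, z).
phi = np.ones((nz, nx))
phi[50, 100] = -1.0
arrival = skfmm.travel_time(phi, v_rupture, dx=dx)

enclosed = arrival <= 1.5                    # elements behind the front at t = 1.5 s
```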
At each time step, all boundary elements enclosed within the rupture front are used to calculate the quasi-static slip distribution. Neither friction nor stress propagation is considered; the algorithm is therefore termed "quasi-static". A series of the resulting quasi-static slip snapshots can be used as a quasi-dynamic model of the rupture process.
Since much a priori information is taken from the earth model (shear wave velocity and elastic parameters) and the moment tensor inversion (rupture extent and orientation), our model depends on only a few free parameters: the traction field, the linear factor between rupture and shear wave velocity, and the nucleation point and time. Hence, stable and fast modeling results are obtained, as demonstrated by comparison with different infinite and finite static crack solutions.
First dynamic applications show promising results. The location-dependent rise time is automatically derived by the model. Simple kinematic models such as the slip-pulse or the penny-shaped crack model can be reproduced, as can their corresponding slip rate functions. A source time function (STF) approximation, calculated from the cumulative sum of moment rates of each boundary element, gives results similar to theoretically and empirically known STFs.
The model was also applied to the 2015 Illapel earthquake. Using a simple rectangular rupture geometry and a two-layered traction regime yields good estimates of both the rupture front propagation and the slip patterns, which are comparable to results from the literature. The STF approximation shows a good fit with previously published STFs.
The quasi-dynamic rupture model is hence able to calculate reproducible slip results quickly, which will allow testing full Bayesian error analysis in the future. Further work on a full seismic source inversion, or even a traction field inversion, could also extend the scope of our model.
Transport molecules play a crucial role for cell viability. Among others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion, where each step of the motor, forwards or backwards, bridges a fixed distance. While moving along the rope, the motor can also detach and is then lost. We give a mathematical formalization of such dynamics as a random process that extends random walks by an absorbing state modeling the detachment of the motor from the rope. We derive properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to an established model of the transport molecule Kinesin V.
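A minimal Monte Carlo sketch of such a walk with an absorbing "detached" state; the step size and probabilities below are illustrative, not fitted Kinesin V parameters:

```python
import random

def simulate_motor(p_forward=0.6, p_backward=0.3, p_detach=0.1,
                   step=8, max_steps=100_000):
    """One realization of a nearest-neighbour random walk with an absorbing
    'detached' state; returns (detachment position, maximal distance
    reached). Step size [nm] and probabilities are illustrative."""
    assert abs(p_forward + p_backward + p_detach - 1.0) < 1e-9
    position = max_reached = 0
    for _ in range(max_steps):
        r = random.random()
        if r < p_detach:                     # absorption: motor leaves the rope
            break
        position += step if r < p_detach + p_forward else -step
        max_reached = max(max_reached, position)
    return position, max_reached

# Monte Carlo estimates of the two quantities characterized in the paper
runs = [simulate_motor() for _ in range(10_000)]
print("mean detachment position:", sum(p for p, _ in runs) / len(runs))
print("mean maximal distance:   ", sum(m for _, m in runs) / len(runs))
```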
Let A be a nonlinear differential operator on an open set X in R^n and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A (u) = 0 in the complement of S of class F satisfies this equation weakly in all of X. For the most extensively studied classes F we show conditions on S which guarantee that S is removable for F relative to A.
We introduce a natural symmetry condition for a pseudodifferential operator on a manifold with cylindrical ends ensuring that the operator admits a doubling across the boundary. For such operators we prove an explicit index formula containing, apart from the Atiyah-Singer integral, a finite number of residues of the logarithmic derivative of the conormal symbol.
Once the “popular plaything of Realpolitiker”, the doctrine of rebus sic stantibus after the 1969 VCLT is often described as an objective rule by which, on grounds of equity and justice, a fundamental change of circumstances may be invoked as a ground for termination. Yet recent practice from States such as Ecuador, Russia, Denmark, and the United Kingdom suggests that it is returning with a new livery. These cases point to an understanding based on vital interests of States – a view popular among scholars such as Erich Kaufmann at the beginning of the last century.
The integration of MOOCs into Moroccan Higher Education (MHE) began in 2013 with different partnerships and projects at national and international levels. As elsewhere, the Covid-19 crisis played an important role in accelerating distance education in MHE. However, based on our experience as both university professors and specialists in educational engineering, an effective digital transition has not yet been implemented. In this article, we therefore present retrospective feedback on MOOCs in Morocco, focusing on the policies adopted by the government to better support the digital transition in general and MOOCs in particular. We seek to establish an optimal scenario for the promotion of MOOCs, one that emphasizes the policies to be considered and recalls the importance of a careful articulation across four levels, namely the environmental, institutional, organizational, and individual levels. We conclude with recommendations inspired by the Moroccan academic context that focus on the major role MOOCs play for university students and on maintaining lifelong learning.
G protein-coupled receptor (GPCR) genes form large gene families in every animal, sometimes making up 1–2% of the animal's genome. Of all insect GPCRs, the neurohormone (neuropeptide, protein hormone, biogenic amine) GPCRs are especially important, because they, together with their ligands, occupy a high hierarchic position in the physiology of insects and steer crucial processes such as development, reproduction, and behavior. In this paper, we review our current knowledge of Drosophila melanogaster GPCRs and use this information to annotate the neurohormone GPCR genes present in the recently sequenced genome of the honey bee Apis mellifera. We found 35 neuropeptide receptor genes in the honey bee (44 in Drosophila) and two genes coding for leucine-rich repeat-containing protein hormone GPCRs (4 in Drosophila). In addition, the honey bee has 19 biogenic amine receptor genes (21 in Drosophila). The larger numbers of neurohormone receptors in Drosophila are probably due to gene duplications that occurred during recent evolution of the fly. Our analyses also yielded the likely ligands for 40 of the 56 honey bee neurohormone GPCRs identified in this study. In addition, we made some interesting observations on neurohormone GPCR evolution and the evolution and co-evolution of their ligands. For neuropeptide and protein hormone GPCRs, there appears to be a general co-evolution between receptors and their ligands. This is in contrast to biogenic amine GPCRs, where evolutionarily unrelated GPCRs often bind to the same biogenic amine, suggesting frequent ligand exchanges ("ligand hops") during GPCR evolution.
In a bounded domain with smooth boundary in R^3 we consider the stationary Maxwell equations for a function u with values in R^3 subject to the nonhomogeneous boundary condition (u,v)_x = u_0, where v is a given vector field and u_0 a function on the boundary. We specify this problem within the framework of the Riemann-Hilbert boundary value problems for the Moisil-Teodorescu system. The latter is proved to satisfy the Shapiro-Lopatinskij condition if and only if the vector v is at no point tangent to the boundary. The Riemann-Hilbert problem for the Moisil-Teodorescu system fails to possess an adjoint boundary value problem with respect to the Green formula that satisfies the Shapiro-Lopatinskij condition. We therefore develop the construction of the Green formula to obtain a proper concept of adjoint boundary value problem.
Salinity is a significant factor for structuring microbial communities, but little is known for aquatic fungi, particularly in the pelagic zone of brackish ecosystems. In this study, we explored the diversity and composition of fungal communities along a progressive salinity decline (from 34 to 3 PSU) on three transects of ca. 2000 km in the Baltic Sea, the world's largest estuary. Based on 18S rRNA gene sequence analysis, we detected clear changes in fungal community composition along the salinity gradient and found significant differences in the composition of fungal communities established above and below a critical value of 8 PSU. At salinities below this threshold, fungal communities resembled those from freshwater environments, with a greater abundance of Chytridiomycota, particularly of the orders Rhizophydiales, Lobulomycetales, and Gromochytriales. At salinities above 8 PSU, communities were more similar to those from marine environments and, depending on the season, were dominated by a strain of the LKM11 group (Cryptomycota) or by members of Ascomycota and Basidiomycota. Our results highlight salinity as an important environmental driver also for pelagic fungi, which should thus be taken into account to better understand fungal diversity and ecological function in the aquatic realm.
A digital filter is introduced which treats the problem of predictability versus time averaging in a continuous, seamless manner. This seamless filter (SF) is characterized by a unique smoothing rule that determines the strength of smoothing as a function of lead time. The rule needs to be specified beforehand, either by expert knowledge or by user demand. As a result, skill curves are obtained that allow a predictability assessment across a whole range of time-scales, from daily to seasonal, in a uniform manner. The SF is applied to downscaled SEAS5 ensemble forecasts for two focus regions in or near the tropical belt, the river basins of the Karun in Iran and the Sao Francisco in Brazil. Both are characterized by strong seasonality and semi-aridity, so that predictability across various time-scales is in high demand. Among other things, it is found that from the start of the water year (autumn), areal precipitation is predictable with good skill two and a half months ahead for the Karun basin; for the Sao Francisco it is only one month, and longer-term prediction skill is just above the critical level.
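As an illustration of lead-time-dependent smoothing, here is a minimal sketch under an assumed linear smoothing rule; the actual SF rule must be specified by expert knowledge or user demand, as stated above:

```python
import numpy as np

def seamless_smoothing(series, max_window=91):
    """Apply time averaging whose window grows linearly with lead time:
    day-1 values stay essentially unsmoothed, long leads approach a broad
    mean, so daily-to-seasonal skill can be assessed on one seamless curve.
    The linear growth rule is an illustrative assumption."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    out = np.empty(n)
    for lead in range(n):
        w = 1 + int(lead / max(n - 1, 1) * (max_window - 1))  # window for this lead
        lo, hi = max(0, lead - w // 2), min(n, lead + w // 2 + 1)
        out[lead] = series[lo:hi].mean()
    return out
```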
A Secular Tradition
(2021)
This article focuses on the social philosopher Horace Kallen and the revisions he made to the concept of cultural pluralism that he first developed in the early 20th century, applying it to postwar America and the young State of Israel. It shows how he opposed the assumption that the United States’ social order was based on a “Judeo-Christian tradition.” By constructing pluralism as a civil religion and carving out space for secular self-understandings in midcentury America, Kallen attempted to preserve the integrity of his earlier political visions, developed during World War I, of pluralist societies in the United States and Palestine within an internationalist global order. While his perspective on the State of Israel was largely shaped by his American experiences, he revised his approach to politically functionalizing religious traditions as he tested his American understanding of a secular, pluralist society against the political theology effective in the State of Israel. The trajectory of Kallen’s thought points to fundamental questions about the compatibility of American and Israeli understandings of religion’s function in society and its relation to political belonging, especially in light of their transnational connection through American Jewish support for the recently established state.
As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits have become a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults that result in data corruption or even system failure. Typically, SEE mitigation methods are deployed statically in processing architectures based on the worst-case radiation conditions, which is unnecessary most of the time and results in resource overhead. Moreover, space radiation conditions change dynamically, especially during Solar Particle Events (SPEs): the intensity of space radiation can differ over five orders of magnitude within a few hours or days, resulting in fault probability variations of several orders of magnitude in ICs during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault-resilient multiprocessing system to overcome the overhead of static mitigation. The work mainly addresses the following topics: (1) design of an on-chip radiation particle monitor for real-time radiation environment detection, (2) investigation of a space environment predictor as support for solar particle event forecasts, and (3) dynamic mode configuration in the resilient multiprocessing system. According to the detected and predicted in-flight space radiation conditions, the target system can be configured to use no mitigation or low-overhead mitigation during non-critical periods, freeing the redundant resources to improve system performance or save power. During periods of increased radiation activity, such as SPEs, the mitigation methods can be dynamically configured as appropriate for the real-time space radiation environment, resulting in higher system reliability. A dynamic trade-off between reliability, performance, and power consumption is thus achieved in real time. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during run-time. The proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process, and the successful implementation in the quad-core multiprocessor shows the approach's application perspective in other designs as well.
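A minimal sketch of the dynamic mode configuration idea; the mode names, flux thresholds, and mapping below are hypothetical illustrations, not the thesis design:

```python
from enum import Enum

class Mode(Enum):
    PERFORMANCE = 0   # redundant cores run independent tasks
    LOW_OVERHEAD = 1  # lightweight checks only
    DMR = 2           # dual modular redundancy
    TMR = 3           # triple modular redundancy

def select_mode(flux, thresholds=(1e1, 1e3, 1e5)):
    """Map a particle flux reading from an on-chip monitor to a mitigation
    mode, trading reliability against performance and power. All values
    here are illustrative placeholders."""
    if flux < thresholds[0]:
        return Mode.PERFORMANCE
    if flux < thresholds[1]:
        return Mode.LOW_OVERHEAD
    if flux < thresholds[2]:
        return Mode.DMR
    return Mode.TMR   # e.g. during a solar particle event

print(select_mode(5.0), select_mode(1e6))
```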
Fixational eye movements show scaling behaviour of the positional mean-squared displacement with a characteristic transition from persistence to antipersistence as the time lag increases. These statistical patterns were found to be shaped mainly by microsaccades (fast, small-amplitude movements). However, our re-analysis of fixational eye-movement data provides evidence that the slow component (physiological drift) of the eyes exhibits scaling behaviour of the mean-squared displacement that varies across human participants. These results suggest that drift is a correlated movement that interacts with microsaccades. Moreover, on long time scales, the mean-squared displacement of the drift shows oscillations, which are also present in the displacement autocorrelation function. This finding lends support to the presence of time-delayed feedback in the control of drift movements. Based on an earlier non-linear delayed feedback model of fixational eye movements, we propose and discuss different versions of a new model that combines a self-avoiding walk with time delay. As a result, we identify a model that reproduces oscillatory correlation functions, the transition from persistence to antipersistence, and microsaccades.
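A minimal sketch of the empirical mean-squared displacement computation underlying such scaling analyses (the standard definition, not the study's analysis code):

```python
import numpy as np

def mean_squared_displacement(x, y, max_lag):
    """Empirical MSD of a 2D gaze trajectory as a function of time lag;
    a local log-log slope above (below) 1 indicates persistence
    (antipersistence)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2 + (y[lag:] - y[:-lag]) ** 2)
                    for lag in lags])
    return lags, msd
```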
Processes driving the production, transformation, and transport of methane (CH4) in wetland ecosystems are highly complex. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components. This helps to reveal underlying dynamics, to identify potential environmental drivers and, thus, to calculate reliable CH4 emission estimates. The flux separation is based on the identification of ebullition-related sudden concentration changes during single measurements. To this end, a variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved with an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was validated in a laboratory experiment and tested on flux measurement data (July to September 2013) from a former fen grassland site that had been converted into a shallow lake by rewetting. Ebullition and diffusion contributed roughly equally (46 % and 55 %) to total CH4 emissions, which is comparable to ratios given in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period. The water temperature gradient was identified as one of the major drivers of diffusive CH4 emissions, whereas no significant driver was found for the erratic CH4 ebullition events.
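A minimal sketch of the quartile-based separation idea; the thresholding details below are assumptions for illustration, and the published, adjusted R script may differ:

```python
import numpy as np

def separate_fluxes(conc, iqr_factor=1.5):
    """Split between-sample concentration changes of one chamber
    measurement into a smooth (diffusive) and a sudden (ebullitive)
    component, flagging steps outside Q1 - f*IQR .. Q3 + f*IQR."""
    steps = np.diff(np.asarray(conc, dtype=float))
    q1, q3 = np.percentile(steps, [25, 75])
    iqr = q3 - q1
    bubble = (steps < q1 - iqr_factor * iqr) | (steps > q3 + iqr_factor * iqr)
    return steps[~bubble].sum(), steps[bubble].sum()   # (diffusion, ebullition)
```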
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide and is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-enforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate has so far relied on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at a conceptual understanding by deriving a purposefully simple equation for the self-enforcing feedback, which is then used to estimate the melt time for different levels of warming from three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature and omits important processes such as ice dynamics that would be needed for application to sea-level rise on centennial timescales. But if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 degrees C above the threshold and 500 years for 5 degrees C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
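To illustrate the feedback and the slow-down near the threshold, here is a toy numerical sketch; the formulation and all parameter values are my own illustrative assumptions, not the paper's equation or calibration:

```python
# Toy model of the melt-elevation feedback: surface lowering increases the
# local temperature anomaly through the atmospheric lapse rate, which in
# turn accelerates further melt.
def melt_time(dT, target_fraction=0.10, h0=2000.0, lapse=6.5e-3, mu=0.2, dt=1.0):
    """Years until a mean thickness h0 [m] loses target_fraction, for a
    warming dT [K] above the threshold; mu [m/yr/K] is an assumed melt
    sensitivity and lapse [K/m] the lapse-rate feedback strength."""
    h, t = h0, 0.0
    while h > (1.0 - target_fraction) * h0:
        h -= mu * (dT + lapse * (h0 - h)) * dt   # lower surface -> warmer -> faster melt
        t += dt
    return t

# Melt time shrinks strongly with warming and slows down near the threshold:
for dT in (0.5, 1.0, 2.0, 5.0):
    print(f"{dT:>4} K above threshold: {melt_time(dT):6.0f} years")
```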
The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/zeta(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
Due to the enhanced electromagnetic field at the tips of metal nanoparticles, the spiked structure of gold nanostars (AuNSs) is promising for surface-enhanced Raman scattering (SERS). The challenge, therefore, is the synthesis of well-designed particles with sharp tips. The influence of different surfactants, i.e., dioctyl sodium sulfosuccinate (AOT), sodium dodecyl sulfate (SDS), and benzylhexadecyldimethylammonium chloride (BDAC), as well as of surfactant mixtures, on the formation of nanostars in the presence of Ag⁺ ions and ascorbic acid was investigated. By varying the amount of BDAC in mixed micelles, the core/spike-shell morphology of the resulting AuNSs can be tuned from small cores to large ones with sharp and large spikes. The concomitant red-shift of the absorption toward the NIR region without loss of SERS enhancement enables their use for biological applications and for time-resolved spectroscopic studies of chemical reactions, which require a permanent supply of fresh and homogeneous solution. HRTEM micrographs and energy-dispersive X-ray (EDX) experiments allow us to verify the mechanism of nanostar formation based on silver underpotential deposition on the spike surface in combination with micelle adsorption.
Social segregation in cities takes place where different household groups exist and where, following Schelling, their location choice either minimizes the number of differing households in their neighborhood or maximizes the share of their own group. In this contribution, an evolutionary simulation based on a monocentric city model with externalities among households is used to discuss the spatial segregation patterns of four groups. The resulting complex spatial patterns can be shown as graphic animations and can serve as the initial situation for analysing the effects of rent control on segregation.
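A minimal sketch of a Schelling-type update rule for four groups on a grid; this illustrates the location-choice mechanism only and omits the monocentric city structure and externalities of the actual model:

```python
import random

def schelling_step(grid, threshold=0.5):
    """One update sweep of a Schelling-type model with four household
    groups (0..3) and vacancies (None) on a torus grid; an unhappy
    household moves to a random vacant cell. Parameters are illustrative."""
    n = len(grid)
    vacancies = [(i, j) for i in range(n) for j in range(n) if grid[i][j] is None]
    for i in range(n):
        for j in range(n):
            g = grid[i][j]
            if g is None:
                continue
            nbrs = [grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            same = sum(1 for x in nbrs if x == g)
            occupied = sum(1 for x in nbrs if x is not None)
            if occupied and vacancies and same / occupied < threshold:
                i2, j2 = vacancies.pop(random.randrange(len(vacancies)))
                grid[i2][j2], grid[i][j] = g, None   # unhappy household moves
                vacancies.append((i, j))

n = 30
cells = [random.choice([0, 1, 2, 3, None]) for _ in range(n * n)]
grid = [cells[k * n:(k + 1) * n] for k in range(n)]
for _ in range(20):
    schelling_step(grid)   # segregation patterns emerge over repeated sweeps
```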
Cytochrome P450 17A1 (CYP17A1) catalyses the formation and metabolism of steroid hormones, which are involved in blood pressure (BP) regulation and in the pathogenesis of left ventricular hypertrophy. Altered function of CYP17A1 due to genetic variants may therefore influence BP and left ventricular mass. Notably, genome-wide association studies have supported the role of this enzyme in BP control. Against this background, we investigated associations between single nucleotide polymorphisms (SNPs) in or nearby the CYP17A1 gene with BP and left ventricular mass in patients with arterial hypertension and associated cardiovascular organ damage treated according to guidelines. Patients (n = 1007, mean age 58.0 ± 9.8 years, 83% men) with arterial hypertension and cardiac left ventricular ejection fraction (LVEF) ≥40% were enrolled in the study. Cardiac parameters of left ventricular mass, geometry, and function were determined by echocardiography. The cohort comprised patients with coronary heart disease (n = 823; 81.7%) and myocardial infarction (n = 545; 54.1%) with a mean LVEF of 59.9% ± 9.3%. The mean left ventricular mass index (LVMI) was 52.1 ± 21.2 g/m2.7, and 485 (48.2%) patients had left ventricular hypertrophy. There was no significant association of any investigated SNP (rs619824, rs743572, rs1004467, rs11191548, rs17115100) with mean 24 h systolic or diastolic BP. However, carriers of the rs11191548 C allele demonstrated a 7% increase in LVMI (95% CI: 1%–12%, p = 0.017) compared to non-carriers. The CYP17A1 polymorphism rs11191548 demonstrated a significant association with LVMI in patients with arterial hypertension and preserved LVEF. Thus, CYP17A1 may contribute to cardiac hypertrophy in this clinical condition.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist that constantly generate spatio-temporal data. These include, for example, traffic surveillance systems, which gather movement data about human or vehicle movements; remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes; and sensor networks in different domains, such as logistics, animal behavior studies, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real time, enabling dynamic control over the mapping and a visualization process that can be controlled interactively by a user (a minimal sketch of this data flow follows the list below).
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and at different scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
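As referenced above, here is a minimal CPU stand-in for the pipeline's data flow; in the thesis these stages run as shader programs on GPU-resident data, so the plain Python functions below illustrate the stage structure and dynamic mapping only:

```python
import numpy as np

def process(points, t0, t1):
    """Filtering stage: keep samples within a user-controlled time window."""
    return points[(points[:, 3] >= t0) & (points[:, 3] <= t1)]

def map_visual(points):
    """Mapping stage: map normalized speed to an RGB color per vertex."""
    speed = points[:, 4]
    v = (speed - speed.min()) / (np.ptp(speed) or 1.0)
    rgb = np.stack([v, np.zeros_like(v), 1.0 - v], axis=-1)
    return np.hstack([points[:, :3], rgb])

def render(vertices):
    """Rendering stage placeholder: would emit geometry to the GPU."""
    print(f"would draw {len(vertices)} vertices")

trajectory = np.random.rand(1000, 5)   # columns: x, y, z, time, speed
render(map_visual(process(trajectory, 0.2, 0.8)))
```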
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first is the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, recorded over the period of a month. By applying the interactive methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks from the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing a network's structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and to identify, for example, clusters in the networks or flow patterns.
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
The zero-noise limit of differential equations with singular coefficients is investigated for the first time in the case when the noise is a general alpha-stable process. It is proved that extremal solutions are selected, and the probability of selection is computed. A detailed analysis of the characteristic function of an exit time from the half-line is performed, with a suitable decomposition into small and large jumps adapted to the singular drift.
Solar wind observations show that geomagnetic storms are mainly driven by interplanetary coronal mass ejections (ICMEs) and corotating or stream interaction regions (C/SIRs). We present a binary classifier that assigns one of these drivers to 7,546 storms between 1930 and 2015 using ground-based geomagnetic field observations only. The input data consist of the long-term stable Hourly Magnetospheric Currents index alongside the corresponding midlatitude geomagnetic observatory time series. This data set provides comprehensive information on the global storm-time magnetic disturbance field, particularly its spatial variability, over eight solar cycles. For the first time, we use this information statistically for automated storm driver identification. Our supervised classification model significantly outperforms unskilled baseline models (78% accuracy, with 26% [19%] misidentified ICMEs [C/SIRs]) and delivers plausible driver occurrences with regard to storm intensity and solar cycle phase. Our results can readily be used to advance related studies fundamental to space weather research, for example studies connecting galactic cosmic ray modulation and geomagnetic disturbances. They are fully reproducible by means of the underlying open-source software (Pick, 2019, http://doi.org/10.5880/GFZ.2.3.2019.003).
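A minimal sketch of such a supervised binary classification setup; the features, model choice, and placeholder data below are my assumptions, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per storm, with hypothetical per-storm summary
# features (e.g. peak depression, duration, spatial variance across the
# midlatitude observatories).
rng = np.random.default_rng(0)
X = rng.normal(size=(7546, 6))
y = rng.integers(0, 2, size=7546)    # 0 = C/SIR, 1 = ICME (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # vs. baseline
```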