Wheat is one of the most widely consumed foods in the world but unfortunately can cause allergic reactions with important health effects. The α-amylase/trypsin inhibitors (ATIs) have been identified as potentially allergenic components of wheat. Due to a lack of data on the optimization of ATI extraction, a new wheat ATI extraction approach combining solvent extraction and selective precipitation is proposed in this work. Two wheat cultivars (Triticum aestivum L.), Julius and Ponticus, were used, and parameters such as solvent type, extraction time, temperature, stirring speed, salt type, salt concentration, buffer pH and centrifugation speed were analyzed using the Plackett-Burman design. Salt concentration, extraction time and pH appeared to have significant effects on the recovery of ATIs (p < 0.01). In both wheat cultivars, Julius and Ponticus, ammonium sulfate substantially reduced protein concentration and inhibition of amylase activity (IAA) compared to sodium chloride. The optimal conditions with desirability levels of 0.94 and 0.91 according to the Doehlert design were: salt concentrations of 1.67 and 1.22 M, extraction times of 53 and 118 min, and pHs of 7.1 and 7.9 for Julius and Ponticus, respectively. The corresponding responses were: protein concentrations of 0.31 and 0.35 mg and IAAs of 91.6 and 83.3%. Electrophoresis and MALDI-TOF/MS analysis showed that the masses of the extracted ATIs were between 10 and 20 kDa. Based on the initial LC-MS/MS analysis, up to 10 individual ATIs were identified in the extracted proteins under the optimal conditions. A practical implication of the present study is that ATI content can be assessed quickly in different wheat varieties, which is particularly relevant in view of their allergenic potential.
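As a rough illustration of the Plackett-Burman screening step mentioned above (hypothetical code, not the authors' analysis; the coded -1/+1 levels and the assignment of factors to columns are placeholders), the classical 12-run design can be built from its cyclic generator row:

    import numpy as np

    def plackett_burman_12():
        """Build the classical 12-run Plackett-Burman design from its cyclic generator row."""
        generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
        rows = [np.roll(generator, shift) for shift in range(11)]
        rows.append(-np.ones(11, dtype=int))          # final run: all factors at the low level
        return np.array(rows)

    factors = ["solvent type", "extraction time", "temperature", "stirring speed",
               "salt type", "salt concentration", "buffer pH", "centrifugation speed"]

    design = plackett_burman_12()[:, :len(factors)]   # 8 of the 11 columns carry real factors
    for run, levels in enumerate(design, start=1):
        print(f"run {run:2d}:", dict(zip(factors, levels)))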
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters to material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley equation commonly used to describe current-voltage (JV) curves, as it assumes a high electrical conductivity of the charge transporting material. Here, an analytical expression for the JV-curves of organic solar cells is derived based on a previously published analytical model. This expression, bearing a functional dependence similar to that of the Shockley equation, delivers a new figure of merit α to express the balance between free charge recombination and extraction in low mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
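For reference, the classical ideal-diode (Shockley) form referred to above can be written for an illuminated solar cell as (a textbook form; the modified expression and the figure of merit α derived in the paper are not reproduced here):

    J(V) = J_0 \left[ \exp\!\left( \frac{qV}{n k_\mathrm{B} T} \right) - 1 \right] - J_\mathrm{ph},

where J_0 is the dark saturation current density, n the ideality factor, T the temperature, and J_ph the photogenerated current density.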
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
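The conceptual difference between the two indices can be sketched as follows (an illustrative stand-in, not the published formulas; the extremeness surface, the scale axes and the logarithmic weighting are assumptions for the example): given extremeness values evaluated over a range of durations and spatial extents, WEI keeps only the most extreme scale, whereas xWEI integrates extremeness across the scales considered.

    import numpy as np

    durations = np.array([1, 6, 24, 72])             # example duration scales (h)
    areas = np.array([1e2, 1e3, 1e4, 1e5])           # example spatial scales (km^2)
    E = np.random.rand(len(durations), len(areas))   # stand-in for event extremeness per scale

    wei = E.max()                                    # WEI-like value: single most extreme scale
    # xWEI-like value: integrate extremeness across (log-)scales instead of taking the maximum
    xwei = np.trapz(np.trapz(E, x=np.log(areas), axis=1), x=np.log(durations))
    print(wei, xwei)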
Metagenomic sequencing has revolutionised our knowledge of virus diversity, with new virus sequences being reported faster than ever before. However, virus discovery from metagenomic sequencing usually depends on detectable homology: without a sufficiently close relative, so-called ‘dark’ virus sequences remain unrecognisable. An alternative approach is to use virus-identification methods that do not depend on detecting homology, such as virus recognition by host antiviral immunity. For example, virus-derived small RNAs have previously been used to propose ‘dark’ virus sequences associated with the Drosophilidae (Diptera). Here, we combine published Drosophila data with a comprehensive search of transcriptomic sequences and selected meta-transcriptomic datasets to identify a completely new lineage of segmented positive-sense single-stranded RNA viruses that we provisionally refer to as the Quenyaviruses. Each of the five segments contains a single open reading frame, with most encoding proteins showing no detectable similarity to characterised viruses, and one sharing a small number of residues with the RNA-dependent RNA polymerases of single- and double-stranded RNA viruses. Using these sequences, we identify close relatives in approximately 20 arthropods, including insects, crustaceans, spiders, and a myriapod. Using a more conserved sequence from the putative polymerase, we further identify relatives in meta-transcriptomic datasets from gut, gill, and lung tissues of vertebrates, reflecting infections of vertebrates or of their associated parasites. Our data illustrate the utility of small RNAs to detect viruses with limited sequence conservation, and provide robust evidence for a new deeply divergent and phylogenetically distinct RNA virus lineage.
The DNA origami technique has great potential for the development of brighter and more sensitive reporters for fluorescence-based detection schemes such as microbead-based assays in diagnostic applications. The nanostructures can be programmed to include multiple dye molecules to enhance the measured signal as well as multiple probe strands to increase the binding strength of the target oligonucleotide to these nanostructures. Here we present a proof-of-concept study to quantify short oligonucleotides by developing a novel DNA origami based reporter system, combined with planar microbead assays. Analysis of the assays using the VideoScan digital imaging platform showed DNA origami to be a more suitable reporter candidate for quantification of the target oligonucleotides at lower concentrations than a conventional reporter that consists of one dye molecule attached to a single-stranded DNA. Efforts have been made to conduct multiplexed analysis of different targets as well as to enhance the fluorescence signals obtained from the reporters. We therefore believe that DNA origami nanostructures are better suited than conventional reporters for quantifying short oligonucleotides present in low copy numbers.
Monoclonal antibodies are used worldwide as highly potent and efficient detection reagents for research and diagnostic applications. Nevertheless, the specific targeting of complex antigens such as whole microorganisms remains a challenge. To provide a comprehensive workflow, we combined bioinformatic analyses with novel immunization and selection tools to design monoclonal antibodies for the detection of whole microorganisms. In our initial study, we used the human pathogenic strain E. coli O157:H7 as a model target and identified 53 potential protein candidates using reverse vaccinology methodology. Five different peptide epitopes were selected for immunization using epitope-engineered viral proteins. The identification of antibody-producing hybridomas was performed using a novel screening technology based on transgenic fusion cell lines. Using an artificial cell surface receptor expressed by all hybridomas, the desired antigen-specific cells can be sorted quickly and efficiently out of the fusion cell pool. Selected antibody candidates were characterized and showed strong binding to the target strain E. coli O157:H7 with minor or no cross-reactivity to other relevant microorganisms such as Legionella pneumophila and Bacillus spp. This approach could be useful as a highly efficient workflow for the generation of antibodies against microorganisms.
We consider a Cauchy problem for the heat equation in a cylinder X x (0,T) over a domain X in n-dimensional space, with data on a strip lying on the lateral surface. The strip is of the form S x (0,T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S, we derive an explicit formula for solutions of this problem.
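A schematic statement of the problem (assuming, as is customary for lateral Cauchy problems, that both Dirichlet and Neumann data are prescribed on the strip; the precise function classes and conditions are those of the paper):

    \partial_t u - \Delta u = 0 \;\; \text{in } X \times (0,T), \qquad
    u = u_0, \quad \partial_\nu u = u_1 \;\; \text{on } S \times (0,T),

with no data prescribed on the remaining part of the lateral surface and no initial condition at t = 0, which is what makes the problem ill-posed.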
In the past, floods were managed primarily by flood control mechanisms. The focus was set on the reduction of the flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from a risk perspective that includes possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk developments and the (re-)insurance industry. It is therefore widely accepted that risk-orientated flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This may be helpful on a local basis, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. a 100-year flood) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long term and continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, where the synthetic discharge time series is used as input into flood impact models and flood risk is directly derived from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of continuous simulation of a full flood risk chain at the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against observed data available for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of 'derived flood risk based on continuous simulations'. For this purpose, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual time series of climate data of 100 x 100 years was generated and served as input to RFM, providing 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analysis, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk. The spatial variability of occurrence probability is respected, and data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes such as storage effects, flood attenuation or channel-floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and consequently the large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as often done in traditional approaches, are bypassed.
RFM and the 'derived flood risk approach based on continuous simulations' have the potential to provide flood risk statements for national planning, re-insurance aspects or other questions where spatially consistent, large-scale assessments are required.
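A minimal sketch (illustrative placeholder values and names, not part of RFM) of how a flood risk curve and the expected annual damage can be derived directly from a long synthetic series of annual damages, as described above:

    import numpy as np

    rng = np.random.default_rng(42)
    annual_damage = rng.pareto(2.0, size=10_000) * 1e6   # placeholder annual damages in EUR

    # Empirical exceedance probabilities -> risk curve (damage vs. return period)
    sorted_damage = np.sort(annual_damage)[::-1]
    exceedance_prob = np.arange(1, sorted_damage.size + 1) / (sorted_damage.size + 1)
    return_period = 1.0 / exceedance_prob

    # Expected annual damage: mean of the damage series, equivalent to the area
    # under the risk curve plotted as damage over exceedance probability.
    ead = annual_damage.mean()
    damage_100yr = np.interp(100, return_period[::-1], sorted_damage[::-1])
    print(f"EAD ~ {ead:,.0f} EUR; 100-year damage ~ {damage_100yr:,.0f} EUR")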
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. Both are closely related and affect each other from the small scale to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time, it is difficult to access the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient, non-destructive, in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil. It is a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen-consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.
The use of monoclonal antibodies is ubiquitous in science and biomedicine, but the generation and validation of antibodies is nevertheless complicated and time-consuming. To address these issues we developed a novel selection technology based on an artificial cell surface construct by which secreted antibodies are connected to the corresponding hybridoma cell when they possess the desired antigen specificity. Furthermore, the system enables the selection of desired isotypes and the screening for potential cross-reactivities in the same context. For the design of the construct we combined the transmembrane domain of the EGF receptor with a hemagglutinin epitope and a biotin acceptor peptide and performed a transposon-mediated transfection of myeloma cell lines. The stably transfected myeloma cell line was used for the generation of hybridoma cells, and an antigen- and isotype-specific screening method was established. The system has been validated for globular protein antigens as well as for haptens and enables fast, early-stage selection and validation of monoclonal antibodies in one step.
This thesis provides a novel view on the early stage of crystallization, using calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance, because it is a major contributor to water hardness, is the most abundant biomineral, and forms huge amounts of geological sediments, thus binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. The time-dependent measurement of the Ca2+ potential and concurrent pH = constant titration facilitate the calculation of the amount of calcium and carbonate ions bound in pre-nucleation-stage clusters, which had never been detected experimentally before, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation-stage clusters and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of altogether 70 calcium and carbonate ions. Both experiments show that pre-nucleation-stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster formation equilibrium is characterized physico-chemically by means of a multiple-binding equilibrium of calcium ions to a 'lattice' of carbonate ions. The evaluation gives the standard Gibbs energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximal value of approximately 17.2 kJ mol^-1 at pH = 9.75 and relates to a minimal binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline. At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength in clusters, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are obtained additionally. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low and high binding strength in clusters (ACC I: 3.1 x 10^-8 M^2, ACC II: 3.8 x 10^-8 M^2), respectively, indicating the precipitation of at least two different ACC species, while the clusters provide the precursor species of ACC. It is plausible that ACC I relates to calcitic ACC (i.e. ACC exhibiting short-range order similar to the long-range order of calcite) and that ACC II relates to vateritic ACC, each of which subsequently transforms into the corresponding crystalline polymorph as discussed in the literature. Detailed analysis of nucleated particles forming at minimal binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite with traces of calcite forms. The crystalline particles of the early stages are composed of nano-crystallites of approximately 5 to 10 nm in size, which are aligned in high mutual order as in mesocrystals. The analysis of precipitation at pH = 9.75 in the presence of additives (polyacrylic acid (pAA) as a model compound for scale inhibitors, and peptides with calcium carbonate binding affinity as model compounds for crystal modifiers) shows that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution in favour of crystals that form from ACC I, and exclusively calcite is finally obtained.
Concurrently, the peptide additives analogously inhibit the formation of calcite, and exclusively vaterite is finally obtained in the case of one of the peptide additives. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate. The metastable system is stabilized remarkably due to cluster formation, and the clusters forming by means of equilibrium thermodynamics, not single ions, are the nucleation-relevant species. Most likely, the concept of cluster formation is a common phenomenon occurring during the precipitation of hardly soluble compounds, as qualitatively shown for calcium oxalate and calcium phosphate. This finding is important for the fundamental understanding of crystallization and of nucleation inhibition and modification by additives, with impact on materials of great scientific and industrial importance, as well as for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and also of scale inhibition, need to be considered on the basis of the newly reported reaction channel.
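For reference, the binding strength in the clusters and the quoted standard Gibbs energy are connected by the standard thermodynamic relation (a textbook identity, not a result specific to this thesis):

    \Delta G^{\circ} = -RT \ln K,

where K is the equilibrium constant of the multiple-binding equilibrium of calcium ions to the carbonate 'lattice'; a larger (less negative) \Delta G^{\circ} corresponds to a smaller K and hence to the weaker ion binding in clusters reported above for pH = 9.75.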
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: (1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? (2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? (3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia which, despite being the region most affected by shrub invasion, particularly since the beginning of the 21st century, remains poorly understood with regard to the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was run to quantitatively reconstruct past vegetation cover. For this, we first made pollen productivity estimates (PPE) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur over a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of Combretum open woodlands, making them more susceptible to change to an encroached state by stochastic events such as consecutive years of precipitation and drought, and by high concentrations of pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
Matching participants (as suggested by Hope, 2015) may be one promising option for research on a potential bilingual advantage in executive functions (EF). In this study we first compared performances in three EF-tasks of a naturally heterogeneous sample of monolingual (n = 69, age = 9.0 y) and multilingual children (n = 57, age = 9.3 y). Secondly, we meticulously matched participants pairwise to obtain two highly homogeneous groups to rerun our analysis and investigate a potential bilingual advantage. The initially disadvantaged multilinguals (regarding socioeconomic status and German lexicon size) performed worse in updating and response inhibition, but similarly in interference inhibition. This indicates that superior EF compensate for the detrimental effects of the background variables. After matching children pairwise on age, gender, intelligence, socioeconomic status and German lexicon size, performances became similar except for interference inhibition. Here, an advantage for multilinguals in the form of globally reduced reaction times emerged, indicating a bilingual executive processing advantage.
A phagocyte-specific Irf8 gene enhancer establishes early conventional dendritic cell commitment
(2011)
Haematopoietic development is a complex process that is strictly hierarchically organized. The phagocyte lineages form a very heterogeneous cell compartment with specialized functions in innate immunity and in the induction of adaptive immune responses. Their generation from a common precursor must be tightly controlled. Interference with lineage formation programs, for example by mutation or by changes in the expression levels of transcription factors (TFs), can cause leukaemia. However, the molecular mechanisms driving specification into distinct phagocytes remain poorly understood. In the present study I identify the transcription factor Interferon Regulatory Factor 8 (IRF8) as the specification factor of dendritic cell (DC) commitment in early phagocyte precursors. Employing an IRF8 reporter mouse, I characterized the distinct Irf8 expression pattern during haematopoietic lineage diversification and isolated a novel bone marrow-resident progenitor which selectively differentiates into CD8α+ conventional dendritic cells (cDCs) in vivo. This progenitor strictly depends on Irf8 expression to properly establish its transcriptional DC program while suppressing a lineage-inappropriate neutrophil program. Moreover, I demonstrated that Irf8 expression during this cDC commitment step depends on a newly discovered myeloid-specific cis-enhancer which is controlled by the haematopoietic transcription factors PU.1 and RUNX1. Interference with their binding leads to abrogation of Irf8 expression and subsequently to disturbed cell fate decisions, demonstrating the importance of these factors for proper phagocyte development. Collectively, these data delineate a transcriptional program establishing cDC fate choice with IRF8 at its centre.
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, as monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that graphene on Ni(111) is, after intercalation of Au, spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented. It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there lead to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example is the freshwater fish genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne, relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
Background: For omics experiments, detailed characterisation of the experimental material with respect to its genetic features, its cultivation history and its treatment history is a requirement for analyses by bioinformatics tools and for publication. Furthermore, meta-analysis of several experiments in systems biology-based approaches makes it necessary to store this information in a standardised manner, preferably in relational databases. In the Golm Plant Database System, we devised a data management system based on a classical Laboratory Information Management System combined with web-based user interfaces for data entry and retrieval to collect this information in an academic environment.
Results: The database system contains modules representing the genetic features of the germplasm, the experimental conditions and the sampling details. In the germplasm module, genetically identical lines of biological material are generated by defined workflows, starting with the import workflow, followed by further workflows like genetic modification (transformation), vegetative or sexual reproduction. The latter workflows link lines and thus create pedigrees. For experiments, plant objects are generated from plant lines and united in so-called cultures, to which the cultivation conditions are linked. Materials and methods for each cultivation step are stored in a separate ACCESS database of the plant cultivation unit. For all cultures and thus every plant object, each cultivation site and the culture's arrival time at a site are logged by a barcode-scanner based system. Thus, for each plant object, all site-related parameters, e. g. automatically logged climate data, are available. These life history data and genetic information for the plant objects are linked to analytical results by the sampling module, which links sample components to plant object identifiers. This workflow uses controlled vocabulary for organs and treatments. Unique names generated by the system and barcode labels facilitate identification and management of the material. Web pages are provided as user interfaces to facilitate maintaining the system in an environment with many desktop computers and a rapidly changing user community. Web based search tools are the basis for joint use of the material by all researchers of the institute.
Conclusion: The Golm Plant Database system, which is based on a relational database, collects the genetic and environmental information on plant material during its production or experimental use at the Max-Planck-Institute of Molecular Plant Physiology. It thus provides information according to the MIAME standard for the component 'Sample' in a highly standardised format. The Plant Database system thus facilitates collaborative work and allows efficient queries in data analysis for systems biology research.
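A minimal sketch (hypothetical table and column names, not the actual Golm Plant Database schema) of how the modules described above (lines created by workflows, cultures with cultivation conditions, plant objects with barcodes and site logs, and samples linking plants to analytical results) could map onto a relational schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE line     (line_id INTEGER PRIMARY KEY, name TEXT UNIQUE, parent_line_id INTEGER,
                           workflow TEXT,                 -- e.g. import, transformation, reproduction
                           FOREIGN KEY (parent_line_id) REFERENCES line(line_id));
    CREATE TABLE culture  (culture_id INTEGER PRIMARY KEY, conditions TEXT);
    CREATE TABLE plant    (plant_id INTEGER PRIMARY KEY, barcode TEXT UNIQUE,
                           line_id INTEGER REFERENCES line(line_id),
                           culture_id INTEGER REFERENCES culture(culture_id));
    CREATE TABLE site_log (plant_id INTEGER REFERENCES plant(plant_id),
                           site TEXT, arrival_time TEXT); -- barcode-scanner based logging
    CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY, plant_id INTEGER REFERENCES plant(plant_id),
                           organ TEXT, treatment TEXT);   -- controlled vocabulary in the real system
    """)
    print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())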
A polymer analogous reaction for the formation of imidazolium and NHC based porous polymer networks
(2013)
A polymer analogous reaction was carried out to generate a porous polymeric network with N-heterocyclic carbenes (NHC) in the polymer backbone. Using a stepwise approach, first a polyimine network is formed by polymerization of the tetrafunctional amine tetrakis(4-aminophenyl)methane. This polyimine network is converted in the second step into polyimidazolium chloride and finally to a polyNHC network. Furthermore a porous Cu(II)-coordinated polyNHC network can be generated. Supercritical drying generates polymer networks with high permanent surface areas and porosities which can be applied for different catalytic reactions. The catalytic properties were demonstrated for example in the activation of CO2 or in the deoxygenation of sulfoxides to the corresponding sulfides.
The German Sonderweg thesis has been discarded in most research fields. Yet with regard to the military, things differ: all conflicts before the Second World War are interpreted as a prelude to the war of extermination of 1939–1945. This article looks specifically at the Franco-Prussian War of 1870–71 and German behaviour vis-à-vis regular combatants, civilians and irregular guerrilla fighters, the so-called francs-tireurs. The author argues that the counter-measures were not exceptional for nineteenth-century warfare and also shows how selective reading of the existing secondary literature has distorted our view of the war.
Extreme weather events are likely to occur more often under climate change, and the resulting effects on ecosystems could lead to a further acceleration of climate change. However, not all extreme weather events lead to an extreme ecosystem response. Here, we focus on hazardous ecosystem behaviour and identify the coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Following risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the SPEI to describe the climate condition. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. In northern Germany, on the contrary, the site is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is found in the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortages. These first model-based applications indicate the conceptual advantages of the proposed method by focusing on the identification of critical weather conditions for which we observe hazardous ecosystem behaviour in the analysed data set. Application of the method to empirical time series and to future climate would be important next steps to test the approach.
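A minimal illustration (synthetic placeholder data, not the study's code or model output) of the conditional-expectation comparison described above: the hazard is defined as negative 12-month net biome productivity, and vulnerability to drought is indicated when the expected SPEI under hazardous behaviour is lower (drier) than under non-hazardous behaviour.

    import numpy as np

    rng = np.random.default_rng(0)
    spei = rng.normal(0.0, 1.0, size=360)                  # placeholder monthly climate index
    nbp_12m = 0.5 * spei + rng.normal(0.0, 0.5, size=360)  # placeholder 12-month net biome productivity

    hazard = nbp_12m < 0                                   # hazardous ecosystem behaviour
    delta = spei[hazard].mean() - spei[~hazard].mean()     # difference of conditional expectations
    print(f"E[SPEI | hazard] - E[SPEI | no hazard] = {delta:.2f}")
    # A clearly negative difference (e.g. -0.4 as for the Spanish site) indicates vulnerability to drought.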
Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes in the field of adolescents with chronic conditions. Therefore, the primary aim of the study is to acquire a deeper understanding of the dynamic process of resilience factors, coping strategies and psychosocial adjustment of adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – “inconspicuous” (PHQ-9 and GAD-7 < 7) vs. “conspicuous” (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and 12-month follow-up. At two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor Covid-19 pandemic effects, participants are also invited to take part in an intermediate online survey.
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
Dynamic earthquake rupture modeling provides information on the rupture physics, such as the rupture velocity, the friction, or the tractions acting during the rupture process. Nevertheless, since it is often based on preset, spatially gridded geometries, dynamic modeling depends on many free parameters, leading both to a high non-uniqueness of the results and to large computation times. This limits the feasibility of a full Bayesian error analysis.
To address these problems, we developed the quasi-dynamic rupture model presented in this work. It combines the kinematic Eikonal rupture model with a boundary element method for quasi-static slip calculation.
The orientation of the modeled rupture plane is defined by a previously performed moment tensor inversion. The simultaneously inverted scalar seismic moment allows an estimation of the extent of the rupture. The modeled rupture plane is discretized by a set of rectangular boundary elements. For each boundary element, an applied traction vector is defined as the boundary value.
For insights into the dynamic rupture behaviour, the rupture front propagation is calculated for incremental time steps based on the 2D Eikonal equation. The required location-dependent rupture velocity field is assumed to scale linearly with a layered shear-wave velocity field.
At each time step, all boundary elements enclosed within the rupture front are used to calculate the quasi-static slip distribution. Neither friction nor stress propagation is considered; the algorithm is therefore termed 'quasi-static'. A series of the resulting quasi-static slip snapshots can be used as a quasi-dynamic model of the rupture process.
Since much a priori information is taken from the earth model (shear-wave velocity and elastic parameters) and the moment tensor inversion (rupture extent and orientation), our model depends on only a few free parameters: the traction field, the linear factor between rupture and shear-wave velocity, and the nucleation point and time. Hence, stable and fast modeling results are obtained, as proven by comparison to different infinite and finite static crack solutions.
First dynamic applications show promising results. The location-dependent rise time is automatically derived by the model. Different simple kinematic models, such as the slip-pulse or the penny-shaped crack model, can be reproduced, as well as their corresponding slip rate functions. A source time function (STF) approximation, calculated from the cumulative sum of moment rates of each boundary element, gives results similar to theoretically and empirically known STFs.
The model was also applied to the 2015 Illapel earthquake. Using a simple rectangular rupture geometry and a two-layered traction regime yields good estimates of both the rupture front propagation and the slip patterns, which are comparable to literature results. The STF approximation shows a good fit with previously published STFs.
The quasi-dynamic rupture model is hence able to calculate reproducible slip results quickly, which will allow full Bayesian error analysis to be tested in the future. Further work on a full seismic source inversion, or even a traction field inversion, could also extend the scope of our model.
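A conceptual sketch (not the authors' implementation) of the Eikonal front-propagation step described above: rupture onset times on a gridded fault plane are computed from a nucleation point, with the rupture velocity assumed to scale linearly with a layered shear-wave velocity field. A Dijkstra shortest-travel-time search over grid neighbours is used here as a simple stand-in for a proper 2D Eikonal solver; grid size, velocities and the scaling factor are placeholders.

    import heapq
    import numpy as np

    def rupture_onset_times(velocity, dx, nucleation):
        """Approximate first-arrival (onset) times of the rupture front on a gridded plane."""
        ny, nx = velocity.shape
        times = np.full((ny, nx), np.inf)
        times[nucleation] = 0.0
        heap = [(0.0, nucleation)]
        while heap:
            t, (i, j) = heapq.heappop(heap)
            if t > times[i, j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    step = dx * np.hypot(di, dj)
                    v = 0.5 * (velocity[i, j] + velocity[ni, nj])   # local rupture velocity
                    t_new = t + step / v
                    if t_new < times[ni, nj]:
                        times[ni, nj] = t_new
                        heapq.heappush(heap, (t_new, (ni, nj)))
        return times

    # Layered shear-wave velocity (two layers), scaled by an assumed constant factor to rupture velocity.
    vs = np.where(np.arange(40)[:, None] < 20, 3000.0, 3500.0) * np.ones((40, 60))
    rupture_velocity = 0.8 * vs
    onset = rupture_onset_times(rupture_velocity, dx=500.0, nucleation=(20, 30))
    print(onset.round(2)[::10, ::15])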
Transport molecules play a crucial role for cell viability. Among others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion, with each step of the motor, either forwards or backwards, bridging a fixed distance. While moving along the rope, the motor can also detach and is then lost. Here we give a mathematical formalization of such dynamics as a random process that extends random walks by an absorbing state, which models the detachment of the motor from the rope. We derive particular properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to a concrete established model of the transport molecule Kinesin V.
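A minimal simulation sketch (illustrative parameters, not the paper's model or its analytical results) of the process described above: a motor steps forwards or backwards by a fixed step size, detaches with a small probability per step (the absorbing state), and we record the detachment position and the maximal distance reached.

    import random

    def run_motor(p_forward=0.7, p_detach=0.02, step=8e-9, max_steps=100_000, rng=None):
        rng = rng or random.Random()
        position, max_position = 0.0, 0.0
        for _ in range(max_steps):
            if rng.random() < p_detach:                 # motor detaches from the filament (absorption)
                break
            position += step if rng.random() < p_forward else -step
            max_position = max(max_position, position)
        return position, max_position

    detach_pos, max_dist = zip(*(run_motor(rng=random.Random(seed)) for seed in range(1000)))
    print(f"mean detachment position: {sum(detach_pos)/len(detach_pos):.2e} m, "
          f"mean maximal distance: {sum(max_dist)/len(max_dist):.2e} m")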
Let A be a nonlinear differential operator on an open set X in R^n and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A(u) = 0 of class F in the complement of S satisfies this equation weakly in all of X. For the most extensively studied classes F, we give conditions on S which guarantee that S is removable for F relative to A.
G protein-coupled receptor (GPCR) genes form large gene families in every animal, sometimes making up 1-2% of the animal's genome. Of all insect GPCRs, the neurohormone (neuropeptide, protein hormone, biogenic amine) GPCRs are especially important, because they, together with their ligands, occupy a high hierarchical position in the physiology of insects and steer crucial processes such as development, reproduction, and behavior. In this paper, we review our current knowledge of Drosophila melanogaster GPCRs and use this information to annotate the neurohormone GPCR genes present in the recently sequenced genome of the honey bee Apis mellifera. We found 35 neuropeptide receptor genes in the honey bee (44 in Drosophila) and two genes coding for leucine-rich repeat-containing protein hormone GPCRs (4 in Drosophila). In addition, the honey bee has 19 biogenic amine receptor genes (21 in Drosophila). The larger numbers of neurohormone receptors in Drosophila are probably due to gene duplications that occurred during the recent evolution of the fly. Our analyses also yielded the likely ligands for 40 of the 56 honey bee neurohormone GPCRs identified in this study. In addition, we made some interesting observations on neurohormone GPCR evolution and the evolution and co-evolution of their ligands. For neuropeptide and protein hormone GPCRs, there appears to be a general co-evolution between receptors and their ligands. This is in contrast to biogenic amine GPCRs, where evolutionarily unrelated GPCRs often bind to the same biogenic amine, suggesting frequent ligand exchanges ("ligand hops") during GPCR evolution.
In a bounded domain with smooth boundary in R^3 we consider the stationary Maxwell equations for a function u with values in R^3, subject to the nonhomogeneous condition (u,v)_x = u_0 on the boundary, where v is a given vector field and u_0 a function on the boundary. We specify this problem within the framework of Riemann-Hilbert boundary value problems for the Moisil-Teodorescu system. The latter is proved to satisfy the Shapiro-Lopatinskij condition if and only if the vector field v is at no point tangent to the boundary. The Riemann-Hilbert problem for the Moisil-Teodorescu system fails to possess an adjoint boundary value problem with respect to the Green formula which satisfies the Shapiro-Lopatinskij condition. We develop the construction of the Green formula to obtain a proper concept of the adjoint boundary value problem.
Salinity is a significant factor for structuring microbial communities, but little is known for aquatic fungi, particularly in the pelagic zone of brackish ecosystems. In this study, we explored the diversity and composition of fungal communities following a progressive salinity decline (from 34 to 3 PSU) along three transects of ca. 2000 km in the Baltic Sea, the world’s largest estuary. Based on 18S rRNA gene sequence analysis, we detected clear changes in fungal community composition along the salinity gradient and found significant differences in composition of fungal communities established above and below a critical value of 8 PSU. At salinities below this threshold, fungal communities resembled those from freshwater environments, with a greater abundance of Chytridiomycota, particularly of the orders Rhizophydiales, Lobulomycetales, and Gromochytriales. At salinities above 8 PSU, communities were more similar to those from marine environments and, depending on the season, were dominated by a strain of the LKM11 group (Cryptomycota) or by members of Ascomycota and Basidiomycota. Our results highlight salinity as an important environmental driver also for pelagic fungi, and thus should be taken into account to better understand fungal diversity and ecological function in the aquatic realm.
A digital filter is introduced which treats the problem of predictability versus time averaging in a continuous, seamless manner. This seamless filter (SF) is characterized by a unique smoothing rule that determines the strength of smoothing in dependence on lead time. The rule needs to be specified beforehand, either by expert knowledge or by user demand. As a result, skill curves are obtained that allow a predictability assessment across a whole range of time scales, from daily to seasonal, in a uniform manner. The SF is applied to downscaled SEAS5 ensemble forecasts for two focus regions in or near the tropical belt, the river basins of the Karun in Iran and the Sao Francisco in Brazil. Both are characterized by strong seasonality and semi-aridity, so that predictability across various time scales is in high demand. Among other things, it is found that from the start of the water year (autumn), areal precipitation is predictable with good skill two and a half months ahead for the Karun basin; for the Sao Francisco it is only one month, and longer-term prediction skill is just above the critical level.
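A toy illustration of the idea of lead-time-dependent smoothing (an assumption for the example only; the actual SF and its smoothing rule are defined in the paper): a running mean whose window widens with lead time according to a prescribed rule, so that short lead times stay nearly unsmoothed while long lead times approach a seasonal average.

    import numpy as np

    def seamless_smooth(forecast, window_rule):
        """Smooth a forecast series with a window that grows with lead time (toy version)."""
        smoothed = np.empty_like(forecast, dtype=float)
        for lead in range(forecast.size):
            half = window_rule(lead)                           # smoothing strength at this lead time
            lo, hi = max(0, lead - half), min(forecast.size, lead + half + 1)
            smoothed[lead] = forecast[lo:hi].mean()
        return smoothed

    daily_forecast = np.random.rand(210)                       # ~7 months of daily values (placeholder)
    rule = lambda lead: int(lead // 7)                         # example rule: window grows with lead time
    smoothed = seamless_smooth(daily_forecast, rule)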
As a result of CMOS scaling, radiation-induced Single-Event Effects (SEEs) in electronic circuits became a critical reliability issue for modern Integrated Circuits (ICs) operating under harsh radiation conditions. SEEs can be triggered in combinational or sequential logic by the impact of high-energy particles, leading to destructive or non-destructive faults, resulting in data corruption or even system failure. Typically, the SEE mitigation methods are deployed statically in processing architectures based on the worst-case radiation conditions, which is most of the time unnecessary and results in a resource overhead. Moreover, the space radiation conditions are dynamically changing, especially during Solar Particle Events (SPEs). The intensity of space radiation can differ over five orders of magnitude within a few hours or days, resulting in several orders of magnitude fault probability variation in ICs during SPEs. This thesis introduces a comprehensive approach for designing a self-adaptive fault resilient multiprocessing system to overcome the static mitigation overhead issue. This work mainly addresses the following topics: (1) Design of on-chip radiation particle monitor for real-time radiation environment detection, (2) Investigation of space environment predictor, as support for solar particle events forecast, (3) Dynamic mode configuration in the resilient multiprocessing system. Therefore, according to detected and predicted in-flight space radiation conditions, the target system can be configured to use no mitigation or low-overhead mitigation during non-critical periods of time. The redundant resources can be used to improve system performance or save power. On the other hand, during increased radiation activity periods, such as SPEs, the mitigation methods can be dynamically configured appropriately depending on the real-time space radiation environment, resulting in higher system reliability. Thus, a dynamic trade-off in the target system between reliability, performance and power consumption in real-time can be achieved. All results of this work are evaluated in a highly reliable quad-core multiprocessing system that allows the self-adaptive setting of optimal radiation mitigation mechanisms during run-time. Proposed methods can serve as a basis for establishing a comprehensive self-adaptive resilient system design process. Successful implementation of the proposed design in the quad-core multiprocessor shows its application perspective also in the other designs.
Fixational eye movements show scaling behaviour of the positional mean-squared displacement with a characteristic transition from persistence to antipersistence for increasing time lag. These statistical patterns were found to be mainly shaped by microsaccades (fast, small-amplitude movements). However, our re-analysis of fixational eye-movement data provides evidence that the slow component (physiological drift) of the eyes exhibits scaling behaviour of the mean-squared displacement that varies across human participants. These results suggest that drift is a correlated movement that interacts with microsaccades. Moreover, on the long time scale, the mean-squared displacement of the drift shows oscillations, which are also present in the displacement auto-correlation function. This finding lends support to the presence of time-delayed feedback in the control of drift movements. Based on an earlier non-linear delayed feedback model of fixational eye movements, we propose and discuss different versions of a new model that combines a self-avoiding walk with time delay. As a result, we identify a model that reproduces oscillatory correlation functions, the transition from persistence to antipersistence, and microsaccades.
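A short sketch of how the positional mean-squared displacement (MSD) as a function of time lag can be estimated from a recorded 2D trajectory (a generic estimator with placeholder data, not the study's analysis pipeline or its eye-movement models):

    import numpy as np

    def mean_squared_displacement(xy, max_lag):
        """MSD over time lags for a trajectory of shape (n_samples, 2)."""
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            diff = xy[lag:] - xy[:-lag]
            msd[lag - 1] = np.mean(np.sum(diff**2, axis=1))
        return msd

    trajectory = np.cumsum(np.random.randn(5000, 2), axis=0)   # placeholder: pure Brownian walk
    msd = mean_squared_displacement(trajectory, max_lag=500)
    # In a log-log plot, the local slope of msd vs. lag indicates persistence (>1) or antipersistence (<1).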
Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components. This helps to reveal underlying dynamics, to identify potential environmental drivers and, thus, to calculate reliable CH4 emission estimates. The flux separation is based on the identification of ebullition-related sudden concentration changes during single measurements. For this, a variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved by using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was validated in a laboratory experiment and tested on flux measurement data (July to September 2013) from a former fen grassland site that was converted into a shallow lake as a result of rewetting. Ebullition and diffusion contributed comparable shares (46 % and 55 %, respectively) to total CH4 emissions, which is comparable to ratios given in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period. The water temperature gradient was identified as one of the major drivers of diffusive CH4 emissions, whereas no significant driver was found for the erratic CH4 ebullition events.
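A simplified sketch of the quartile-based ebullition filter described above (synthetic data and the fence factor k are assumptions; the original analysis is an adjusted R script): sudden concentration jumps during a single chamber measurement are flagged as ebullition when they fall outside an IQR-based range of the concentration increments.

    import numpy as np

    def split_fluxes(concentration, k=1.5):
        increments = np.diff(concentration)
        q1, q3 = np.percentile(increments, [25, 75])
        iqr = q3 - q1
        ebullition_mask = (increments < q1 - k * iqr) | (increments > q3 + k * iqr)
        ebullition = increments[ebullition_mask].sum()          # abrupt, bubble-driven changes
        diffusion = increments[~ebullition_mask].sum()          # smooth, gradient-driven changes
        return diffusion, ebullition

    conc = np.cumsum(np.r_[np.full(200, 0.01), 0.8, np.full(100, 0.01)])  # synthetic trace with one bubble event
    print(split_fluxes(conc))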
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the ice loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-enforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate has so far relied on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at gaining a conceptual understanding by deriving a purposefully simple equation for the self-enforcing feedback, which is then used to estimate the melt time for different levels of warming from three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature and omits important processes, such as ice dynamics, that would be needed for applications to sea-level rise on centennial timescales. However, if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 degrees C above the threshold and 500 years for 5 degrees C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/zeta(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1 and Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
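Since the abstract stresses that the reformulated condition is amenable to numerical experiments, a minimal sketch of such an experiment is evaluating 1/zeta(s) at sample points of the quarter-strip with arbitrary-precision arithmetic. The points, precision and output format below are arbitrary choices for illustration and are not the experiments referred to in the paper.

```python
from mpmath import mp, zeta, mpc

mp.dps = 30  # working precision in decimal digits

# Evaluate 1/zeta(s) at a few points of the quarter-strip 1/2 < Re s < 1, Im s > 0.
# Any nonreal zero of zeta inside the strip would appear as a pole of 1/zeta,
# obstructing the analytic continuation that the Riemann hypothesis asserts.
for sigma in (0.6, 0.75, 0.9):
    for t in (5, 50, 500):
        s = mpc(sigma, t)
        print(f"s = {sigma} + {t}i,  |1/zeta(s)| = {abs(1 / zeta(s))}")
```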
Due to the enhanced electromagnetic field at the tips of metal nanoparticles, the spiked structure of gold nanostars (AuNSs) is promising for surface-enhanced Raman scattering (SERS). Therefore, the challenge is the synthesis of well designed particles with sharp tips. The influence of different surfactants, i.e., dioctyl sodium sulfosuccinate (AOT), sodium dodecyl sulfate (SDS), and benzylhexadecyldimethylammonium chloride (BDAC), as well as the combination of surfactant mixtures on the formation of nanostars in the presence of Ag⁺ ions and ascorbic acid was investigated. By varying the amount of BDAC in mixed micelles the core/spike-shell morphology of the resulting AuNSs can be tuned from small cores to large ones with sharp and large spikes. The concomitant red-shift in the absorption toward the NIR region without losing the SERS enhancement enables their use for biological applications and for time-resolved spectroscopic studies of chemical reactions, which require a permanent supply with a fresh and homogeneous solution. HRTEM micrographs and energy-dispersive X-ray (EDX) experiments allow us to verify the mechanism of nanostar formation according to the silver underpotential deposition on the spike surface in combination with micelle adsorption.
Social segregation in cities takes place where different household groups exist and when, according to Schelling, their location choice either minimizes the number of differing households in their neighborhood or maximizes the share of their own group. In this contribution, an evolutionary simulation based on a monocentric city model with externalities among households is used to discuss the spatial segregation patterns of four groups. The resulting complex spatial patterns can be shown as graphic animations and can serve as the initial situation for analyzing the effects of rent control on segregation.
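The location-choice rule attributed to Schelling can be illustrated with the classic two-group grid model. The sketch below shows only that textbook version (two groups, random relocation of unhappy households to vacant cells); it is not the four-group evolutionary simulation on a monocentric city with externalities used in the contribution, and the grid size, vacancy share and tolerance threshold are arbitrary.

```python
import random

SIZE, EMPTY_FRAC, THRESHOLD = 30, 0.1, 0.5  # grid size, vacancy share, tolerance
random.seed(1)

grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
empty = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]

def unhappy(x, y):
    """Schelling's rule: a household wants to move if fewer than THRESHOLD of
    its (toroidal) neighbors belong to its own group."""
    own, nb = grid[x][y], []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                c = grid[(x + dx) % SIZE][(y + dy) % SIZE]
                if c is not None:
                    nb.append(c)
    return bool(nb) and sum(c == own for c in nb) / len(nb) < THRESHOLD

for _ in range(50_000):  # randomly relocate unhappy households to vacant cells
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if grid[x][y] is not None and unhappy(x, y):
        k = random.randrange(len(empty))
        ex, ey = empty[k]
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empty[k] = (x, y)

print("\n".join("".join(c or "." for c in row) for row in grid))
```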
Cytochrome P450 17A1 (CYP17A1) catalyses the formation and metabolism of steroid hormones. They are involved in blood pressure (BP) regulation and in the pathogenesis of left ventricular hypertrophy. Therefore, altered function of CYP17A1 due to genetic variants may influence BP and left ventricular mass. Notably, genome wide association studies supported the role of this enzyme in BP control. Against this background, we investigated associations between single nucleotide polymorphisms (SNPs) in or nearby the CYP17A1 gene with BP and left ventricular mass in patients with arterial hypertension and associated cardiovascular organ damage treated according to guidelines. Patients (n = 1007, mean age 58.0 ± 9.8 years, 83% men) with arterial hypertension and cardiac left ventricular ejection fraction (LVEF) ≥40% were enrolled in the study. Cardiac parameters of left ventricular mass, geometry and function were determined by echocardiography. The cohort comprised patients with coronary heart disease (n = 823; 81.7%) and myocardial infarction (n = 545; 54.1%) with a mean LVEF of 59.9% ± 9.3%. The mean left ventricular mass index (LVMI) was 52.1 ± 21.2 g/m2.7 and 485 (48.2%) patients had left ventricular hypertrophy. There was no significant association of any investigated SNP (rs619824, rs743572, rs1004467, rs11191548, rs17115100) with mean 24 h systolic or diastolic BP. However, carriers of the rs11191548 C allele demonstrated a 7% increase in LVMI (95% CI: 1%–12%, p = 0.017) compared to non-carriers. The CYP17A1 polymorphism rs11191548 demonstrated a significant association with LVMI in patients with arterial hypertension and preserved LVEF. Thus, CYP17A1 may contribute to cardiac hypertrophy in this clinical condition.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist, which constantly generate spatio-temporal data. This includes for example traffic surveillance systems, which gather movement data about human or vehicle movements, remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes, as well as sensor networks in different domains, such as logistics, animal behavior study, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first covers the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, which have been recorded over the period of a month. By applying the interactive visualization methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enables them to find patterns in the climate data and to identify, for example, clusters in the networks or flow patterns.
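In the GPU-based pipeline described above, per-node attributes are mapped to visual properties by shader programs directly in GPU memory. The sketch below reproduces only the mapping stage on the CPU (a simple color transfer function applied to a per-node attribute) to illustrate the concept; it is not the GPU implementation from the thesis, and the color stops and attribute values are arbitrary.

```python
import numpy as np

# A simple transfer function: linearly interpolate between color stops.
# In the described pipeline this mapping runs in shader programs on the GPU;
# here it is reproduced on the CPU purely to illustrate the mapping stage.
COLOR_STOPS = np.array([[0.0, 0.0, 1.0],   # low values  -> blue
                        [1.0, 1.0, 0.0],   # mid values  -> yellow
                        [1.0, 0.0, 0.0]])  # high values -> red

def map_attribute_to_color(values: np.ndarray) -> np.ndarray:
    """Normalize a per-node attribute (e.g. speed or altitude) to [0, 1] and
    map it through the transfer function; returns an (n, 3) RGB array."""
    v = (values - values.min()) / (np.ptp(values) or 1.0)
    pos = v * (len(COLOR_STOPS) - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(COLOR_STOPS) - 1)
    frac = (pos - lo)[:, None]
    return (1 - frac) * COLOR_STOPS[lo] + frac * COLOR_STOPS[hi]

speeds = np.array([120.0, 250.0, 310.0, 180.0, 90.0])  # hypothetical attribute
print(map_attribute_to_color(speeds).round(2))
```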
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
The zero-noise limit of differential equations with singular coefficients is investigated for the first time in the case when the noise is a general alpha-stable process. It is proved that extremal solutions are selected and the probability of selection is computed. A detailed analysis of the characteristic function of an exit time from the half-line is performed, with a suitable decomposition into small and large jumps adapted to the singular drift.
Solar wind observations show that geomagnetic storms are mainly driven by interplanetary coronal mass ejections (ICMEs) and corotating or stream interaction regions (C/SIRs). We present a binary classifier that assigns one of these drivers to 7,546 storms between 1930 and 2015 using ground‐based geomagnetic field observations only. The input data consists of the long‐term stable Hourly Magnetospheric Currents index alongside the corresponding midlatitude geomagnetic observatory time series. This data set provides comprehensive information on the global storm time magnetic disturbance field, particularly its spatial variability, over eight solar cycles. For the first time, we use this information statistically with regard to an automated storm driver identification. Our supervised classification model significantly outperforms unskilled baseline models (78% accuracy with 26[19]% misidentified interplanetary coronal mass ejections [corotating or stream interaction regions]) and delivers plausible driver occurrences with regard to storm intensity and solar cycle phase. Our results can readily be used to advance related studies fundamental to space weather research, for example, studies connecting galactic cosmic ray modulation and geomagnetic disturbances. They are fully reproducible by means of the underlying open‐source software (Pick, 2019, http://doi.org/10.5880/GFZ.2.3.2019.003)
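The driver identification is framed as supervised binary classification on features derived from the Hourly Magnetospheric Currents index and midlatitude observatory time series. The sketch below is only a generic scikit-learn illustration of that setup, with synthetic placeholder features and labels; the feature definitions, model choice and accuracy figures of the published, open-source pipeline are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder feature matrix: one row per storm, columns standing in for
# summary statistics of the storm-time disturbance field (e.g. minimum index
# value, recovery time, spatial variability across observatories).
n_storms, n_features = 500, 8
X = rng.normal(size=(n_storms, n_features))
# Placeholder labels: 1 = ICME-driven, 0 = C/SIR-driven, loosely tied to the
# first two features so that the toy problem is learnable.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_storms) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```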
In this combined theoretical and experimental study we report a full analysis of the resonant inelastic X-ray scattering (RIXS) spectra of H2O, D2O and HDO. We demonstrate that electronically-elastic RIXS has an inherent capability to map the potential energy surface and to perform vibrational analysis of the electronic ground state in multimode systems. We show that the control and selection of vibrational excitation can be performed by tuning the X-ray frequency across core-excited molecular bands and that this is clearly reflected in the RIXS spectra. Using high level ab initio electronic structure and quantum nuclear wave packet calculations together with high resolution RIXS measurements, we discuss in detail the mode coupling, mode localization and anharmonicity in the studied systems.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to the shape changes by which the cell adapts to its environment and to the movement and distribution of nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Functional analyses between species were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effect of four different mutation methods on the function of Coronin-A protein and the resulting phenotype in D. discoideum was revealed in two genetic knockouts, one RNAi knockdown and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods on the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in the plant and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem: DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Introduction
To date, several meta-analyses clearly demonstrated that resistance and plyometric training are effective to improve physical fitness in children and adolescents. However, a methodological limitation of meta-analyses is that they synthesize results from different studies and hence ignore important differences across studies (i.e., mixing apples and oranges). Therefore, we aimed at examining comparative intervention studies that assessed the effects of age, sex, maturation, and resistance or plyometric training descriptors (e.g., training intensity, volume etc.) on measures of physical fitness while holding other variables constant.
Methods
To identify relevant studies, we systematically searched multiple electronic databases (e.g., PubMed) from inception to March 2018. We included resistance and plyometric training studies in healthy young athletes and non-athletes aged 6 to 18 years that investigated the effects of moderator variables (e.g., age, maturity, sex, etc.) on components of physical fitness (i.e., muscle strength and power).
Results
Our systematic literature search revealed a total of 75 eligible resistance and plyometric training studies, including 5,138 participants. Mean duration of resistance and plyometric training programs amounted to 8.9 ± 3.6 weeks and 7.1±1.4 weeks, respectively. Our findings showed that maturation affects plyometric and resistance training outcomes differently, with the former eliciting greater adaptations pre-peak height velocity (PHV) and the latter around- and post-PHV. Sex has no major impact on resistance training related outcomes (e.g., maximal strength, 10 repetition maximum). In terms of plyometric training, around-PHV boys appear to respond with larger performance improvements (e.g., jump height, jump distance) compared with girls. Different types of resistance training (e.g., body weight, free weights) are effective in improving measures of muscle strength (e.g., maximum voluntary contraction) in untrained children and adolescents. Effects of plyometric training in untrained youth primarily follow the principle of training specificity. Despite the fact that only 6 out of 75 comparative studies investigated resistance or plyometric training in trained individuals, positive effects were reported in all 6 studies (e.g., maximum strength and vertical jump height, respectively).
Conclusions
The present review article identified research gaps (e.g., training descriptors, modern alternative training modalities) that should be addressed in future comparative studies.
A systems biological approach towards the molecular basis of heterosis in Arabidopsis thaliana
(2011)
Heterosis is defined as the superiority in performance of heterozygous genotypes compared to their corresponding genetically different homozygous parents. This phenomenon has been known since the beginning of the last century and has been widely used in plant breeding, but the underlying genetic and molecular mechanisms are not well understood. In this work, a systems biological approach based on molecular network structures is proposed to contribute to the understanding of heterosis. Hybrids are likely to contain additional regulatory possibilities compared to their homozygous parents and, therefore, they may be able to correctly respond to a higher number of environmental challenges, which leads to a higher adaptability and, thus, the heterosis phenomenon. In the network hypothesis for heterosis presented in this work, more regulatory interactions are expected in the molecular networks of the hybrids compared to the homozygous parents. Partial correlations were used to assess this difference in the global interaction structure of regulatory networks between the hybrids and the homozygous genotypes. This network hypothesis for heterosis was tested on metabolite profiles as well as gene expression data of the two parental Arabidopsis thaliana accessions C24 and Col-0 and their reciprocal crosses. These plants are known to show a heterosis effect in their biomass phenotype. The hypothesis was confirmed for mid-parent and best-parent heterosis for either hybrid in our experimental metabolite as well as gene expression data. It was shown that this result is influenced by the cutoffs used during the analyses. Too strict filtering resulted in sets of metabolites and genes for which the network hypothesis for heterosis does not hold true for either hybrid regarding mid-parent as well as best-parent heterosis. In an over-representation analysis, the genes that show the largest heterosis effects according to our network hypothesis were compared to genes of heterotic quantitative trait loci (QTL) regions. Separately for either hybrid, regarding mid-parent as well as best-parent heterosis, a significantly larger overlap between the resulting gene lists of the two different approaches towards biomass heterosis was detected than expected by chance. This suggests that each heterotic QTL region contains many genes influencing biomass heterosis in the early development of Arabidopsis thaliana. Furthermore, this integrative analysis led to a confinement of, and an increased confidence in, the group of candidate genes for biomass heterosis in Arabidopsis thaliana identified by both approaches.
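The network hypothesis is assessed by comparing partial correlations between metabolites or transcripts across genotypes. As a hedged illustration of that quantity, the sketch below computes a partial correlation matrix from the inverse covariance (precision) matrix of a synthetic data set; the data, the threshold and the edge count are stand-ins and not the C24/Col-0 profiles analyzed in the thesis.

```python
import numpy as np

def partial_correlations(data: np.ndarray) -> np.ndarray:
    """Partial correlation matrix from a samples-by-variables array, obtained
    by standardizing the inverse covariance (precision) matrix:
        pcor_ij = -prec_ij / sqrt(prec_ii * prec_jj)."""
    prec = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Synthetic metabolite profiles: 50 samples of 10 variables with a chain of
# direct dependencies, standing in for parental vs. hybrid data sets.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 10))
x[:, 1] += 0.8 * x[:, 0]
x[:, 2] += 0.8 * x[:, 1]

pcor = partial_correlations(x)
# Count "interactions" above a threshold as a crude proxy for the number of
# regulatory connections compared between hybrids and parents.
print(int((np.abs(np.triu(pcor, k=1)) > 0.3).sum()), "edges above threshold")
```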
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of data and require new solutions to software and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease -- the primary cause of subsequent heart failure.
With the aim of ascertaining a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis as a prerequisite for collaborative research. Based on this platform, this thesis deals, on the one hand, with the transfer of the employed Systems Biology methods to the Systems Medicine context and, on the other hand, with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general software requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as comparison of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
Eatomics continued to help drive the collaborative analysis and interpretation of the proteomic profile of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. We find that AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism. Both effects are stronger in AS. In muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also find relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection of human tissue from in vivo collected biopsies to date. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Apart from the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load, but not under volume load, in the left ventricle, and we also provide the basis of a newly developed metabolic model of the heart.
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and the East Asian winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between the EASM and EAWM as well as the mechanisms driving their variability during the last 10,000 years by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional and proxy-independent signal for both monsoonal systems. The respective signal was subsequently analysed using a linear regression model. We find that the phase relationship between the EASM and EAWM is not constant in time and depends significantly on orbital configuration changes. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, the El Niño-Southern Oscillation and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climatic archives towards moisture source changes and/or seasonality.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
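The thesis' solver combines a discontinuous Galerkin discretization with a task-parallel, multigrid-Schwarz preconditioned Newton-Krylov iteration inside SpECTRE. The sketch below only illustrates the core Newton-Krylov idea on a toy 1D nonlinear Poisson problem using SciPy's matrix-free solver; there is no DG discretization, preconditioning or parallelism here, and the problem itself is an arbitrary stand-in rather than initial data for compact binaries.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear elliptic problem: u''(x) = exp(u(x)) on (0, 1), u(0) = u(1) = 0,
# discretized with second-order finite differences on n interior points.
n = 200
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))          # Dirichlet boundaries
    lap = (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2
    return lap - np.exp(u)

# Matrix-free Newton iteration with a Krylov (LGMRES) linear solve per step.
u = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-8)
print("max |residual| =", np.max(np.abs(residual(u))))
```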
Pollen records from Siberia are mostly absent in global or Northern Hemisphere synthesis works. Here we present a taxonomically harmonized and temporally standardized pollen dataset that was synthesized using 173 palynological records from Siberia and adjacent areas (northeastern Asia, 42-75 degrees N, 50-180 degrees E). Pollen data were taxonomically harmonized, i.e., the original 437 taxa were assigned to 106 combined pollen taxa. Age-depth models for all records were revised by applying a constant Bayesian age-depth modelling routine. The pollen dataset is available as count data and percentage data in a table format (taxa vs. samples), with age information for each sample. The dataset has relatively few sites covering the last glacial period between 40 and 11.5 ka (calibrated thousands of years before 1950 CE), particularly from the central and western part of the study area. In the Holocene period, the dataset has many sites from most of the area, with the exception of the central part of Siberia. Of the 173 pollen records, 81 % of pollen counts were downloaded from open databases (GPD, EPD, PANGAEA) and 10 % were contributions by the original data gatherers, while a few were digitized from publications. Most of the pollen records originate from peatlands (48 %) and lake sediments (33 %). Most of the records (83 %) have >= 3 dates, allowing the establishment of reliable chronologies. The dataset can be used for various purposes, including pollen data mapping (example maps for Larix at selected time slices are shown) as well as quantitative climate and vegetation reconstructions. The datasets for pollen counts and pollen percentages are available at https://doi.org/10.1594/PANGAEA.898616 (Cao et al., 2019a), also including the site information, data source, original publication, dating data, and the plant functional type for each pollen taxon.
A treemap is a visualization that has been specifically designed to facilitate the exploration of tree-structured data and, more generally, hierarchically structured data. The family of visualization techniques that use a visual metaphor for parent-child relationships based “on the property of containment” (Johnson, 1993) is commonly referred to as treemaps. However, as the number of variations of treemaps grows, it becomes increasingly important to distinguish clearly between techniques and their specific characteristics. This paper proposes to discern between Space-filling Treemap TS, Containment Treemap TC, Implicit Edge Representation Tree TIE, and Mapped Tree TMT for the classification of hierarchy visualization techniques and highlights their respective properties. This taxonomy is created as a hyponymy, i.e., its classes have an is-a relationship to one another: TS ⊂ TC ⊂ TIE ⊂ TMT. With this proposal, we intend to stimulate a discussion on a more unambiguous classification of treemaps and, furthermore, to broaden what is understood by the concept of treemap itself.
Schema modes (ormodes) are a key concept in the theory underlying schema therapy. Modes have rarely been related to established models of personality traits. The present study thus investigates the associations between trait emotional intelligence (TEI) and 14 modes, and tests a global TEI-mode factors-general psychological distress mediation model. The study draws on self-report data from 173 inpatients from a German clinic for psychosomatic medicine. Global TEI correlated positively with both healthy modes (happy child and healthy adult) and negatively with 10 maladaptive modes. When modes were regressed on the four TEI factors, six (emotionality), five (well-being), four (sociability), and four (self-control) significant partial effects on 10 modes emerged. In the parallel mediation model, the mode factors internalization and compulsivity fully mediated the global TEI-general psychological distress link. Implications of the results for the integration of modes with traits in general and with TEI in particular as well as implications of low TEI as a transdiagnostic feature of personality malfunctioning are discussed.
The Babylonian Talmud (BT) attributes the idea of committing a transgression for the sake of God to R. Nahman b. Isaac (RNBI). RNBI's statement appears in two parallel sugyot in the BT (Nazir 23a; Horayot 10a). Each sugya has four textual witnesses. By comparing these textual witnesses, this paper will attempt to reconstruct the sugya's earlier (or, what some might term, original) dialectical form, from which the two familiar versions of the text in Nazir and Horayot evolved. This article reveals the specific ways in which value-laden conceptualizations have had a major impact on the formulation of the Talmud as we know it today.
The focus in this article, through a reading of the German-Australian newspaper Der Kosmopolit, is on the legacies of entangled imperial identities in the period of the nineteenth-century German Enlightenment. Attention is drawn to members of the liberal nationalist generation of 1848 who emigrated to the Australian colonies and became involved in intellectual activities there. The idea of entanglement is applied to the philosophical orientation of the German-language newspaper that this group formed, Der Kosmopolit, which was published between 1856 and 1957. Against simplistic notions that would view cosmopolitanism as the opposite of nationalism, it is argued that individuals like Gustav Droege and Carl Muecke deployed an entangled ‘cosmo-nationalism’ in ways that both advanced German nationalism and facilitated their own engagement with and investment in Australian colonial society.
For various experimental applications, microbial cultures at defined, constant densities are highly advantageous over simple batch cultures. Due to high costs, however, devices for continuous culture at freely defined densities still experience limited use. We have developed a small-scale turbidostat for research purposes, which is manufactured from inexpensive components and 3D printed parts. A high degree of spatial system integration and a graphical user interface provide user-friendly operability. The used optical density feedback control allows for constant continuous culture at a wide range of densities and offers to vary culture volume and dilution rates without additional parametrization. Further, a recursive algorithm for on-line growth rate estimation has been implemented. The employed Kalman filtering approach based on a very general state model retains the flexibility of the used control type and can be easily adapted to other bioreactor designs. Within several minutes it can converge to robust, accurate growth rate estimates. This is particularly useful for directed evolution experiments or studies on metabolic challenges, as it allows direct monitoring of the population fitness.
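The growth-rate estimator is described as a recursive Kalman filter on a very general state model. The sketch below is a minimal, illustrative version only: it tracks log(OD) and the specific growth rate as a two-dimensional state with assumed process and measurement noise levels, fed with simulated readings; it is not the published algorithm or its parametrization.

```python
import numpy as np

dt = 60.0  # seconds between OD readings (illustrative)

# State: [log(OD), growth rate mu]; mu is modelled as a slow random walk.
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])          # only log(OD) is measured
Q = np.diag([1e-6, 1e-9])           # process noise (assumed)
R = np.array([[1e-4]])              # measurement noise (assumed)

x = np.array([np.log(0.05), 0.0])   # initial state estimate
P = np.eye(2)                        # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a new log(OD) measurement z."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated culture growing at mu = 2e-4 1/s, observed with noise.
rng = np.random.default_rng(3)
true_mu = 2e-4
for k in range(1, 301):
    z = np.log(0.05) + true_mu * dt * k + rng.normal(scale=0.01)
    x, P = kalman_step(x, P, np.array([z]))

print(f"estimated growth rate: {x[1]:.2e} 1/s (true: {true_mu:.1e})")
```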
A vector error correction model for the relationship between public debt and inflation in Germany
(2014)
In this paper, the interaction between public debt and inflation, including mutual impulse responses, is analysed. The European sovereign debt crisis brought the focus once again onto the consequences of public debt, in combination with an expansive monetary policy, for the development of consumer prices. Public deficits can lead to inflation if the money supply is expansive. The high level of national debt, not only in the Euro-crisis countries, and the strong increase in the total assets of the European Central Bank as a result of its unconventional monetary policy caused fears of debt-driven inflation. The transmission from public debt to inflation through the money supply and the long-term interest rate is shown in the paper. Based on these theoretical considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rate are analysed within a vector error correction model estimated by the Johansen approach. In the empirical part of the article, quarterly data for Germany from 1991 to 2010 are examined.
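The estimation strategy (a vector error correction model with cointegration rank determined via the Johansen approach) can be sketched with statsmodels. The code below uses random placeholder series in place of the actual German quarterly data for public debt, CPI, M3 and the long-term interest rate, and fixes the cointegration rank to 1 so the toy example always runs; it is not the paper's estimation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Placeholder quarterly series (random walks, not the actual German data).
rng = np.random.default_rng(0)
n = 80  # roughly 1991-2010 at quarterly frequency
data = pd.DataFrame(
    np.cumsum(rng.normal(size=(n, 4)), axis=0),
    columns=["public_debt", "cpi", "m3", "long_rate"],
)

# Determine the cointegration rank with the Johansen trace test ...
rank = select_coint_rank(data, det_order=0, k_ar_diff=2, method="trace", signif=0.05)
print("Johansen trace test, selected rank:", rank.rank)

# ... and estimate the VECM (rank fixed to 1 here for the toy example;
# a real analysis would use the selected rank).
res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```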
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools to IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience by exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionalities, reliability, security, and performance. The virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework. This management framework provides necessary monitoring and administration services to detect and recover critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions to prevent misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not to substitute conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility to implement hands-on security laboratories on the Internet reliably, securely, and economically.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents a work to fill the gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools to IT security education, but security education in conventional computer laboratories poses particular problems such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience by exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionalities, reliability, security, and performance. The virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by a virtual machine management framework. This management framework provides necessary monitoring and administration services to detect and recover critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present a security management solution to prevent the misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not to substitute conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility to implement hands-on security laboratories on the Internet reliably, securely, and economically.
A water quality model for shallow river-lake systems and its application in river basin management
(2007)
This work documents the development and application of a new model for simulating mass transport and turnover in rivers and shallow lakes. The simulation tool called 'TRAM' is intended to complement mesoscale eco-hydrological catchment models in studies on river basin management. TRAM aims at describing the water quality of individual water bodies, using problem- and scale-adequate approaches for representing their hydrological and ecological characteristics. The need for such flexible water quality analysis and prediction tools is expected to further increase during the implementation of the European Water Framework Directive (WFD) as well as in the context of climate change research. The developed simulation tool consists of a transport and a reaction module with the latter being highly flexible with respect to the description of turnover processes in the aquatic environment. Therefore, simulation approaches of different complexity can easily be tested and model formulations can be chosen in consideration of the problem at hand, knowledge of process functioning, and data availability. Consequently, TRAM is suitable for both heavily simplified engineering applications as well as scientific ecosystem studies involving a large number of state variables, interactions, and boundary conditions. TRAM can easily be linked to catchment models off-line and it requires the use of external hydrodynamic simulation software. Parametrization of the model and visualization of simulation results are facilitated by the use of geographical information systems as well as specific pre- and post-processors. TRAM has been developed within the research project 'Management Options for the Havel River Basin' funded by the German Ministry of Education and Research. The project focused on the analysis of different options for reducing the nutrient load of surface waters. It was intended to support the implementation of the WFD in the lowland catchment of the Havel River located in North-East Germany. Within the above-mentioned study TRAM was applied with two goals in mind. In a first step, the model was used for identifying the magnitude as well as spatial and temporal patterns of nitrogen retention and sediment phosphorus release in a 100 km stretch of the highly eutrophic Lower Havel River. From the system analysis, strongly simplified conceptual approaches for modeling N-retention and P-remobilization in the studied river-lake system were obtained. In a second step, the impact of reduced external nutrient loading on the nitrogen and phosphorus concentrations of the Havel River was simulated (scenario analysis) taking into account internal retention/release. The boundary conditions for the scenario analysis such as runoff and nutrient emissions from river basins were computed by project partners using the catchment models SWIM and ArcEGMO-Urban. Based on the output of TRAM, the considered options of emission control could finally be evaluated using a site-specific assessment scale which is compatible with the requirements of the WFD. Uncertainties in the model predictions were also examined. According to simulation results, the target of the WFD -- with respect to total phosphorus concentrations in the Lower Havel River -- could be achieved in the medium-term, if the full potential for reducing point and non-point emissions was tapped. Furthermore, model results suggest that internal phosphorus loading will ease off noticeably until 2015 due to a declining pool of sedimentary mobile phosphate.
Mass balance calculations revealed that the lakes of the Lower Havel River are an important nitrogen sink. This natural retention effect contributes significantly to the efforts aimed at reducing the river's nitrogen load. If a sustainable improvement of the river system's water quality is to be achieved, enhanced measures to further reduce the immissions of both phosphorus and nitrogen are required.
A water soluble fluorescent polymer as a dual colour sensor for temperature and a specific protein
(2013)
We present two thermoresponsive water soluble copolymers prepared via free radical statistical copolymerization of N-isopropylacrylamide (NIPAm) and of oligo(ethylene glycol) methacrylates (OEGMAs), respectively, with a solvatochromic 7-(diethylamino)-3-carboxy-coumarin (DEAC)- functionalized monomer. In aqueous solutions, the NIPAm-based copolymer exhibits characteristic changes in its fluorescence profile in response to a change in solution temperature as well as to the presence of a specific protein, namely an anti-DEAC antibody. This polymer emits only weakly at low temperatures, but exhibits a marked fluorescence enhancement accompanied by a change in its emission colour when heated above its cloud point. Such drastic changes in the fluorescence and absorbance spectra are observed also upon injection of the anti-DEAC antibody, attributed to the specific binding of the antibody to DEAC moieties. Importantly, protein binding occurs exclusively when the polymer is in the well hydrated state below the cloud point, enabling a temperature control on the molecular recognition event. On the other hand, heating of the polymer–antibody complexes releases a fraction of the bound antibody. In the presence of the DEAC-functionalized monomer in this mixture, the released antibody competitively binds to the monomer and the antibody-free chains of the polymer undergo a more effective collapse and inter-aggregation. In contrast, the emission properties of the OEGMA-based analogous copolymer are rather insensitive to the thermally induced phase transition or to antibody binding. These opposite behaviours underline the need for a carefully tailored molecular design of responsive polymers aimed at specific applications, such as biosensing.
Zinc is an essential trace element, making it crucial to have a reliable biomarker for evaluating an individual’s zinc status. The total serum zinc concentration, which is presently the most commonly used biomarker, is not ideal for this purpose, but a superior alternative is still missing. The free zinc concentration, which describes the fraction of zinc that is only loosely bound and easily exchangeable, has been proposed for this purpose, as it reflects the highly bioavailable part of serum zinc. This report presents a fluorescence-based method for determining the free zinc concentration in human serum samples, using the fluorescent probe Zinpyr-1. The assay has been applied on 154 commercially obtained human serum samples. Measured free zinc concentrations ranged from 0.09 to 0.42 nM with a mean of 0.22 ± 0.05 nM. It did not correlate with age or the total serum concentrations of zinc, manganese, iron or selenium. A negative correlation between the concentration of free zinc and total copper has been seen for sera from females. In addition, the free zinc concentration in sera from females (0.21 ± 0.05 nM) was significantly lower than in males (0.23 ± 0.06 nM). The assay uses a sample volume of less than 10 µL, is rapid and cost-effective and allows us to address questions regarding factors influencing the free serum zinc concentration, its connection with the body’s zinc status, and its suitability as a future biomarker for an individual’s zinc status.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories such as in questions, relativization and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view. The constructions are built on two basic fronting operations: relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis adds to the growing body of tonal reflexes of A-bar movement reported for a few African languages. The thesis also provides an insight into the complementizer domain (C-domain) of Igbo.
Iron-sulfur clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, which is a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein to produce cyclic pyranopterin monophosphate (cPMP) from 5’GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, where oxygen is depleted. Under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, these proteins have been identified as A-type carrier proteins (ATCs) by phylogenomic and genetic studies. So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role to provide [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, the expression of which is regulated by the transcriptional regulator fumarate and nitrate reduction (FNR). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of the Fe-S cluster into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are produced in sufficient amounts under the conditions tested. This observation suggests that ErpA indirectly regulates nitrate reductase expression by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit and multi-cofactor-containing enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that consists of flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into E. coli periplasmic aldehyde oxidoreductase (PaoABC). The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters, respectively, into PaoABC under aerobic respiratory conditions.
ABCB1/4 gallbladder cancer risk variants identified in India also show strong effects in Chileans
(2020)
Background: The first large-scale genome-wide association study of gallbladder cancer (GBC) recently identified and validated three susceptibility variants in the ABCB1 and ABCB4 genes for individuals of Indian descent. We investigated whether these variants were also associated with GBC risk in Chileans, who show the highest incidence of GBC worldwide, and in Europeans with a low GBC incidence.
Methods: This population-based study analysed genotype data from retrospective Chilean case-control (255 cases, 2042 controls) and prospective European cohort (108 cases, 181 controls) samples consistently with the original publication.
Results: Our results confirmed the reported associations for Chileans with similar risk effects. Particularly strong associations (per-allele odds ratios close to 2) were observed for Chileans with high Native American (=Mapuche) ancestry. No associations were noticed for Europeans, but the statistical power was low.
Conclusion: Taking full advantage of genetic and ethnic differences in GBC risk may improve the efficiency of current prevention programs.
We present XMM-Newton and Chandra observations of the born-again planetary nebula A 30. These X-ray observations reveal a bright unresolved source at the position of the central star whose X-ray luminosity exceeds by far the model expectations for photospheric emission and for shocks within the stellar wind. We suggest that a “born-again hot bubble” may be responsible for this X-ray emission. Diffuse X-ray emission associated with the petal-like features and one of the H-poor knots seen in the optical is also found. The weakened emission of carbon lines in the spectrum of the diffuse emission can be interpreted as the dilution of stellar wind by mass-loading or as the detection of material ejected during a very late thermal pulse.
Many complex systems that we encounter in the world can be formalized using networks. Consequently, they have been a focus of computer science for decades, where algorithms are developed to understand and utilize these systems.
Surprisingly, our theoretical understanding of these algorithms and their behavior in practice often diverge significantly. In fact, they tend to perform much better on real-world networks than one would expect when considering the theoretical worst-case bounds. One way of capturing this discrepancy is the average-case analysis, where the idea is to acknowledge the differences between practical and worst-case instances by focusing on networks whose properties match those of real graphs. Recent observations indicate that good representations of real-world networks are obtained by assuming that a network has an underlying hyperbolic geometry.
In this thesis, we demonstrate that the connection between networks and hyperbolic space can be utilized as a powerful tool for average-case analysis. To this end, we first introduce strongly hyperbolic unit disk graphs and identify the famous hyperbolic random graph model as a special case of them. We then consider four problems where recent empirical results highlight a gap between theory and practice and use hyperbolic graph models to explain these phenomena theoretically. First, we develop a routing scheme, used to forward information in a network, and analyze its efficiency on strongly hyperbolic unit disk graphs. For the special case of hyperbolic random graphs, our algorithm beats existing performance lower bounds. Afterwards, we use the hyperbolic random graph model to theoretically explain empirical observations about the performance of the bidirectional breadth-first search. Finally, we develop algorithms for computing optimal and nearly optimal vertex covers (problems known to be NP-hard) and show that, on hyperbolic random graphs, they run in polynomial and quasi-linear time, respectively.
Our theoretical analyses reveal interesting properties of hyperbolic random graphs, and our empirical studies present evidence that these properties, as well as our algorithmic improvements, translate back into practice.
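For readers unfamiliar with the model, the following minimal sketch illustrates how a threshold hyperbolic random graph can be sampled; the parameter names and values are illustrative assumptions, not the implementation used in the thesis.

```python
import math
import random

def sample_hyperbolic_random_graph(n, alpha, R, seed=0):
    """Sketch of the threshold hyperbolic random graph model: n points in a hyperbolic
    disk of radius R, radial density ~ sinh(alpha * r), edges between points whose
    hyperbolic distance is at most R."""
    rng = random.Random(seed)
    nodes = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # Inverse CDF of the radial density alpha*sinh(alpha*r)/(cosh(alpha*R)-1).
        u = rng.random()
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        nodes.append((r, theta))

    def hyp_dist(a, b):
        (r1, t1), (r2, t2) = a, b
        dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angular difference in [0, pi]
        arg = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
        return math.acosh(max(1.0, arg))  # guard against rounding below 1

    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyp_dist(nodes[i], nodes[j]) <= R]
    return nodes, edges

# Example: R ~ 2*log(n) keeps the expected average degree constant (up to the chosen constant).
nodes, edges = sample_hyperbolic_random_graph(n=500, alpha=0.75, R=2.0 * math.log(500))
print(len(edges), "edges")
```

Drawing radii with density proportional to sinh(alpha*r) concentrates most nodes near the boundary of the disk, which is what produces the heterogeneous degree distribution and high clustering typical of real networks.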
About the relation between implicit Theory of Mind & the comprehension of complement sentences
(2010)
Previous studies on the relation between language and social cognition have shown that children’s mastery of embedded sentential complements plays a causal role in the development of a Theory of Mind (ToM). Children start to succeed on complementation tasks, in which they are required to report the content of an embedded clause, in the second half of their fourth year. Traditional ToM tasks test the child’s ability to predict that a person who is holding a false belief (FB) about a situation will act "falsely". In these tasks, children do not represent FBs until the age of 4 years. According to the linguistic determinism hypothesis, only the unique syntax of complement sentences provides the format for representing FBs. However, experiments measuring children’s looking behavior instead of their explicit predictions provided evidence that 2-year-olds already possess an implicit ToM. This dissertation examined the question of whether there is also an interrelation between implicit ToM and the comprehension of complement sentences in typically developing German preschoolers. Two studies were conducted. In a correlational study (Study 1), 3-year-old children’s performance on a traditional (explicit) FB task, on an implicit FB task and on language tasks measuring children’s comprehension of tensed sentential complements was collected and tested for interdependence. Eye-tracking methodology was used to assess implicit ToM by measuring participants’ spontaneous anticipatory eye movements while they were watching FB movies. Two central findings emerged. First, predictive looking (implicit ToM) was not correlated with complement mastery, although both measures were associated with explicit FB task performance. This pattern of results suggests that explicit, but not implicit, ToM is language dependent. Second, as a group, 3-year-olds did not display implicit FB understanding. That is, previous findings on a precocious reasoning ability could not be replicated. This indicates that the characteristics of predictive looking tasks play a role in the elicitation of implicit FB understanding, as the current task was completely nonverbal and as complex as traditional FB tasks. Study 2 took a methodological approach by investigating whether children display an earlier comprehension of sentential complements when using the same means of measurement as used in experimental tasks tapping implicit ToM, namely anticipatory looking. Two experiments were conducted. 3-year-olds were confronted either with a complement sentence expressing the protagonist’s FB (Exp. 1) or with a complex sentence expressing the protagonist’s belief without giving any information about the truth/falsity of the belief (Exp. 2). Afterwards, their expectations about the protagonist’s future behavior were measured. Overall, the implicit measures revealed no considerably earlier understanding of sentential complementation. Whereas 3-year-olds did not display a comprehension of complex sentences if these embedded a false proposition, children from 3;9 years on were proficient in processing complement sentences if the truth value of the embedded proposition could not be evaluated. This pattern of results suggests that (1) the linguistic expression of a person’s FB does not elicit implicit FB understanding and that (2) the assessment of the purely syntactic understanding of complement sentences is affected by competing reality information.
In conclusion, this dissertation found no evidence that the implicit ToM is related to the comprehension of sentential complementation. The findings suggest that implicit ToM might be based on nonlinguistic processes. Results are discussed in the light of recently proposed dual-process models that assume two cognitive mechanisms that account for different levels of ToM task performance.
The size and morphology control of precipitated solid particles is a major economic issue for numerous industries. For instance, it is of interest to the nuclear industry for the recovery of radioactive species from used nuclear fuel.
The precipitate features, which are a key parameter for the subsequent processing of the precipitate, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first corresponds to high contact angle drops (>90°) in oil, as a model system for flowing drops; the second corresponds to sessile drops in air with low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When the two drops come into contact, they may coalesce; the dissolved species then mix and react to produce insoluble cerium oxalate. The precipitate features and their effect on hydrodynamics are investigated depending on the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient which induces a Marangoni flow from the low surface tension drop over the high surface tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence could be modified. Diol/water mixtures are used as solvent in order to fix the surface tension difference between the liquids of both drops regardless of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m for identical reactant concentration, density and viscosity. Three precipitation behaviors were identified for the coalescence of water/diol/reactant drops depending on the oxalic excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements. In the intermediate oxalic excess regime, the formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and “microflowers”. Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
Different lake systems might reflect different climate elements of climate changes, while the responses of lake systems are also diverse and not yet completely understood. Therefore, a comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition provides an exceptional opportunity to investigate distinct natural lake system responses to different abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution from two different lake systems during the Last Glacial-Interglacial transition (~17 to 11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate, and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 “Extreme events across temporal and spatial scales”, which investigates extreme events using climate data, paleo-records and model-based simulations. The three main aims were to (1) establish robust chronologies of the lakes, (2) investigate how major and abrupt climate changes affected the lake systems, and (3) compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions, as well as for comparisons between archives. Thus, addressing the first aim, a novel chronology of Lake Gościąż was established by microscopic varve counting and Bayesian age-depth modelling in Bacon for a non-varved section, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating and pollen analysis. The varve chronology reaches from the late Allerød until AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż suggested. Varve formation throughout the complete Younger Dryas (YD) even allowed the identification of annually- to decadally-resolved leads and lags in proxy responses at the YD transitions.
The lateglacial chronology of the Dead Sea (DS) was thus far mainly based on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in the lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed; this shifts the AGU into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
Using microfacies analyses, stable isotopes and temperature reconstructions, the second aim was achieved at Lake Gościąż. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and less lake sheltering. Cooling at the YD onset was ~100 years longer than the final warming, while environmental proxies lagged the onset of cooling by ~90 years, but occurred contemporaneously during the termination of the YD. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually-resolved proxy responses to both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This time interval is marked by a large but fluctuating lake level drop, and therefore the complete transition into the Holocene is only recorded in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake level drops, recorded by deposition of the UGU and AGU, were interrupted by one millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Further, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-long rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally more instability during the gypsum units. These results indicate a complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished based on the individual studies above, which jointly provide an integrated picture of different lake responses to different climate elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically-driven facies changes are more dramatic in the DS than at Lake Gościąż. Further, Lake Gościąż is characterized by continuous varve formation throughout nearly the complete profile, whereas the DS record is largely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. The lateglacial sedimentation in Lake Gościąż is mainly influenced by westerly winds and to a lesser extent by changes in catchment vegetation, whereas the DS is primarily influenced by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes are different.
In summary, this doctoral thesis presents seasonally-resolved records from two lake archives during the Lateglacial (ca 17-11 ka) to investigate the impact of abrupt climate changes in different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison to Lake Gościąż. This highlights the importance of constructing a robust chronology and provides a first step towards synchronization of the DS with other eastern Mediterranean archives. Further, climate reconstructions from the lake sediments showed variability on different time scales in the different archives, i.e. decadal to millennial fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions in Lake Gościąż. This demonstrates the importance of comparing different lake archives to better understand the regional and local impacts of hemispheric-scale climate variability. An unprecedented example is presented here of how different lake systems show different responses and also react to different climate elements of abrupt climate changes, which further highlights the importance of understanding the respective lake system for climate reconstructions.
Identifying abrupt transitions is a key question in various disciplines. Existing transition detection methods, however, do not rigorously account for time series uncertainties, often neglecting them altogether or assuming them to be independent and qualitatively similar. Here, we introduce a novel approach suited to handle uncertainties by representing the time series as a time-ordered sequence of probability density functions. We show how to detect abrupt transitions in such a sequence using the community structure of networks representing probabilities of recurrence. Using our approach, we detect transitions in global stock indices related to well-known periods of politico-economic volatility. We further uncover transitions in the El Nino-Southern Oscillation which coincide with periods of phase locking with the Pacific Decadal Oscillation. Finally, we provide for the first time an 'uncertainty-aware' framework which validates the hypothesis that ice-rafting events in the North Atlantic during the Holocene were synchronous with a weakened Asian summer monsoon.
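As a rough illustration of the recurrence-network idea (not the authors' uncertainty-aware method, which operates on a time-ordered sequence of probability density functions rather than scalar values), one can build a recurrence network from a time series, detect its communities, and flag time points where the community membership changes; all thresholds and names below are illustrative.

```python
import numpy as np
from networkx import from_numpy_array
from networkx.algorithms.community import greedy_modularity_communities

def recurrence_transitions(x, eps):
    """Toy sketch: link two time points if their values differ by less than eps,
    detect communities of the resulting recurrence network, and flag candidate
    transitions where consecutive time points change community."""
    n = len(x)
    dist = np.abs(x[:, None] - x[None, :])
    adj = (dist < eps) & ~np.eye(n, dtype=bool)
    g = from_numpy_array(adj.astype(int))
    label = np.empty(n, dtype=int)
    for c, members in enumerate(greedy_modularity_communities(g)):
        for i in members:
            label[i] = c
    return [t for t in range(1, n) if label[t] != label[t - 1]]

# Example: a series with one abrupt shift in its mean; the change point is flagged.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(3.0, 0.1, 100)])
print(recurrence_transitions(x, eps=1.0))  # e.g. [100]
```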
Abstract gringo
(2015)
This paper defines the syntax and semantics of the input language of the ASP grounder gringo. The definition covers several constructs that were not discussed in earlier work on the semantics of that language, including intervals, pools, division of integers, aggregates with non-numeric values, and lparse-style aggregate expressions. The definition is abstract in the sense that it disregards some details related to representing programs by strings of ASCII characters. It serves as a specification for gringo from Version 4.5 on.
In the present thesis, AC electrokinetic forces, like dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza virus as well as anti-influenza antibodies were chosen as a model system. Common methods to bring antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that, by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer row to the inner ones. Different causes for this gradient have been discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that parts of the accumulated material are permanently immobilized to the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost entirely presented as a method for temporary immobilization only. The spatial distribution of the immobilized viral material or the anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: A) the influenza virus as the bio-receptor or B) the influenza virus as the analyte. Different sources of error were eliminated by ELISA and passivation experiments. Hence, the activity of the immobilized object was inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. On the other hand, a detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to lost activity or wrong orientation of the antibodies. Thus, further examinations of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips possess the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. This device can be utilized as a basis for diverse applications in diagnosing and treating influenza, as well as various other pathogens.
A numerical MHD model is developed to investigate acceleration and heating of both thermal and auroral plasma. This is done for magnetospheric flux tubes in which intensive field-aligned currents flow. To specify the geometry of each of these tubes, the empirical Tsyganenko model of the magnetospheric field is used. The parameters of the background plasma outside the flux tube as well as the strength of the electric field of magnetospheric convection are given. From the numerical calculations, the distributions of the plasma densities, velocities, temperatures, parallel electric field and current, and of the coefficients of thermal conductivity are obtained in a self-consistent way. It is found that EIC turbulence develops effectively in the thermal plasma. The parallel electric field develops under the action of the anomalous resistivity. This electric field accelerates both the thermal and the auroral plasma. The thermal turbulent plasma is also subjected to intensive heating. The increase of the plasma of the Earth's ionosphere. Besides, studying the growth and dispersion properties of oblique ion cyclotron waves excited in a drifting magnetized plasma, it is shown that under non-stationary conditions such waves may reveal the properties of bursts of polarized transverse electromagnetic waves at frequencies near the proton gyrofrequency.
The present work is a case study contributing to the major planning project “Suedlink”. It is structured as follows: first, in a theoretical part, the underlying theories of social acceptance (Wüstenhagen et al., 2007), steps of participation (Münnich, 2014), and governance (Benz and Dose, 2011) are elaborated. Secondly, the relevant methods are discussed. Thirdly, in a qualitative analytical part, the information gathered from the expert interviews is analyzed with the use of the aforementioned theories. Fourthly, an empirical quantitative analysis of data regarding the public acceptance of Suedlink is presented.
In this case study, using qualitative and quantitative methods, two questions are answered: first, which governance aspects were relevant for the priority use of underground cables in the construction of high-voltage direct current transmission lines? For this question, intensive document analysis and several expert interviews were conducted. Secondly, the central question of the present work is whether local and/or individual factors affect public acceptance of SuedLink. Here, in particular, it is interesting to analyze whether the priority use of underground cables affected people’s acceptance of SuedLink. In order to answer both questions, an online survey was conducted among citizen initiatives, district administrators, and individuals in social media from March to July 2016. Thereafter, the data were analyzed with descriptive quantitative methods. The data show that underground cables do not necessarily increase public acceptance (see also Menges and Beyer, 2013). On the contrary, individual and local criteria were relevant for the survey respondents. For example, criteria such as the quality of participation, the distance between home and transmission lines, and the additional financial burden (taxes, higher prices for electricity) were important for the evaluation. In addition, survey respondents who participated in citizen initiatives were more critical of the priority use of underground cables and of SuedLink in general. Likewise, residential homeowners rejected every form of transmission line.
Access to digital finance
(2024)
Financing entrepreneurship spurs innovation and economic growth. Digital financial platforms that crowdfund equity for entrepreneurs have emerged globally, yet they remain poorly understood. We model equity crowdfunding in terms of the relationship between the number of investors and the amount of money raised per pitch. We examine heterogeneity in the average amount raised per pitch that is associated with differences across three countries and seven platforms. Using a novel dataset of successful fundraising on the most prominent platforms in the UK, Germany, and the USA, we find the underlying relationship between the number of investors and the amount of money raised for entrepreneurs is loglinear, with a coefficient less than one and concave to the origin. We identify significant variation in the average amount invested in each pitch across countries and platforms. Our findings have implications for market actors as well as regulators who set competitive frameworks.
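Written out, the estimated relationship has roughly the following form (the symbols are ours, not necessarily the paper's notation):

```latex
\[
  \ln(\text{amount}_i) \;=\; \beta_0 \;+\; \beta_1 \ln(\text{investors}_i) \;+\; \varepsilon_i ,
  \qquad 0 < \beta_1 < 1 .
\]
```

A slope below one means the amount raised grows sublinearly with the number of investors, which is the concavity to the origin described above; heterogeneity across countries and platforms can then be read as shifting the level of the relationship (our reading, not necessarily the paper's exact specification).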
This study examines the access to healthcare for children and adolescents with three common chronic diseases (type-1 diabetes (T1D), obesity, or juvenile idiopathic arthritis (JIA)) within the 4th (Delta), 5th (Omicron), and beginning of the 6th (Omicron) wave (June 2021 until July 2022) of the COVID-19 pandemic in Germany in a cross-sectional study using three national patient registries. A paper-and-pencil questionnaire was given to parents of pediatric patients (<21 years) during the routine check-ups. The questionnaire contains self-constructed items assessing the frequency of healthcare appointments and cancellations, remote healthcare, and satisfaction with healthcare. In total, 905 parents participated in the T1D-sample, 175 in the obesity-sample, and 786 in the JIA-sample. In general, satisfaction with healthcare (scale: 0–10; 10 reflecting the highest satisfaction) was quite high (median values: T1D 10, JIA 10, obesity 8.5). The proportion of children and adolescents with canceled appointments was relatively small (T1D 14.1%, JIA 11.1%, obesity 20%), with a median of 1 missed appointment, respectively. Only a few parents (T1D 8.6%; obesity 13.1%; JIA 5%) reported obstacles regarding health services during the pandemic. To conclude, it seems that access to healthcare was largely preserved for children and adolescents with chronic health conditions during the COVID-19 pandemic in Germany.
One of the most striking features of recent public sector reform in Europe is privatization. This development raises questions of accountability: By whom and for what are managers of private for-profit organizations delivering public goods held accountable? Analyzing accountability mechanisms through the lens of an institutional organizational approach and on the empirical basis of hospital privatization in Germany, the article contributes to the empirical and theoretical understanding of public accountability of private actors. The analysis suggests that accountability is not declining but rather multiplying. The shifts in the locus and content of accountability cause organizational stress for private hospitals.
Accountability can be conceptualized as institutionalized mechanisms obliging actors to explain their conduct to different forums, which can pose questions and impose sanctions. This article analyses different crises in immigration policies in Norway, Denmark and Germany along a descriptive framework of five different accountability types: political, administrative, legal, professional and social accountability. The exchanges of information and debate between an actor and a forum, and their consequences, are crucial to understanding how political-administrative action is carried out in critical situations. First, accountability dynamics emphasize conventional norms and values regarding policy change and, second, formal political responsibility does not necessarily lead to political consequences such as minister resignations in cases of misbehaviour. Consequences depend strongly on how accountability dynamics unfold.
School adjustment determines long-term adjustment in society. Yet, immigrant youth do better in some countries than in others. Drawing on acculturation research (Berry, 1997; Ward, 2001) and self-determination theory (Ryan and Deci, 2000), we investigated indirect effects of adolescent immigrants’ acculturation orientations on school adjustment (school-related attitudes, truancy, and mathematics achievement) through school belonging. Analyses were based on data from the Programme for International Student Assessment from six European countries, which were combined into three clusters based on their migrant integration and multicultural policies: Those with the most supportive policies (Belgium and Finland), those with moderately supportive policies (Italy and Portugal), and those with the most unsupportive policies (Denmark and Slovenia). In a multigroup path model, we confirmed most associations. As expected, mainstream orientation predicted higher belonging and better outcomes in all clusters, whereas the added value of students’ ethnic orientation was only observed in some clusters. Results are discussed in terms of differences in acculturative climate and policies between countries of settlement.
Continuous exercise (CON) and high-intensity interval exercise (HIIE) can be safely performed by people with type 1 diabetes mellitus (T1DM). Additionally, continuous glucose monitoring (CGM) systems may serve as a tool to reduce the risk of exercise-induced hypoglycemia. It is unclear if CGM is accurate during CON and HIIE at different mean workloads. Seven T1DM patients performed CON and HIIE at 5% below (L) and above (M) the first lactate turn point (LTP1), and 5% below the second lactate turn point (LTP2) (H) on a cycle ergometer. Glucose was measured via CGM and in capillary blood (BG). Differences between CGM and BG were found in three out of the six tests (p < 0.05). In CON, bias and levels of agreement for L, M, and H were found at: 0.85 (−3.44, 5.15) mmol·L−1, −0.45 (−3.95, 3.05) mmol·L−1, −0.31 (−8.83, 8.20) mmol·L−1 and at 1.17 (−2.06, 4.40) mmol·L−1, 0.11 (−5.79, 6.01) mmol·L−1, 1.48 (−2.60, 5.57) mmol·L−1 in HIIE for the same intensities. CGM estimated BG with clinically acceptable accuracy, except for CON H. Additionally, using CGM may help avoid exercise-induced hypoglycemia, but usual BG control should be performed during intense exercise.
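The bias and levels of agreement quoted above follow the usual Bland-Altman convention (mean difference and mean ± 1.96 standard deviations of the differences). A minimal sketch of that calculation, with fabricated readings rather than the study data, is:

```python
import numpy as np

def bland_altman(cgm, bg):
    """Bland-Altman bias and 95% limits of agreement between two paired measurement methods."""
    diff = np.asarray(cgm, float) - np.asarray(bg, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Example with made-up paired glucose readings in mmol/L (not the study data).
cgm = [5.8, 6.4, 7.1, 8.0, 9.2, 10.1]
bg  = [5.5, 6.6, 6.8, 8.4, 9.0, 10.5]
bias, lo, hi = bland_altman(cgm, bg)
print(f"bias = {bias:.2f} mmol/L, limits of agreement = ({lo:.2f}, {hi:.2f})")
```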
Accuracy of training recommendations based on a treadmill multistage incremental exercise test
(2018)
Competitive runners will occasionally undergo exercise in a laboratory setting to obtain predictive and prescriptive information regarding their performance. The present research aimed to assess whether the physiological demands of lab-based treadmill running (TM) can simulate those of over-ground (OG) running using a commonly used protocol. Fifteen healthy volunteers with a weekly mileage of ≥ 20 km over the past 6 months and treadmill experience participated in this cross-sectional study. Two stepwise incremental tests until volitional exhaustion were performed in a fixed order within one week, in an Outpatient Clinic research laboratory and on an outdoor athletic track. Running velocity (IATspeed), heart rate (IATHR) and lactate concentration at the individual anaerobic threshold (IATbLa) were the primary endpoints. Additionally, distance covered (DIST), maximal heart rate (HRmax), maximal blood lactate concentration (bLamax) and rate of perceived exertion (RPE) at IATspeed were analyzed. IATspeed, DIST and HRmax were not statistically significantly different between conditions, whereas bLamax and RPE at IATspeed differed significantly (p < 0.05). Apart from RPE at IATspeed, IATspeed, DIST, HRmax and bLamax correlate strongly between conditions (r = 0.815–0.988). The high reliability between conditions provides strong evidence that running on a treadmill is physiologically comparable to OG running and that training recommendations can be made with assurance.
This Thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In the first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, such as the entropy in critical processes, should provide a deeper understanding of thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to the one of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level. Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the Thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other and varied the mass of the stars, their separation, velocity and the polytropic index in the EOS.
In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter. Also in this case they found the appearance of type-I critical behaviour, but they also performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found in the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and in particular not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this Thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with earlier findings, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
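For orientation, type-I critical behaviour of the kind discussed here is commonly summarized by a logarithmic lifetime scaling (our notation; the thesis' exact definition of the fine-structure term may differ):

```latex
\[
  \tau(P) \;\simeq\; -\,\gamma \,\ln\left|P - P_{\star}\right| \;+\; f\!\left(\ln\left|P - P_{\star}\right|\right),
  \qquad \gamma = 1/\lambda ,
\]
```

where τ is the survival time of the near-critical metastable merged object, λ is the growth rate of the single unstable mode of the critical solution, and f is a small periodic modulation representing the fine structure superimposed on the leading logarithmic scaling.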
Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.
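To make the scheme class concrete, the following is a generic sketch of a flux-limited, total variation diminishing finite-volume update for 1D linear advection (a stand-in for a migrating knickpoint); it is not the TTLEM implementation, and the grid and parameter values are illustrative.

```python
import numpy as np

def tvd_advection_step(u, a, dx, dt):
    """One explicit step of 1D linear advection u_t + a u_x = 0 (a > 0) on a periodic grid,
    using an upwind flux plus a minmod-limited second-order correction (TVD)."""
    assert a > 0 and a * dt / dx <= 1.0, "CFL condition violated"
    c = a * dt / dx
    up1 = np.roll(u, -1)   # u_{i+1}
    um1 = np.roll(u, 1)    # u_{i-1}

    def minmod(x, y):
        return np.where(x * y > 0, np.sign(x) * np.minimum(np.abs(x), np.abs(y)), 0.0)

    slope = minmod(u - um1, up1 - u)          # limited slope per cell
    u_face = u + 0.5 * (1.0 - c) * slope      # reconstructed value at the right interface
    flux = a * u_face                         # F_{i+1/2}
    return u - dt / dx * (flux - np.roll(flux, 1))

# Example: advect a step profile without smearing it into a ramp.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)
for _ in range(100):
    u = tvd_advection_step(u, a=1.0, dx=x[1] - x[0], dt=0.004)
```

In smooth regions the limited correction recovers second-order accuracy, while near sharp fronts it reverts towards the monotone upwind flux, keeping the front sharp without introducing spurious oscillations.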
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information about how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands, and they allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters, and which need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are enough to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals’ parameters due to the presence of a foreground of many sources that overlap in the frequency band. This is recognized as one of the biggest challenges for the analysis of future detectors’ data, since biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen to prominence: the post-Minkowskian (PM, weak-field only) approximation natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation typical of binaries in which the mass of one body is much bigger than the other. These are most appropriate to binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin comparable-mass binaries that are routinely detected, and the ones that challenge current models. The first part of this thesis is dedicated to a study of how to best incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently-employed models for LIGO and Virgo, especially if recast within the EOB formalism. This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously-unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one as usually done). We show how this is done in detail without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models agree much better with the NR simulations than the PN-based ones, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out.
We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
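A hedged sketch of the kind of linear-signal bias metric meant here (our notation; the thesis may define its metrics differently): with Γ the Fisher matrix of the signal of interest and δh the unfitted or residual confusion contribution, the induced parameter shift is approximately

```latex
\[
  \Delta\theta^{i} \;\approx\; \left(\Gamma^{-1}\right)^{ij} \left(\partial_{j} h \,\middle|\, \delta h\right),
  \qquad
  \Gamma_{ij} \;=\; \left(\partial_{i} h \,\middle|\, \partial_{j} h\right),
\]
```

where (a|b) denotes the noise-weighted inner product over the detector band and δh stands for the unfitted foreground or the residual of an imperfectly subtracted overlapping signal.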
Macrophages have important protective functions during infection with herpes simplex virus type 1 (HSV-1). However, molecular mechanisms that restrict viral propagation and protect from severe disease are unclear. Here we show that macrophages take up HSV-1 via endocytosis and transport the virions into multivesicular bodies (MVBs). In MVBs, acid ceramidase (aCDase) converts ceramide into sphingosine and increases the formation of sphingosine-rich intraluminal vesicles (ILVs). Once HSV-1 particles reach MVBs, sphingosine-rich ILVs bind to HSV-1 particles, which restricts fusion with the limiting endosomal membrane and prevents cellular infection. Lack of aCDase in macrophage cultures or in Asah1(-/-) mice results in replication of HSV-1 and Asah1(-/-) mice die soon after systemic or intravaginal inoculation. The treatment of macrophages with sphingosine enhancing compounds blocks HSV-1 propagation, suggesting a therapeutic potential of this pathway. In conclusion, aCDase loads ILVs with sphingosine, which prevents HSV-1 capsids from penetrating into the cytosol.
Farber disease is a rare lysosomal storage disorder resulting from acid ceramidase deficiency and subsequent ceramide accumulation. No treatments for Farber disease are clinically available, and affected patients have a severely shortened lifespan. We have recently reported a novel acid ceramidase deficiency model that mirrors the human disease closely. Acid sphingomyelinase is the enzyme that generates ceramide upstream of acid ceramidase in the lysosomes. Using our acid ceramidase deficiency model, we tested if acid sphingomyelinase could be a potential novel therapeutic target for the treatment of Farber disease. A number of functional acid sphingomyelinase inhibitors are clinically available and have been used for decades to treat major depression. Using these as a therapeutic for Farber disease thus has the potential to also improve central nervous system symptoms of the disease, something no other treatment option for Farber disease has achieved so far. As a proof-of-concept study, we first cross-bred acid ceramidase deficient mice with acid sphingomyelinase deficient mice in order to prevent ceramide accumulation. Double-deficient mice had reduced ceramide accumulation, fewer disease manifestations, and prolonged survival. We next targeted acid sphingomyelinase pharmacologically, to test if these findings would translate to a setting with clinical applicability. Surprisingly, treatment with the acid sphingomyelinase inhibitor amitriptyline was toxic to acid ceramidase deficient mice and killed them within a few days of treatment. In conclusion, our study provides the first proof-of-concept that acid sphingomyelinase could be a potential new therapeutic target for Farber disease to reduce disease manifestations and prolong survival. However, we also identified previously unknown toxicity of the functional acid sphingomyelinase inhibitor amitriptyline in the context of Farber disease, strongly cautioning against the use of this substance class for Farber disease patients.
Acquiring Syntactic Variability: The Production of Wh-Questions in Children and Adults Speaking Akan
(2020)
This paper investigates the predictions of the Derivational Complexity Hypothesis by studying the acquisition of wh-questions in 4- and 5-year-old Akan-speaking children in an experimental approach using an elicited production and an elicited imitation task. Akan has two types of wh-question structures (wh-in-situ and wh-ex-situ questions), which allows an investigation of children’s acquisition of these two question structures and their preferences for one or the other. Our results show that adults prefer to use wh-ex-situ questions over wh-in-situ questions. The results from the children show that both age groups have the two question structures in their linguistic repertoire. However, they differ in their preferences in usage in the elicited production task: while the 5-year-olds preferred the wh-in-situ structure over the wh-ex-situ structure, the 4-year-olds showed a selective preference for the wh-in-situ structure in who-questions. These findings suggest a developmental change in wh-question preferences in Akan-learning children between 4 and 5 years of age, with a so far unobserved u-shaped developmental pattern. In the elicited imitation task, all groups showed a strong tendency to maintain the structure of in-situ and ex-situ questions when repeating grammatical questions. When repairing ungrammatical ex-situ questions, participants hardly ever changed the structure to a grammatical in-situ question; instead, they inserted the missing morphemes while keeping the ex-situ structure. Together, our findings provide only partial support for the Derivational Complexity Hypothesis.
Successful communication is something people pursue throughout their lives. To effectively convey their own information to others, people employ various linguistic tools, such as word order information, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS). An important issue in child language acquisition is therefore how children acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and her colleagues (2017), a sentence-picture verification task was used to investigate whether three- to five-year-old Mandarin-speaking children as well as Mandarin-speaking adults could apply prosodic information to recognize focus in sentences. Moreover, in the second study, not only Mandarin-speaking adults and children but also German-speaking adults and children were included to test the assumption that children can reach adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. In this study, the same paradigm as in the first study, the sentence-picture verification task, was employed together with the eye-tracking method. Finally, in the last study, the question of whether five-year-old Mandarin-speaking children could understand pre-subject only sentences was addressed, and again whether prosodic information would help them to better understand this kind of sentence.
The overall results seem to suggest that Mandarin-speaking children can make use of the specific linguistic cues of their ambient language from early on. That is, in Mandarin, a topic-prominent tone language, word order information plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow the word order information. Moreover, although German-speaking children seemed able to follow the prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason for this result is that there are more ways of marking focus in German, such as flexible word order, prosodic information, and focus particles, and it therefore takes German-speaking children longer to master these linguistic tools. Another important empirical finding regarding syntactically-marked focus in German is that the cleft construction does not seem to be a valid focus construction, a result that corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how the parser directs attention when recognizing focus. The final study showed that, with explicit verbal context, Mandarin-speaking children could understand pre-subject only sentences, and it brought a better understanding of the acquisition of the focus particle only by Mandarin-speaking children.
Across currents
(2017)
Business process management is experiencing a large uptake by industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff becomes involved in actual modeling practice, it is crucial to assure model quality and homogeneity along with providing suitable aids for creating models. In this paper we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns - chunks of actions often appearing together in business processes. In particular, we specify action patterns and demonstrate how they can be identified from existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. We put our approach to the test by applying it to the collection of process models from the SAP Reference Model.
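As a rough illustration of the mining step (illustrative only; the paper's notions of support, confidence, and action extraction from process models are richer), one can treat each model as the set of actions it contains and mine rules of the form "if these actions occur, suggest this action":

```python
from itertools import combinations

def mine_action_rules(models, min_support=0.4, min_confidence=0.8):
    """Toy association-rule miner over sets of actions per process model.
    Returns (antecedent_actions, suggested_action, support, confidence) tuples."""
    n = len(models)
    support = lambda items: sum(1 for m in models if items <= m) / n
    actions = set().union(*models)
    rules = []
    for size in (1, 2):                               # small antecedents only, for brevity
        for antecedent in combinations(sorted(actions), size):
            a = frozenset(antecedent)
            s_a = support(a)
            if s_a < min_support:
                continue
            for target in actions - a:
                s_rule = support(a | {target})
                if s_rule >= min_support and s_rule / s_a >= min_confidence:
                    rules.append((set(a), target, s_rule, s_rule / s_a))
    return rules

# Example repository of four made-up models, each reduced to its set of actions.
models = [frozenset(s) for s in (
    {"check invoice", "approve invoice", "archive invoice"},
    {"check invoice", "approve invoice", "pay invoice"},
    {"check invoice", "reject invoice"},
    {"check invoice", "approve invoice", "pay invoice", "archive invoice"},
)]
for antecedent, target, sup, conf in mine_action_rules(models):
    print(f"{antecedent} -> {target}  (support={sup:.2f}, confidence={conf:.2f})")
```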
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
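The estimator at the core of such an active evaluation process can be sketched generically as importance-weighted risk estimation (our own naming and a deliberately simple instrumental distribution, not the optimal distributions derived in the thesis):

```python
import numpy as np

def active_error_estimate(losses, p, q, sample_size, rng):
    """Importance-weighted estimate of the expected loss (e.g. error rate).
    losses[i] = loss of the model on instance i, revealed after its label is queried;
    p[i]      = probability of instance i under the test distribution;
    q[i]      = instrumental sampling probability used to select instances for labeling."""
    idx = rng.choice(len(losses), size=sample_size, replace=True, p=q)
    weights = p[idx] / q[idx]
    # Self-normalized importance sampling keeps the estimate in the range of the losses.
    return np.sum(weights * losses[idx]) / np.sum(weights)

# Example: a pool of 1000 instances with a uniform test distribution; the instrumental
# distribution oversamples instances the model is uncertain about (illustrative choice only).
rng = np.random.default_rng(0)
losses = (rng.random(1000) < 0.12).astype(float)        # pretend 0/1 losses revealed by labeling
uncertainty = rng.random(1000)
p = np.full(1000, 1 / 1000)
q = 0.5 * p + 0.5 * uncertainty / uncertainty.sum()     # mixture keeps q > 0 everywhere
print(active_error_estimate(losses, p, q, sample_size=100, rng=rng))
```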
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), focusing on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally reaching up to 2.0 mm yr⁻¹. In light of the virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
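As a rough illustration of how such a rate is obtained, the following sketch sums the horizontal heave implied by a set of fault throws (assuming planar normal faults with a uniform 60° dip) and divides it by the age of the faulted marker; the throw values, dip, and age below are made up and do not reproduce the thesis' measurements.

```python
# Hypothetical back-of-the-envelope calculation of a minimum extension rate
# from summed fault throws along a profile.
import math

throws_m = [12.0, 35.0, 8.0, 21.0]   # fault throws along a profile, in metres (illustrative)
dip_deg = 60.0                        # assumed planar normal-fault dip
age_yr = 0.5e6                        # assumed age of the faulted marker unit

# Horizontal heave of each fault = throw / tan(dip); summing gives total extension.
heave_m = sum(t / math.tan(math.radians(dip_deg)) for t in throws_m)
rate_mm_per_yr = heave_m * 1000.0 / age_yr
print(f"minimum extension rate: {rate_mm_per_yr:.2f} mm/yr")
```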
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprinting by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are NNE-striking dip-slip faults and thus reflect the present-day tectonic stress field; inside Menengai Caldera, however, persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. At present, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated, because the combination of faulting and magmatic activity has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic phases in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the wetter African Humid Period of the early Holocene. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward, and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift, and in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem I participated in research to analyze a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma. Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases when volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment, and the biodiversity of extensional zones in Kenya and elsewhere.
The presumably comparable environmental conditions of early Mars and early Earth around 3.7 Ga ago – the time from which the first fossil records of life on Earth date – suggest the possibility that life emerged on both planets in parallel. As conditions changed, the hypothetical life on Mars either became extinct or was able to adapt and might still exist in biological niches. The controversially discussed detection of methane on Mars led to the assumption that it must have a recent origin – either abiotic, through active volcanism or chemical processes, or biogenic. Spatial and seasonal variations in the detected methane concentrations, together with correlations between the presence of water vapor and geological features such as subsurface hydrogen that co-occur with locally increased methane concentrations, fueled the hypothesis of a possible biological source of the methane on Mars.
Therefore, phylogenetically old methanogenic archaea, which evolved under early Earth conditions, are often used as model organisms in astrobiological studies to investigate the potential of life to exist in possible extraterrestrial habitats on our neighboring planet. In this thesis, methanogenic archaea originating from two extreme environments on Earth were investigated to test their ability to be active under simulated Mars analog conditions. These extreme environments – Siberian permafrost-affected soil and the chemoautotrophically based terrestrial ecosystem of Movile Cave, Romania – are regarded as analogs for possible Martian (subsurface) habitats. Two novel species of methanogenic archaea isolated from these environments were described within the frame of this thesis.
It could be shown that concentrations of up to 1 wt% of Mars regolith analogs added to the growth media had a positive influence on the methane production rates of the tested methanogenic archaea, whereas higher concentrations resulted in decreasing rates. Nevertheless, the organisms were able to metabolize when incubated on water-saturated soil matrices made of Mars regolith analogs without any additional nutrients. Long-term desiccation resistance of more than 400 days was proven by reincubation and by indirect counting of viable cells through a combined treatment with propidium monoazide (to inactivate the DNA of destroyed cells) and quantitative PCR. Phyllosilicate-rich regolith analogs appear to be the most suitable soil mixtures for the tested methanogenic archaea to be active under Mars analog conditions. Furthermore, in a simulation chamber experiment the activity of the permafrost methanogen strain Methanosarcina soligelidi SMA-21 under Mars subsurface analog conditions was demonstrated: real-time wavelength modulation spectroscopy measurements detected an increase in the methane concentration at temperatures down to -5 °C.
The results presented in this thesis contribute to the understanding of the activity potential of methanogenic archaea under Mars analog conditions and thereby provide insights into the possible habitability of present-day (near-)subsurface environments on Mars. They also support the data interpretation of future life detection missions on that planet, for example the ExoMars mission of the European Space Agency (ESA) and Roscosmos, which is planned to launch in 2018 and aims to drill into the Martian subsurface.
Background
Earlier studies have shown that balance training (BT) has the potential to induce performance enhancements in selected components of physical fitness (i.e., balance, muscle strength, power, speed). While there is ample evidence on the long-term effects of BT on components of physical fitness in youth, less is known about the short-term or acute effects of single BT sessions on selected measures of physical fitness.
Objective
To examine the acute effects of different balance exercise types on balance, change-of-direction (CoD) speed, and jump performance in youth female volleyball players.
Methods
Eleven female players aged 14 years participated in this study. Three types of balance exercises (i.e., anterior, posterolateral, and rotational) were conducted in randomized order. For each exercise, 3 sets of 5 repetitions were performed. Before and after the balance exercises, participants were tested for static balance (center of pressure surface area [CoP SA] and velocity [CoP V]) on foam and firm surfaces, CoD speed (T-Half test), and vertical jump height (countermovement jump [CMJ] height). A 3 (condition: anterior, posterolateral, rotational balance exercise type) × 2 (time: pre, post) analysis of variance with repeated measures on time was computed.
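As a rough illustration of this analysis, the following Python sketch runs a 3 (condition) × 2 (time) repeated-measures ANOVA with statsmodels on simulated data; the column names and values are assumptions, not the study's dataset.

```python
# Illustrative sketch of a 3 (condition) x 2 (time) repeated-measures ANOVA
# using statsmodels' AnovaRM on simulated balance data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = range(1, 12)                               # 11 players
conditions = ["anterior", "posterolateral", "rotational"]
times = ["pre", "post"]

# One simulated CoP surface area value per subject, condition, and time point.
rows = [
    {"subject": s, "condition": c, "time": t,
     "cop_sa": rng.normal(loc=100 - (5 if t == "post" else 0), scale=10)}
    for s in subjects for c in conditions for t in times
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="cop_sa", subject="subject",
              within=["condition", "time"]).fit()
print(res)  # F and p values for condition, time, and condition:time
```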
Results
Findings showed no significant condition × time interactions for any outcome measure (p > 0.05). However, there were small main effects of time for CoP SA on firm and foam surfaces (both d = 0.38; all p < 0.05), with no effect for CoP V on either surface condition (p > 0.05). For CoD speed, findings showed a large main effect of time (d = 0.91; p < 0.001). However, for CMJ height, no main effect of time was observed (p > 0.05).
Conclusions
Overall, our results indicated small-to-large changes in balance and CoD speed performance, but not in CMJ height, in youth female volleyball players, regardless of the balance exercise type. Accordingly, it is recommended to regularly integrate balance exercises before sport-specific training to optimize performance development in youth female volleyball players.
Background: High-intensity muscle actions have the potential to temporarily improve performance, a phenomenon denoted postactivation performance enhancement.
Objectives: This study determined the acute effects of different stretch-shortening cycle (fast vs. slow) and strength (dynamic vs. isometric) exercises executed during one training session on subsequent balance performance in youth weightlifters.
Materials and Methods: Sixteen male and female young weightlifters, aged 11.3 ± 0.6 years, performed four strength exercise conditions in randomized order, including dynamic strength (DYN; 3 sets of 3 repetitions at 10 RM) and isometric strength exercises (ISOM; 3 sets of maintaining 3 s of the 10 RM back-squat), as well as fast (FSSC; 3 sets of 3 repetitions of 20-cm drop-jumps) and slow (SSSC; 3 sets of 3 hurdle jumps over a 20-cm obstacle) stretch-shortening cycle protocols. Balance performance was tested before and after each of the four exercise conditions in bipedal stance on an unstable surface (i.e., a BOSU ball with the flat side facing up) using two dependent variables, i.e., center of pressure surface area (CoP SA) and velocity (CoP V).
Results: There was a significant effect of time on CoP SA and CoP V [F(1,60)=54.37, d=1.88, p<0.0001; F(1,60)=9.07, d=0.77, p=0.003]. In addition, a statistically significant effect of condition on CoP SA and CoP V [F(3,60)=11.81, d=1.53, p<0.0001; F(3,60)=7.36, d=1.21, p=0.0003] was observed. Statistically significant condition-by-time interactions were found for the balance parameters CoP SA (p<0.003, d=0.54) and CoP V (p<0.002, d=0.70). Specific to contrast analysis, all specified hypotheses were tested and demonstrated that FSSC yielded significantly greater improvements than all other conditions in CoP SA and CoP V [p<0.0001 (d=1.55); p=0.0004 (d=1.19), respectively]. In addition, FSSC yielded significantly greater improvements compared with the two conditions for both balance parameters [p<0.0001 (d=2.03); p<0.0001 (d=1.45)].
Conclusion: Fast stretch-shortening cycle exercises appear to be more effective at improving short-term balance performance in young weightlifters. Given the importance of balance for overall competitive achievement in weightlifting, it is recommended that young weightlifters implement dynamic plyometric exercises in the fast stretch-shortening cycle during the warm-up to improve their balance performance.
Background and aims:
To succeed in competition, elite team and individual athletes often seek to develop both high levels of muscle strength and power and cardiorespiratory endurance. In this context, concurrent training (CT) is a commonly applied and effective training approach. Although exposed to high training loads, youth athletes (≤ 18 years) remain underrepresented in the scientific literature. Moreover, immunological responses to CT have received little attention. Therefore, the aims of this work were to examine the acute (< 15 min) and delayed (≥ 6 hours) effects of different exercise orders in CT on immunological stress responses, muscular fitness, metabolic responses, and rating of perceived exertion (RPE) in highly trained youth male and female judo athletes.
Methods:
A total of twenty male and thirteen female participants, with average ages of 16 ± 1.8 years and 14.4 ± 2.1 years, respectively, were included in the study. They were randomly assigned to two CT sessions: power-endurance versus endurance-power (i.e., study 1), or strength-endurance versus endurance-strength (i.e., study 2). Markers of immune response (i.e., white-blood-cells, granulocytes, lymphocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index), muscular fitness (i.e., counter-movement jump [CMJ]), metabolic responses (i.e., blood lactate, glucose), and RPE were collected at different time points (i.e., PRE12H, PRE, MID, POST, POST6H, POST22H).
Results (study 1):
There were significant time*order interactions for white-blood-cells, lymphocytes, granulocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index. The power-endurance order resulted in significantly larger PRE-to-POST increases in white-blood-cells, monocytes, and lymphocytes while the endurance-power order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte-ratio and systemic-inflammation-index. Likewise, significantly larger increases from PRE-to-POST6H in white-blood-cells and granulocytes were observed following the power-endurance order compared to endurance-power. All markers of immune response returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. Following the endurance-power order, blood lactate and glucose increased from PRE-to-MID but not from PRE-to-POST. Meanwhile, in the power-endurance order blood lactate and glucose increased from PRE-to-POST but not from PRE-to-MID. A significant time*order interaction was observed for CMJ-force with larger PRE-to-POST decreases in the endurance-power order compared to the power-endurance order. Further, CMJ-power showed larger PRE-to-MID performance decreases following the power-endurance order compared to the endurance-power order. Regarding RPE, significant time*order interactions were noted with larger PRE-to-MID values following the endurance-power order and larger PRE-to-POST values following the power-endurance order.
Results (study 2):
There were significant time*order interactions for lymphocytes, monocytes, granulocyte-lymphocyte-ratio, and systemic-inflammation-index. The strength-endurance order resulted in significantly larger PRE-to-POST increases in lymphocytes while the endurance-strength order resulted in significantly larger PRE-to-POST increases in the granulocyte-lymphocyte-ratio and systemic-inflammation-index. All markers of the immune system returned toward baseline values at POST22H. Moreover, there was a significant time*order interaction for blood glucose and lactate. From PRE-to-MID, there was a significantly greater increase in blood lactate and glucose following the endurance-strength order compared to the strength-endurance order. Meanwhile, from PRE-to-POST there was a significantly higher increase in blood glucose following the strength-endurance order compared to the endurance-strength order. Regarding physical fitness, a significant time*order interaction was observed for CMJ-force and CMJ-power with larger PRE-to-MID increases following the endurance-strength order compared to the strength-endurance order. For RPE, significant time*order interactions were noted with larger PRE-to-MID values following the endurance-strength order and larger PRE-to-POST values following the strength-endurance order.
Conclusions:
The primary findings from both studies revealed order-dependent effects on immune responses. In male youth judo athletes, the results demonstrated greater immunological stress responses, both immediately (≤ 15 min) and delayed (≥ 6 hours), following the power-endurance order compared to the endurance-power order. For female youth judo athletes, the results indicated higher acute, but not delayed, order-dependent changes in immune responses following the strength-endurance order compared to the endurance-strength order. It is worth noting that in both studies, all markers of immune system response returned to baseline levels within 22 hours. This suggests that successful recovery from the exercise-induced immune stress response was achieved within 22 hours. Regarding metabolic responses, physical fitness, and perceived exertion, the findings from both studies indicated acute (≤ 15 min) alterations that were dependent on the exercise order. These alterations were primarily influenced by the endurance exercise component. Moreover, study 1 provided substantial evidence suggesting that internal load measures, such as immune markers, may differ from external load measures. This indicates a disparity between immunological, perceived, and physical responses following both concurrent training orders. Therefore, it is crucial for practitioners to acknowledge these differences and take them into consideration when designing training programs.
The effects of static stretching (StS) on subsequent strength and power activities have been one of the most debated topics in the sport science literature over the past decades. The aims of this review are (1) to summarize previous and current findings on the acute effects of StS on muscle strength and power performances; (2) to update readers’ knowledge related to previous caveats; and (3) to discuss the underlying physiological mechanisms of short-duration StS when performed as a single-mode treatment or when integrated into a full warm-up routine. Over the last two decades, StS has been considered harmful to subsequent strength and power performances. Accordingly, it has been recommended not to apply StS before strength- and power-related activities. More recent evidence suggests that when performed as a single-mode treatment or when integrated within a full warm-up routine including aerobic activity, dynamic stretching, and sport-specific activities, short-duration StS (≤60 s per muscle group) only trivially impairs subsequent strength and power activities (∆1–2%). Yet, longer StS durations (>60 s per muscle group) appear to induce substantial and practically relevant declines in strength and power performances (∆4.0–7.5%). Moreover, recent evidence suggests that when included in a full warm-up routine, short-duration StS may even contribute to lowering the risk of sustaining musculotendinous injuries, especially with high-intensity activities (e.g., sprint running and change-of-direction speed). During short-duration StS, neuromuscular activation and musculotendinous stiffness appear not to be affected, in contrast to long-duration StS. Among other factors, this could be due to an elevated muscle temperature induced by a dynamic warm-up program. More specifically, elevated muscle temperature leads to increased muscle fiber conduction velocity and improved binding of the contractile proteins (actin, myosin). Therefore, our previous understanding of harmful StS effects on subsequent strength and power activities has to be updated. In fact, short-duration StS should be included as an important warm-up component before the uptake of recreational sports activities due to its potential positive effects on flexibility and musculotendinous injury prevention. However, in high-performance athletes, short-duration StS has to be applied with caution due to its negligible but still prevalent negative effects on subsequent strength and power performances, which could have an impact on performance during competition.