The closed-chamber method is the most common approach to determining CH4 fluxes in peatlands. The concentration change in the chamber is monitored over time, and the flux is usually calculated from the slope of a linear regression function. Theoretically, the gas exchange cannot be constant over time but has to decrease as the concentration gradient between chamber headspace and soil air decreases. In this study, we test whether we can detect this non-linearity in the concentration change during chamber closure with six air samples. We generally expect a low concentration gradient on dry sites (hummocks) and thus exponential concentration changes in the chamber due to a quick equilibration of gas concentrations between peat and chamber headspace. On wet (flarks) and sedge-covered sites (lawns), we expect a high gradient and near-linear concentration changes in the chamber. To evaluate these model assumptions, we calculate both linear and exponential regressions for a test data set (n = 597) from a Finnish mire. We use the Akaike Information Criterion with small-sample second-order bias correction to select the best-fitting model. 13.6%, 19.2% and 9.8% of measurements on hummocks, lawns and flarks, respectively, were best fitted by an exponential regression model. A flux estimate derived from the slope of the exponential function at the beginning of the chamber closure can be significantly higher than one derived from the slope of the linear regression function. Non-linear concentration-over-time curves occurred mostly during periods of changing water table. This could be due to either natural processes or chamber artefacts, e.g. initial pressure fluctuations during chamber deployment. To exclude either natural processes or artefacts as the cause of non-linearity, further information, e.g. CH4 concentration profile measurements in the peat, would be needed. If this is not available, the range of uncertainty can be substantial. We suggest using the range between the slopes of the exponential regression at the beginning and at the end of the closure time as an estimate of the overall uncertainty.
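The model-selection step described above can be sketched in code: fit both regression forms to one chamber closure and keep the one with the lower AICc. This is an illustrative reconstruction under assumed functional forms and sample values, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def aicc(rss, n, k):
    """AIC with small-sample second-order bias correction (AICc)."""
    rss = max(rss, 1e-12)              # guard against log(0) on perfect fits
    aic = n * np.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def linear(t, a, b):
    return a + b * t

def exponential(t, c0, ceq, kappa):
    # Headspace concentration relaxing toward equilibrium with the soil air
    return ceq + (c0 - ceq) * np.exp(-kappa * t)

def select_model(t, c):
    """Fit both models to the sampled concentrations; return the
    AICc-preferred model name and the corresponding flux slope."""
    n = len(c)
    p_lin, _ = curve_fit(linear, t, c)
    rss_lin = np.sum((c - linear(t, *p_lin)) ** 2)
    p_exp, _ = curve_fit(exponential, t, c, p0=[c[0], c[-1], 0.1], maxfev=10000)
    rss_exp = np.sum((c - exponential(t, *p_exp)) ** 2)
    # k counts the fitted parameters plus the residual variance
    if aicc(rss_exp, n, 4) < aicc(rss_lin, n, 3):
        # Initial flux estimate: slope of the exponential at t = 0
        return "exponential", p_exp[2] * (p_exp[1] - p_exp[0])
    return "linear", p_lin[1]
```

For a closure that follows the exponential form, `select_model` returns the initial slope kappa * (ceq - c0), i.e. the higher flux estimate discussed above; the slope at the end of the closure time would bound the suggested uncertainty range from below.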
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display more favorable behavior in terms of sampling efficiency, i.e., the traditional GHMC/GSHMC implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
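The role of the momentum flip can be made concrete with a toy GHMC cycle for a one-dimensional harmonic oscillator; the integrator settings, refreshment angle, and potential below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_u(q):
    # Potential U(q) = q^2 / 2, so the target density of q is standard normal
    return q

def hamiltonian(q, p):
    return 0.5 * q ** 2 + 0.5 * p ** 2

def leapfrog(q, p, dt=0.1, n_steps=10):
    """Velocity-Verlet integration of Hamilton's equations."""
    p = p - 0.5 * dt * grad_u(q)
    for _ in range(n_steps - 1):
        q = q + dt * p
        p = p - dt * grad_u(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad_u(q)
    return q, p

def ghmc_step(q, p, phi=0.3, flip_on_reject=True):
    # Partial momentum refreshment: mix p with fresh Gaussian noise
    p = np.cos(phi) * p + np.sin(phi) * rng.normal()
    q_new, p_new = leapfrog(q, p)
    # Metropolis test on the energy change of the proposal
    if rng.random() < np.exp(hamiltonian(q, p) - hamiltonian(q_new, p_new)):
        return q_new, p_new
    # Standard detailed balance negates the momentum on rejection,
    # reversing the trajectory; the modified condition avoids this flip.
    return (q, -p) if flip_on_reject else (q, p)
```

Setting `flip_on_reject=False` corresponds to the modified detailed balance condition discussed above, leaving the momentum (and hence the trajectory direction) unchanged on rejection.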
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is known as a compound flood event. Therefore, we aimed to identify statistical similarities between loss-driving factors across flood types and to test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys investigated several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the level of most of these features differed across flood-type subsamples (e.g., degree of preparedness), they did so in a nonregular pattern. A variable selection process indicates that, besides hazard and building characteristics, information on property-level preparedness is a relevant predictor of the loss ratio. These variables represent information that is rarely used in loss modeling. Models should be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will ultimately clarify the conditions that allow loss models to be transferred in space and time.
Key Points:
- Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model.
- Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio.
- Flood-type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability.
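The variable-selection idea can be sketched as a greedy forward selection of predictors by adjusted R². This is a generic stand-in for the study's (unspecified) selection procedure, with an assumed predictor matrix `X` and loss-ratio vector `y`.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, k = X1.shape
    return 1.0 - (ss_res / (n - k)) / (ss_tot / (n - 1))

def forward_select(X, y):
    """Add predictors one at a time, stopping when no candidate
    improves the adjusted R^2 of the loss-ratio model."""
    selected, remaining = [], list(range(X.shape[1]))
    best = -np.inf
    while remaining:
        scores = {j: adjusted_r2(X[:, selected + [j]], y) for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected
```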
Risk-based insurance is a commonly proposed and discussed flood risk adaptation mechanism in policy debates across the world, such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required, as the affordability of adaptation strategies is an important concern for policymakers, yet such a concept is rarely examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows a previously normative concept to be quantified in monetary terms, rendering it more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, it compares the outcomes generated by three different definitions of unaffordability in order to identify the most robust definition. In doing so, the residual income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaptation measures in order to develop a common metric for indicating a potential unaffordability problem.
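As a toy illustration of how the residual income definition can be operationalized: a premium is flagged as unaffordable when paying it would push a household's income below a minimum-consumption threshold. The function name, inputs, and test figures below are hypothetical.

```python
def unaffordable_share(incomes, premiums, min_consumption):
    """Share of households whose income net of necessary consumption
    cannot cover the risk-based premium (residual-income definition)."""
    flagged = [
        income - min_consumption < premium
        for income, premium in zip(incomes, premiums)
    ]
    return sum(flagged) / len(flagged)
```

Sweeping `min_consumption` over a plausible range and observing how little the flagged share moves is one way to read the paper's sensitivity-to-threshold comparison.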
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to, and experiences of, introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of a number of teaching professionals and experts from the UK and Germany was carried out with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should move forward.
The possibilities and limits of structure refinement of Langmuir-Blodgett films by means of symmetrical reflection of X-rays are described using the example of a stearic acid multilayer. Three different techniques for the determination of the electron density profile from reflectivity data are compared: a Fourier method, a Patterson method, and model calculations. The important role of a priori information in finding the best structure model is outlined.
A comparative whole-genome approach identifies bacterial traits for marine microbial interactions
(2022)
Luca Zoccarato, Daniel Sher et al. leverage publicly available bacterial genomes from marine and other environments to examine traits underlying microbial interactions.
Their results provide a valuable resource to investigate clusters of functional and linked traits to better understand marine bacteria community assembly and dynamics.
Microbial interactions shape the structure and function of microbial communities with profound consequences for biogeochemical cycles and ecosystem health. Yet, most interaction mechanisms are studied only in model systems and their prevalence is unknown. To systematically explore the functional and interaction potential of sequenced marine bacteria, we developed a trait-based approach, and applied it to 473 complete genomes (248 genera), representing a substantial fraction of marine microbial communities.
We identified genome functional clusters (GFCs) which group bacterial taxa with common ecology and life history. Most GFCs revealed unique combinations of interaction traits, including the production of siderophores (10% of genomes), phytohormones (3-8%) and different B vitamins (57-70%). Specific GFCs, comprising Alpha- and Gammaproteobacteria, displayed more interaction traits than expected by chance, and are thus predicted to preferentially interact synergistically and/or antagonistically with bacteria and phytoplankton. Linked trait clusters (LTCs) identify traits that may have evolved to act together (e.g., secretion systems, nitrogen metabolism regulation and B vitamin transporters), providing testable hypotheses for complex mechanisms of microbial interactions.
Our approach translates multidimensional genomic information into an atlas of marine bacteria and their putative functions, relevant for understanding the fundamental rules that govern community assembly and dynamics.
A successful assignment of the fundamental bands observed in the experimental IR spectra of mn-12S(2)O(2) and fn-12S(2)O(2) dithiacrown ethers was achieved with the aid of density functional theory (DFT) based quantum mechanical calculations carried out at the B3LYP/6-31G(d) and B3LYP/6-31+G(d) levels of theory. Two different scaling approaches, (i) the scaled quantum mechanics force field (SQM FF) methodology and (ii) scaling frequencies with dual empirical scale factors, were used to fit the calculated harmonic frequencies to the experimental ones. Potential energy distribution (PED) calculations were carried out to define the internal coordinate contributions to each normal mode and to assign the corresponding normal modes of the molecules. The effects of the conformational differences on the IR-active normal modes of the two isomeric molecules and their corresponding experimental frequencies are discussed in the light of the calculated spectral data.
In this paper, two groups supporting different views on the mechanism of light-induced polymer deformation argue about the respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of the polymer material during deformation is generated by the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone, which in turn results in local mechanical stress sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate we chose three polymers differing in glass transition temperature (32 °C, 87 °C and 95 °C), representing extreme cases of flexible and rigid materials. Polymer film deformation occurring during irradiation with different interference patterns is recorded using a home-made set-up combining an optical part for the generation of interference patterns and an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymeric films to switch their topography in situ and reversibly by changing the irradiation conditions.
We discuss the results of reversible deformation of the three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and with light of homogeneous intensity, in terms of two approaches: the re-orientational and the photo-fluidization concepts. Both agree that the formation of opto-mechanically induced stresses is a necessary prerequisite for the deformation process. Using this argument, the deformation process can be characterized either as a flow or as mass transport.
The importance of cultural ecosystem services in agricultural landscapes is increasingly recognized, as agricultural scale enlargement and abandonment affect the aesthetic and recreational values of agricultural landscapes. Landscape preference studies addressing these types of values often yield context-specific outcomes, limiting the applicability of their results in landscape policy. Our approach measures the relative importance of landscape features across agricultural landscapes. It was applied among visitors to the agricultural landscapes of Winterswijk, The Netherlands (n = 191) and the Märkische Schweiz, Germany (n = 113). We set up a parallel-designed choice experiment, using regionally specific, photorealistic visualizations of four comparable landscape attributes. In the Dutch landscape, visitors highly value hedgerows and tree lines, whereas groups of trees and crop diversity are highly valued in the German landscape. Furthermore, we find that differences in relative preference for landscape attributes are, to some extent, explained by socio-cultural background variables such as the visitors' education level and affinity with agriculture. This approach contributes to a better understanding of the cross-regional variation of aesthetic and recreational values and how these values relate to characteristics of the agricultural landscape, which could support the integration of cultural services in landscape policy.
In this paper, we analyse the effectiveness of flood management measures based on the concept known as "retaining water in the landscape". The investigated measures include afforestation, micro-ponds and small reservoirs. A comparative, model-based methodological approach has been developed and applied to three meso-scale catchments located in different European hydro-climatological regions: Poyo (184 km²) in the Spanish Mediterranean, Upper Iller (954 km²) in the German Alps and Kamp (621 km²) in northeast Austria, representing the Continental hydro-climate. This comparative analysis has found general similarities in spite of the particular differences among the studied areas. In general terms, the flood reduction achieved through the concept of "retaining water in the landscape" depends on the following factors: the increase in storage capacity in the catchment resulting from such measures, the characteristics of the rainfall event, the antecedent soil moisture condition and the spatial distribution of the flood management measures in the catchment. Overall, our study has shown that this concept is effective for small and medium events, but almost negligible for the largest and least frequent floods; this holds true across the different hydro-climatic regions and with different land-use, soil and morphological settings.
Genomic prediction has revolutionized crop breeding despite remaining issues with the transferability of models to unseen environmental conditions and environments. Using endophenotypes rather than genomic markers makes it possible to build phenomic prediction models that can, in part, account for this challenge. Here, we compare and contrast genomic prediction and phenomic prediction models for three growth-related traits, namely leaf count, tree height, and trunk diameter, from two coffee 3-way hybrid populations exposed to a series of treatment-inducing environmental conditions. The models are based on seven different statistical methods built with genomic markers and chlorophyll a fluorescence (ChlF) data used as predictors. This comparative analysis demonstrates that the best-performing phenomic prediction models show higher predictability than the best genomic prediction models for the considered traits and environments in the vast majority of comparisons within the 3-way hybrid populations. In addition, we show that phenomic prediction models are transferable between conditions, but to a lower extent between populations, and we conclude that ChlF data can serve as alternative predictors in statistical models of coffee hybrid performance. Future directions will explore their combination with other endophenotypes to further improve the prediction of growth-related traits for crops.
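One of the simpler statistical methods that can serve in either model family is ridge regression; the sketch below (an assumed setup, not the paper's pipeline) fits it with one predictor matrix — genomic markers or ChlF traces alike — and scores predictability as the correlation between predicted and observed trait values on held-out hybrids.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression coefficients."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def predictability(X_train, y_train, X_test, y_test, lam=1.0):
    """Correlation between predicted and observed trait values
    on held-out individuals (the usual predictability measure)."""
    beta = ridge_fit(X_train, y_train, lam)
    return np.corrcoef(X_test @ beta, y_test)[0, 1]
```

Running `predictability` once with a marker matrix and once with a ChlF-feature matrix for the same hybrids mirrors the genomic-versus-phenomic comparison described above.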
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with built-in fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma (PC12) cells. Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion, as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism, as well as the effect of mechanical stimulation on dopamine release, were demonstrated using the programmable fluid-handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
Objective: Pre-eclampsia is a serious complication of pregnancy with high morbidity and mortality and an incidence of 3-5% of all pregnancies. Early prediction is still insufficient in clinical practice. Although most pre-eclamptic patients have pathological uterine perfusion in the second trimester, perfusion disturbance has a positive predictive accuracy (PPA) of only approximately 30%. Methods: Non-invasive continuous blood pressure recordings were taken simultaneously via a finger cuff for 30 min. Time series of systolic and diastolic beat-to-beat pressure values were extracted to analyse heart rate and blood pressure variability and baroreflex sensitivity in 102 second-trimester pregnancies, to assess predictability of pre-eclampsia (n = 16). All women underwent Doppler investigations of the uterine arteries. Results: We identified a combination of three variability and baroreflex parameters that best predicts pre-eclampsia several weeks before clinical manifestation. The discriminant function of these three parameters classified patients with later pre-eclampsia with a sensitivity of 87.5%, a specificity of 83.7%, and a PPA of 50.0%. Combined with Doppler investigations of the uterine arteries, the PPA increased to 71.4%. Conclusions: This technique of incorporating one-stop clinical assessment of uterine perfusion and variability parameters in the second trimester produces the most effective prediction of pre-eclampsia to date.
X-ray photoelectron spectroscopy (XPS) is a powerful tool for probing the local chemical environment of atoms near surfaces. When applied to soft matter, such as polymers, XPS spectra are frequently shifted and broadened by thermal atom motion and by interchain interactions. We present a combined quantum mechanics (QM)/molecular dynamics (MD) simulation of the X-ray photoelectron spectra of polyvinyl alcohol (PVA) using oligomer models in order to account for and quantify these effects on the XPS (C1s) signal. In our study, molecular dynamics at finite temperature were performed with a classical force field and by ab initio MD (AIMD) using the Car-Parrinello method. Snapshots along the trajectories represent possible conformers and/or neighbouring environments, with different C1s ionization potentials for individual C atoms leading to broadened XPS peaks. The latter are determined by Delta-Kohn-Sham calculations. We also examine the experimental practice of gauging the XPS (C1s) signals of alkylic C atoms in C-containing polymers against the C1s signal of polyethylene (PE).
We find that (i) the experimental XPS (C1s) spectra of PVA (position and width) can be roughly represented by single-strand models, (ii) interchain interactions lead to red-shifts of the XPS peaks by about 0.6 eV, (iii) AIMD simulations match the findings from classical MD semi-quantitatively, and (iv) the gauging procedure of XPS (C1s) signals to the values of PE introduces errors of about 0.5 eV.
Diatom diversity in lakes of northwest Yakutia (Siberia) was investigated by microscopic and genetic analysis of surface and cored lake sediments, to evaluate the use of sedimentary DNA for paleolimnological diatom studies and to identify hidden genetic diversity that cannot be detected by microscopic methods. Two short (76 and 73 bp) fragments and one longer (577 bp) fragment of the gene encoding the large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (rbcL) were used as genetic markers. Diverse morphological assemblages of diatoms, dominated by small benthic fragilarioid taxa, were retrieved from the sediments of each lake. These minute fragilarioid taxa were examined by scanning electron microscopy, revealing diverse morphotypes in Staurosira and Staurosirella from the different lakes. Genetic analyses indicated a dominance of haplotypes assigned to fragilarioid taxa and less genetic diversity in other diatom taxa. The long rbcL_577 amplicon identified considerable diversification among haplotypes clustering within the Staurosira/Staurosirella genera, revealing 19 different haplotypes whose spatial distribution appears to be primarily related to the latitude of the lakes, which corresponds to a vegetation and climate gradient. Our rbcL markers are valuable tools for tracking differences between diatom lineages that are not visible in their morphologies. These markers revealed putatively high genetic diversity within the Staurosira/Staurosirella species complex, at a finer scale than can be resolved by microscopic determination. The rbcL markers may provide additional reliable information on the diversity of barely distinguishable minute benthic fragilarioids. Environmental sequencing may thus allow the tracking of spatial and temporal diversification in Siberian lakes, especially in the context of diatom responses to recent environmental changes, which remains a matter of controversy.
In recent years, feedforward neural networks (NNs) have been successfully applied to reconstruct global plasmasphere dynamics in the equatorial plane. These neural network‐based models capture the large‐scale dynamics of the plasmasphere, such as plume formation and erosion of the plasmasphere on the nightside. However, their performance depends strongly on the availability of training data. When the data coverage is limited or non‐existent, as occurs during geomagnetic storms, the performance of NNs significantly decreases, as networks inherently cannot learn from the limited number of examples. This limitation can be overcome by employing physics‐based modeling during strong geomagnetic storms. Physics‐based models show a stable performance during periods of disturbed geomagnetic activity if they are correctly initialized and configured. In this study, we illustrate how to combine the neural network‐ and physics‐based models of the plasmasphere in an optimal way by using data assimilation. The proposed approach utilizes advantages of both neural network‐ and physics‐based modeling and produces global plasma density reconstructions for both quiet and disturbed geomagnetic activity, including extreme geomagnetic storms. We validate the models quantitatively by comparing their output to the in‐situ density measurements from RBSP‐A for an 18‐month out‐of‐sample period from June 30, 2016 to January 01, 2018 and computing performance metrics. To validate the global density reconstructions qualitatively, we compare them to the IMAGE EUV images of the He+ particle distribution in the Earth's plasmasphere for a number of events in the past, including the Halloween storm in 2003.
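At its core, the assimilation step weights the physics-model forecast against an observation by their respective error variances; the scalar sketch below illustrates the principle (a textbook Kalman update, not the study's full scheme).

```python
def assimilate(forecast, obs, var_forecast, var_obs):
    """Variance-weighted (scalar Kalman) blend of a model forecast
    with an observation; returns the analysis and its variance."""
    gain = var_forecast / (var_forecast + var_obs)
    analysis = forecast + gain * (obs - forecast)
    var_analysis = (1.0 - gain) * var_forecast
    return analysis, var_analysis
```

When in-situ density data are sparse (large `var_obs`), the gain shrinks and the analysis stays close to the physics-based forecast, which is exactly the storm-time behaviour motivated above; with abundant data the neural-network-constrained observations dominate.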
We present new radio/millimeter measurements of the hot magnetic star HR5907 obtained with the VLA and ALMA interferometers. We find that HR5907 is the most radio-luminous early-type star in the cm-mm band among those presently known. Its multi-wavelength radio light curves are strongly variable, with an amplitude that increases with radio frequency. The radio emission can be explained by populations of non-thermal electrons accelerated in the current sheets on the outer border of the magnetosphere of this fast-rotating magnetic star. We classify HR5907 as another member of the growing class of strongly magnetic fast-rotating hot stars in whose magnetospheres the gyro-synchrotron emission mechanism operates efficiently. The new radio observations of HR5907 are combined with archival X-ray data to study the physical conditions of its magnetosphere. The X-ray spectra of HR5907 show tentative evidence for the presence of a non-thermal spectral component. We suggest that the non-thermal X-rays originate in a stellar X-ray aurora produced by streams of non-thermal electrons impacting on the stellar surface. Taking advantage of the relation between the spectral index of the X-ray power-law spectrum and the non-thermal electron energy distribution, we perform 3-D modelling of the radio emission of HR5907. The wavelength-dependent radio light curves probe magnetospheric layers at different heights above the stellar surface. A detailed comparison between simulated and observed radio light curves leads us to conclude that the stellar magnetic field of HR5907 is likely non-dipolar, providing further indirect evidence of its complex magnetic field topology.
Next-generation sequencing methods provide comprehensive data for structural and functional analysis of the genome. Draft genomes with a low contig number and a high N50 value can give insight into the structure of the genome as well as provide information for its annotation. In this study, we designed a pipeline that can be used to assemble prokaryotic draft genomes with a low number of contigs and a high N50 value. We aimed to use a combination of two de novo assembly tools (SPAdes and IDBA-Hybrid) and evaluate the impact of this approach on the quality metrics of the assemblies. The pipeline was tested with raw short-read (< 300) sequence data for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and identified a closely related organism using the 16S rRNA extracted from this assembly. The IDBA-Hybrid assembler was then used to obtain a second assembly, guided by the genome of the closely related organism. Finally, SPAdes was run again using the second assembly, produced by IDBA-Hybrid, as a hint. The results were evaluated using QUAST and BUSCO. The pipeline was successful in reducing contig numbers and increasing N50 values in the draft genome assemblies while preserving their coverage.
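The N50 statistic that the pipeline aims to increase is straightforward to compute from a list of contig lengths; the lengths in the test below are illustrative.

```python
def n50(contig_lengths):
    """Largest length L such that contigs of length >= L together
    cover at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length
```

Fewer, longer contigs raise this value, which is why the combined SPAdes/IDBA-Hybrid assemblies score better than either tool alone.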
To explore the genetic determinants of obesity and type 2 diabetes (T2D), the German Center for Diabetes Research (DZD) conducted crossbreedings of the obese and diabetes-prone New Zealand Obese mouse strain with four different lean strains (B6, DBA, C3H, 129P2) that vary in their susceptibility to develop T2D. Genome-wide linkage analyses localized more than 290 quantitative trait loci (QTL) for obesity, 190 QTL for diabetes-related traits and 100 QTL for plasma metabolites in the outcross populations. A computational framework was developed that allowed us to refine critical regions and to nominate a small number of candidate genes by integrating reciprocal haplotype mapping and transcriptome data. The efficiency of this complex procedure was demonstrated for one obesity QTL: the genomic interval of 35 Mb with 502 annotated candidate genes was narrowed down to six candidates. Accordingly, congenic mice retained the obesity phenotype owing to an interval that contains three of the six candidate genes. Among these, the phospholipase PLA2G4A exhibited elevated expression in adipose tissue of obese human subjects and is therefore a critical regulator of the obesity locus. Together, our broad and complex approach demonstrates that combined- and comparative-cross analysis improves mapping resolution and represents a valid tool for the identification of disease genes.
The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.
A close call
(2018)
The present study investigated how lexical selection is influenced by the number of semantically related representations (semantic neighbourhood density) and their similarity (semantic distance) to the target in a speeded picture-naming task. Semantic neighbourhood density and similarity were used as continuous variables to assess lexical selection, for which both competitive and noncompetitive mechanisms have been proposed. Previous studies found mixed effects of semantic neighbourhood variables, leaving this issue unresolved. Here, we demonstrate interference from semantic neighbourhood similarity: naming responses were less accurate, and semantic errors and omissions were more likely than accurate responses, for words with semantically more similar (closer) neighbours. No main effect of semantic neighbourhood density and no interaction between semantic neighbourhood density and similarity were found. We further assessed whether semantic neighbourhood density can affect naming performance once semantic neighbours exceed a certain degree of semantic similarity. Semantic similarity between the target and each neighbour was used to split semantic neighbourhood density into two density variables: the number of semantically close neighbours versus distant neighbours. The results showed a significant effect of close, but not of distant, semantic neighbourhood density: naming pictures of targets with more close semantic neighbours led to longer naming latencies, less accurate responses, and a higher likelihood of producing semantic errors and omissions over accurate responses. These results show that word-inherent semantic attributes such as semantic neighbourhood similarity and the number of coactivated close semantic neighbours modulate lexical selection, supporting theories of competitive lexical processing.
In recent years, there has been a large amount of disparate work concerning the representation of, and reasoning with, qualitative preferential information by means of approaches to nonmonotonic reasoning. Given the variety of underlying systems, assumptions, motivations, and intuitions, it is difficult to compare or relate one approach with another. Here, we present an overview and classification of approaches to dealing with preference. A set of criteria for classifying approaches is given, followed by a set of desiderata that an approach might be expected to satisfy. A comprehensive set of approaches is subsequently given and classified with respect to these sets of underlying principles.
A circulatory loop
(2023)
In the digitalization debate, gender biases in digital technologies play a significant role because of their potential for social exclusion and inequality. It is therefore remarkable that organizations as drivers of digitalization and as places for social integration have been widely overlooked so far. Simultaneously, gender biases and digitalization have structurally immanent connections to organizations. Therefore, a look at the reciprocal relationship between organizations, digitalization, and gender is needed. The article provides answers to the question of whether and how organizations (re)produce, reinforce, or diminish gender‐specific inequalities during their digital transformations. On the one hand, gender inequalities emerge when organizations use post‐bureaucratic concepts through digitalization. On the other hand, gender inequalities are reproduced when organizations either program or implement digital technologies and fail to establish control structures that prevent gender biases. This article shows that digitalization can act as a catalyst for inequality‐producing mechanisms, but also has the potential to mitigate inequalities. We argue that organizations must be considered when discussing the potential of exclusion through digitalization.
Current assessment of visual neglect involves paper-and-pencil tests or computer-based tasks. Both have been criticised because of their lack of ecological validity as target stimuli can only be presented in a restricted visual range. This study examined the user-friendliness and diagnostic strength of a new "Circle-Monitor" (CM), which enlarges the range of the peripersonal space, in comparison to a standard paper-and-pencil test (Neglect-Test, NET).
Methods: Ten stroke patients with neglect and ten age-matched healthy controls were examined with the NET and the CM test comprising four subtests (Star Cancellation, Line Bisection, Dice Task, and Puzzle Test).
Results: The acceptance of the CM among elderly controls and neglect patients was high. Participants rated the examination by CM as clear, safe, and more enjoyable than the NET. Healthy controls performed at ceiling on all subtests, without any systematic differences between the visual fields. Both the NET and the CM revealed significant differences between controls and patients in Line Bisection, Star Cancellation, and visuo-constructive tasks (NET: Figure Copying; CM: Puzzle Test). Discriminant analyses revealed that cross-validated assignment of patients and controls to groups was more precise when based on the CM (hit rate 90%) than on the NET (hit rate 70%).
Conclusion: The CM proved to be a sensitive novel tool to diagnose visual neglect symptoms quickly and accurately, with superior diagnostic validity compared to a standard neglect test, while being well accepted by patients. Due to its upgradable functions, the system may be a valuable tool not only for testing non-visual neglect symptoms but also for providing treatment and assessing its outcome.
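The cross-validated hit rates above come from discriminant analyses. As a rough illustration of the principle, the following sketch computes a leave-one-out cross-validated hit rate, with a simple one-dimensional nearest-centroid rule standing in for the discriminant function; the scores and group labels are invented for illustration.

```python
# Leave-one-out cross-validation: each case is classified by centroids
# computed from all *other* cases, and the hit rate is the fraction of
# correct assignments. All data here are toy values.

def loo_hit_rate(scores, labels):
    hits = 0
    for i in range(len(scores)):
        # Build per-group centroids without the held-out case i
        groups = {}
        for j, (x, y) in enumerate(zip(scores, labels)):
            if j != i:
                groups.setdefault(y, []).append(x)
        centroids = {y: sum(v) / len(v) for y, v in groups.items()}
        pred = min(centroids, key=lambda y: abs(scores[i] - centroids[y]))
        hits += (pred == labels[i])
    return hits / len(scores)

# Toy neglect-severity scores: patients score high, controls low
scores = [9.0, 8.5, 7.8, 8.9, 1.2, 0.8, 1.5, 0.9]
labels = ["patient"] * 4 + ["control"] * 4
print(loo_hit_rate(scores, labels))  # 1.0 on this cleanly separated toy data
```

With real, overlapping clinical data the hit rate drops below 1.0, which is exactly the quantity the 90% versus 70% comparison captures.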
Incorporation of noncanonical amino acids (ncAAs) with bioorthogonal reactive groups by amber suppression allows the generation of synthetic proteins with desired novel properties. Such modified molecules are in high demand for basic research and therapeutic applications such as cancer treatment and in vivo imaging. The positioning of the ncAA-responsive codon within the protein's coding sequence is critical in order to maintain protein function, achieve high yields of ncAA-containing protein, and allow effective conjugation. Cell-free ncAA incorporation is of particular interest due to the open nature of cell-free systems and their concurrent ease of manipulation. In this study, we report a straightforward workflow for assessing ncAA positions with regard to incorporation efficiency and protein functionality in a Chinese hamster ovary (CHO) cell-free system. As a model, the well-established orthogonal translation components Escherichia coli tyrosyl-tRNA synthetase (TyrRS) and tRNATyr(CUA) were used to site-specifically incorporate the ncAA p-azido-l-phenylalanine (AzF) in response to UAG codons. A total of seven ncAA sites within an anti-epidermal growth factor receptor (EGFR) single-chain variable fragment (scFv) N-terminally fused to the red fluorescent protein mRFP1 and C-terminally fused to the green fluorescent protein sfGFP were investigated for ncAA incorporation efficiency and impact on antigen binding. The characterized cell-free dual fluorescence reporter system allows screening for ncAA incorporation sites with high incorporation efficiency that maintain protein activity. It is parallelizable, scalable, and easy to operate. We propose that the established CHO-based cell-free dual fluorescence reporter system can be of particular interest for the development of antibody-drug conjugates (ADCs).
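The logic of such a dual reporter can be sketched numerically: the downstream reporter (here sfGFP) is only translated if the amber codon is suppressed, while the upstream reporter (mRFP1) reports total translation, so their ratio estimates per-site readthrough. The function name, the normalisation scheme, and all signal values below are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch: per-site amber readthrough estimated from a dual
# fluorescence readout. The GFP/RFP ratio of the amber construct is
# normalised to the same ratio of a stop-codon-free control construct.
# All numbers are invented.

def readthrough_efficiency(gfp_signal, rfp_signal, wildtype_gfp_rfp_ratio=1.0):
    """Fraction of upstream-reporter translation events that continue
    through the amber codon into the downstream reporter."""
    return (gfp_signal / rfp_signal) / wildtype_gfp_rfp_ratio

# Site A suppresses well, site B poorly (illustrative signals):
print(readthrough_efficiency(800.0, 1000.0))  # 0.8
print(readthrough_efficiency(150.0, 1000.0))  # 0.15
```

Ranking candidate insertion sites by this ratio, then checking antigen binding of the top sites, mirrors the screening idea the abstract describes.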
This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm in which verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). In particular, we explored the general characteristics of the speech samples (e.g. utterance length) and the use of lexical, finite, and non-finite verbs and of direct and indirect evidentials. The results show that the speech rate is slow, the number of verbs per utterance is lower than normal, and verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.
Let v be a valuation of terms of type tau, assigning to each term t of type tau a value v(t) greater than or equal to 0. Let k greater than or equal to 1 be a natural number. An identity s approximate to t of type tau is called k-normal if either s = t or both s and t have value greater than or equal to k, and is called non-k-normal otherwise. A variety V of type tau is said to be k-normal if all its identities are k-normal, and non-k-normal otherwise. In the latter case, there is a unique smallest k-normal variety N-k(A)(V) containing V, called the k-normalization of V. In the case k = 1, for the usual depth valuation of terms, these notions coincide with the well-known concepts of normal identity, normal variety, and normalization of a variety. I. Chajda has characterized the normalization of a variety by means of choice algebras. In this paper we generalize his results to a characterization of the k-normalization of a variety, using k-choice algebras. We also introduce the concept of a k-inflation algebra, and for the case that v is the usual depth valuation of terms, we prove that a variety V is k-normal iff it is closed under the formation of k-inflations, and that the k-normalization of V consists precisely of all homomorphic images of k-inflations of algebras in V.
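For the usual depth valuation, the k-normality test for an identity is directly computable. The sketch below encodes terms as nested tuples (operation symbol followed by arguments) with strings as variables; this encoding is an illustrative choice, but the depth valuation and the k-normality condition follow the definitions above.

```python
# k-normality under the usual depth valuation: an identity s ≈ t is
# k-normal iff s = t or both sides have depth >= k.
# Terms: nested tuples ("f", arg1, ...); bare strings are variables.

def depth(term):
    if isinstance(term, str):                 # variable: depth 0
        return 0
    return 1 + max(depth(arg) for arg in term[1:])

def is_k_normal(s, t, k):
    return s == t or (depth(s) >= k and depth(t) >= k)

x, y = "x", "y"
fxy = ("f", x, y)             # depth 1
ffxy = ("f", fxy, y)          # depth 2
print(is_k_normal(fxy, ffxy, 1))   # True: both sides have depth >= 1
print(is_k_normal(x, fxy, 1))      # False: x is a variable of depth 0
print(is_k_normal(fxy, ffxy, 2))   # False: the left side has depth 1 < 2
```

The third call shows why 1-normality and 2-normality differ: an identity can be normal in the classical sense yet fail k-normality for larger k.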
With asymptotically complete scattering systems {M(+) + V, M(+)} on H(+) := L(2)(R(+), K, d lambda), where M(+) is the multiplication operator on H(+) and V is a trace class operator satisfying analyticity conditions, a decay semigroup is associated such that the spectrum of the generator of this semigroup coincides with the set of all resonances (poles of the analytic continuation of the scattering matrix into the lower half plane across the positive half line); i.e., the decay semigroup yields a "time-dependent" characterization of the resonances. As a counterpart, a "spectral characterization" is mentioned, which is due to the "eigenvalue-like" properties of resonances.
Channel transmission losses in drylands normally take place in extensive alluvial channels or streambeds underlain by fractured rocks. They can play an important role in streamflow rates, groundwater recharge, freshwater supply and channel-associated ecosystems. We aim to develop a process-oriented, semi-distributed channel transmission losses model, using process formulations which are suitable for data-scarce dryland environments and applicable both to hydraulically disconnected losing streams and to hydraulically connected losing (or gaining) streams. This approach should be able to cover the large variation in climate and hydro-geologic controls that is typically found in dryland regions of the Earth. Our model was first evaluated for a losing/gaining, hydraulically connected 30 km reach of the Middle Jaguaribe River (MJR), Ceara, Brazil, which drains a catchment area of 20 000 km(2). Secondly, we applied it to a small losing, hydraulically disconnected 1.5 km channel reach in the Walnut Gulch Experimental Watershed (WGEW), Arizona, USA. The model was able to predict the streamflow volume and peak reliably for both case studies without using any parameter calibration procedure. We have shown that the evaluation of the hypotheses on the dominant hydrological processes was fundamental for reducing structural model uncertainties and improving the streamflow prediction. For instance, in the case of the large river reach (MJR), it was shown that both lateral stream-aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow volume and channel transmission losses, the former process being more relevant than the latter.
Regarding model uncertainty, it was shown that the approaches applied for the unsaturated zone processes (highly nonlinear, with elaborate numerical solutions) are much more sensitive to parameter variability than the approaches used for the saturated zone (mathematically simple water budgeting in aquifer columns, including backwater effects). In the case of the MJR application, we have seen that structural uncertainties due to the limited knowledge of subsurface saturated system interactions (i.e. groundwater coupling with channel water; possible groundwater flow parallel to the river) were more relevant than those related to subsurface parameter variability. In the case of the WGEW application, we have seen that the non-linearity of the unsaturated flow processes in disconnected dryland river systems (controlled by the unsaturated zone) generally introduces far more model uncertainty than is found in connected systems controlled by saturated flow. Therefore, the degree of aridity of a dryland river may be an indicator of potential model uncertainty and of the subsequently attainable predictability of the system.
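The core bookkeeping of a semi-distributed transmission-losses model can be caricatured as a per-reach water budget: upstream inflow minus infiltration into the streambed, plus or minus lateral stream-aquifer exchange. The linear loss rule, function name, and all rates below are illustrative assumptions, not the authors' formulation, which distinguishes unsaturated- and saturated-zone process descriptions.

```python
# Highly simplified per-reach, per-time-step channel water budget.
# Positive lateral_exchange = groundwater feeds the stream (connected,
# gaining); zero = disconnected losing reach. All values invented.

def route_reach(inflow, infiltration_capacity, lateral_exchange=0.0):
    """Return (outflow, transmission_loss) for one reach and time step."""
    loss = min(inflow, infiltration_capacity)   # cannot lose more than arrives
    outflow = inflow - loss + lateral_exchange
    return max(outflow, 0.0), loss

# Disconnected losing reach (WGEW-like): no lateral gain
print(route_reach(10.0, 3.0))          # (7.0, 3.0)
# Connected gaining reach (MJR-like): aquifer partly offsets the loss
print(route_reach(10.0, 3.0, 2.0))     # (9.0, 3.0)
```

In a real model the infiltration term is where the highly nonlinear unsaturated-zone physics enters, which is exactly where the abstract locates the largest parameter sensitivity.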
We present the results from Chandra X-ray observations, and near- and mid-infrared analysis, using VISTA/VVV and Spitzer/GLIMPSE catalogs, of the high-mass star-forming region IRAS 16562-3959, which contains a candidate for a high-mass protostar. We detected 249 X-ray sources within the ACIS-I field of view. The majority of the X-ray sources have low count rates (<0.638 cts/ks) and hard X-ray spectra. The search for YSOs in the region using VISTA/VVV and Spitzer/GLIMPSE catalogs resulted in a total of 636 YSOs, with 74 Class I and 562 Class II YSOs. The search for near- and mid-infrared counterparts of the X-ray sources led to a total of 165 VISTA/VVV counterparts, and a total of 151 Spitzer/GLIMPSE counterparts. The infrared analysis of the X-ray counterparts allowed us to identify an extra 91 Class III YSOs associated with the region. We conclude that a total of 727 YSOs are associated with the region, with 74 Class I, 562 Class II, and 91 Class III YSOs. We also found that the region is composed of 16 subclusters. In the vicinity of the high-mass protostar, the stellar distribution has a core-halo structure. The subcluster containing the high-mass protostar is the densest and the youngest in the region, and the high-mass protostar is located at its center. The YSOs in this cluster appear to be substantially older than the high-mass protostar.
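The Class I/II/III labels used above are conventionally assigned from the infrared spectral index, the slope of log(lambda * F_lambda) against log(lambda). The abstract does not state the authors' exact criteria, so the thresholds in this sketch are the commonly used ones (after Greene et al. 1994) and should be read as an assumption; the band wavelengths and fluxes are invented.

```python
import math

# Infrared spectral index between two bands and the conventional
# YSO class boundaries. Thresholds are the standard literature values,
# assumed here rather than taken from this particular study.

def spectral_index(l1_um, f1, l2_um, f2):
    """alpha = d log(lambda * F_lambda) / d log(lambda) between two bands."""
    return (math.log10(l2_um * f2) - math.log10(l1_um * f1)) / \
           (math.log10(l2_um) - math.log10(l1_um))

def yso_class(alpha):
    if alpha > 0.3:
        return "Class I"       # rising SED: embedded protostar
    if alpha >= -0.3:
        return "Flat"
    if alpha >= -1.6:
        return "Class II"      # disk-bearing
    return "Class III"         # diskless, X-ray selected as in this study

# Fluxes (arbitrary units) at 2.2 and 8.0 micron for a rising SED:
alpha = spectral_index(2.2, 1.0, 8.0, 2.0)
print(yso_class(alpha))  # rising SED -> "Class I"
```

Class III sources have nearly photospheric infrared colors, which is why the X-ray detections were needed to recover the additional 91 members.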
A Cell-free Expression Pipeline for the Generation and Functional Characterization of Nanobodies
(2022)
Cell-free systems are well-established platforms for the rapid synthesis, screening, engineering and modification of all kinds of recombinant proteins ranging from membrane proteins to soluble proteins, enzymes and even toxins. Also within the antibody field the cell-free technology has gained considerable attention with respect to the clinical research pipeline including antibody discovery and production. Besides the classical full-length monoclonal antibodies (mAbs), so-called "nanobodies" (Nbs) have come into focus. A Nb is the smallest naturally-derived functional antibody fragment known and represents the variable domain (VHH, similar to 15 kDa) of a camelid heavy-chain-only antibody (HCAb). Based on their nanoscale and their special structure, Nbs display striking advantages concerning their production, but also their characteristics as binders, such as high stability, diversity, improved tissue penetration and reaching of cavity-like epitopes. The classical way to produce Nbs depends on the use of living cells as production host. Though cell-based production is well-established, it is still time-consuming, laborious and hardly amenable for high-throughput applications. Here, we present for the first time to our knowledge the synthesis of functional Nbs in a standardized mammalian cell-free system based on Chinese hamster ovary (CHO) cell lysates. Cell-free reactions were shown to be time-efficient and easy-to-handle allowing for the "on demand" synthesis of Nbs. Taken together, we complement available methods and demonstrate a promising new system for Nb selection and validation.
Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are also an important economical resource, and through natural seepage or accidental release they can be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence, thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol are significantly higher than those from samples treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and, in samples containing more mature oils, methanol delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A volumetric ratio of 1:2-1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction efficiency was around 30-50% and was checked on both oily samples spiked with known amounts of E. coli cells and oil-free samples amended with fresh and biodegraded oil. The method provided reproducible results on samples containing very different kinds of oils with regard to their degree of biodegradation.
For strongly biodegraded oil, methanol turned out to be the most appropriate solvent, whereas for less biodegraded samples n-hexane delivered the best results.
We study the Cauchy problem for a nonlinear elliptic equation with data on a piece S of the boundary surface partial derivative X. By the Cauchy problem is meant any boundary value problem for an unknown function u in a domain X with the property that the data on S, if combined with the differential equations in X, allows one to determine all derivatives of u on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We discuss a variational setting of the Cauchy problem which always possesses a generalized solution.
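The setting described above can be written schematically as follows; the concrete form of the operator and of the Cauchy data is an illustrative choice, not the paper's notation.

```latex
\begin{aligned}
  &F\bigl(x, u, \nabla u, \nabla^2 u\bigr) = 0 \quad \text{in } X
    && \text{(nonlinear elliptic equation)},\\
  &u = u_0, \qquad \partial_\nu u = u_1 \quad \text{on } S \subset \partial X
    && \text{(Cauchy data on part of the boundary)}.
\end{aligned}
```

The defining feature is that the data on S together with the equation determine all derivatives of u on S; real analyticity then gives a local solution via Cauchy-Kovalevskaya, while the variational setting extends solvability beyond the analytic case in a generalized sense.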
A catalog of genetic loci associated with kidney function from analyses of a million individuals
(2019)
Chronic kidney disease (CKD) is responsible for a public health burden with multi-systemic complications. Through transancestry meta-analysis of genome-wide association studies of estimated glomerular filtration rate (eGFR) and independent replication (n = 1,046,070), we identified 264 associated loci (166 new). Of these, 147 were likely to be relevant for kidney function on the basis of associations with the alternative kidney function marker blood urea nitrogen (n = 416,178). Pathway and enrichment analyses, including mouse models with renal phenotypes, support the kidney as the main target organ. A genetic risk score for lower eGFR was associated with clinically diagnosed CKD in 452,264 independent individuals. Colocalization analyses of associations with eGFR among 783,978 European-ancestry individuals and gene expression across 46 human tissues, including tubulo-interstitial and glomerular kidney compartments, identified 17 genes differentially expressed in kidney. Fine-mapping highlighted missense driver variants in 11 genes and kidney-specific regulatory variants. These results provide a comprehensive priority list of molecular targets for translational research.
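A genetic risk score of the kind used above is typically a weighted sum of risk-allele dosages. The sketch below shows this standard construction; the effect sizes, dosages, and variant count are invented for illustration and do not correspond to the study's loci.

```python
# Weighted genetic risk score: GRS = sum_i beta_i * dosage_i,
# where dosage_i in {0, 1, 2} counts copies of the risk allele and
# beta_i is the per-variant effect size. All numbers invented.

def genetic_risk_score(dosages, effects):
    return sum(b * d for b, d in zip(effects, dosages))

# Three loci, effect sizes on the eGFR-lowering allele:
effects = [0.02, 0.05, 0.01]
dosages = [2, 1, 0]          # one individual's genotype
print(genetic_risk_score(dosages, effects))  # approximately 0.09
```

Scores computed this way across a cohort can then be tested for association with a clinical endpoint such as diagnosed CKD, as the abstract describes.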
A case of primary progressive aphasia: a 14-year follow-up study with neuropathological findings
(1998)
A Case for Serious Play
(2017)
With the advent of increasingly powerful computational architectures, scientists use these possibilities to create simulations of ever-increasing size and complexity. Large-scale simulations of environmental systems require huge amounts of resources. Managing these in an operational way becomes increasingly complex and difficult to handle for individual scientists. State-of-the-art simulation infrastructures usually provide the necessary resources in a centralised setup, which often results in an all-or-nothing choice for the user. Here, we outline an alternative approach to handling this complexity, while rendering the use of high-performance hardware and large datasets still possible. It retains a number of desirable properties: (i) a decentralised structure, (ii) easy sharing of resources to promote collaboration and (iii) secure access to everything, including natural delegation of authority across levels and system boundaries. We show that the object capability paradigm will cover these issues, and present the first steps towards developing a simulation infrastructure based on these principles.
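The object-capability idea the abstract appeals to can be illustrated in a few lines: authority is held as an unforgeable object reference, and delegation means handing over a reference, possibly attenuated to a subset of the original rights. All class and method names below are invented for this sketch and are not from the proposed infrastructure.

```python
# Toy object-capability example: a ReadOnly wrapper delegates read
# access to a dataset while withholding write authority. Whoever
# holds the wrapper can read; only holders of the original object
# can write. Names and data are illustrative.

class Dataset:
    def __init__(self, name):
        self.name = name
    def read(self):
        return f"data from {self.name}"
    def write(self, value):
        return f"wrote {value} to {self.name}"

class ReadOnly:
    """Attenuated capability: forwards read, omits write entirely."""
    def __init__(self, dataset):
        self._dataset = dataset
    def read(self):
        return self._dataset.read()

full = Dataset("climate-run-42")   # full capability held by the owner
shared = ReadOnly(full)            # attenuated capability for a collaborator
print(shared.read())               # allowed: "data from climate-run-42"
print(hasattr(shared, "write"))    # False: write authority was not delegated
```

This is the "natural delegation of authority" property: sharing a resource is just passing an object, and the scope of what is shared is fixed by the interface of the object passed.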