We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in the activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative clause sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. These response accuracies are used to identify, for each participant, the best-fitting parameters corresponding to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation. We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract average IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants.
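A minimal sketch of how the three hypothesized deficits could be operationalized in an ACT-R-style retrieval step. All parameter names and values here are hypothetical illustrations, not the paper's actual settings: `latency_factor` stands in for slowed processing, `noise_sd` for intermittent deficiency, and `spreading` for resource reduction.

```python
import math
import random

def simulate_retrieval(latency_factor=0.2, noise_sd=0.3, spreading=1.0,
                       n_trials=10000, seed=1):
    """Toy ACT-R-style retrieval: a target chunk receives spreading
    activation from retrieval cues, a competitor does not; Gaussian
    noise is added to both, and the higher-activation chunk wins.
    Returns (accuracy, mean retrieval time)."""
    rng = random.Random(seed)
    correct = 0
    total_time = 0.0
    for _ in range(n_trials):
        a_target = spreading + rng.gauss(0, noise_sd)
        a_competitor = rng.gauss(0, noise_sd)
        correct += a_target >= a_competitor
        winner = max(a_target, a_competitor)
        # ACT-R latency equation: time = F * exp(-activation)
        total_time += latency_factor * math.exp(-winner)
    return correct / n_trials, total_time / n_trials

# Control-like vs impaired-like parameter settings (illustrative only):
acc_control, rt_control = simulate_retrieval()
acc_iwa, rt_iwa = simulate_retrieval(latency_factor=0.4, noise_sd=0.8,
                                     spreading=0.3)
```

Under these toy settings, the impaired parameterization yields lower picture-verification-style accuracy and slower retrievals, mirroring the qualitative pattern the abstract describes.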
To provide physically based wind modelling for wind erosion research at regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in C language based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model has convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model on simulation of basic fluid processes. A round obstacle experiment was designed to check whether the model could simulate the influences of the obstacle on the wind field. Results show that measured and simulated wind fields are highly correlated, and that the wind model can simulate both the basic processes of the wind and the influences of the obstacle on the wind field. These results demonstrate the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field was successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments lay a solid foundation for the prediction and assessment of wind erosion at regional scale.
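As a grossly simplified illustration of the grid-based solver idea (not the Navier-Stokes model described above, which is written in C), the sketch below relaxes Laplace's equation for a streamfunction to obtain frictionless 2D channel flow past a rectangular obstacle; grid sizes and the obstacle geometry are arbitrary.

```python
import numpy as np

def stream_flow(nx=60, ny=30, n_iter=4000):
    """Solve Laplace's equation for a streamfunction psi by Jacobi
    iteration; the obstacle is a solid block on the channel floor held
    at psi = 0. Horizontal velocity is u = d(psi)/dy."""
    inflow = np.linspace(0.0, 1.0, ny)
    psi = np.tile(inflow[:, None], (1, nx))
    obstacle = np.zeros((ny, nx), bool)
    obstacle[:ny // 3, nx // 2 - 3:nx // 2 + 3] = True
    for _ in range(n_iter):
        # average of the four neighbours (Jacobi relaxation)
        psi_new = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
                          np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
        psi_new[0, :] = 0.0          # bottom wall
        psi_new[-1, :] = 1.0         # top wall
        psi_new[:, 0] = inflow       # inflow boundary
        psi_new[:, -1] = inflow      # outflow boundary
        psi_new[obstacle] = 0.0      # solid body streamline
        psi = psi_new
    u = np.gradient(psi, axis=0)     # horizontal velocity component
    return psi, u, obstacle

psi, u, obstacle = stream_flow()
```

Even this toy solver reproduces the qualitative blocking effect probed in the wind-tunnel experiments: streamlines compress and the flow accelerates above the obstacle.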
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non-canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT-R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed more slowly than normal, and intermittent deficiency as extra noise in procedural memory, so that the parsing steps are noisier than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence-picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence-picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia.
The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the non-uniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
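The cluster-analysis step of such a workflow can be sketched with a minimal k-means on a toy two-parameter ensemble. The parameter names, values, and the choice of plain k-means (rather than whatever clustering the actual workflow uses) are purely illustrative.

```python
import numpy as np

def kmeans(X, k=2, n_iter=100):
    """Minimal k-means: deterministic initialization with spread-out
    ensemble members, then alternate assignment and centroid update."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(X), int)
    for _ in range(n_iter):
        # squared Euclidean distance of every member to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical two-parameter ensemble (layer resistivity in ohm-m,
# layer thickness in m) drawn from two plausible solution families:
rng = np.random.default_rng(1)
ensemble = np.vstack([
    rng.normal([100.0, 5.0], [5.0, 0.3], size=(40, 2)),
    rng.normal([300.0, 2.0], [15.0, 0.2], size=(40, 2)),
])
labels, centers = kmeans(ensemble, k=2)
```

Once members are grouped, per-cluster residual statistics and parameter spreads can be evaluated separately, which is the essence of extracting "plausible model scenarios with their associated uncertainties".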
The improvement of process representations in hydrological models is often driven only by the modelers' knowledge and data availability. We present a comprehensive comparison between two hydrological models of different complexity, designed to support (1) the understanding of the differences between model structures and (2) the identification of the observations needed for model assessment and improvement. The comparison is conducted in both space and time, aggregating the outputs at different spatiotemporal scales. In the present study, mHM, a process-based hydrological model, and ParFlow-CLM, an integrated subsurface-surface hydrological model, are used. The models are applied in a mesoscale catchment in Germany. Both models agree in the simulated river discharge at the outlet and the surface soil moisture dynamics, lending support to some model applications (e.g., drought monitoring). Different model sensitivities are, however, found when comparing evapotranspiration and soil moisture at different soil depths. The analysis supports the need for observations within the catchment for model assessment, but it indicates that different strategies should be considered for the different variables. Evapotranspiration measurements are needed at daily resolution across several locations, while highly resolved spatially distributed observations with lower temporal frequency are required for soil moisture. Finally, the results show the impact of the shallow groundwater system simulated by ParFlow-CLM and the need to account for the related soil moisture redistribution. Our comparison strategy can be applied to other model types and environmental conditions to strengthen the dialog between modelers and experimentalists for improving process representations in Earth system models.
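The effect of aggregation scale on apparent model agreement can be sketched as follows: two hypothetical model outputs that share a seasonal signal but carry independent errors correlate weakly day by day and strongly after monthly averaging. The series and numbers below are invented, not mHM or ParFlow-CLM output.

```python
import numpy as np

def agreement_at_scales(sim_a, sim_b, window):
    """Correlate two model output series after averaging over
    non-overlapping blocks of `window` time steps."""
    n = (len(sim_a) // window) * window
    a = np.asarray(sim_a[:n], float).reshape(-1, window).mean(axis=1)
    b = np.asarray(sim_b[:n], float).reshape(-1, window).mean(axis=1)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
t = np.arange(720)                                # two synthetic years, daily
signal = np.sin(2 * np.pi * t / 360)              # shared seasonal cycle
model_a = signal + rng.normal(0, 0.5, t.size)     # same signal,
model_b = signal + rng.normal(0, 0.5, t.size)     # independent errors
r_daily = agreement_at_scales(model_a, model_b, 1)
r_monthly = agreement_at_scales(model_a, model_b, 30)
```

This is why conclusions about model agreement (and the observations needed to discriminate between models) depend on the spatiotemporal scale at which outputs are compared.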
Home range estimation is routine practice in ecological research. While advances in animal tracking technology have increased our capacity to collect data to support home range analysis, these same advances have also resulted in increasingly autocorrelated data. Consequently, the question of which home range estimator to use on modern, highly autocorrelated tracking data remains open. This question is particularly relevant given that most estimators assume independently sampled data. Here, we provide a comprehensive evaluation of the effects of autocorrelation on home range estimation. We base our study on an extensive data set of GPS locations from 369 individuals representing 27 species distributed across five continents. We first assemble a broad array of home range estimators, including Kernel Density Estimation (KDE) with four bandwidth optimizers (Gaussian reference function, autocorrelated-Gaussian reference function [AKDE], Silverman's rule of thumb, and least squares cross-validation), Minimum Convex Polygon, and Local Convex Hull methods. Notably, all of these estimators except AKDE assume independent and identically distributed (IID) data. We then employ half-sample cross-validation to objectively quantify estimator performance, and the recently introduced effective sample size for home range area estimation (N̂_area) to quantify the information content of each data set. We found that AKDE 95% area estimates were larger than conventional IID-based estimates by a mean factor of 2. The median number of cross-validated locations included in the hold-out sets by AKDE 95% (or 50%) estimates was 95.3% (or 50.1%), confirming that the larger AKDE ranges were appropriately selective at the specified quantile. Conversely, conventional estimates exhibited negative bias that increased with decreasing N̂_area. To contextualize our empirical results, we performed a detailed simulation study to tease apart how sampling frequency, sampling duration, and the focal animal's movement conspire to affect range estimates. Paralleling our empirical results, the simulation study demonstrated that AKDE was generally more accurate than conventional methods, particularly for small N̂_area. While 72% of the 369 empirical data sets had >1,000 total observations, only 4% had an N̂_area >1,000, and 30% had an N̂_area <30. In this frequently encountered scenario of small N̂_area, AKDE was the only estimator capable of producing an accurate home range estimate on autocorrelated data.
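The conventional IID estimator at the center of this comparison can be illustrated with a toy Gaussian-reference-bandwidth KDE whose 95% home range area is found by thresholding grid densities. AKDE would additionally inflate the bandwidth according to a fitted autocorrelation model, which this sketch omits; the grid size and simulated track are hypothetical.

```python
import numpy as np

def kde_home_range_area(xy, quantile=0.95, grid_n=120):
    """Conventional IID KDE home range: Gaussian kernels with the 2-D
    Gaussian reference bandwidth (h = sigma * n**(-1/6)), evaluated on a
    grid; the returned area is that of the smallest density region
    containing `quantile` of the estimated probability mass."""
    n = len(xy)
    sigma = xy.std(axis=0, ddof=1)
    h = sigma * n ** (-1.0 / 6.0)
    lo, hi = xy.min(0) - 3 * sigma, xy.max(0) + 3 * sigma
    gx = np.linspace(lo[0], hi[0], grid_n)
    gy = np.linspace(lo[1], hi[1], grid_n)
    X, Y = np.meshgrid(gx, gy)
    dens = np.zeros_like(X)
    for px, py in xy:                    # sum of Gaussian kernels
        dens += np.exp(-0.5 * (((X - px) / h[0]) ** 2 +
                               ((Y - py) / h[1]) ** 2))
    dens /= dens.sum()
    order = np.sort(dens.ravel())[::-1]  # highest densities first
    thresh = order[np.searchsorted(order.cumsum(), quantile)]
    cell = (gx[1] - gx[0]) * (gy[1] - gy[0])
    return (dens >= thresh).sum() * cell

rng = np.random.default_rng(0)
tracks = rng.normal(size=(400, 2))       # idealized IID relocations
area_95 = kde_home_range_area(tracks)
```

On truly IID data like this simulated track the estimator is well behaved; the abstract's point is that on autocorrelated tracking data the effective sample size N̂_area is far smaller than the number of fixes, and this estimator then underestimates the range.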
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis has emerged as a powerful tool to study the intricate spatiotemporal relationships between many variables in a compact way. In this study, we employ two nonlinear concepts, event synchronization and edit distance, to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skill. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree decreases from southwest to northwest, (2) the timing of the 50th percentile of precipitation within a year influences the spatial distribution of degree, (3) the timing is inversely related to elevation, and (4) lower elevation greatly influences the connectivity of the sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
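Event synchronization, one of the two nonlinear measures used here, can be sketched in a simplified form with a fixed coincidence window (the full method uses a dynamical, locally adapted window); the site event times below are invented.

```python
import numpy as np

def event_sync(t1, t2, tau=2.0):
    """Simplified event synchronization: count events in t1 that have at
    least one coincident event in t2 within +/- tau time units, then
    normalize by the geometric mean of the event counts, so 0 means no
    synchrony and 1 means every event coincides."""
    t1 = np.asarray(t1, float)
    t2 = np.asarray(t2, float)
    coincident = np.abs(t1[:, None] - t2[None, :]) <= tau
    c = np.minimum(coincident.sum(axis=1), 1).sum()  # each event counted once
    return c / np.sqrt(len(t1) * len(t2))

# Hypothetical extreme-rainfall event days at two grid cells:
site_a = [10, 40, 75, 120, 200, 260]
site_b = [11, 41, 76, 150, 201, 300]
q = event_sync(site_a, site_b)
```

Computing this measure for every pair of sites yields the adjacency structure of the climate network, and a site's degree is simply the number of strong links it participates in.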
A long-standing and profound problem in astronomy is the difficulty in obtaining deep near-infrared observations due to the extreme brightness and variability of the night sky at these wavelengths. A solution to this problem is crucial if we are to obtain the deepest possible observations of the early Universe, as redshifted starlight from distant galaxies appears at these wavelengths. The atmospheric emission between 1,000 and 1,800 nm arises almost entirely from a forest of extremely bright, very narrow hydroxyl emission lines that varies on timescales of minutes. The astronomical community has long envisaged the prospect of selectively removing these lines, while retaining high throughput between them. Here we demonstrate such a filter for the first time, presenting results from the first on-sky tests. Its use on current 8 m telescopes and future 30 m telescopes will open up many new research avenues in the years to come.
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
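To see why seemingly small counting errors accumulate, consider a toy Monte Carlo in which each counted layer is independently missed or double-counted with a small probability. The probabilities and the independence assumption below are illustrative only, not values from the NGRIP or Suigetsu records.

```python
import numpy as np

def counting_error_spread(n_layers=1000, miscount_p=0.01,
                          n_sim=2000, seed=0):
    """Per layer, the count is off by -1 (missed layer) or +1 (spurious
    layer) with probability miscount_p/2 each; the absolute-age error is
    the running sum of these per-layer errors, i.e. a random walk whose
    spread grows with depth. Returns the standard deviation of the age
    error at each depth, estimated over n_sim simulated chronologies."""
    rng = np.random.default_rng(seed)
    err = rng.choice([-1, 0, 1], size=(n_sim, n_layers),
                     p=[miscount_p / 2, 1 - miscount_p, miscount_p / 2])
    age_error = err.cumsum(axis=1)       # accumulated dating error (years)
    return age_error.std(axis=0)

sd = counting_error_spread()
```

Even a 1% per-layer miscount rate yields an age uncertainty of roughly sqrt(0.01 x depth) years, which is why the deeper (older) parts of a layer-counted record carry much larger total dating uncertainty than the per-layer error suggests.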
A competitive immunoassay to detect a hapten using an enzyme-labelled peptide mimotope as tracer
(2002)
Mimotope peptides - peptides that mimic the binding of a hapten to its corresponding monoclonal antibody - were conjugated to peroxidase and used in a competitive immunoassay. The established immunoassay was used to quantitatively determine the concentration of the hapten. As a model system in all the experiments described here, we used the binding of the monoclonal antibody B13-DE1 to fluorescein and the corresponding peptide mimotope.
Although the general development of mathematical abilities in primary school has been the focus of many researchers, the development of place value understanding has rarely been investigated to date. This is possibly due to the lack of conceptual approaches and empirical studies related to this topic. To fill this gap, a theory-driven and empirically validated model was developed that describes five sequential conceptual levels of place value understanding. The level sequence model gives us the ability to estimate general abilities and difficulties in primary school pupils in the development of a conceptual place value understanding. The level sequence model was tried and tested in Germany, and given that number words are very differently constructed in German and in the languages used in South African classrooms, this study aims to investigate whether this level sequence model can be transferred to South Africa. The findings based on the responses of 198 Grade 2-4 learners show that the English translation of the test items results in the same item level allocation as the original German test items, especially for the three basic levels. Educational implications are provided, in particular concrete suggestions on how place value might be taught according to the model and how to collect specific empirical data related to place value understanding.
Within the last decade, the role of the Creative Industries has grown to become an important part of the economic system. The increasing acceleration of new developments in media and ICT technologies has greatly affected the Creative Industries' dynamic, with a direct impact on the people working in this sector. Since only a few studies focus on competence needs, and these remain more or less isolated from the trends within the industry, we address the topic of individual competence shifts in the turbulent environment of the Creative Industries. We investigated the trends regarding competence shifts and their implications, as well as the competences that are essential for creative professionals. We conducted a broad literature review as well as a qualitative study, which includes interviews and workshops with industry experts on trends within the Creative Industries and corresponding dimensions and demands for competences. We present four requirements that call for shifts in the education of competences. Based on the discussion of requirements, we present a competence portfolio for the Creative Industries along the dimensions of professional, methodological and personal-social competences. The portfolio clearly indicates which competences should be taken into consideration for the development of curricula and study programmes in the education of creative professionals. A generalization of these findings suggests new challenges for companies relying on creative professionals.
Multidirectional communicative interactions in social networks can have a profound effect on mate choice behavior. Male Atlantic molly Poecilia mexicana exhibit weaker mating preferences when an audience male is presented. This could be a male strategy to reduce sperm competition risk: interacting more equally with different females may be advantageous because rivals might copy mate choice decisions. In line with this hypothesis, a previous study found males to show a strong audience effect when being observed while exercising mate choice, but not when the rival was presented only before the choice tests. Audience effects on mate choice decisions have been quantified in poeciliid fishes using association preference designs, but it remains unknown whether patterns found from measuring association times translate into actual mating behavior. Thus, we created five audience treatments simulating different forms of perceived sperm competition risk and determined focal males' mating preferences by scoring pre-mating (nipping) and mating behavior (gonopodial thrusting). Nipping did not reflect the pattern that was found when association preferences were measured, while a very similar pattern was uncovered in thrusting behavior. The strongest response was observed when the audience could eavesdrop on the focal male's behavior. A reduction in the strength of focal males' preferences was also seen after the rival male had an opportunity to mate with the focal male's preferred mate. In comparison, the reduction of mating preferences in response to an audience was greater when measuring association times than actual mating behavior. Measuring direct sexual interactions between the focal male and both stimulus females reflects not only the male's motivational state but also the females' behavior, such as avoidance of male sexual harassment.
In order to predict which ecosystem functions are most at risk from biodiversity loss, meta-analyses have generalised results from biodiversity experiments over different sites and ecosystem types. In contrast, comparing the strength of biodiversity effects across a large number of ecosystem processes measured in a single experiment permits more direct comparisons. Here, we present an analysis of 418 separate measures of 38 ecosystem processes. Overall, 45% of processes were significantly affected by plant species richness, suggesting that, while diversity affects a large number of processes, not all respond to biodiversity. We therefore compared the strength of plant diversity effects between different categories of ecosystem processes, grouping processes according to the year of measurement, their biogeochemical cycle, trophic level and compartment (above- or belowground) and according to whether they were measures of biodiversity or other ecosystem processes, biotic or abiotic, and static or dynamic. Overall, and for several individual processes, we found that biodiversity effects became stronger over time. Measures of the carbon cycle were also affected more strongly by plant species richness than were the measures associated with the nitrogen cycle. Further, we found greater plant species richness effects on measures of biodiversity than on other processes. The differential effects of plant diversity on the various types of ecosystem processes indicate that future research and political effort should shift from a general debate about whether biodiversity loss impairs ecosystem functions to focussing on the specific functions of interest and ways to preserve them individually or in combination.
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake risk. Macroseismic data for the 1985 Valparaiso event show large variations in the distribution of damage to buildings within short distances, indicating strong effects of local sediments on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from horizontal-to-vertical (H/V) spectral ratios from earthquake data (EHV) and ambient noise (NHV), as well as using the standard spectral ratio (SSR) technique with a nearby reference station located on igneous rock. The results lead to the following conclusions: The analysis of earthquake data shows significant dependence on the local geological structure with respect to amplitude and duration. An amplification of ground motion at frequencies higher than the fundamental one can be found. This amplification would not be found when looking at NHV ratios alone. The analysis of NHV spectral ratios shows that they can only provide a lower bound in amplitude for site amplification. P-wave site responses always show lower amplitudes than those derived from S waves, and sometimes even fail to reveal some frequencies of amplification. No variability in terms of time and amplitude is observed in the analysis of the H/V ratio of noise. Due to the geological conditions in some parts of the investigated area, the fundamental resonance frequency of a site is difficult to estimate following the standard criteria proposed by the SESAME consortium, suggesting that these are too restrictive under certain circumstances.
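The H/V technique at the core of this study divides horizontal by vertical amplitude spectra. The sketch below is a bare-bones illustration on a synthetic record with a horizontal resonance at 2 Hz; real processing adds windowing, spectral smoothing, and averaging over many time windows, all omitted here.

```python
import numpy as np

def hv_spectral_ratio(h1, h2, v, fs):
    """Merge the two horizontal components via their quadratic mean and
    divide amplitude spectra. Returns (frequencies, H/V ratio)."""
    H = np.sqrt((np.abs(np.fft.rfft(h1)) ** 2 +
                 np.abs(np.fft.rfft(h2)) ** 2) / 2.0)
    V = np.abs(np.fft.rfft(v))
    f = np.fft.rfftfreq(len(v), d=1.0 / fs)
    return f, H / V

# Synthetic 10 s record at 100 Hz with a horizontal resonance at 2 Hz
# and vertical energy mostly at 5 Hz (purely illustrative):
fs = 100.0
t = np.arange(1000) / fs
h = np.sin(2 * np.pi * 2.0 * t)
v = 0.5 * np.sin(2 * np.pi * 2.0 * t) + np.sin(2 * np.pi * 5.0 * t)
f, r = hv_spectral_ratio(h, h, v, fs)
```

The ratio peaks at the resonance frequency where horizontal motion is amplified relative to vertical motion, which is the signature used to read fundamental site frequencies off EHV and NHV curves.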
Reliable information on past and present vegetation is important to project future changes, especially for rapidly transitioning areas such as the boreal treeline. To study past vegetation, pollen analysis is common, while current vegetation is usually assessed by field surveys. Application of detailed sedimentary DNA (sedDNA) records has the potential to enhance our understanding of vegetation changes, but studies systematically investigating the power of this proxy are rare to date. This study compares sedDNA metabarcoding and pollen records from surface sediments of 31 lakes along a north-south gradient of increasing forest cover in northern Siberia (Taymyr peninsula) with data from field surveys in the surroundings of the lakes. sedDNA metabarcoding recorded 114 plant taxa, about half of them to species level, while pollen analyses identified 43 taxa, both exceeding the 31 taxa found by vegetation field surveys. Increasing Larix percentages from north to south were consistently recorded by all three methods and principal component analyses based on percentage data of vegetation surveys and DNA sequences separated tundra from forested sites. Comparisons of the ordinations using procrustes and protest analyses show a significant fit among all compared pairs of records. Despite similarities of sedDNA and pollen records, certain idiosyncrasies, such as high percentages of Alnus and Betula in all pollen and high percentages of Salix in all sedDNA spectra, are observable. Our results from the tundra to single-tree tundra transition zone show that sedDNA analyses perform better than pollen in recording site-specific richness (i.e., presence/absence of taxa in the vicinity of the lake) and perform as well as pollen in tracing vegetation composition.
A comparison of running kinetics in children with and without genu varus: A cross sectional study
(2017)
Introduction
Varus knee alignment has been identified as a risk factor for the progression of medial knee osteoarthritis. However, the underlying mechanisms have not yet been elucidated in children. Thus, the aims of the present study were to examine differences in ground reaction forces, loading rate, impulses, and free moment values during running in children with and without genu varus.
Methods
Thirty-six boys aged 9-14 volunteered to participate in this study. They were divided into two age-matched groups (genu varus versus healthy controls). Body weight adjusted three-dimensional kinetic data (Fx, Fy, Fz) were collected during running at preferred speed using two Kistler force plates for the dominant and non-dominant limb.
Results
Individuals with knee genu varus produced significantly higher (p = .01; d = 1.09; 95%) body weight adjusted ground reaction forces in the lateral direction (Fx) of the dominant limb compared to controls. On the non-dominant limb, genu varus patients showed significantly higher body weight adjusted ground reaction force values in the lateral (p = .01; d = 1.08; 86%) and medial (p < .001; d = 1.55; 102%) directions (Fx). Further, genu varus patients demonstrated 55% and 36% greater body weight adjusted loading rates in the dominant (p < .001; d = 2.09) and non-dominant (p < .001; d = 1.02) leg, respectively. No significant between-group differences were observed for adjusted free moment values (p > .05).
Discussion
Higher mediolateral ground reaction forces and vertical loading rate amplitudes in boys with genu varus during running at preferred speed may accelerate progressive joint degeneration and lower the age at knee osteoarthritis onset. Therefore, practitioners and therapists are advised to conduct balance and strength training programs to improve lower limb alignment and mediolateral control during dynamic movements.
Context. Extrapolations of solar photospheric vector magnetograms into three-dimensional magnetic fields in the chromosphere and corona are usually done under the assumption that the fields are force-free. This condition is violated in the photosphere itself and a thin layer in the lower atmosphere above. The field calculations can be improved by preprocessing the photospheric magnetograms. The intention here is to remove a non-force-free component from the data.
Aims. We compare two preprocessing methods presently in use, namely the methods of Wiegelmann et al. (2006, Sol. Phys., 233, 215) and Fuhrmann et al. (2007, A&A, 476, 349).
Methods. The two preprocessing methods were applied to a vector magnetogram of the recently observed active region NOAA AR 10953. We examine the changes in the magnetogram effected by the two preprocessing algorithms. Furthermore, the original magnetogram and the two preprocessed magnetograms were each used as input data for nonlinear force-free field extrapolations by means of two different methods, and we analyze the resulting fields.
Results. Both preprocessing methods managed to significantly decrease the magnetic forces and magnetic torques that act through the magnetogram area and that can cause incompatibilities with the assumption of force-freeness in the solution domain. The force and torque decrease is stronger for the Fuhrmann et al. method. Both methods also reduced the amount of small-scale irregularities in the observed photospheric field, which can sharply worsen the quality of the solutions. For the chosen parameter set, the Wiegelmann et al. method led to greater changes in strong-field areas, leaving weak-field areas mostly unchanged, and thus providing an approximation of the magnetic field vector in the chromosphere, while the Fuhrmann et al. method weakly changed the whole magnetogram, thereby better preserving patterns present in the original magnetogram. Both preprocessing methods raised the magnetic energy content of the extrapolated fields to values above the minimum energy, corresponding to the potential field. Also, the fields calculated from the preprocessed magnetograms fulfill the solenoidal condition better than those calculated without preprocessing.
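The force criterion that such preprocessing minimizes can be made concrete with the standard surface-integral diagnostics (Molodensky-type criteria). The sketch below is schematic: normalization conventions vary between implementations, the analogous torque integrals are omitted, and the discrete sums assume a uniform pixel grid.

```python
import numpy as np

def force_balance_epsilon(bx, by, bz):
    """Net-force diagnostic for a photospheric vector magnetogram. For a
    force-free field above the layer, the three surface integrals
    Fx ~ sum(Bx*Bz), Fy ~ sum(By*Bz), Fz ~ sum(Bz^2 - Bx^2 - By^2)
    should vanish relative to the total magnetic pressure; preprocessing
    drives this dimensionless ratio toward zero."""
    fx = np.sum(bx * bz)
    fy = np.sum(by * bz)
    fz = np.sum(bz ** 2 - bx ** 2 - by ** 2)
    norm = np.sum(bx ** 2 + by ** 2 + bz ** 2)
    return (abs(fx) + abs(fy) + abs(fz)) / norm
```

Evaluating this quantity before and after preprocessing quantifies the "decrease in magnetic forces acting through the magnetogram area" reported in the results; a purely vertical flux distribution gives the maximal value 1, while a suitably balanced field gives 0.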
Objective: To compare lateralized cerebral activations elicited during self-initiated movement mirroring and observation of movements.
Subjects: A total of 15 right-handed healthy subjects, age range 22-56 years.
Methods: Functional imaging study comparing movement mirroring with movement observation, in both hands, in an otherwise identical setting. Imaging data were analysed using statistical parametric mapping software, with significance threshold set at p<0.01 (false discovery rate) and a minimum cluster size of 20 voxels.
Results: Movement mirroring induced additional activation in primary and higher-order visual areas strictly contralateral to the limb seen by the subject. There was no significant difference in brain activity when comparing observation of somebody else's right hand versus left hand.
Conclusion: Lateralized cerebral activations are elicited by inversion of visual feedback (movement mirroring), but not by movement observation.
Air pollution is a pressing issue associated with adverse effects on human health, ecosystems, and climate. Despite many years of effort to improve air quality, nitrogen dioxide (NO2) limit values are still regularly exceeded in Europe, particularly in cities and along streets. This study explores how concentrations of nitrogen oxides (NOx = NO + NO2) in European urban areas have changed over the last decades and how this relates to changes in emissions. To do so, the incremental approach was used, comparing urban increments (i.e. urban background minus rural concentrations) to total emissions, and roadside increments (i.e. urban roadside concentrations minus urban background concentrations) to traffic emissions. In total, nine European cities were assessed. The study revealed that potentially confounding factors, such as the impact of urban pollution at rural monitoring sites through atmospheric transport, are generally negligible for NOx. The approach therefore proves particularly useful for this pollutant. The estimated urban increments all showed downward trends, and for the majority of the cities the trends aligned well with the total emissions. However, it was found that factors such as very densely populated surroundings or local emission sources in the rural area, such as shipping traffic on inland waterways, restrict the application of the approach for some cities. The roadside increments showed a very diverse picture in their absolute values and trends, and also in their relation to traffic emissions. This variability and the discrepancies between roadside increments and emissions could be attributed to a combination of local influencing factors at the street level and several aspects introducing inaccuracies into the trends of the emission inventories used, including deficient emission factors.
Applying the incremental approach was evaluated as useful for long-term pan-European studies, but at the same time it was found to be restricted to certain regions and cities due to data availability issues. The results also highlight that using emission inventories for the prediction of future health impacts and compliance with limit values needs to consider the distinct variability in the concentrations not only across but also within cities.
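The incremental approach described above reduces to two subtractions per city. The sketch below spells them out; the NOx concentrations are hypothetical round numbers, not values for any of the nine cities studied.

```python
# Sketch of the incremental approach: separating the city's own contribution
# from the regional background, and local traffic from the urban background.
# Concentrations are hypothetical annual means in arbitrary units.
def urban_increment(urban_background, rural):
    """Urban background minus rural concentration: the city's contribution."""
    return urban_background - rural

def roadside_increment(roadside, urban_background):
    """Roadside minus urban background concentration: local traffic's share."""
    return roadside - urban_background

rural, background, roadside = 12.0, 38.0, 95.0
print(urban_increment(background, rural))      # compared to total city emissions
print(roadside_increment(roadside, background))  # compared to traffic emissions
```

Trends in these two increments are then compared against trends in total and traffic emission inventories, respectively, which is where the study found the largest discrepancies.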
The closed-chamber method is the most common approach to determine CH4 fluxes in peatlands. The concentration change in the chamber is monitored over time, and the flux is usually calculated from the slope of a linear regression function. Theoretically, the gas exchange cannot be constant over time but has to decrease as the concentration gradient between the chamber headspace and the soil air decreases. In this study, we test whether we can detect this non-linearity in the concentration change during chamber closure with six air samples. We generally expect a low concentration gradient on dry sites (hummocks) and thus the occurrence of exponential concentration changes in the chamber, due to a quick equilibration of gas concentrations between peat and chamber headspace. On wet (flarks) and sedge-covered sites (lawns), we expect a high gradient and near-linear concentration changes in the chamber. To evaluate these model assumptions, we calculate both linear and exponential regressions for a test data set (n = 597) from a Finnish mire. We use the Akaike Information Criterion with small-sample second-order bias correction to select the best-fitting model. 13.6%, 19.2% and 9.8% of measurements on hummocks, lawns and flarks, respectively, were best fitted with an exponential regression model. A flux estimate derived from the slope of the exponential function at the beginning of chamber closure can be significantly higher than one derived from the slope of the linear regression function. Non-linear concentration-over-time curves occurred mostly during periods of changing water table. This could be due to either natural processes or chamber artefacts, e.g. initial pressure fluctuations during chamber deployment. To exclude either natural processes or artefacts as the cause of non-linearity, further information, e.g. CH4 concentration profile measurements in the peat, would be needed. If this is not available, the range of uncertainty can be substantial. We suggest using the range between the slopes of the exponential regression at the beginning and at the end of the closure time as an estimate of the overall uncertainty.
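The model-selection step above can be sketched numerically: fit both a linear and an exponential concentration curve to the six samples of one chamber closure and compare them with the bias-corrected AIC (AICc). The data below are synthetic (generated from an exponential saturation curve), and the simple grid search over the rate constant stands in for whatever nonlinear fitting routine the study actually used.

```python
import numpy as np

def aicc(rss, n, k):
    """AIC with small-sample second-order bias correction (AICc).
    k counts the fitted regression parameters (error variance omitted for simplicity)."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Six air samples per chamber closure, as in the study; times and CH4
# concentrations below are synthetic illustration data, not measurements.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # minutes since closure
c = 1.9 + 0.9 * (1.0 - np.exp(-0.12 * t))          # ppm CH4 (made up)

# Linear model: c(t) = a + b t
b, a = np.polyfit(t, c, 1)
rss_lin = float(np.sum((c - (a + b * t)) ** 2))

# Exponential model: c(t) = c_inf + A exp(-k t); coarse grid search over the
# rate k, with linear least squares for (c_inf, A) at each candidate rate.
best = (np.inf, None, None)
for k_try in np.linspace(0.01, 1.0, 200):
    design = np.column_stack([np.ones_like(t), np.exp(-k_try * t)])
    coef = np.linalg.lstsq(design, c, rcond=None)[0]
    rss = float(np.sum((c - design @ coef) ** 2))
    if rss < best[0]:
        best = (rss, k_try, coef)
rss_exp, k_exp, (c_inf, amp) = best

n = len(t)
print("AICc linear:     ", aicc(rss_lin, n, 2))
print("AICc exponential:", aicc(rss_exp, n, 3))
# The initial flux is proportional to the slope at t = 0: the exponential
# fit's initial slope (-A k) exceeds the linear slope, as noted above.
print("linear slope:", b, " exponential initial slope:", -amp * k_exp)
```

On this synthetic closure, the exponential model wins the AICc comparison and its initial slope exceeds the linear slope, mirroring the direction of bias the abstract describes.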
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
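The structure of a GHMC update (partial momentum refreshment, leapfrog trajectory, Metropolis test, momentum flip on rejection) can be illustrated on a 1-D harmonic oscillator. This is a toy sketch of the standard scheme only: the flip_on_reject toggle marks the line the paper's modified detailed-balance condition removes, but the modified acceptance rule itself is not reproduced here, and the step size, trajectory length, and refreshment angle are arbitrary illustrative choices.

```python
import math, random

def ghmc_step(q, p, dt=0.2, n_leap=10, phi=0.3, flip_on_reject=True):
    """One GHMC update for the 1-D harmonic oscillator U(q) = q**2 / 2.

    flip_on_reject=True is the standard scheme; the toggle is purely
    illustrative and does not implement the paper's modified acceptance rule.
    """
    # 1) partial momentum refreshment: mix in fresh Gaussian noise
    p = math.cos(phi) * p + math.sin(phi) * random.gauss(0.0, 1.0)
    h_old = 0.5 * p * p + 0.5 * q * q

    # 2) leapfrog trajectory (grad U(q) = q)
    qn, pn = q, p
    pn -= 0.5 * dt * qn
    for step in range(n_leap):
        qn += dt * pn
        if step < n_leap - 1:
            pn -= dt * qn
    pn -= 0.5 * dt * qn

    # 3) Metropolis accept/reject on the energy error of the trajectory
    h_new = 0.5 * pn * pn + 0.5 * qn * qn
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return qn, pn
    return q, (-p if flip_on_reject else p)  # momentum flip = trajectory reversal

random.seed(1)
q, p = 0.0, 0.0
samples = []
for _ in range(20000):
    q, p = ghmc_step(q, p)
    samples.append(q)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(f"sample mean {mean:.3f}, sample variance {var:.3f}")  # target: 0 and 1
```

Since the target density is the standard Gaussian, the sample mean and variance should converge to 0 and 1; with the flip simply switched off and no modified acceptance rule, exact sampling is no longer guaranteed, which is precisely the gap the paper's modified detailed-balance condition closes.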
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is known as a compound flood event. Therefore, we aimed to identify statistical similarities between loss-driving factors across flood types and to test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys investigated several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the level of most of these features differed across flood type subsamples (e.g., degree of preparedness), they did so in a nonregular pattern. A variable selection process indicates that besides hazard and building characteristics, information on property-level preparedness was also selected as a relevant predictor of the loss ratio. These variables represent information that is rarely adopted in loss modeling. The models should be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will finally clarify the conditions that allow loss models to be transferred in space and time.
Key Points: (1) Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model. (2) Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio. (3) Flood-type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability.
Risk-based insurance is a commonly proposed and discussed flood risk adaptation mechanism in policy debates across the world, such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required, as the affordability of adaptation strategies is an important concern for policymakers, yet such a concept is not often examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows a previously normative concept to be quantified in monetary terms, and in this way the metric is rendered more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, this paper compares the outcomes generated by three different definitions of unaffordability in order to identify the most robust definition. In doing so, the residual income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaptation measures in order to develop a common metric for indicating the potential unaffordability problem.
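A residual-income definition of the kind the paper found least threshold-sensitive can be stated in a few lines: a premium counts as unaffordable when income net of essential expenditure and the premium drops below a poverty-line threshold. The formulation and all monetary values below are a hypothetical sketch, not the paper's exact operationalization.

```python
# Minimal sketch of a residual-income unaffordability test. The exact
# definition of "essential costs" and the threshold are assumptions here.
def unaffordable_residual_income(income, essential_costs, premium, threshold):
    """True if paying the premium pushes residual income below the threshold."""
    return income - essential_costs - premium < threshold

# Two hypothetical households facing the same risk-based premium of 600/year:
print(unaffordable_residual_income(24000, 15000, 600, 9000))  # lower income
print(unaffordable_residual_income(30000, 15000, 600, 9000))  # higher income
```

The robustness comparison in the paper then amounts to varying the threshold and checking how strongly the set of households flagged as unaffordable changes under each definition.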
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to and experiences of introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of a number of teaching professionals and experts from the UK and Germany was carried out with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should be moving forward.
The possibilities and limits of structure refinement of Langmuir-Blodgett films by means of symmetrical reflection of X-rays are described using the example of a stearic acid multilayer. Three different techniques for the determination of the electron density profile from reflectivity data are compared: a Fourier method, a Patterson method, and model calculations. The important role of a priori information in finding the best structure model is outlined.
A comparative whole-genome approach identifies bacterial traits for marine microbial interactions
(2022)
Luca Zoccarato, Daniel Sher et al. leverage publicly available bacterial genomes from marine and other environments to examine traits underlying microbial interactions.
Their results provide a valuable resource to investigate clusters of functional and linked traits to better understand marine bacteria community assembly and dynamics.
Microbial interactions shape the structure and function of microbial communities with profound consequences for biogeochemical cycles and ecosystem health. Yet, most interaction mechanisms are studied only in model systems and their prevalence is unknown. To systematically explore the functional and interaction potential of sequenced marine bacteria, we developed a trait-based approach, and applied it to 473 complete genomes (248 genera), representing a substantial fraction of marine microbial communities.
We identified genome functional clusters (GFCs) which group bacterial taxa with common ecology and life history. Most GFCs revealed unique combinations of interaction traits, including the production of siderophores (10% of genomes), phytohormones (3-8%) and different B vitamins (57-70%). Specific GFCs, comprising Alpha- and Gammaproteobacteria, displayed more interaction traits than expected by chance, and are thus predicted to preferentially interact synergistically and/or antagonistically with bacteria and phytoplankton. Linked trait clusters (LTCs) identify traits that may have evolved to act together (e.g., secretion systems, nitrogen metabolism regulation and B vitamin transporters), providing testable hypotheses for complex mechanisms of microbial interactions.
Our approach translates multidimensional genomic information into an atlas of marine bacteria and their putative functions, relevant for understanding the fundamental rules that govern community assembly and dynamics.
A successful assignment of the fundamental bands observed in the experimental IR spectra of mn-12S(2)O(2) and fn-12S(2)O(2) dithiacrown ethers was achieved with the aid of density functional theory (DFT) based quantum mechanical calculations carried out at the B3LYP/6-31G(d) and B3LYP/6-31+G(d) levels of theory. Two different scaling approaches, (i) the scaled quantum mechanics force field (SQM FF) methodology and (ii) scaling frequencies with dual empirical scale factors, were used in order to fit the calculated harmonic frequencies to the experimental ones. Potential energy distribution (PED) calculations were carried out to define the internal coordinate contributions to each normal mode and thereby to assign the corresponding normal modes of the molecules. The effects of the conformational differences on the IR-active normal modes of the two isomeric molecules and their corresponding experimental frequencies are discussed in the light of the calculated spectral data.
In this paper, two groups supporting different views on the mechanism of light-induced polymer deformation argue about the respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of the polymer material accomplished during polymer deformation is generated by the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone; this in turn results in local mechanical stress, which is sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate we chose three polymers differing in glass transition temperature (32 degrees C, 87 degrees C and 95 degrees C), representing extreme cases of flexible and rigid materials. Polymer film deformation occurring during irradiation with different interference patterns is recorded using a homemade set-up combining an optical part for the generation of interference patterns with an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymeric films to switch their topography in situ and reversibly by changing the irradiation conditions.
We discuss the results of reversible deformation of three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and the light of homogeneous intensity in terms of two approaches: the re-orientational and the photo-fluidization concepts. Both agree in that the formation of opto-mechanically induced stresses is a necessary prerequisite for the process of deformation. Using this argument, the deformation process can be characterized either as a flow or mass transport.
The importance of cultural ecosystem services in agricultural landscapes is increasingly recognized as agricultural scale enlargement and abandonment affect the aesthetic and recreational values of agricultural landscapes. Landscape preference studies addressing these types of values often yield context-specific outcomes, limiting the applicability of their outcomes in landscape policy. Our approach measures the relative importance of landscape features across agricultural landscapes. This approach was applied among visitors to the agricultural landscapes of Winterswijk, The Netherlands (n=191) and the Märkische Schweiz, Germany (n=113). We set up a parallel-designed choice experiment, using regionally specific, photorealistic visualizations of four comparable landscape attributes. In the Dutch landscape, visitors highly value hedgerows and tree lines, whereas groups of trees and crop diversity are highly valued in the German landscape. Furthermore, we find that differences in relative preference for landscape attributes are, to some extent, explained by socio-cultural background variables such as the visitors' education level and affinity with agriculture. This approach contributes to a better understanding of the cross-regional variation of aesthetic and recreational values and how these values relate to characteristics of the agricultural landscape, which could support the integration of cultural services in landscape policy. (C) 2015 Elsevier B.V. All rights reserved.
In this paper, we analyse the effectiveness of flood management measures based on the concept known as "retaining water in the landscape". The investigated measures include afforestation, micro-ponds and small reservoirs. A comparative, model-based methodological approach has been developed and applied to three meso-scale catchments located in different European hydro-climatological regions: Poyo (184 km²) in the Spanish Mediterranean, Upper Iller (954 km²) in the German Alps and Kamp (621 km²) in Northeast Austria, representing the Continental hydro-climate. This comparative analysis has found general similarities in spite of the particular differences among the studied areas. In general terms, the flood reduction achieved through the concept of "retaining water in the landscape" depends on the following factors: the increase in storage capacity in the catchment resulting from such measures, the characteristics of the rainfall event, the antecedent soil moisture condition and the spatial distribution of such flood management measures in the catchment. Our study has shown that this concept is effective for small and medium events but almost negligible for the largest and least frequent floods; this holds true across the different hydro-climatic regions and for different land-use, soil and morphological settings.
Genomic prediction has revolutionized crop breeding despite remaining issues of transferability of models to unseen environmental conditions and environments. The use of endophenotypes rather than genomic markers opens the possibility of building phenomic prediction models that can account, in part, for this challenge. Here, we compare and contrast genomic prediction and phenomic prediction models for 3 growth-related traits, namely, leaf count, tree height, and trunk diameter, from 2 coffee 3-way hybrid populations exposed to a series of treatment-inducing environmental conditions. The models are based on 7 different statistical methods built with genomic markers and chlorophyll a fluorescence (ChlF) data used as predictors. This comparative analysis demonstrates that the best-performing phenomic prediction models show higher predictability than the best genomic prediction models for the considered traits and environments in the vast majority of comparisons within 3-way hybrid populations. In addition, we show that phenomic prediction models are transferable between conditions, but to a lesser extent between populations, and we conclude that ChlF data can serve as alternative predictors in statistical models of coffee hybrid performance. Future directions will explore their combination with other endophenotypes to further improve the prediction of growth-related traits for crops.
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with in-built fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma cells (PC12). Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion, as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism, as well as the effect of mechanical stimulation on dopamine release, were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies and characterization of neural stem cell differentiation to screening of genetically modified microorganisms and environmental monitoring.
Objective: Pre-eclampsia is a serious complication of pregnancy with high morbidity and mortality and an incidence of 3-5% of all pregnancies. Early prediction is still insufficient in clinical practice. Although most pre-eclamptic patients have pathological uterine perfusion in the second trimester, perfusion disturbance has a positive predictive accuracy (PPA) of only approximately 30%. Methods: Non-invasive continuous blood pressure recordings were taken simultaneously via a finger cuff for 30 min. Time series of systolic as well as diastolic beat-to-beat pressure values were extracted to analyse heart rate and blood pressure variability and baroreflex sensitivity in 102 second-trimester pregnancies, to assess predictability of pre-eclampsia (n = 16). All women underwent Doppler investigations of the uterine arteries. Results: We identified a combination of three variability and baroreflex parameters that best predicts pre-eclampsia several weeks before clinical manifestation. The discriminant function of these three parameters classified patients with later pre-eclampsia with a sensitivity of 87.5%, a specificity of 83.7%, and a PPA of 50.0%. Combined with Doppler investigations of the uterine arteries, the PPA increased to 71.4%. Conclusions: This technique of incorporating one-stop clinical assessment of uterine perfusion and variability parameters in the second trimester produces the most effective prediction of pre-eclampsia to date.
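The reported figures are internally consistent: applying Bayes' rule to the stated sensitivity (87.5%), specificity (83.7%) and sample prevalence (16 pre-eclampsia cases out of 102 pregnancies) recovers a positive predictive accuracy very close to the reported 50.0%. The snippet below is only this arithmetic consistency check, not a reanalysis of the study's data.

```python
# Positive predictive accuracy (PPA) implied by sensitivity, specificity,
# and prevalence, via Bayes' rule: P(disease | positive test).
def positive_predictive_accuracy(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

ppa = positive_predictive_accuracy(0.875, 0.837, 16 / 102)
print(f"implied PPA: {ppa:.1%}")  # close to the reported 50.0%
```

The same function also explains why perfusion disturbance alone, despite reasonable sensitivity, reaches only about 30% PPA: at low prevalence, even moderate false-positive rates dominate the positive predictions.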
X-ray photoelectron spectroscopy (XPS) is a powerful tool for probing the local chemical environment of atoms near surfaces. When applied to soft matter, such as polymers, XPS spectra are frequently shifted and broadened by thermal atom motion and by interchain interactions. We present a combined quantum mechanics/molecular dynamics (QM/MD) simulation of X-ray photoelectron spectra of polyvinyl alcohol (PVA) using oligomer models in order to account for and quantify these effects on the XPS (C1s) signal. In our study, molecular dynamics at finite temperature were performed with a classical force field and by ab initio MD (AIMD) using the Car-Parrinello method. Snapshots along the trajectories represent possible conformers and/or neighbouring environments, with different C1s ionization potentials for individual C atoms leading to broadened XPS peaks. The latter are determined by Delta-Kohn-Sham calculations. We also examine the experimental practice of gauging XPS (C1s) signals of alkylic C atoms in C-containing polymers to the C1s signal of polyethylene (PE).
We find that (i) the experimental XPS (C1s) spectra of PVA (position and width) can be roughly represented by single-strand models, (ii) interchain interactions lead to red-shifts of the XPS peaks by about 0.6 eV, and (iii) AIMD simulations match the findings from classical MD semi-quantitatively. Further, (iv) the gauging procedure of XPS (C1s) signals to the values of PE introduces errors of about 0.5 eV. (C) 2014 Elsevier B.V. All rights reserved.
Diatom diversity in lakes of northwest Yakutia (Siberia) was investigated by microscopic and genetic analysis of surface and cored lake sediments, to evaluate the use of sedimentary DNA for paleolimnological diatom studies and to identify hidden genetic diversity that cannot be detected by microscopic methods. Two short (76 and 73 bp) fragments and one longer (577 bp) fragment of the ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) large subunit gene rbcL were used as genetic markers. Diverse morphological assemblages of diatoms, dominated by small benthic fragilarioid taxa, were retrieved from the sediments of each lake. These minute fragilarioid taxa were examined by scanning electron microscopy, revealing diverse morphotypes in Staurosira and Staurosirella from the different lakes. Genetic analyses indicated a dominance of haplotypes assigned to fragilarioid taxa and less genetic diversity in other diatom taxa. The long rbcL_577 amplicon identified considerable diversification among haplotypes clustering within the Staurosira/Staurosirella genera, revealing 19 different haplotypes whose spatial distribution appears to be primarily related to the latitude of the lakes, which corresponds to a vegetation and climate gradient. Our rbcL markers are valuable tools for tracking differences between diatom lineages that are not visible in their morphologies. These markers revealed putatively high genetic diversity within the Staurosira/Staurosirella species complex, at a finer scale than is possible to resolve by microscopic determination. The rbcL markers may provide additional reliable information on the diversity of barely distinguishable minute benthic fragilarioids. Environmental sequencing may thus allow the tracking of spatial and temporal diversification in Siberian lakes, especially in the context of diatom responses to recent environmental changes, which remains a matter of controversy.
We present new radio/millimeter measurements of the hot magnetic star HR 5907 obtained with the VLA and ALMA interferometers. We find that HR 5907 is the most radio-luminous early-type star in the cm-mm band among those presently known. Its multi-wavelength radio light curves are strongly variable, with an amplitude that increases with radio frequency. The radio emission can be explained by populations of non-thermal electrons accelerated in the current sheets at the outer border of the magnetosphere of this fast-rotating magnetic star. We classify HR 5907 as another member of the growing class of strongly magnetic, fast-rotating hot stars in whose magnetospheres the gyro-synchrotron emission mechanism operates efficiently. The new radio observations of HR 5907 are combined with archival X-ray data to study the physical conditions of its magnetosphere. The X-ray spectra of HR 5907 show tentative evidence for the presence of a non-thermal spectral component. We suggest that the non-thermal X-rays originate in a stellar X-ray aurora produced by streams of non-thermal electrons impacting on the stellar surface. Taking advantage of the relation between the spectral index of the X-ray power-law spectrum and the non-thermal electron energy distribution, we perform 3-D modelling of the radio emission of HR 5907. The wavelength-dependent radio light curves probe magnetospheric layers at different heights above the stellar surface. A detailed comparison between simulated and observed radio light curves leads us to conclude that the stellar magnetic field of HR 5907 is likely non-dipolar, providing further indirect evidence of its complex magnetic field topology.
Next-generation sequencing methods provide comprehensive data for the structural and functional analysis of genomes. Draft genomes with a low contig number and a high N50 value can give insight into the structure of the genome as well as provide information for its annotation. In this study, we designed a pipeline that can be used to assemble prokaryotic draft genomes with a low number of contigs and a high N50 value. We aimed to use a combination of two de novo assembly tools (SPAdes and IDBA-Hybrid) and to evaluate the impact of this approach on the quality metrics of the assemblies. The pipeline was tested with raw sequence data consisting of short reads (< 300 bp) for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and identified a closely related organism using the 16S rRNA sequence extracted from this assembly. The IDBA-Hybrid assembler was then used to obtain a second assembly, guided by the genome of the closely related organism. Finally, SPAdes was run again using the second assembly, produced by IDBA-Hybrid, as a hint. The results were evaluated using QUAST and BUSCO. The pipeline successfully reduced contig numbers and increased the N50 values of the draft genome assemblies while preserving their coverage.
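The N50 statistic that this pipeline optimizes (alongside the contig count) is simple to compute from a list of contig lengths: it is the length of the contig at which the cumulative length of the sorted contigs first reaches half the assembly size. The contig lengths below are made-up example values, not from the assemblies in the study.

```python
# N50: the contig length L such that contigs of length >= L together cover
# at least half of the total assembly length.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

# Hypothetical assembly of seven contigs (lengths in kb):
print(n50([80, 70, 50, 40, 30, 20, 10]))  # → 70
```

Merging assemblies as the pipeline does tends to join short contigs into longer ones, which both lowers the contig count and raises N50, the two quality metrics the abstract reports (tools such as QUAST report N50 among their standard statistics).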
To explore the genetic determinants of obesity and Type 2 diabetes (T2D), the German Center for Diabetes Research (DZD) conducted crossbreedings of the obese and diabetes-prone New Zealand Obese mouse strain with four different lean strains (B6, DBA, C3H, 129P2) that vary in their susceptibility to develop T2D. Genome-wide linkage analyses localized more than 290 quantitative trait loci (QTL) for obesity, 190 QTL for diabetes-related traits and 100 QTL for plasma metabolites in the outcross populations. A computational framework was developed that allowed critical regions to be refined and a small number of candidate genes to be nominated by integrating reciprocal haplotype mapping and transcriptome data. The efficiency of this complex procedure was demonstrated for one obesity QTL. The genomic interval of 35 Mb with 502 annotated candidate genes was narrowed down to six candidates. Accordingly, congenic mice retained the obesity phenotype owing to an interval that contains three of the six candidate genes. Among these, the phospholipase PLA2G4A exhibited elevated expression in adipose tissue of obese human subjects and is therefore a critical regulator of the obesity locus. Together, our broad and complex approach demonstrates that combined- and comparative-cross analysis achieves improved mapping resolution and represents a valid tool for the identification of disease genes.
The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.
A close call
(2018)
The present study investigated how lexical selection is influenced by the number of semantically related representations (semantic neighbourhood density) and their similarity (semantic distance) to the target in a speeded picture-naming task. Semantic neighbourhood density and similarity were used as continuous variables to assess lexical selection, for which competitive and noncompetitive mechanisms have been proposed. Previous studies found mixed effects of semantic neighbourhood variables, leaving this issue unresolved. Here, we demonstrate an interference effect of semantic neighbourhood similarity: words with semantically more similar (closer) neighbours elicited less accurate naming responses and a higher likelihood of semantic errors and omissions over accurate responses. No main effect of semantic neighbourhood density and no interaction between semantic neighbourhood density and similarity were found. We further assessed whether semantic neighbourhood density can affect naming performance once semantic neighbours exceed a certain degree of semantic similarity. Semantic similarity between the target and each neighbour was used to split semantic neighbourhood density into two density variables: the number of semantically close versus distant neighbours. The results showed a significant effect of close, but not of distant, semantic neighbourhood density: naming pictures of targets with more close semantic neighbours led to longer naming latencies, less accurate responses, and a higher likelihood of semantic errors and omissions over accurate responses. These results show that word-inherent semantic attributes such as semantic neighbourhood similarity and the number of coactivated close semantic neighbours modulate lexical selection, supporting theories of competitive lexical processing.
In recent years, there has been a large amount of disparate work on representing and reasoning with qualitative preferential information by means of approaches to nonmonotonic reasoning. Given the variety of underlying systems, assumptions, motivations, and intuitions, it is difficult to compare or relate one approach with another. Here, we present an overview and classification of approaches to dealing with preference. A set of criteria for classifying approaches is given, followed by a set of desiderata that an approach might be expected to satisfy. A comprehensive set of approaches is subsequently given and classified with respect to these sets of underlying principles.
A circulatory loop
(2023)
In the digitalization debate, gender biases in digital technologies play a significant role because of their potential for social exclusion and inequality. It is therefore remarkable that organizations as drivers of digitalization and as places for social integration have been widely overlooked so far. Simultaneously, gender biases and digitalization have structurally immanent connections to organizations. Therefore, a look at the reciprocal relationship between organizations, digitalization, and gender is needed. The article provides answers to the question of whether and how organizations (re)produce, reinforce, or diminish gender‐specific inequalities during their digital transformations. On the one hand, gender inequalities emerge when organizations use post‐bureaucratic concepts through digitalization. On the other hand, gender inequalities are reproduced when organizations either program or implement digital technologies and fail to establish control structures that prevent gender biases. This article shows that digitalization can act as a catalyst for inequality‐producing mechanisms, but also has the potential to mitigate inequalities. We argue that organizations must be considered when discussing the potential of exclusion through digitalization.
Current assessment of visual neglect involves paper-and-pencil tests or computer-based tasks. Both have been criticised because of their lack of ecological validity as target stimuli can only be presented in a restricted visual range. This study examined the user-friendliness and diagnostic strength of a new "Circle-Monitor" (CM), which enlarges the range of the peripersonal space, in comparison to a standard paper-and-pencil test (Neglect-Test, NET).
Methods: Ten stroke patients with neglect and ten age-matched healthy controls were examined by the NET and the CM test comprising four subtests (Star Cancellation, Line Bisection, Dice Task, and Puzzle Test).
Results: The acceptance of the CM in elderly controls and neglect patients was high. Participants rated the examination by CM as clear, safe and more enjoyable than NET. Healthy controls performed at ceiling on all subtests, without any systematic differences between the visual fields. Both NET and CM revealed significant differences between controls and patients in Line Bisection, Star Cancellation and visuo-constructive tasks (NET: Figure Copying, CM: Puzzle Test). Discriminant analyses revealed that cross-validated assignment of patients and controls to groups was more precise when based on the CM (hit rate 90%) than on the NET (hit rate 70%).
Conclusion: The CM proved to be a sensitive novel tool to diagnose visual neglect symptoms quickly and accurately with superior diagnostic validity compared to a standard neglect test while being well accepted by patients. Due to its upgradable functions the system may also be a valuable tool not only to test for non-visual neglect symptoms, but also to provide treatment and assess its outcome.
Incorporation of noncanonical amino acids (ncAAs) with bioorthogonal reactive groups by amber suppression allows the generation of synthetic proteins with desired novel properties. Such modified molecules are in high demand for basic research and therapeutic applications such as cancer treatment and in vivo imaging. The positioning of the ncAA-responsive codon within the protein's coding sequence is critical in order to maintain protein function, achieve high yields of ncAA-containing protein, and allow effective conjugation. Cell-free ncAA incorporation is of particular interest due to the open nature of cell-free systems and their concurrent ease of manipulation. In this study, we report a straightforward workflow to interrogate ncAA positions with regard to incorporation efficiency and protein functionality in a Chinese hamster ovary (CHO) cell-free system. As a model, the well-established orthogonal translation components Escherichia coli tyrosyl-tRNA synthetase (TyrRS) and tRNATyr(CUA) were used to site-specifically incorporate the ncAA p-azido-l-phenylalanine (AzF) in response to UAG codons. A total of seven ncAA sites within an anti-epidermal growth factor receptor (EGFR) single-chain variable fragment (scFv) N-terminally fused to the red fluorescent protein mRFP1 and C-terminally fused to the green fluorescent protein sfGFP were investigated for ncAA incorporation efficiency and impact on antigen binding. The characterized cell-free dual fluorescence reporter system allows screening for ncAA incorporation sites with high incorporation efficiency that maintain protein activity. It is parallelizable, scalable, and easy to operate. We propose that the established CHO-based cell-free dual fluorescence reporter system can be of particular interest for the development of antibody-drug conjugates (ADCs).
This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm in which verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). In particular, we explored the general characteristics of the speech samples (e.g. utterance length) and the use of lexical, finite and non-finite verbs and of direct and indirect evidentials. The results show that in the agrammatic speakers speech rate is slow, the number of verbs per utterance is lower than normal, and verb diversity is reduced. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.
Let v be a valuation of terms of type τ, assigning to each term t of type τ a value v(t) ≥ 0, and let k ≥ 1 be a natural number. An identity s ≈ t of type τ is called k-normal if either s = t or both s and t have value ≥ k, and is called non-k-normal otherwise. A variety V of type τ is said to be k-normal if all its identities are k-normal, and non-k-normal otherwise. In the latter case, there is a unique smallest k-normal variety N_k(V) containing V, called the k-normalization of V. In the case k = 1, for the usual depth valuation of terms, these notions coincide with the well-known concepts of normal identity, normal variety, and normalization of a variety. I. Chajda has characterized the normalization of a variety by means of choice algebras. In this paper we generalize his results to a characterization of the k-normalization of a variety, using k-choice algebras. We also introduce the concept of a k-inflation algebra, and for the case that v is the usual depth valuation of terms, we prove that a variety V is k-normal iff it is closed under the formation of k-inflations, and that the k-normalization of V consists precisely of all homomorphic images of k-inflations of algebras in V.
To asymptotically complete scattering systems {M_+ + V, M_+} on H_+ := L^2(R_+, K, dλ), where M_+ is the multiplication operator on H_+ and V is a trace-class operator satisfying analyticity conditions, a decay semigroup is associated such that the spectrum of the generator of this semigroup coincides with the set of all resonances (poles of the analytic continuation of the scattering matrix into the lower half-plane across the positive half-line); i.e. the decay semigroup yields a "time-dependent" characterization of the resonances. As a counterpart, a "spectral characterization" is mentioned, which is due to the "eigenvalue-like" properties of resonances.
Channel transmission losses in drylands normally take place in extensive alluvial channels or streambeds underlain by fractured rocks. They can play an important role in streamflow rates, groundwater recharge, freshwater supply and channel-associated ecosystems. We aim to develop a process-oriented, semi-distributed channel transmission losses model, using process formulations which are suitable for data-scarce dryland environments and applicable both to hydraulically disconnected losing streams and to hydraulically connected losing(/gaining) streams. This approach should be able to cover the large variation in climate and hydro-geologic controls typically found in dryland regions of the Earth. Our model was first evaluated for a losing/gaining, hydraulically connected 30 km reach of the Middle Jaguaribe River (MJR), Ceara, Brazil, which drains a catchment area of 20 000 km(2). Secondly, we applied it to a small losing, hydraulically disconnected 1.5 km channel reach in the Walnut Gulch Experimental Watershed (WGEW), Arizona, USA. The model was able to predict reliably the streamflow volume and peak for both case studies without using any parameter calibration procedure. We have shown that the evaluation of the hypotheses on the dominant hydrological processes was fundamental for reducing structural model uncertainties and improving the streamflow prediction. For instance, in the case of the large river reach (MJR), it was shown that both lateral stream-aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow volume and channel transmission losses, the former process being more relevant than the latter.
Regarding model uncertainty, it was shown that the approaches applied to the unsaturated-zone processes (highly nonlinear, with elaborate numerical solutions) are much more sensitive to parameter variability than those used for the saturated zone (mathematically simple water budgeting in aquifer columns, including backwater effects). In the case of the MJR application, structural uncertainties due to the limited knowledge of the subsurface saturated-system interactions (i.e. groundwater coupling with channel water; possible groundwater flow parallel to the river) were more relevant than those related to subsurface parameter variability. In the case of the WGEW application, the nonlinearity of the unsaturated flow processes in disconnected dryland river systems (controlled by the unsaturated zone) generally entails far more model uncertainty than in connected systems controlled by saturated flow. Therefore, the degree of aridity of a dryland river may be an indicator of the potential model uncertainty and the subsequently attainable predictability of the system.
We present the results from Chandra X-ray observations, and near- and mid-infrared analysis, using VISTA/VVV and Spitzer/GLIMPSE catalogs, of the high-mass star-forming region IRAS 16562-3959, which contains a candidate for a high-mass protostar. We detected 249 X-ray sources within the ACIS-I field of view. The majority of the X-ray sources have low count rates (<0.638 cts/ks) and hard X-ray spectra. The search for YSOs in the region using VISTA/VVV and Spitzer/GLIMPSE catalogs resulted in a total of 636 YSOs, with 74 Class I and 562 Class II YSOs. The search for near- and mid-infrared counterparts of the X-ray sources led to a total of 165 VISTA/VVV counterparts, and a total of 151 Spitzer/GLIMPSE counterparts. The infrared analysis of the X-ray counterparts allowed us to identify an extra 91 Class III YSOs associated with the region. We conclude that a total of 727 YSOs are associated with the region, with 74 Class I, 562 Class II, and 91 Class III YSOs. We also found that the region is composed of 16 subclusters. In the vicinity of the high-mass protostar, the stellar distribution has a core-halo structure. The subcluster containing the high-mass protostar is the densest and the youngest in the region, and the high-mass protostar is located at its center. The YSOs in this cluster appear to be substantially older than the high-mass protostar.
A Cell-free Expression Pipeline for the Generation and Functional Characterization of Nanobodies
(2022)
Cell-free systems are well-established platforms for the rapid synthesis, screening, engineering and modification of all kinds of recombinant proteins, ranging from membrane proteins to soluble proteins, enzymes and even toxins. Within the antibody field, too, cell-free technology has gained considerable attention with respect to the clinical research pipeline, including antibody discovery and production. Besides the classical full-length monoclonal antibodies (mAbs), so-called "nanobodies" (Nbs) have come into focus. A Nb is the smallest naturally derived functional antibody fragment known and represents the variable domain (VHH, ~15 kDa) of a camelid heavy-chain-only antibody (HCAb). Based on their nanoscale size and their special structure, Nbs display striking advantages concerning their production, but also in their characteristics as binders, such as high stability, diversity, improved tissue penetration and access to cavity-like epitopes. The classical way to produce Nbs depends on the use of living cells as production hosts. Though cell-based production is well established, it is still time-consuming, laborious and hardly amenable to high-throughput applications. Here, we present, for the first time to our knowledge, the synthesis of functional Nbs in a standardized mammalian cell-free system based on Chinese hamster ovary (CHO) cell lysates. Cell-free reactions were shown to be time-efficient and easy to handle, allowing for the "on demand" synthesis of Nbs. Taken together, we complement available methods and demonstrate a promising new system for Nb selection and validation.
Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are also an important economic resource, and through natural seepage or accidental release they can be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane, and in samples containing more mature oils methanol, delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A volumetric ratio of 1:2-1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction efficiency was around 30-50% and was checked on both oily samples spiked with known amounts of E. coli cells and oil-free samples amended with fresh and biodegraded oil. The method provided reproducible results on samples containing very different kinds of oils with regard to their degree of biodegradation.
For strongly biodegraded oil MeOH turned out to be the most appropriate solvent, whereas for less biodegraded samples n-hexane delivered best results.
We study the Cauchy problem for a nonlinear elliptic equation with data on a piece S of the boundary surface partial derivative X. By the Cauchy problem is meant any boundary value problem for an unknown function u in a domain X with the property that the data on S, if combined with the differential equations in X, allows one to determine all derivatives of u on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We discuss a variational setting of the Cauchy problem which always possesses a generalized solution.
A catalog of genetic loci associated with kidney function from analyses of a million individuals
(2019)
Chronic kidney disease (CKD) is responsible for a public health burden with multi-systemic complications. Through transancestry meta-analysis of genome-wide association studies of estimated glomerular filtration rate (eGFR) and independent replication (n = 1,046,070), we identified 264 associated loci (166 new). Of these, 147 were likely to be relevant for kidney function on the basis of associations with the alternative kidney function marker blood urea nitrogen (n = 416,178). Pathway and enrichment analyses, including mouse models with renal phenotypes, support the kidney as the main target organ. A genetic risk score for lower eGFR was associated with clinically diagnosed CKD in 452,264 independent individuals. Colocalization analyses of associations with eGFR among 783,978 European-ancestry individuals and gene expression across 46 human tissues, including tubulo-interstitial and glomerular kidney compartments, identified 17 genes differentially expressed in kidney. Fine-mapping highlighted missense driver variants in 11 genes and kidney-specific regulatory variants. These results provide a comprehensive priority list of molecular targets for translational research.
A case of primary progressive aphasia: a 14-year follow-up study with neuropathological findings
(1998)
A Case for Serious Play
(2017)
With the advent of increasingly powerful computational architectures, scientists use these possibilities to create simulations of ever-increasing size and complexity. Large-scale simulations of environmental systems require huge amounts of resources. Managing these in an operational way becomes increasingly complex and difficult to handle for individual scientists. State-of-the-art simulation infrastructures usually provide the necessary resources in a centralised setup, which often results in an all-or-nothing choice for the user. Here, we outline an alternative approach to handling this complexity, while rendering the use of high-performance hardware and large datasets still possible. It retains a number of desirable properties: (i) a decentralised structure, (ii) easy sharing of resources to promote collaboration and (iii) secure access to everything, including natural delegation of authority across levels and system boundaries. We show that the object capability paradigm will cover these issues, and present the first steps towards developing a simulation infrastructure based on these principles.
Stochastically triggered photospheric light variations reaching ~40 mmag peak-to-valley amplitudes have been detected in the O8 Iaf supergiant V973 Scorpii as the outcome of 2 months of high-precision time-resolved photometric observations with the BRIght Target Explorer (BRITE) nanosatellites. The amplitude spectrum of the time-series photometry exhibits a pronounced broad bump in the low-frequency regime (≲0.9 d⁻¹) where several prominent frequencies are detected. A time-frequency analysis of the observations reveals typical mode lifetimes of the order of 5-10 d. The overall features of the observed brightness amplitude spectrum of V973 Sco match well with those extrapolated from two-dimensional hydrodynamical simulations of convectively driven internal gravity waves randomly excited from deep in the convective cores of massive stars. An alternative or additional possible source of excitation from a sub-surface convection zone needs to be explored in future theoretical investigations.
Abdominal and general adiposity are independently associated with mortality, but there is no consensus on how best to assess abdominal adiposity. We compared the ability of alternative waist indices to complement body mass index (BMI) when assessing all-cause mortality. We used data from 352,985 participants in the European Prospective Investigation into Cancer and Nutrition (EPIC) and Cox proportional hazards models adjusted for other risk factors. During a mean follow-up of 16.1 years, 38,178 participants died. Combining in one model BMI and a strongly correlated waist index altered the association patterns with mortality, to a predominantly negative association for BMI and a stronger positive association for the waist index, while combining BMI with the uncorrelated A Body Shape Index (ABSI) preserved the association patterns. Sex-specific cohort-wide quartiles of waist indices correlated with BMI could not separate high-risk from low-risk individuals within underweight (BMI < 18.5 kg/m²) or obese (BMI ≥ 30 kg/m²) categories, while the highest quartile of ABSI separated 18-39% of the individuals within each BMI category, who had 22-55% higher risk of death. In conclusion, only a waist index independent of BMI by design, such as ABSI, complements BMI and enables efficient risk stratification, which could facilitate personalisation of screening, treatment and monitoring.
On 6 June 1982, Israel invaded Lebanon to fight the Palestinian Liberation Organization (PLO). Between August 1982 and February 1984, the US, France, Britain and Italy deployed a Multinational Force (MNF) to Beirut. Its task was to act as an interposition force to bolster the government and to bring peace to the people. The mission is often forgotten or merely remembered in connection with the bombing of the US Marines' barracks. However, an analysis of the Italian contingent shows that the MNF was not doomed to fail and could accomplish its task when operational and diplomatic efforts were coordinated. The Italian commander in Beirut, General Franco Angioni, followed a successful approach that sustained neutrality, respectful behaviour and minimal force, which resulted in a qualified success of the Italian efforts.
Abiotic stresses cause oxidative damage in plants. Here, we demonstrate that foliar application of an extract from the seaweed Ascophyllum nodosum, SuperFifty (SF), largely prevents paraquat (PQ)-induced oxidative stress in Arabidopsis thaliana. While PQ-stressed plants develop necrotic lesions, plants pre-treated with SF (i.e., primed plants) were unaffected by PQ. Transcriptome analysis revealed induction of reactive oxygen species (ROS) marker genes, genes involved in ROS-induced programmed cell death, and autophagy-related genes after PQ treatment. These changes did not occur in PQ-stressed plants primed with SF. In contrast, upregulation of several carbohydrate metabolism genes, growth, and hormone signaling as well as antioxidant-related genes were specific to SF-primed plants. Metabolomic analyses revealed accumulation of the stress-protective metabolite maltose and the tricarboxylic acid cycle intermediates fumarate and malate in SF-primed plants. Lipidome analysis indicated that those lipids associated with oxidative stress-induced cell death and chloroplast degradation, such as triacylglycerols (TAGs), declined upon SF priming. Our study demonstrated that SF confers tolerance to PQ-induced oxidative stress in A. thaliana, an effect achieved by modulating a range of processes at the transcriptomic, metabolic, and lipid levels.
A Biosensor for aromatic aldehydes comprising the mediator dependent PaoABC-Aldehyde oxidoreductase
(2013)
A novel aldehyde oxidoreductase (PaoABC) from Escherichia coli was utilized for the development of an oxygen-insensitive biosensor for benzaldehyde. The enzyme was immobilized in polyvinyl alcohol and currents were measured for aldehyde oxidation with different one- and two-electron mediators, with the highest sensitivity for benzaldehyde in the presence of hexacyanoferrate(III). The benzaldehyde biosensor was optimized with respect to mediator concentration, enzyme loading and pH using potassium hexacyanoferrate(III). The linear measuring range is 0.5-200 µM benzaldehyde. In correspondence with the substrate selectivity of the enzyme in solution, the biosensor revealed a preference for aromatic aldehydes and less effective conversion of aliphatic aldehydes. The biosensor is oxygen independent, which is a particularly attractive feature for application. It can be applied to detect contamination with benzaldehyde in solvents such as benzyl alcohol, where traces of benzaldehyde down to 0.0042% can be detected.
"Left" and "right" coordinates control our spatial behavior and even influence abstract thoughts. For number concepts, horizontal spatial-numerical associations (SNAs) have been widely documented: we associate few with left and many with right. Importantly, increments are universally coded on the right side even in preverbal humans and nonhuman animals, thus questioning the fundamental role of directional cultural habits, such as reading or finger counting. Here, we propose a biological, nonnumerical mechanism for the origin of SNAs on the basis of asymmetric tuning of animal brains for different spatial frequencies (SFs). The resulting selective visual processing predicts both universal SNAs and their context-dependence. We support our proposal by analyzing the stimuli used to document SNAs in newborns for their SF content. As predicted, the SFs contained in visual patterns with few versus many elements preferentially engage right versus left brain hemispheres, respectively, thus predicting left-versus rightward behavioral biases. Our "brain's asymmetric frequency tuning" hypothesis explains the perceptual origin of horizontal SNAs for nonsymbolic visual numerosities and might be extensible to the auditory domain.
We report the influence of different nutritional modes-autotrophy, mixotrophy, and heterotrophy-on the fatty acid and sterol composition of the freshwater flagellate Ochromonas sp. and discuss the ecological significance of our results with respect to the resource competition theory (rct). Polyunsaturated fatty acids (PUFAs) are the most efficient biochemical variable distinguishing between nutritional modes of Ochromonas sp. Decreasing concentrations of PUFAs were observed in the order autotrophs, mixotrophs, heterotrophs. In mixotrophs and heterotrophs, concentrations of saturated fatty acids were higher than those of monounsaturated fatty acids and PUFAs as a result of bacterivory. Stigmasterol was the main sterol in Ochromonas sp., regardless of nutritional mode. Mixotrophs showed higher growth rates than heterotrophs, which could not be explained by rct. Heterotrophs, in turn, exhibited higher growth rates than autotrophs, which were cultured under the same light conditions as mixotrophs. Mixotrophs can synthesize PUFAs, which are important for many physiological functions such as membrane permeability and growth. Thus, mixotrophy facilitated efficient growth as well as the ability to synthesize complex and essential biomolecules. These strong synergetic effects are due to the combination of biochemical benefits of heterotrophic and autotrophic metabolic pathways and cannot be predicted by rct.
Aldol reactions play an important role in organic synthesis, as they belong to the class of highly beneficial C-C-linking reactions. Aldol-type reactions can be efficiently and stereoselectively catalyzed by the enzyme 2-deoxy-D-ribose-5-phosphate aldolase (DERA) to gain key intermediates for pharmaceuticals such as atorvastatin. The immobilization of DERA would open the opportunity for a continuous operation mode, which gives access to an efficient, large-scale production of the respective organic intermediates. In this contribution, we synthesize and utilize DERA/polymer conjugates for the generation and fixation of a DERA-bearing thin film on a polymeric membrane support. The conjugation strongly increases the tolerance of the enzyme toward the industrially relevant substrate acetaldehyde, while UV-cross-linkable groups along the conjugated polymer chains provide the opportunity for covalent binding to the support. First, we provide a thorough characterization of the conjugates, followed by immobilization tests on representative, nonporous cycloolefinic copolymer supports. Finally, immobilization on the target supports constituted of polyacrylonitrile (PAN) membranes is performed, and the resulting enzymatically active membranes are implemented in a simple membrane module setup for a first assessment of biocatalytic performance in the continuous operation mode using the combination hexanal/acetaldehyde as the substrate.
Binding or catalysis? Both can be distinguished with a molecularly imprinted polymer (MIP) by the different patterns of heat generation. The catalytically active sites, like in the corresponding enzyme, generate a steady-state temperature increase. Thus, enzyme-like catalysis and antibody-analogue binding are analyzed simultaneously in a bifunctional MIP for the first time (see scheme).
Ecological communities are complex adaptive systems that exhibit remarkable feedbacks between their biomass and trait dynamics. Trait-based aggregate models cope with this complexity by focusing on the temporal development of the community’s aggregate properties such as its total biomass, mean trait and trait variance. They are based on particular assumptions about the shape of the underlying trait distribution, which is commonly assumed to be normal. However, ecologically important traits are usually restricted to a finite range, and empirical trait distributions are often skewed or multimodal. As a result, normal distribution-based aggregate models may fail to adequately represent the biomass and trait dynamics of natural communities. We resolve this mismatch by developing a new moment closure approach assuming the trait values to be beta-distributed. We show that the beta distribution captures important shape properties of both observed and simulated trait distributions, which cannot be captured by a Gaussian. We further demonstrate that a beta distribution-based moment closure can strongly enhance the reliability of trait-based aggregate models. We compare the biomass, mean trait and variance dynamics of a full trait distribution (FD) model to the ones of beta (BA) and normal (NA) distribution-based aggregate models, under different selection regimes. This way, we demonstrate under which general conditions (stabilizing, fluctuating or disruptive selection) different aggregate models are reliable tools. All three models predicted very similar biomass and trait dynamics under stabilizing selection yielding unimodal trait distributions with small standing trait variation. We also obtained an almost perfect match between the results of the FD and BA models under fluctuating selection, promoting skewed trait distributions and ongoing oscillations in the biomass and trait dynamics. 
In contrast, the NA model showed unrealistic trait dynamics and exhibited different alternative stable states, and thus a high sensitivity to initial conditions under fluctuating selection. Under disruptive selection, both aggregate models failed to reproduce the results of the FD model with the mean trait values remaining within their ecologically feasible ranges in the BA model but not in the NA model. Overall, a beta distribution-based moment closure strongly improved the realism of trait-based aggregate models.
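A beta distribution-based moment closure of the kind described above has to map aggregate moments back to distribution parameters. As a hypothetical illustration (not the authors' code), for a trait rescaled to [0, 1] the method-of-moments inversion, and the non-zero skewness a Gaussian closure cannot represent, can be sketched as:

```python
import math

def beta_params_from_moments(mean, var):
    """Method-of-moments Beta(a, b) parameters for a trait on [0, 1].

    Requires var < mean * (1 - mean); otherwise no beta distribution
    has these first two moments.
    """
    assert 0 < mean < 1 and 0 < var < mean * (1 - mean)
    common = mean * (1 - mean) / var - 1.0
    return mean * common, (1 - mean) * common

def beta_skewness(a, b):
    """Skewness of Beta(a, b); zero only in the symmetric case a == b."""
    return 2 * (b - a) * math.sqrt(a + b + 1) / ((a + b + 2) * math.sqrt(a * b))

# Hypothetical aggregate state: mean trait 0.3, trait variance 0.02.
a, b = beta_params_from_moments(0.3, 0.02)
print(round(a, 3), round(b, 3), round(beta_skewness(a, b), 3))
```

A normal closure would carry only the mean and variance forward with zero skewness; the inversion above lets the closed model track the right-skewed shapes that the abstract reports under fluctuating selection.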
Background: With increasing age, neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented into clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program that was developed by an interdisciplinary expert panel on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy older adults (n = 54) aged 65 to 80 years will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Falls Efficacy Scale - International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min (actual training time). One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls, may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated, feasible, safe, and easy-to-administer exercise program for fall prevention.
The estimation of a log-concave density on R is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We further present computationally more feasible approximations and both an empirical and a hierarchical Bayes approach. All priors are illustrated numerically via simulations.
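The contraction rate above is stated in Hellinger distance. As a minimal numerical sketch (a simple Riemann-sum discretization, not the paper's machinery), the metric can be evaluated for two log-concave densities, here the standard normal and the standard Laplace:

```python
import math

def hellinger(p, q, dx):
    """Hellinger distance between two densities sampled on a common grid
    with spacing dx, via H^2 = 0.5 * integral of (sqrt(p) - sqrt(q))^2."""
    h2 = 0.5 * dx * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    return math.sqrt(h2)

# Two log-concave examples: N(0, 1) vs. Laplace(0, 1) on [-10, 10].
xs = [i * 0.01 - 10 for i in range(2001)]
norm = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]
lap = [0.5 * math.exp(-abs(x)) for x in xs]
print(round(hellinger(norm, lap, 0.01), 3))
```

Because H is bounded by 1 and dominates total variation up to constants, it is the standard metric in which such posterior contraction results are proved.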
In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters have prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read text in a normal layout and in one of four manipulated layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
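The parameter-recovery check described above (fit the model to simulated data, verify the generating parameters are recovered) can be illustrated in a deliberately reduced form. The sketch below uses a toy one-parameter stand-in, not SWIFT itself: fixation durations assumed Gamma-distributed with a hypothetical rate, recovered via a grid-approximated posterior under a flat prior:

```python
import math
import random

random.seed(0)
# Toy stand-in for a reading model: fixation durations ~ Gamma(shape=2, rate=r).
# The rate value and sample size are hypothetical, for illustration only.
true_rate = 0.01  # per ms, i.e., mean duration 200 ms
data = [random.gammavariate(2.0, 1.0 / true_rate) for _ in range(500)]

def log_lik(rate):
    # Gamma(2, rate) log-density summed over the data (constants dropped).
    return sum(2 * math.log(rate) + math.log(d) - rate * d for d in data)

grid = [0.004 + 0.0002 * i for i in range(81)]  # flat prior over the grid
logp = [log_lik(r) for r in grid]
m = max(logp)
w = [math.exp(lp - m) for lp in logp]           # unnormalized posterior weights
post_mean = sum(r * wi for r, wi in zip(grid, w)) / sum(w)
print(round(post_mean, 4))
```

The posterior mean lands close to the generating rate, the qualitative behaviour the abstract reports for the full model; SWIFT itself has many interacting parameters and requires likelihood approximations and MCMC rather than a grid.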
We recorded large data sets of swimming trajectories of the soil bacterium Pseudomonas putida. Like other prokaryotic swimmers, P. putida exhibits a motion pattern dominated by persistent runs that are interrupted by turning events. An in-depth analysis of their swimming trajectories revealed that the majority of the turning events are characterized by an angle of phi(1) = 180 degrees (reversals). To a lesser extent, turning angles of phi(2) ≈ 0 degrees are also found. Remarkably, we observed that, upon a reversal, the swimming speed changes on average by a factor of two, a prominent feature of the motion pattern that, to our knowledge, has not been reported before. A theoretical model, based on the experimental values for the average run time and the rotational diffusion, recovers the mean-square displacement of P. putida if the two distinct swimming speeds are taken into account. Compared to a swimmer that moves with a constant intermediate speed, the mean-square displacement is strongly enhanced. We furthermore observed a negative dip in the directional autocorrelation at intermediate times, a feature that is only recovered in an extended model, where the nonexponential shape of the run-time distribution is taken into account.
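The reported enhancement of the mean-square displacement can be illustrated with a toy 1-D caricature of the model (ignoring rotational diffusion, which the full model includes): a run-and-reverse walker that alternates between two speeds differing by a factor of two spreads much faster than one moving at the constant intermediate speed, because the speed asymmetry leaves a net displacement after each reversal pair.

```python
import random

def simulate_msd(speeds, t_max, dt, rate, n_traj, seed=1):
    """Mean-square displacement of a 1-D run-and-reverse walker.

    At Poisson-distributed reversal events (mean run time 1/rate) the
    direction flips and the speed cycles through `speeds`.
    """
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_traj):
        x, direction, k, t = 0.0, 1, 0, 0.0
        t_next = rng.expovariate(rate)
        while t < t_max:
            if t >= t_next:  # reversal: flip direction, swap speed
                direction, k = -direction, (k + 1) % len(speeds)
                t_next += rng.expovariate(rate)
            x += direction * speeds[k] * dt
            t += dt
        msd += x * x
    return msd / n_traj

# Two alternating speeds (factor two, as observed) vs. the constant mean speed.
msd_two = simulate_msd([2.0, 1.0], t_max=50.0, dt=0.01, rate=1.0, n_traj=200)
msd_one = simulate_msd([1.5], t_max=50.0, dt=0.01, rate=1.0, n_traj=200)
print(msd_two > msd_one)
```

All parameter values here are arbitrary illustrative choices, not the measured run times or speeds of P. putida.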
Controlled conversion of leaf starch to sucrose at night is essential for the normal growth of Arabidopsis. The conversion involves the cytosolic metabolism of maltose to hexose phosphates via an unusual, multidomain protein with 4-glucanotransferase activity, DPE2, believed to transfer glucosyl moieties to a complex heteroglycan prior to their conversion to hexose phosphate via a cytosolic phosphorylase. The significance of this complex pathway is unclear; conversion of maltose to hexose phosphate in bacteria proceeds via a more typical 4-glucanotransferase that does not require a heteroglycan acceptor. It has recently been suggested that DPE2 generates a heterogeneous series of terminal glucan chains on the heteroglycan that acts as a glucosyl buffer to ensure a constant rate of sucrose synthesis in the leaf at night. Alternatively, DPE2 and/or the heteroglycan may have specific properties important for their function in the plant. To distinguish between these ideas, we compared the properties of DPE2 with those of the Escherichia coli glucanotransferase MalQ. We found that MalQ cannot use the plant heteroglycan as an acceptor for glucosyl transfer. However, experimental and modeling approaches suggested that it can potentially generate a glucosyl buffer between maltose and hexose phosphate because, unlike DPE2, it can generate polydisperse malto-oligosaccharides from maltose. Consistent with this suggestion, MalQ is capable of restoring an essentially wild-type phenotype when expressed in mutant Arabidopsis plants lacking DPE2. In light of these findings, we discuss the possible evolutionary origins of the complex DPE2-heteroglycan pathway.
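The claim that MalQ, unlike DPE2, can generate polydisperse malto-oligosaccharides from maltose can be caricatured with a toy disproportionation simulation (purely illustrative; real MalQ transfer preferences and kinetics are more specific than random segment transfer):

```python
import random
from collections import Counter

def disproportionate(pool, steps, rng):
    """Toy glucanotransferase disproportionation: repeatedly move a random
    glucan segment from one chain to another, conserving total glucose."""
    for _ in range(steps):
        i, j = rng.randrange(len(pool)), rng.randrange(len(pool))
        if i == j or pool[i] < 2:
            continue
        k = rng.randint(1, pool[i] - 1)  # leave at least one unit behind
        pool[i] -= k
        pool[j] += k
    return pool

rng = random.Random(42)
pool = [2] * 1000  # start from maltose only (chain length 2)
pool = disproportionate(pool, 20000, rng)
lengths = Counter(pool)
print(len(lengths) > 5, sum(pool) == 2000)
```

Starting from maltose alone, the chain-length distribution broadens into many distinct oligosaccharide lengths while glucose units are conserved, the qualitative behaviour that lets MalQ act as a glucosyl buffer without a heteroglycan acceptor.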
A bacterial effector counteracts host autophagy by promoting degradation of an autophagy component
(2022)
Beyond its role in cellular homeostasis, autophagy plays anti- and promicrobial roles in host-microbe interactions, both in animals and plants.
One prominent role of antimicrobial autophagy is to degrade intracellular pathogens or microbial molecules, in a process termed xenophagy.
Consequently, microbes evolved mechanisms to hijack or modulate autophagy to escape elimination.
Although well-described in animals, the extent to which xenophagy contributes to plant-bacteria interactions remains unknown.
Here, we provide evidence that Xanthomonas campestris pv. vesicatoria (Xcv) suppresses host autophagy by utilizing the type-III effector XopL. XopL interacts with and degrades the autophagy component SH3P2 via its E3 ligase activity to promote infection.
Intriguingly, XopL is targeted for degradation by defense-related selective autophagy mediated by NBR1/Joka2, revealing a complex antagonistic interplay between XopL and the host autophagy machinery.
Our results implicate plant antimicrobial autophagy in the depletion of a bacterial virulence factor and unravel an unprecedented pathogen strategy to counteract defense-related autophagy in plant-bacteria interactions.
A 3-D crustal shear wave velocity model and Moho map below the Semail Ophiolite, eastern Arabia
(2022)
The Semail Ophiolite in eastern Arabia is the largest and best-exposed slice of oceanic lithosphere on land. Detailed knowledge of the tectonic evolution of the shallow crust, in particular during and after ophiolite obduction in Late Cretaceous times, is contrasted by few constraints on physical and compositional properties of the middle and lower continental crust below the obducted units. The role of inherited, pre-obduction crustal architecture remains therefore unaccounted for in our understanding of crustal evolution and the present-day geology. Based on seismological data acquired during a 27-month campaign in northern Oman, Ambient Seismic Noise Tomography and Receiver Function analysis provide for the first time a 3-D radially anisotropic shear wave velocity (V-S) model and a consistent Moho map below the iconic Semail Ophiolite. The model highlights deep crustal boundaries that segment the eastern Arabian basement in two distinct units. The previously undescribed Western Jabal Akhdar Zone separates Arabian crust with typical continental properties and a thickness of ~40-45 km in the northwest from a compositionally different terrane in the southeast that is interpreted as a terrane accreted during the Pan-African orogeny in Neoproterozoic times. East of the Ibra Zone, another deep crustal boundary, crustal thickness decreases to 30-35 km and very high lower crustal V-S suggest large-scale mafic intrusions into, and possible underplating of, the Arabian continental crust that occurred most likely during Permian breakup of Pangea. Mafic reworking is sharply bounded by the (upper crustal) Semail Gap Fault Zone, northwest of which no such high velocities are found in the crust. Topography of the Oman Mountains is supported by a mild crustal root, and Moho depth below the highest topography, the Jabal Akhdar Dome, is ~42 km.
Radial anisotropy is robustly resolved in the upper crust and aids in discriminating dipping allochthonous units from autochthonous sedimentary rocks that are indistinguishable by isotropic V-S alone. Lateral thickness variations of the ophiolite highlight the Haylayn Ophiolite Massif on the northern flank of Jabal Akhdar Dome and the Hawasina Window as the deepest reaching unit. Ophiolite thickness is ~10 km in the southern and northern massifs, and ≤ 5 km elsewhere.
The main objective of this work is to investigate the evolution of massive stars, and the interplay between them and the ionized gas for a sample of local metal-poor Wolf-Rayet galaxies.
Optical integral field spectroscopy was used in combination with multi-wavelength radio data.
Combining optical and radio data, we locate Wolf-Rayet stars and supernova remnants across the Wolf-Rayet galaxies to study the spatial correlation between them. This study will shed light on massive star formation and its feedback, and will help us to better understand distant star-forming galaxies.