Wages and wage dynamics directly affect individuals' and families' daily lives. In this article, we show how major theoretical branches of research on wages and inequality, namely cumulative advantage (CA), human capital theory, and the lifespan perspective, can be integrated into a coherent statistical framework and analyzed with multilevel dynamic structural equation modeling (DSEM). This opens up a new way to empirically investigate the mechanisms that drive growing inequality over time. We demonstrate the new approach using longitudinal, representative U.S. data (NLSY-79). Analyses revealed fundamental between-person differences in both initial wages and autoregressive wage growth rates across the lifespan. Only 0.5% of the sample experienced a "strict" CA and unbounded wage growth, whereas most individuals showed logarithmic wage growth over time. Adolescent intelligence and adult educational levels explained substantial heterogeneity in both parameters. We discuss how DSEM may help researchers study CA processes and related developmental dynamics, and we highlight the extensions and limitations of the DSEM framework.
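The person-level process described above can be illustrated with a toy simulation. This is a minimal sketch of an AR(1) wage process with random effects in the level and the autoregressive rate; all parameter values are invented for illustration, not the fitted DSEM estimates:

```python
import random

def simulate_wages(n_persons=5, n_waves=20, seed=1):
    """Toy AR(1) wage trajectories with person-specific random effects."""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n_persons):
        base = rng.gauss(10.0, 2.0)   # person-specific initial wage level
        phi = rng.gauss(0.6, 0.1)     # person-specific autoregressive rate
        drift = rng.gauss(0.5, 0.2)   # person-specific growth increment
        wage = base
        path = [wage]
        for _ in range(n_waves - 1):
            # phi < 1: deviations from the person's level decay, so growth
            # levels off; phi >= 1 would correspond to unbounded ("strict" CA)
            # growth, which only 0.5% of the sample showed
            wage = base + phi * (wage - base) + drift
            path.append(wage)
        trajectories.append(path)
    return trajectories
```

With phi around 0.6, simulated trajectories flatten toward a person-specific asymptote, mirroring the bounded, roughly logarithmic growth reported for most individuals.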
A dust cloud around Ganymede maintained by hypervelocity impacts of interplanetary micrometeoroids
(2000)
Past climatic change can be reconstructed from sedimentary archives by a number of proxies. However, few methods exist to directly estimate hydrological changes, and even fewer yield quantitative data, impeding our understanding of the timing, magnitude and mechanisms of hydrological changes. Here we present a novel approach based on δ²H values of sedimentary lipid biomarkers in combination with plant physiological modeling to extract quantitative information on past changes in relative humidity. Our initial application to an annually laminated lacustrine sediment sequence from western Europe deposited during the Younger Dryas cold period revealed relative humidity changes of up to 15% over sub-centennial timescales, leading to major ecosystem changes, in agreement with palynological data from the region. We show that by combining organic geochemical methods and mechanistic plant physiological models on well-characterized lacustrine archives it is possible to extract quantitative ecohydrological parameters from sedimentary lipid biomarker δ²H data.
A cationic surfactant containing a spiropyran unit is prepared, exhibiting dual-responsive adjustability of its surface-active characteristics. The switching mechanism of the system relies on the reversible conversion of the non-ionic spiropyran (SP) to a zwitterionic merocyanine (MC) and can be controlled by adjusting the pH value and via light, resulting in a pH-dependent photoactivity: while the compound possesses a pronounced difference in surface activity between both forms under acidic conditions, this behavior is suppressed at neutral pH. The underlying switching processes are investigated in detail, and a thermodynamic explanation based on a combination of theoretical and experimental results is provided. This complex stimuli-responsive behavior enables remote control of colloidal systems. To demonstrate its applicability, the surfactant is utilized for the pH-dependent manipulation of oil-in-water emulsions.
A drop of immunity
(2021)
Background: Soy protein is effective in lowering plasma cholesterol, LDL cholesterol and triglyceride concentrations. It has not been conclusively answered whether, and to what extent, other soy constituents may also contribute to this effect. Objective: To investigate the change in blood lipid levels after application of two soy-based supplements containing soy protein either without (SuproSoy®) or with (Abacor®) soy fiber and phospholipids in a randomized, placebo-controlled, three-armed study. Methods: 121 hypercholesterolemic adults (66 females, 55 males) were recruited and randomly assigned to one of three treatments. Over 8 weeks they received daily either 25 g soy protein (as a component of the supplements Abacor® or SuproSoy®) or 25 g milk protein (as a component of the placebo). Serum lipids were measured at baseline and after 4, 6 and 8 weeks. Results: After 8 weeks of supplementation, total cholesterol levels were reduced by 8.0 ± 9.6% (Abacor®) and 3.4 ± 8.3% (SuproSoy®); LDL cholesterol levels by 9.7 ± 11.7% (Abacor®) and 5.4 ± 11.6% (SuproSoy®); and apolipoprotein B levels by 6.9 ± 14.6% (Abacor®) and 4.0 ± 12.4% (SuproSoy®). Serum levels of HDL cholesterol and triglycerides remained unchanged. Conclusions: A preparation combining isolated soy protein with soy fibers and phospholipids showed twice the lipid-lowering effect of a preparation containing isolated soy protein alone. Such soy-based supplements can therefore be useful in reducing cardiovascular risk.
There is an increasing interest in fusing data from heterogeneous sources. Combining data sources increases the utility of existing datasets, generating new information and creating services of higher quality. A central issue in working with heterogeneous sources is data migration: In order to share and process data in different engines, resource intensive and complex movements and transformations between computing engines, services, and stores are necessary.
Muses is a distributed, high-performance data migration engine able to interconnect distributed data stores by forwarding, transforming, repartitioning, or broadcasting data among distributed engines' instances in a resource-, cost-, and performance-adaptive manner. As such, it performs seamless information sharing across all participating resources in a standard, modular manner. We show an overall improvement of 30% for pipelining jobs across multiple engines, even when the overhead of Muses is included in the execution time. This performance gain implies that Muses can be used to optimise large pipelines that leverage multiple engines.
The paper presents a novel approach to explaining word order variation in the early Germanic languages. Initial observations about verb placement as a device marking types of rhetorical relations made on data from Old High German (cf. Hinterhölzl & Petrova 2005) are now reconsidered on a larger scale and compared with evidence from other early Germanic languages. The paper claims that the identification of information-structural domains in a sentence is best achieved by taking into account the interaction between the pragmatic features of discourse referents and properties of discourse organization.
The uptake of resources from the environment is a vital process for all organisms. Many experimental studies have revealed that the rate at which this process occurs depends critically on the resource concentration, a relationship called the "functional response." However, whether the concentration of the consumer normally affects the functional response has been the subject of a long-standing, predominantly theoretical, debate in ecology. Here we present an experimental test between the alternative hypotheses that food uptake depends either only on the resource concentration or on both the resource and the consumer concentrations. In short-term laboratory experiments, we measured the uptake of radioactively labeled, unicellular green algae (Monoraphidium minutum, resource) by the rotifer Brachionus calyciflorus (consumer) for varying combinations of resource and consumer concentrations. We found that the food uptake by Brachionus depended on the algal concentration, with the relationship best described by a Holling type 3 functional response. We detected significant consumer effects on the functional response only at an extraordinarily high Brachionus density (~125 rotifers/mL), which by far exceeds concentrations normally encountered in the field. We conclude that consumer-dependent food uptake by planktonic rotifers is a phenomenon that can occur under extreme conditions, but probably plays a minor role in natural environments.
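For readers unfamiliar with the terminology, the Holling type 3 response that best described the uptake data has a standard sigmoidal form; the parameter values below are purely illustrative, not the fitted estimates:

```python
def holling_type3(R, a=1.0, h=0.1):
    """Per-capita uptake rate f(R) = a*R^2 / (1 + a*h*R^2).

    R: resource concentration; a: attack coefficient; h: handling time.
    The values of a and h here are invented for illustration.
    """
    return a * R ** 2 / (1.0 + a * h * R ** 2)
```

The squared concentration term makes uptake accelerate at low resource levels and saturate toward 1/h at high ones, which distinguishes type 3 from the hyperbolic type 2 response.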
A direct competitive homogeneous immunoassay for progesterone - the Redox Quenching Immunoassay
(2012)
A direct competitive amperometric immunoassay format for the detection of haptens and proteins was developed. The method is based on the quenching of the electroactivity of ferrocenium, which is coupled to the antigen and used as the primary reporter, upon binding to a monoclonal anti-ferrocenium antibody, which is coupled to the detection antibody and used as a secondary reporter. A separation-free progesterone immunoassay with a lower detection limit of 1 ng·mL⁻¹ (3.18 nmol·L⁻¹) in 1:2 diluted blood serum was realised by combining two bifunctional conjugates, a ferrocenium-PEG-progesterone tracer and a bioconjugate of one anti-progesterone and one anti-ferrocenium antibody. The immune complex is formed within 30 s upon addition of progesterone, resulting in a total analysis time of 1.5 min.
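The reported detection limit can be checked by a simple unit conversion using the molar mass of progesterone (~314.46 g/mol, a standard reference value, not a figure from the abstract):

```python
PROGESTERONE_MOLAR_MASS = 314.46  # g/mol, standard reference value

def ng_per_ml_to_nmol_per_l(conc_ng_ml, molar_mass_g_mol):
    """Convert a mass concentration in ng/mL to a molar one in nmol/L.

    1 ng/mL = 1e-6 g/L; dividing by the molar mass (g/mol) gives mol/L,
    and multiplying by 1e9 expresses the result in nmol/L, so the whole
    conversion reduces to multiplying by 1000/M.
    """
    return conc_ng_ml * 1000.0 / molar_mass_g_mol
```

For 1 ng·mL⁻¹ this yields approximately 3.18 nmol·L⁻¹, matching the value quoted in the abstract.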
fMRI studies of reward find increased neural activity in the ventral striatum and medial prefrontal cortex (mPFC), whereas other regions, including the dorsolateral prefrontal cortex (dlPFC), anterior cingulate cortex (ACC), and anterior insula, are activated when anticipating aversive exposure. Although these data suggest differential activation during anticipation of pleasant or unpleasant exposure, they also arise in the context of different paradigms (e.g., preparation for reward vs. threat of shock) and participants. To determine overlapping and unique regions active during emotional anticipation, we compared neural activity during anticipation of pleasant or unpleasant exposure in the same participants. Cues signalled the upcoming presentation of erotic/romantic, violent, or everyday pictures while BOLD activity during the 9-s anticipatory period was measured using fMRI. The ventral striatum and a ventral mPFC subregion were activated when anticipating pleasant, but not unpleasant or neutral, pictures, whereas activation in other regions was enhanced when anticipating appetitive or aversive scenes.
A different class of refugee: university scholarships and developmentalism in late 1960s Africa
(2022)
Using documents assembled in connection with the 1967 Conference on the Legal, Economic and Social Aspects of African Refugee Problems, this article discusses African refugee higher-education discourses in the 1960s at the level of international organizations, volunteer agencies, and government representatives. Education and development history have recently been studied together, but this article focuses on the history of refugee higher education, which, it argues, needs to be understood within the development framework of human-capital theory, meant to support political pan-African concerns for a decolonized continent and merged with humanitarian arguments to create a hybrid form of humanitarian developmentalism. The article zooms in on higher-education scholarships, above all for refugees from Southern Africa, as a means of support for human-capital development. It shows that refugee higher education was both a result and a driver of increased international exchanges, as evidenced at the 1967 conference.
In order to explore the behavioral mechanisms underlying aggregation of foragers on local resource patches, it is necessary to manipulate the location, quality and quantity of food patches. This requires careful control over the conditions in the foraging arena, which may be a challenging task in the case of aquatic resource-consumer systems, like that of freshwater zooplankton feeding on suspended algal cells. We present an experimental tool designed to aid behavioral ecologists in exploring the consequences of resource characteristics for zooplankton aggregation behavior and movement decisions under conditions where the boundaries and characteristics (quantity and quality) of food patches can be standardized. The aggregation behavior of Daphnia magna and D. galeata x hyalina was tested in relation to i) the presence or absence of food or ii) food quality, where algae of high or low nutrient (phosphorus) content were offered in distinct patches. Individuals of both Daphnia species chose tubes containing food patches and D. galeata x hyalina also showed a preference towards food patches of high nutrient content. We discuss how the described equipment complements other behavioral approaches providing a useful tool to understand animal foraging decisions in environments with heterogeneous resource distributions.
A detailed X-ray investigation of zeta Puppis - II. The variability on short and long timescales
(2013)
Stellar winds are a crucial component of massive stars, but their exact properties still remain uncertain. To shed some light on this subject, we have analyzed an exceptional set of X-ray observations of zeta Puppis, one of the closest and brightest massive stars. The sensitive light curves that were derived reveal two major results. On the one hand, a slow modulation of the X-ray flux (with a relative amplitude of up to 15% over 16 hr in the 0.3-4.0 keV band) is detected. Its characteristic timescale cannot be determined with precision, but ranges from one to several days. It could be related to corotating interaction regions, known to exist in zeta Puppis from UV observations. Hour-long changes, linked to flares or to the pulsation activity, are not observed in the last decade covered by the XMM observations; the 17 hr tentative period, previously reported in a ROSAT analysis, is not confirmed either and is thus transient, at best. On the other hand, short-term changes are surprisingly small (<1% relative amplitude for the total energy band). In fact, they are compatible solely with the presence of Poisson noise in the data. This surprisingly low level of short-term variability, in view of the embedded wind-shock origin, requires a very high fragmentation of the stellar wind, for both absorbing and emitting features (>10⁵ parcels, by comparison with a two-dimensional wind model). This is the first time that constraints have been placed on the number of clumps in an O-type star wind from X-ray observations.
In Near Edge X-ray Absorption Fine Structure (NEXAFS) spectroscopy, X-ray photons are used to excite tightly bound core electrons to low-lying unoccupied orbitals of the system. This technique offers insight into the electronic structure of the system as well as useful structural information. In this work, we apply NEXAFS to two kinds of imidazolium-based ionic liquids ([CₙC₁im]⁺[NTf₂]⁻ and [C₄C₁im]⁺[I]⁻). A combination of measurements and quantum chemical calculations of C K and N K NEXAFS resonances is presented. The simulations, based on the transition potential density functional theory method (TP-DFT), reproduce all characteristic features observed in the experiment. Furthermore, a detailed assignment of resonance features to excitation centers (carbon or nitrogen atoms) leads to a consistent interpretation of the spectra.
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and processing electronics for computing the absorbed dose and dose rate. Among a wide range of existing radiation sensors, Radiation-Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement and a proven record of successful exploitation in space missions. It has been shown that RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as absorbed dose and dose rate monitor. This reduces the cost of implementation, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress and fault-tolerant), according to the application requirements and radiation conditions.
Sightings and migration patterns of 65 bean and 65 white-fronted geese are reported. These geese were tagged and serologically screened. Nineteen of the 53 birds sighted had serologic evidence of Newcastle disease. The migration patterns of the wild geese provided further evidence that the main resting and wintering sites of migratory waterfowl are likely to be important for the inter- and intraspecies transmission of avian diseases.
Quantum theory (QT) is usually formulated in terms of abstract mathematical postulates involving Hilbert spaces, state vectors and unitary operators. In this paper, we show that the full formalism of QT can instead be derived from five simple physical requirements, based on elementary assumptions regarding preparations, transformations and measurements. This is very similar to the usual formulation of special relativity, where two simple physical requirements (the principles of relativity and light-speed invariance) are used to derive the mathematical structure of Minkowski space-time. Our derivation provides insights into the physical origin of the structure of quantum state spaces (including a group-theoretic explanation of the Bloch ball and its three-dimensionality) and suggests several natural possibilities to construct consistent modifications of QT.
The phase behavior of a dendritic amphiphile containing a Newkome-type dendron as the hydrophilic moiety and a cholesterol unit as the hydrophobic segment is investigated at the air-liquid interface. The amphiphile forms stable monomolecular films at the air-liquid interface on different subphases. Furthermore, the mineralization of calcium phosphate beneath the monolayer at different calcium and phosphate concentrations versus mineralization time shows that needles form at low calcium and phosphate concentrations, whereas flakes and spheres dominate at higher concentrations. Energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron diffraction confirm the formation of calcium phosphate. High-resolution transmission electron microscopy and electron diffraction confirm the predominant formation of octacalcium phosphate and hydroxyapatite. The data also indicate that the final products form via a complex multistep reaction, including an association step in which nano-needles aggregate into larger flake-like objects.
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. By defining weak boundary values of solutions to such equations, we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree of current importance applies to Lagrangian problems.
The molecular biomarker composition of two sediment cores from Sanabria Lake (NW Iberian Peninsula) and a survey of modern plants in the watershed provide a reconstruction of past vegetation and landscape dynamics since deglaciation. During a proglacial stage in Lake Sanabria (prior to 14.7 cal ka BP), very low biomarker concentrations and carbon preference index (CPI) values of ~1 suggest that the n-alkanes could have derived from eroded ancient sediment sources or older organic matter with a high degree of maturity. During the Late Glacial (14.7-11.7 cal ka BP) and the Holocene (last 11.7 cal ka BP), intervals with higher biomarker and triterpenoid concentrations (high %nC₂₉ and %nC₃₁ alkanes), higher CPI and average chain length (ACL), and lower Paq (proportion of aquatic plants) are indicative of a major contribution of vascular land plants from a more forested watershed (e.g. the Mid-Holocene period, 7.0-4.0 cal ka BP). Lower biomarker concentrations (high %nC₂₇ alkanes) and lower CPI and ACL values correspond to short phases with decreased allochthonous contribution to the lake, matching centennial-scale periods of regional forest decline (e.g. 4-3 ka BP, Roman deforestation after 2.0 ka, and some phases of the LIA, seventeenth to nineteenth centuries). Human activities in the watershed were significant during early medieval times (1.3-1.0 cal ka BP) and since 1960 CE, in both cases associated with relatively higher-productivity stages in the lake (lower biomarker and triterpenoid concentrations, high %nC₂₃ and %nC₃₁ respectively, lower ACL and CPI values and higher Paq). The lipid composition of Sanabria Lake sediments indicates a major allochthonous (watershed-derived) contribution to the organic matter budget since deglaciation, and a dominant oligotrophic status during the lake's history.
The study constrains the climate and anthropogenic forcings and watershed versus lake sources in organic matter accumulation processes and helps to design conservation and management policies in mountain, oligotrophic lakes.
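The n-alkane indices used above (CPI, ACL, Paq) are standard ratios over homologue concentrations. A minimal sketch follows, with chain-length windows following common usage (odd C25-C33 for ACL/CPI; the Ficken et al. 2000 definition for Paq) and invented example concentrations:

```python
def acl(conc, chains=range(25, 34, 2)):
    """Average chain length over the odd C25-C33 n-alkanes."""
    total = sum(conc.get(n, 0.0) for n in chains)
    return sum(n * conc.get(n, 0.0) for n in chains) / total

def cpi(conc, odd=range(25, 34, 2)):
    """Carbon preference index: odd-over-even predominance (~1 for
    mature/petrogenic sources, >>1 for fresh vascular plant waxes)."""
    odd_sum = sum(conc.get(n, 0.0) for n in odd)
    even_lo = sum(conc.get(n - 1, 0.0) for n in odd)
    even_hi = sum(conc.get(n + 1, 0.0) for n in odd)
    return 0.5 * (odd_sum / even_lo + odd_sum / even_hi)

def paq(conc):
    """Proportion of aquatic plants: (C23+C25)/(C23+C25+C29+C31)."""
    aquatic = conc.get(23, 0.0) + conc.get(25, 0.0)
    return aquatic / (aquatic + conc.get(29, 0.0) + conc.get(31, 0.0))
```

Each function takes a dict mapping chain length to concentration; the exact windows vary somewhat between studies, so these defaults are an assumption rather than the paper's specification.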
Multimodal representation learning has gained increasing importance in various real-world multimedia applications. Most previous approaches focused on exploring inter-modal correlation by learning a common or intermediate space in a conventional way, e.g. Canonical Correlation Analysis (CCA). These works neglected the fusion of multiple modalities at a higher semantic level. In this paper, inspired by the success of deep networks in multimedia computing, we propose a novel unified deep neural framework for multimodal representation learning. To capture the high-level semantic correlations across modalities, we adopt deep learning features as the image representation and topic features as the text representation. For joint model learning, a 5-layer neural network is designed and enforced with supervised pre-training in the first 3 layers for intra-modal regularization. Extensive experiments on the benchmark Wikipedia and MIR Flickr 25K datasets show that our approach achieves state-of-the-art results compared with both shallow and deep models in multimodal and cross-modal retrieval.
In today's production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby leveraging system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. We were thereby able to demonstrate its multi-objective optimisation capabilities compared with conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design shows promise in reducing overall system complexity and facilitates quick and seamless integration into other scenarios.
Earthquake localization is both a necessity within the field of seismology and a prerequisite for further analysis such as source studies and hazard assessment. Traditional localization methods often rely on manually picked phases. We present an alternative approach using deep learning that, once trained, can predict hypocenter locations efficiently. In seismology, neural networks have typically been trained with either single-station records or features extracted from the waveforms beforehand. We use three-component full-waveform records of multiple stations directly. This means no information is lost during preprocessing, and preparation of the data does not require expert knowledge. The first convolutional layer of our deep convolutional neural network (CNN) becomes sensitive to features that characterize the waveforms it is trained on. We show that this layer can therefore additionally be used as an event detector. As a test case, we trained our CNN using more than 2000 earthquake swarm events from West Bohemia, recorded by nine local three-component stations. The CNN successfully located 908 validation events with standard deviations of 56.4 m in the east-west, 123.8 m in the north-south, and 136.3 m in the vertical direction compared to a double-difference relocated reference catalog. The detector is sensitive to events with magnitudes down to ML = -0.8 with 3.5% false positive detections.
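The point that a trained first convolutional layer doubles as an event detector can be illustrated with a single 1-D filter: its activation peaks wherever the waveform matches the filter's shape. This is a schematic sketch, not the authors' network; the filter weights below are hand-picked rather than learned:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation of a waveform with one filter."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A filter roughly matched to an impulsive onset responds strongest
# where the (invented) waveform contains the spike:
waveform = [0.0, 0.1, -0.1, 2.0, -1.5, 0.3, 0.0]
onset_filter = [1.0, -0.75]  # hand-picked, mimics the spike shape
activation = conv1d(waveform, onset_filter)
```

Thresholding the peak activation gives a crude detector; in the actual CNN, many such filters are learned jointly with the localization task.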
A very sensitive X-ray investigation of the giant HII region N11 in the Large Magellanic Cloud was performed using the Chandra X-ray Observatory. The 300 ks observation reveals X-ray sources with luminosities down to 10³² erg s⁻¹, increasing the number of known point sources in the field by more than a factor of five. Among these detections are 13 massive stars (3 compact groups of massive stars, 9 O stars, and one early B star) with log(L_X/L_bol) ~ -6.5 to -7, which may suggest that they are highly magnetic or colliding-wind systems. On the other hand, the stacked signal for regions corresponding to undetected O stars yields log(L_X/L_bol) ~ -7.3, i.e., an emission level comparable to similar Galactic stars despite the lower metallicity. Other point sources coincide with 11 foreground stars, 6 late-B/A stars in N11, and many background objects. This observation also uncovers the extent and detailed spatial properties of the soft, diffuse emission regions, but the presence of some hotter plasma in their spectra suggests contamination by the unresolved stellar population.
We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport-relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are evaluated for an exemplary GDL sample in the uncompressed state and at a compression of 30 vol.%. To mimic the geometry of the flow field, we employed a compression punch with an integrated channel-rib profile. It turned out that the GDL material is homogeneously compressed under the ribs but much less compressed beneath the channel. GDL fibers extend far into the channel volume, where they might interfere with the convective gas transport and the removal of liquid water from the cell. (C) 2015 AIP Publishing LLC.
Faced with the increasing needs of companies, optimal dimensioning of IT hardware is becoming challenging for decision makers. In analytical infrastructures, a highly evolving environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model which allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
Acetanilides can be deacetylated and diazotized in situ, and subsequently used in Pd-catalyzed coupling reactions without isolation of the diazonium intermediate. Heck reactions, Suzuki cross-coupling reactions, and a Pd-catalyzed [2+2+1] cycloaddition have been investigated as terminating C-C bond-forming steps of this one-flask sequence. The sequence does not require the exchange of solvents or removal of by-products between the individual steps, but proceeds by addition of reagents and catalysts in due course.
In this study we investigate a dayside, midlatitude plasma depletion (DMLPD) encountered on 22 May 2014 by the Swarm and GRACE satellites, as well as ground-based instruments. The DMLPD was observed near Puerto Rico by Swarm near 10 LT under quiet geomagnetic conditions at altitudes of 475-520 km and magnetic latitudes of ~25°-30°. The DMLPD was also revealed in total electron content observations by the Saint Croix station and by the GRACE satellites (430 km) near 16 LT and near the same geographic location. The unique Swarm constellation enables the horizontal tilt of the DMLPD to be measured (35° clockwise from the geomagnetic east-west direction). Ground-based airglow images at Arecibo showed no evidence for plasma density depletions during the night prior to this dayside event. The C/NOFS equatorial satellite showed evidence for very modest plasma density depletions that had rotated into the morningside from the nightside. However, the equatorial depletions do not appear related to the DMLPD, for which the magnetic apex height is about 2500 km. The origins of the DMLPD are unknown, but may be related to gravity waves.
Compound natural hazards like El Niño events cause high damage to society, and managing them requires reliable risk assessments. Damage modelling is a prerequisite for quantitative risk estimation, yet many procedures still rely on expert knowledge, and empirical studies investigating damage from compound natural hazards hardly exist. A nationwide building survey in Peru after the El Niño event of 2017, which caused intense rainfall, ponding water, flash floods and landslides, enables us to apply data-mining methods for statistical groundwork, using explanatory features generated from remote sensing products and open data. We separate regions of different dominant characteristics through unsupervised clustering, and investigate feature importance rankings for classifying damage via supervised machine learning. Besides the expected effect of precipitation, the classification algorithms select the topographic wetness index as the most important feature, especially in low-elevation areas. The slope length and steepness factor ranks high for mountains and canyons. Partial dependence plots further hint at amplified vulnerability in rural areas. An example of an empirical damage probability map, developed with a random forest model, is provided to demonstrate technical feasibility.
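The feature-importance rankings mentioned above rest on how much splitting on a candidate feature reduces class impurity. A minimal sketch of that scoring for a binary feature, using Gini impurity and invented labels (not the survey data):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 damage labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def split_gain(feature, labels):
    """Impurity reduction from splitting the labels on a binary feature.

    Random-forest importances aggregate gains like this over many trees
    and split points; this shows only the single-split building block.
    """
    left = [y for x, y in zip(feature, labels) if x == 0]
    right = [y for x, y in zip(feature, labels) if x == 1]
    n = len(labels)
    return (gini(labels)
            - len(left) / n * gini(left)
            - len(right) / n * gini(right))
```

A feature that perfectly separates damaged from undamaged buildings scores the full parent impurity, while an uninformative one scores zero.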
We consider an extension of the Standard Model within the framework of Noncommutative Geometry. The model is based on an older model [C. A. Stephan, Phys. Rev. D 79, 065013 (2009)] which extends the Standard Model by new fermions, a new U(1) gauge group and, crucially, a new scalar field which couples to the Higgs field. This new scalar field allows the Higgs mass to be lowered from ~170 GeV, as predicted by the Spectral Action for the Standard Model, to a value of 120-130 GeV. The shortcoming of the previous model lay in its inability to meet all the constraints on the gauge couplings implied by the Spectral Action. These shortcomings are cured in the present model, which also features a "dark sector" containing fermions and scalar particles.
A cytoplasmically inherited chlorophyll-deficient mutant of barley (Hordeum vulgare) termed cytoplasmic line 3 (CL3), displaying a viridis (homogeneously light-green colored) phenotype, has previously been shown to be affected by elevated temperatures. In this article, biochemical, biophysical, and molecular approaches were used to study the CL3 mutant under different temperature and light conditions. The results lead to the conclusion that impaired assembly of photosystem I (PSI) under higher temperatures and certain light conditions is the primary cause of the CL3 phenotype. A mutation in a noncoding region (intron 1) of the mutant ycf3 gene compromises the splicing of ycf3 transcripts, particularly at elevated temperature, resulting in defective synthesis of Ycf3, a chaperone involved in PSI assembly. The defective PSI assembly causes severe photoinhibition and degradation of PSII.
Ten square-based pyramidal molybdenum complexes with different sulfur donor ligands, that is, a variety of dithiolenes and sulfides, were prepared, which mimic coordination motifs of the molybdenum cofactors of molybdenum-dependent oxidoreductases. The model compounds were investigated by Mo K-edge X-ray absorption spectroscopy (XAS) and (with one exception) their molecular structures were analyzed by X-ray diffraction to derive detailed information on bond lengths and geometries of the first coordination shell of molybdenum. Only small variations in Mo=O and Mo-S bond lengths and their respective coordination angles were observed for all complexes, including those containing Mo(CO)₂ or Mo(μ-S)₂Mo motifs. XAS analysis (edge energy) revealed higher relative oxidation levels of the molybdenum ion in compounds with innocent sulfur-based ligands relative to those in dithiolene complexes, which are known to exhibit noninnocence, that is, donation of substantial electron density from ligand to metal. In addition, longer average Mo-S and Mo=O bonds and consequently lower ν(Mo=O) stretching frequencies in the IR spectra were observed for complexes with dithiolene-derived ligands. The results emphasize that the noninnocent character of the dithiolene ligand influences the electronic structure of the model compounds, but does not significantly affect their metal coordination geometry, which is largely determined by the Mo(IV) or Mo(V) ion itself. The latter conclusion also holds for the molybdenum site geometries in the oxidized Mo(VI) cofactor of DMSO reductase and the reduced Mo(IV) cofactor of arsenite oxidase. The innocent behavior of the dithiolene molybdopterin ligands observed in the enzymes is likely to be related to cofactor-protein interactions.
Protection of natural or semi-natural ecosystems is an important part of societal strategies for maintaining biodiversity, ecosystem services, and achieving overall sustainable development. The assessment of multiple emerging land use trade-offs is complicated by the fact that land use changes occur and have consequences at local, regional, and even global scale. Outcomes also depend on the underlying socio-economic trends. We apply a coupled, multi-scale modelling system to assess an increase in nature protection areas as a key policy option in the European Union (EU). The main goal of the analysis is to understand the interactions between policy-induced land use changes across different scales and sectors under two contrasting future socio-economic pathways. We demonstrate how complementary insights into land system change can be gained by coupling land use models for agriculture, forestry, and urban areas for Europe, in connection with other world regions. The simulated policy case of nature protection shows how the allocation of a certain share of total available land to newly protected areas, with specific management restrictions imposed, may have a range of impacts on different land-based sectors until the year 2040. Agricultural land in Europe is slightly reduced, which is partly compensated for by higher management intensity. As a consequence of higher costs, total calorie supply per capita is reduced within the EU. While wood harvest is projected to decrease, carbon sequestration rates increase in European forests. At the same time, imports of industrial roundwood from other world regions are expected to increase. Some of the aggregate effects of nature protection have very different implications at the local to regional scale in different parts of Europe. Due to nature protection measures, agricultural production is shifted from more productive land in Europe to on average less productive land in other parts of the world. 
This increases, at the global level, the allocation of land resources for agriculture, leading to a decrease in tropical forest areas, reduced carbon stocks, and higher greenhouse gas emissions outside of Europe. The integrated modelling framework provides a method to assess the land use effects of a single policy option while accounting for the trade-offs between locations, and between regional, European, and global scales.
Diagnostics of autoimmune diseases involve screening patient samples for autoantibodies against various antigens. To ensure the quality of diagnostic assays, a calibrator is needed in each assay system. Various calibrators have been described, such as recombinant human monoclonal antibodies and chimeric antibodies against the autoantigens of interest. A less cost-intensive, and also more representative, possibility covering different targets on the antigens is the utilization of polyclonal sera from other species. Nevertheless, detecting human autoantibodies as well as a calibration reagent containing antibodies from other species in one assay constitutes a challenge in terms of assay calibration. We therefore developed a cross-reactive monoclonal antibody which binds human as well as rabbit sera with similar affinities in the nanomolar range. We tested our monoclonal antibody S38CD11B12 successfully in the commercial Serazym® Anti-Cardiolipin-beta 2-GPI IgG/IgM assay and could thereby prove the eligibility of S38CD11B12 as a detection antibody in autoimmune diagnostic assays using rabbit-derived sera as reference material.
A protected derivative of (3R,4R)-hexa-1,5-diene-3,4-diol, a conveniently accessible C2-symmetric building block, undergoes single or double cross metathesis with methyl acrylate. The cross metathesis products are amenable to stereoselective conjugate addition reactions and can be converted into either gamma-butyrolactones or gamma-lactams.
This study compares the duration and first two formants (F1 and F2) of 11 nominal monophthongs and five nominal diphthongs in Standard Southern British English (SSBE) and a Northern English dialect. F1 and F2 trajectories were fitted with parametric curves using the discrete cosine transform (DCT): the zeroth DCT coefficient represented the formant trajectory mean, and the first DCT coefficient represented the magnitude and direction of formant trajectory change, characterizing vowel inherent spectral change (VISC). Cross-dialectal comparisons involving these measures revealed significant differences for the phonologically back monophthongs /D, , , u:/ and also /3z:/ and the diphthongs /eI, e, aI, I/. Most cross-dialectal differences are in the zeroth DCT coefficients, suggesting that formant trajectory means tend to characterize such differences, while first DCT coefficient differences were more numerous for diphthongs. With respect to VISC, the most striking differences are that /u:/ is considerably more diphthongized in the Northern dialect and that the F2 trajectory of /e/ proceeds in opposite directions in the two dialects. Cross-dialectal differences were found to be largely unaffected by the consonantal context in which the vowels were produced. The implications of the results are discussed in relation to VISC, consonantal context effects and speech perception. (c) 2014 Acoustical Society of America.
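The DCT parameterization of formant trajectories can be illustrated in a few lines. Note that the exact DCT convention (and hence the scaling and sign of the coefficients) varies across studies; the orthonormal type-II transform used below is an assumption, and the trajectory values are invented.

```python
import numpy as np
from scipy.fft import dct

# Hypothetical rising F2 trajectory (Hz) sampled at 10 equidistant points.
f2 = np.linspace(1200.0, 1800.0, 10)
c = dct(f2, type=2, norm="ortho")

# With the orthonormal type-II DCT, c[0] equals sqrt(N) times the
# trajectory mean, and the sign of c[1] encodes the direction of change
# (negative for a rising trajectory under this convention).
mean_from_c0 = c[0] / np.sqrt(len(f2))
print(mean_from_c0, c[1] < 0)
```

Comparing c[0] across dialects thus compares trajectory means, while c[1] captures the magnitude and direction of spectral change, which is the distinction the abstract draws.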
Despite the fact that development aid has broadened from economic growth theory to include human and social capital, there is no general agreement as to its benefits. This critical review and analysis of the academic and institutional development aid discourse identifies some major shortcomings. The dominance of economics at the expense of politics, and the imposition of neoliberal development aid conditionalities, act as barriers to socio-economic development in aid-recipient countries. An inference is offered to recast development aid through reconciliation, within critical frameworks, of different sides of the political spectrum.
Monsoon systems around the world are governed by the so-called moisture-advection feedback. Here we show that, in a minimal conceptual model, this feedback implies a critical threshold with respect to the atmospheric specific humidity q(o) over the ocean adjacent to the monsoon region. If q(o) falls short of this critical value q(o)(c), monsoon rainfall over land cannot be sustained. Such a case could occur if evaporation from the ocean was reduced, e.g. due to low sea surface temperatures. Within the restrictions of the conceptual model, we estimate q(o)(c) from present-day reanalysis data for four major monsoon systems, and demonstrate how this concept can help understand abrupt variations in monsoon strength on orbital timescales as found in proxy records.
Nonlinear force-free field (NLFFF) models are thought to be viable tools for investigating the structure, dynamics, and evolution of the coronae of solar active regions. In a series of NLFFF modeling studies, we have found that NLFFF models are successful in application to analytic test cases, and relatively successful when applied to numerically constructed Sun-like test cases, but they are less successful in application to real solar data. Different NLFFF models have been found to have markedly different field line configurations and to provide widely varying estimates of the magnetic free energy in the coronal volume, when applied to solar data. NLFFF models require consistent, force-free vector magnetic boundary data. However, vector magnetogram observations sampling the photosphere, which is dynamic and contains significant Lorentz and buoyancy forces, do not satisfy this requirement, thus creating several major problems for force-free coronal modeling efforts. In this paper, we discuss NLFFF modeling of NOAA Active Region 10953 using Hinode/SOT-SP, Hinode/XRT, STEREO/SECCHI-EUVI, and SOHO/MDI observations, and in the process illustrate three such issues we judge to be critical to the success of NLFFF modeling: (1) vector magnetic field data covering larger areas are needed so that more electric currents associated with the full active regions of interest are measured, (2) the modeling algorithms need a way to accommodate the various uncertainties in the boundary data, and (3) a more realistic physical model is needed to approximate the photosphere-to-corona interface in order to better transform the forced photospheric magnetograms into adequate approximations of nearly force-free fields at the base of the corona. We make recommendations for future modeling efforts to overcome these as yet unsolved problems.
Transverse dispersion, or tracer spreading orthogonal to the mean flow direction, which is relevant, e.g., for quantifying biodegradation of contaminant plumes or mixing of reactive solutes, has been studied in the literature less than its longitudinal counterpart. Inferring transverse dispersion coefficients from field experiments is a difficult and error-prone task, requiring a spatial resolution of solute plumes that is not easily achievable in applications. In the absence of field data, it is a questionable common practice to set transverse dispersivities as a fraction of the longitudinal one, with the ratio 1/10 being the most prevalent. We collected estimates of field-scale transverse dispersivities from existing publications and explored possible scale relationships as guidance criteria for applications. Our investigation showed that a large number of estimates available in the literature are of low reliability and should be discarded from further analysis. The remaining reliable estimates are formation-specific, span three orders of magnitude, and do not show any clear scale dependence on the distance traveled by the plume. The ratios with the longitudinal dispersivity are also site-specific and vary widely. The reliability of transverse dispersivities depends significantly on the type of field experiment and the method of data analysis. In applications where transverse dispersion plays a significant role, inference of transverse dispersivities should be part of site characterization, with the transverse dispersivity estimated as an independent parameter rather than related heuristically to the longitudinal dispersivity.
The Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is compared to newly discovered contourite drifts in the Maldives. Like the drift deposits in the Maldives, the Orfento Formation fills a channel and builds a Miocene delta-shaped and mounded sedimentary body in the basin that is similar in size to the approximately 350 km² coarse-grained bioclastic Miocene delta drifts in the Maldives. The composition of the bioclastic wedge of the Orfento Formation is also exclusively bioclastic debris sourced from the shallow-water areas and reworked clasts of the Orfento Formation itself. In the nearly mud-free succession, age-diagnostic fossils are sparse. The depositional textures vary from wackestone to float-rudstone and breccia/conglomerates, but rocks with grainstone and rudstone textures are the most common facies. In the channel, lensoid convex-upward breccias, cross-cutting channelized beds and thick grainstone lobes with abundant scours indicate alternating erosion and deposition from a high-energy current. In the basin, the mounded sedimentary body contains lobes with a divergent progradational geometry. The lobes are built by decametre-thick composite megabeds consisting of sigmoidal clinoforms that typically have a channelized topset, a grainy foreset and a fine-grained bottomset with abundant irregular angular clasts. Up to 30 m thick channels filled with intraformational breccias and coarse grainstones pinch out downslope between the megabeds. In the distal portion of the wedge, stacked grainstone beds with foresets and reworked intraclasts document continuous sediment reworking and migration. The bioclastic wedge of the Orfento Formation has been variously interpreted as a succession of sea-level controlled slope deposits, a shoaling shoreface complex, or a carbonate tidal delta.
Current-controlled delta drifts in the Maldives, however, offer a new interpretation because of their similarity in architecture and composition. These similarities include: (i) a feeder channel opening into the basin; (ii) an excavation moat at the exit of the channel; (iii) an overall mounded geometry with an apex that is in shallower water depth than the source channel; (iv) progradation of stacked lobes; (v) channels that pinch out in a basinward direction; and (vi) smaller channelized intervals that are arranged in a radial pattern. As a result, the Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is here interpreted as a carbonate delta drift.
Children's participation in legal proceedings affecting them personally has been gaining importance. So far, a primary research concern has been how children experience their participation in court proceedings. However, little is known about the child's voice itself: Are children able to clearly express their wishes, and if so, what do they say in child protection cases? In this study, we extracted information about children's statements from court file data of 220 child protection cases in Germany. We found that 182 children were asked about their wishes. The majority of the statements found came either from reports of the guardians ad litem or from judicial records of the child hearings. Using content analysis, three main aspects of the statements were extracted: wishes concerning the main place of residence, wishes about whom to have contact with or not, and children granting decision-making authority to someone else. Children's main focus was on their parents, but others (e.g., relatives and foster care providers) were also mentioned. Intercoder agreement was substantial. Making sure that child hearings are as informative as possible is in the child's best interest. Therefore, the categories developed herein might help professionals to ask questions more precisely relevant to the child.
The literature contains a sizable number of publications where weather types are used to decompose climate shifts or trends into contributions of frequency and mean of those types. They are all based on the product rule, that is, a transformation of a product of sums into a sum of products, the latter providing the decomposition. While there is nothing to argue about the transformation itself, its interpretation as a climate shift or trend decomposition is bound to fail. While the case of a climate shift may be viewed as an incomplete description of a more complex behaviour, trend decomposition indeed produces bogus trends, as demonstrated by a synthetic counterexample with well-defined trends in type frequency and mean. Consequently, decompositions based on that transformation, be it for climate shifts or trends, must not be used.
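The product-rule transformation itself, which the abstract says is not in question, is an exact identity for a climate shift between two periods. A minimal numeric check with two hypothetical weather types (all values illustrative):

```python
import numpy as np

# Two weather types, two climate periods: type frequencies f and type
# means m (illustrative values only).
f1, f2 = np.array([0.3, 0.7]), np.array([0.45, 0.55])   # periods 1 and 2
m1, m2 = np.array([10.0, 20.0]), np.array([12.0, 19.0])

shift = (f2 * m2).sum() - (f1 * m1).sum()   # change in the overall mean

# Product-rule transformation: the shift equals a "frequency" term plus
# a "mean" term built from differences and period averages.
df, dm = f2 - f1, m2 - m1
fbar, mbar = (f1 + f2) / 2.0, (m1 + m2) / 2.0
freq_term = (df * mbar).sum()
mean_term = (fbar * dm).sum()

print(shift, freq_term + mean_term)   # identical by construction
```

The identity holds exactly by algebra; the abstract's criticism concerns the further step of reading freq_term and mean_term as physically meaningful frequency and mean contributions, an interpretation the paper's synthetic counterexample shows to be unjustified, especially for trends.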
The manuscript describes the phytochemical investigation of the roots, leaves and stem bark of Millettia lasiantha, resulting in the isolation of twelve compounds, including two new isomeric isoflavones, lascoumestan and lascoumaronochromone. The structures of the new compounds were determined using different spectroscopic techniques.
Eclipsing systems of massive stars allow one to explore the properties of their components in great detail. We perform a multi-wavelength, non-LTE analysis of the three components of the massive multiple system delta Ori A, focusing on the fundamental stellar properties, stellar winds, and X-ray characteristics of the system. The primary's distance-independent parameters turn out to be characteristic for its spectral type (O9.5 II), but usage of the Hipparcos parallax yields surprisingly low values for the mass, radius, and luminosity. Consistent values follow only if delta Ori lies at about twice the Hipparcos distance, in the vicinity of the sigma-Orionis cluster. The primary and tertiary dominate the spectrum and leave the secondary only marginally detectable. We estimate the V-band magnitude difference between primary and secondary to be ΔV ≈ 2.8 mag. The inferred parameters suggest that the secondary is an early B-type dwarf (≈B1 V), while the tertiary is an early B-type subgiant (≈B0 IV). We find evidence for rapid turbulent velocities (~200 km s⁻¹) and wind inhomogeneities, partially optically thick, in the primary's wind. The bulk of the X-ray emission likely emerges from the primary's stellar wind (log L_X/L_bol ≈ -6.85), initiating close to the stellar surface at R_0 ~ 1.1 R_*. Accounting for clumping, the mass-loss rate of the primary is found to be log Ṁ ≈ -6.4 (M_⊙ yr⁻¹), which agrees with hydrodynamic predictions and provides a consistent picture along the X-ray, UV, optical, and radio spectral domains.
We report on both high-precision photometry from the Microvariability and Oscillations of Stars (MOST) space telescope and ground-based spectroscopy of the triple system delta Ori A, consisting of a binary O9.5 II + early-B (Aa1 and Aa2) with P = 5.7 days and a more distant tertiary (O9 IV, P > 400 years). These data were collected in concert with X-ray spectroscopy from the Chandra X-ray Observatory. Thanks to continuous coverage for three weeks, the MOST light curve reveals clear eclipses between Aa1 and Aa2 for the first time in non-phased data. From the spectroscopy, we have a well-constrained radial velocity (RV) curve of Aa1. While we are unable to recover RV variations of the secondary star, we are able to constrain several fundamental parameters of this system and determine an approximate mass of the primary using apsidal motion. We also detected second-order modulations at 12 separate frequencies, with spacings indicative of tidally influenced oscillations. Such spacings have never been seen in a massive binary, making this system one of only a handful of such binaries that show evidence for tidally induced pulsations.
We present time-resolved and phase-resolved variability studies of an extensive X-ray high-resolution spectral data set of the delta Ori Aa binary system. The four observations, obtained with Chandra ACIS HETGS, have a total exposure time of ≈479 ks and provide nearly complete binary phase coverage. Variability of the total X-ray flux in the 5-25 Å range is confirmed, with a maximum amplitude of about ±15% within a single ≈125 ks observation. Periods of 4.76 and 2.04 days are found in the total X-ray flux, as well as an apparent overall increase in the flux level throughout the nine-day observational campaign. Using 40 ks contiguous spectra derived from the original observations, we investigate the variability of emission line parameters and ratios. Several emission lines are shown to be variable, including S XV, Si XIII, and Ne IX. For the first time, variations of the X-ray emission line widths as a function of binary phase are found in a binary system, with the smallest widths at φ = 0.0, when the secondary delta Ori Aa2 is at inferior conjunction. Using 3D hydrodynamic modeling of the interacting winds, we relate the emission line width variability to the presence of a wind cavity created by a wind-wind collision, which is effectively void of embedded wind shocks and is carved out of the X-ray-producing primary wind, thus producing phase-locked X-ray variability.
We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of delta Ori A. Delta Ori A is actually a triple system that includes the nearest massive eclipsing spectroscopic binary, delta Ori Aa, the only such object that can be observed with little phase-smearing with the Chandra gratings. Since the fainter star, delta Ori Aa2, has a much lower X-ray luminosity than the brighter primary (delta Ori Aa1), delta Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around delta Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneously with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3-0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
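A minimal sketch of the trajectory-based committor estimate described above, using a 1D double-well potential (rather than the paper's triple-well example) and uniform 1D cells as the nearest-centre analogue of a Voronoi tessellation. All parameters are illustrative, and this naive estimator carries none of the paper's error bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Langevin (Smoluchowski) dynamics in the double-well potential
# V(x) = (x^2 - 1)^2, simulated with Euler-Maruyama.
def grad_v(x):
    return 4.0 * x * (x**2 - 1.0)

dt, beta, n_steps = 1e-3, 1.0, 400_000
x = np.empty(n_steps)
x[0] = 0.0
noise = rng.normal(size=n_steps - 1) * np.sqrt(2.0 * dt / beta)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - grad_v(x[i]) * dt + noise[i]

in_a, in_b = x < -0.9, x > 0.9          # metastable sets A and B

# For each time step, record whether B is hit before A afterwards
# (backward sweep over the trajectory).
hits_b_first = np.zeros(n_steps, dtype=bool)
state = False
for i in range(n_steps - 1, -1, -1):
    if in_b[i]:
        state = True
    elif in_a[i]:
        state = False
    hits_b_first[i] = state

last = np.nonzero(in_a | in_b)[0][-1]   # drop the unlabelled tail
x_lab, lab = x[:last + 1], hits_b_first[:last + 1]

# Uniform 1D cell centres; nearest-centre assignment is the 1D analogue
# of a Voronoi tessellation of state space.
centres = np.linspace(-1.2, 1.2, 25)
spacing = centres[1] - centres[0]
cell = np.clip(np.rint((x_lab - centres[0]) / spacing).astype(int),
               0, len(centres) - 1)

# Committor estimate: fraction of visits to each cell that reach B first.
committor = np.array([lab[cell == k].mean() for k in range(len(centres))])
print(np.round(committor[12], 2))       # near 0.5 at the barrier top
```

The estimate rises from 0 in A to 1 in B and needs only sample trajectory data and the cell partition, with no Markov state model in between, which is the point of the method.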
A conundrum of trends
(2022)
This comment is meant to reiterate two warnings: one applies to the uncritical use of ready-made (openly available) program packages, and one to the estimation of trends in serially correlated time series. Both warnings apply to the recent publication by Lischeid et al. on lake-level trends in Germany.
Design flood estimation is an essential part of flood risk assessment. Commonly applied are flood frequency analyses and design storm approaches, while the derived flood frequency using continuous simulation has been receiving more attention recently. In this study, a continuous hydrological modelling approach on an hourly time scale is applied, driven by a multi-site weather generator in combination with a k-nearest neighbour resampling procedure based on the method of fragments. The derived 100-year flood estimates in 16 catchments in Vorarlberg (Austria) are compared to (a) flood frequency analysis based on observed discharges and (b) a design storm approach. Besides the peak flows, the corresponding runoff volumes are analysed. The spatial dependence structure of the synthetically generated flood peaks is validated against observations. It can be demonstrated that the continuous modelling approach achieves plausible results and shows a large variability in runoff volume across the flood events.
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing during ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to show altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach consists of two stages, separating the update of the individual patient parameters from the update of the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued-learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing it in everyday therapeutic use.
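The continued-learning idea can be caricatured with a one-dimensional normal-normal model. The real MIPD setting uses nonlinear mixed-effects PK/PD models (here, of paclitaxel and neutrophils), so everything below, from the conjugate likelihood to the numeric values, is a simplified assumption; the sketch only shows how population knowledge can be updated from per-patient summaries rather than raw TDM data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hierarchy: individual parameters phi_j ~ N(mu, tau^2), TDM
# observations y_jk ~ N(phi_j, sigma^2); tau and sigma assumed known.
sigma, tau, mu_true = 0.8, 0.5, 2.0

# Population-level belief about mu, updated one patient at a time.
mu_mean, mu_var = 0.0, 4.0         # vague initial prior N(0, 2^2)

for _ in range(200):               # stream of patients
    phi = rng.normal(mu_true, tau)           # this patient's parameter
    y = rng.normal(phi, sigma, size=4)       # their TDM measurements
    # Only a summary of the patient's data enters the update:
    # ybar_j ~ N(mu, tau^2 + sigma^2 / n_j) after integrating out phi_j.
    s2 = tau**2 + sigma**2 / len(y)
    prec = 1.0 / mu_var + 1.0 / s2
    mu_mean = (mu_mean / mu_var + y.mean() / s2) / prec
    mu_var = 1.0 / prec

print(round(mu_mean, 2))           # converges toward mu_true = 2.0
```

Because each update consumes only a patient-level summary, the same recursion could run across hospitals without pooling individual measurements, which mirrors the data-sharing argument in the abstract.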
We analyse whether a stellar atmosphere model computed with the code CMFGEN provides an optimal description of the stellar observations of WR 136 and simultaneously reproduces the nebular observations of NGC 6888, such as the ionization degree, which is modelled with the pyCloudy code. All the observational material available (far- and near-UV and optical spectra) was used to constrain such models. We found that the stellar temperature T∗ at τ = 20 can lie in a range between 70 000 and 110 000 K, but when using the nebula as an additional restriction, we found that stellar models with T∗ ∼ 70 000 K represent the best solution for both the star and the nebula.
Exendin-4 is a pharmaceutical peptide used in the control of insulin secretion. Structural information on exendin-4 and related peptides especially on the level of quaternary structure is scarce. We present the first published association equilibria of exendin-4 directly measured by static and dynamic light scattering. We show that exendin-4 oligomerization is pH dependent and that these oligomers are of low compactness. We relate our experimental results to a structural hypothesis to describe molecular details of exendin-4 oligomers. Discussion of the validity of this hypothesis is based on NMR, circular dichroism and fluorescence spectroscopy, and light scattering data on exendin-4 and a set of exendin-4 derived peptides. The essential forces driving oligomerization of exendin-4 are helix–helix interactions and interactions of a conserved hydrophobic moiety. Our structural hypothesis suggests that key interactions of exendin-4 monomers in the experimentally supported trimer take place between a defined helical segment and a hydrophobic triangle constituted by the Phe22 residues of the three monomeric subunits. Our data rationalize that Val19 might function as an anchor in the N-terminus of the interacting helix-region and that Trp25 is partially shielded in the oligomer by C-terminal amino acids of the same monomer. Our structural hypothesis suggests that the Trp25 residues do not interact with each other, but with C-terminal Pro residues of their own monomers.
Apoptotic death of cells damaged by genotoxic stress requires regulatory input from surrounding tissues. The C. elegans scaffold protein KRI-1, ortholog of mammalian KRIT1/CCM1, permits DNA damage-induced apoptosis of cells in the germline by an unknown cell non-autonomous mechanism. We reveal that KRI-1 exists in a complex with CCM-2 in the intestine to negatively regulate the ERK-5/MAPK pathway. This allows the KLF-3 transcription factor to facilitate expression of the SLC39 zinc transporter gene zipt-2.3, which functions to sequester zinc in the intestine. Ablation of KRI-1 results in reduced zinc sequestration in the intestine, inhibition of IR-induced MPK-1/ERK1 activation, and apoptosis in the germline. Zinc localization is also perturbed in the vasculature of krit1(-/-) zebrafish, and SLC39 zinc transporters are mis-expressed in Cerebral Cavernous Malformations (CCM) patient tissues. This study provides new insights into the regulation of apoptosis by cross-tissue communication, and suggests a link between zinc localization and CCM disease.
A Conjunction of Mysteries
(2016)
A conformational study of N-acetyl glucosamine derivatives utilizing residual dipolar couplings
(2013)
A conformational study of N-acetyl glucosamine derivatives utilizing residual dipolar couplings
(2011)
The conformational analyses of six non-rigid N-acetyl glucosamine (NAG) derivatives employing residual dipolar couplings (RDCs) and NOEs together with molecular dynamics (MD) simulations are presented. Due to internal dynamics, we had to consider different conformer ratios existing in solution. The good quality of the correlation between theoretically and experimentally obtained RDCs supports the correctness of the calculated conformers, even if the ratios derived from the MD simulations do not exactly match the experimental data. Where possible, the results were compared to previously published data and commented on.
The minima on the potential energy surface of 1,2-bis(o-carboxyphenoxy)ethane (CPE) molecule in its electronic ground state were searched by a molecular dynamics simulation performed with MM2 force field. For each of the found minimum-energy conformers, the corresponding equilibrium geometry, charge distribution, HOMO-LUMO energy gap, force field, vibrational normal modes and associated IR and Raman spectral data were determined by means of the density functional theory (DFT) based electronic structure calculations carried out by using B3LYP method and various Pople-style basis sets. The obtained theoretical data confirmed the significant effects of the intra- and inter-molecular hydrogen bonding interactions on the conformational structure, force field, and group vibrations of the molecule. The same data have also revealed that two of the determined stable conformers, both of which exhibit pseudo-crown structure, are considerably more favorable in energy to the others and accordingly provide the major contribution to the experimental spectra of CPE. In the light of the improved vibrational spectral data obtained within the "SQM FF" methodology and "Dual Scale Factors" approach for the monomer and dimer forms of these two conformers, a reliable assignment of the fundamental bands observed in the experimental room-temperature IR and Raman spectra of the molecule was given, and the sensitivities of its group vibrations to conformation, substitution and dimerization were discussed.
A confocal set-up is presented that improves micro-XRF and XAFS experiments with high-pressure diamond-anvil cells (DACs). In this set-up, a probing volume is defined by the focus of the incoming synchrotron radiation beam and that of a polycapillary X-ray half-lens with a very long working distance, which is placed in front of the fluorescence detector. This arrangement enhances the quality of the fluorescence and XAFS spectra, and thus the sensitivity for detecting elements at low concentrations. It efficiently suppresses signal from outside the sample chamber, which stems from elastic and inelastic scattering of the incoming beam by the diamond anvils as well as from excitation of fluorescence from the body of the DAC.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models in recent years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the statistical significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which may provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
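The two NRI components that such confidence ellipses are drawn around can be sketched in a few lines. This is an illustrative computation with invented data and function names, not code from the study:

```python
# Illustrative sketch: event and non-event components of the Net
# Reclassification Improvement (NRI). For events, NRI = P(up) - P(down),
# where "up" means the new model assigns a higher risk category; for
# non-events the sign is flipped, since down-classification is desirable.

def component_nri(old_cats, new_cats):
    """Net proportion of up-classifications for one outcome group."""
    up = sum(1 for o, n in zip(old_cats, new_cats) if n > o)
    down = sum(1 for o, n in zip(old_cats, new_cats) if n < o)
    return (up - down) / len(old_cats)

# Hypothetical risk categories (0 = low, 1 = medium, 2 = high)
events_old = [0, 1, 1, 2, 0, 1]
events_new = [1, 1, 2, 2, 1, 1]      # events should be reclassified upward
nonevents_old = [1, 0, 2, 1, 0, 1]
nonevents_new = [0, 0, 1, 1, 0, 2]   # non-events should move downward

nri_events = component_nri(events_old, events_new)
nri_nonevents = -component_nri(nonevents_old, nonevents_new)  # sign flipped
```

Plotting the pair (event NRI, non-event NRI) with a confidence ellipse, as the abstract proposes, keeps the two directions of reclassification separate instead of collapsing them into a single sum.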
Time-delayed collection field (TDCF) and bias-amplified charge extraction (BACE) are applied to as-prepared and annealed poly(3-hexylthiophene):[6,6]-phenyl C71 butyric acid methyl ester (P3HT:PCBM) blends coated from chloroform. Despite large differences in fill factor, short-circuit current, and power conversion efficiency, both blends exhibit a negligible dependence of photogeneration on the electric field and strictly bimolecular recombination (BMR) with a weak dependence of the BMR coefficient on charge density. Drift-diffusion simulations are performed using the measured coefficients and mobilities, taking into account bimolecular recombination and the possible effects of surface recombination. The excellent agreement between the simulation and the experimental data for an intensity range covering two orders of magnitude indicates that a field-independent generation rate and a density-independent recombination coefficient describe the current-voltage characteristics of the annealed P3HT:PCBM devices, while the performance of the as-prepared blend is shown to be limited by space charge effects due to a low hole mobility. Finally, even though the bimolecular recombination coefficient is small, surface recombination is found to be a negligible loss mechanism in these solar cells.
Technological advancements are giving rise to the fourth industrial revolution - Industry 4.0 - characterized by the mass employment of smart objects in highly reconfigurable and thoroughly connected industrial product-service systems. The purpose of this paper is to propose a theory-based knowledge dynamics model in the smart grid scenario that would provide a holistic view on the knowledge-based interactions among smart objects, humans, and other actors as an underlying mechanism of value co-creation in Industry 4.0. A multi-loop and three-layer - physical, virtual, and interface - model of knowledge dynamics is developed by building on the concept of ba - an enabling space for interactions and the emergence of knowledge. The model depicts how big data analytics are just one component in unlocking the value of big data, whereas the tacit engagement of humans-in-the-loop - their sense-making and decision-making - is needed for insights to be evoked from analytics reports and customer needs to be met.
Environmental heterogeneity is a major determinant of plant population dynamics. In semi-arid Kalahari savannas, heterogeneity is created by savanna structure, i.e. by the spatial arrangement and temporal dynamics of woody plant and open grassland microsites. We formulate a conceptual model describing the effects of savanna dynamics on the population dynamics of the animal-dispersed shrub Grewia flava. From empirical results we derive model rules describing effects of savanna structure on several processes in Grewia's life cycle. By formulating the model, we summarise existing information on Grewia demography and identify gaps in this knowledge. Despite a number of such gaps, the model can be used to make certain quantitative predictions. As an example, we apply the model to investigate the role of seed dispersal in Grewia encroachment on rangelands. Model results show that cattle promote encroachment by depositing substantial numbers of seeds in open areas, where Grewia is otherwise dispersal-limited. Finally, we draw some general conclusions about Grewia's life history and population dynamics. Under natural conditions, concentrated seed deposition under woody plants appears to be a key process causing the observed association between Grewia and other woody plants. Furthermore, low rates of recruitment and high adult survival result in slow-motion dynamics of Grewia populations. As a consequence, Grewia populations interact with savanna dynamics on long temporal and short to intermediate spatial scales.
Land-use concepts provide decision support for the most efficient usage options according to sustainable development and multifunctionality requirements. However, developments in landscape-related, agricultural production schemes are primarily driven by economic benefits. As a result, most agricultural land-use concepts tackle particular problems or interests and lack a systemic perspective. We therefore discuss a conceptual model for future site-specific agricultural land use with an inbuilt requirement for adequate experimental sites, enabling monitoring systems for a new generation of ecosystem models and new approaches to science-stakeholder interactions.
BACKGROUND: Work capacity demands describe which psychological capacities are required in a job. Assessing psychological work capacity demands is of specific importance when mental health problems at work endanger work ability. Exploring psychological work capacity demands is the basis for mental hazard analysis and rehabilitative action, e.g. in terms of work adjustment. OBJECTIVE: This is the first study investigating psychological work capacity demands in rehabilitation patients with and without mental disorders. METHODS: A structured interview on psychological work capacity demands (Mini-ICF-Work; Muschalla, 2015; Linden et al., 2015) was conducted with 166 rehabilitation patients of working age. All interviews were done by a state-licensed, socio-medically trained psychotherapist. Inter-rater reliability was assessed by determining agreement with independent co-ratings in 65 interviews. For discriminant validity purposes, participants filled in the Short Questionnaire for Work Analysis (KFZA; Prumper et al., 1994). RESULTS: Different psychological work capacity demands were of importance in different professional fields. The Mini-ICF-Work capacity dimensions reflect different aspects than the KFZA. Patients with mental disorders had been on sick leave longer and had a worse work ability prognosis than patients without mental disorders, although both groups reported similar work capacity demands. CONCLUSIONS: Psychological work demands - which are highly relevant for work ability prognosis and work adjustment processes - can be explored and differentiated in terms of psychological capacity demands.
The past three decades of policy process studies have seen the emergence of a clear intellectual lineage with regard to complexity. Implicitly or explicitly, scholars have employed complexity theory to examine the intricate dynamics of collective action in political contexts. However, the methodological counterparts to complexity theory, such as computational methods, are rarely used and, even if they are, they are often detached from established policy process theory. Building on a critical review of the application of complexity theory to policy process studies, we present and implement a baseline model of policy processes using the logic of coevolving networks. Our model suggests that an actor's influence depends on their environment and on exogenous events facilitating dialogue and consensus-building. Our results validate previous opinion dynamics models and generate novel patterns. Our discussion provides ground for further research and outlines the path for the field to achieve a computational turn.
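The logic of coevolving networks described above can be illustrated with a generic opinion dynamics sketch. This is not the authors' baseline model; the update rule, parameters, and network setup are all invented for illustration: actors nudge their opinion toward a random neighbor's (consensus-building), and occasionally rewire a tie to a more like-minded actor (coevolution of network and opinions).

```python
import random

# Minimal coevolving-network opinion dynamics sketch (all values invented).
def step(opinions, neighbors, mu=0.3, rewire_p=0.1, rng=random):
    n = len(opinions)
    for i in range(n):
        if not neighbors[i]:
            continue
        j = rng.choice(sorted(neighbors[i]))
        # consensus-building: move opinion i toward neighbor j's opinion
        opinions[i] += mu * (opinions[j] - opinions[i])
        if rng.random() < rewire_p:
            # coevolution: drop the tie to j, link to the most similar non-neighbor
            candidates = [k for k in range(n) if k != i and k not in neighbors[i]]
            if candidates:
                best = min(candidates, key=lambda k: abs(opinions[k] - opinions[i]))
                neighbors[i].discard(j); neighbors[j].discard(i)
                neighbors[i].add(best); neighbors[best].add(i)

random.seed(1)
n = 20
opinions = [random.random() for _ in range(n)]
neighbors = [set() for _ in range(n)]
for i in range(n):                      # start from a ring network
    neighbors[i].add((i + 1) % n)
    neighbors[(i + 1) % n].add(i)
for _ in range(200):
    step(opinions, neighbors)
spread = max(opinions) - min(opinions)  # opinion range after coevolution
```

Because each update is a convex combination and rewiring preserves the edge count, the total degree stays constant while opinions contract, which is the kind of invariant-plus-dynamics structure such baseline models exploit.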
We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. These response accuracies are used to identify, for each participant, the best-fitting parameters corresponding to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation. We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract average IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants.
To provide physically based wind modelling for wind erosion research at the regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in the C language based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model offers convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model in simulating basic fluid processes. A round obstacle experiment was designed to check whether the model could simulate the influence of an obstacle on the wind field. Results show that measured and simulated wind fields are highly correlated, and that the wind model can simulate both the basic processes of the wind and the influence of the obstacle on the wind field. These results demonstrate the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field was successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments lay a solid foundation for the prediction and assessment of wind erosion at the regional scale.
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non-canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT-R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed slower than normal, and intermittent deficiency as extra noise in the procedural memory, so that the parsing steps are noisier than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence-picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence-picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia.
The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.
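The two operationalizations described above (slowed processing as scaled step times, intermittent deficiency as extra retrieval noise) can be caricatured in a few lines. This is a schematic sketch, not the ACT-R implementation; function names and values are invented:

```python
import random

# Schematic sketch of the two deficit hypotheses (invented names/values).
def parse_step_time(base_ms, slowdown=1.0):
    """Slowed processing: every parse step's duration is scaled up."""
    return base_ms * slowdown            # slowdown > 1 models the deficit

def retrieve(activations, noise_sd=0.0, rng=random):
    """Intermittent deficiency: noise on activations occasionally makes
    the wrong memory chunk win the retrieval competition."""
    noisy = {chunk: a + rng.gauss(0, noise_sd) for chunk, a in activations.items()}
    return max(noisy, key=noisy.get)
```

With `noise_sd = 0` the highest-activation chunk always wins; increasing `noise_sd` produces intermittent misretrievals, while increasing `slowdown` lengthens every parse step, which is how the two hypotheses make distinct predictions for accuracy and timing.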
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the non-uniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
The improvement of process representations in hydrological models is often driven only by the modelers' knowledge and data availability. We present a comprehensive comparison between two hydrological models of different complexity that is developed to support (1) the understanding of the differences between model structures and (2) the identification of the observations needed for model assessment and improvement. The comparison is conducted in both space and time by aggregating the outputs at different spatiotemporal scales. In the present study, mHM, a process-based hydrological model, and ParFlow-CLM, an integrated subsurface-surface hydrological model, are used. The models are applied in a mesoscale catchment in Germany. Both models agree in the simulated river discharge at the outlet and in the surface soil moisture dynamics, lending support to some model applications (e.g., drought monitoring). Different model sensitivities are, however, found when comparing evapotranspiration and soil moisture at different soil depths. The analysis supports the need for observations within the catchment for model assessment, but it indicates that different strategies should be considered for the different variables. Evapotranspiration measurements are needed at daily resolution across several locations, while highly resolved, spatially distributed observations with lower temporal frequency are required for soil moisture. Finally, the results show the impact of the shallow groundwater system simulated by ParFlow-CLM and the need to account for the related soil moisture redistribution. Our comparison strategy can be applied to other model types and environmental conditions to strengthen the dialog between modelers and experimentalists for improving process representations in Earth system models.
Home range estimation is routine practice in ecological research. While advances in animal tracking technology have increased our capacity to collect data to support home range analysis, these same advances have also resulted in increasingly autocorrelated data. Consequently, the question of which home range estimator to use on modern, highly autocorrelated tracking data remains open. This question is particularly relevant given that most estimators assume independently sampled data. Here, we provide a comprehensive evaluation of the effects of autocorrelation on home range estimation. We base our study on an extensive data set of GPS locations from 369 individuals representing 27 species distributed across five continents. We first assemble a broad array of home range estimators, including Kernel Density Estimation (KDE) with four bandwidth optimizers (Gaussian reference function, autocorrelated-Gaussian reference function [AKDE], Silverman's rule of thumb, and least squares cross-validation), Minimum Convex Polygon, and Local Convex Hull methods. Notably, all of these estimators except AKDE assume independent and identically distributed (IID) data. We then employ half-sample cross-validation to objectively quantify estimator performance, and the recently introduced effective sample size for home range area estimation (N̂_area) to quantify the information content of each data set. We found that AKDE 95% area estimates were larger than conventional IID-based estimates by a mean factor of 2. The median number of cross-validated locations included in the hold-out sets by AKDE 95% (or 50%) estimates was 95.3% (or 50.1%), confirming that the larger AKDE ranges were appropriately selective at the specified quantile. Conversely, conventional estimates exhibited negative bias that increased with decreasing N̂_area. To contextualize our empirical results, we performed a detailed simulation study to tease apart how sampling frequency, sampling duration, and the focal animal's movement conspire to affect range estimates. Paralleling our empirical results, the simulation study demonstrated that AKDE was generally more accurate than conventional methods, particularly for small N̂_area. While 72% of the 369 empirical data sets had >1,000 total observations, only 4% had an N̂_area >1,000, whereas 30% had an N̂_area <30. In this frequently encountered scenario of small N̂_area, AKDE was the only estimator capable of producing an accurate home range estimate on autocorrelated data.
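A back-of-the-envelope calculation shows why the effective sample size for area estimation can be far smaller than the nominal number of fixes. This sketch assumes, purely for illustration, that the effective sample size scales as sampling duration over the autocorrelation (home-range crossing) timescale; all numbers are invented:

```python
# Hypothetical tracking design (all numbers invented for illustration)
n_fixes = 1000           # GPS locations, one per hour
dt_hours = 1.0           # sampling interval
tau_hours = 24.0         # assumed home-range crossing (autocorrelation) timescale

duration = n_fixes * dt_hours
n_eff = duration / tau_hours   # rough effective sample size for area estimation

# 1,000 nominal fixes shrink to roughly 42 independent "crossings" of the
# range, which is why IID-based estimators treat autocorrelated data as far
# more informative than it really is and underestimate the home range.
```

Under this rough scaling, sampling more frequently inflates the nominal sample size but leaves the effective sample size unchanged; only a longer tracking duration adds independent information about the range area.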
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationship between many variables in a compact way. In this study, we employ two nonlinear concepts of event synchronization and edit distance to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skills. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree is decreased in the southwest to northwest direction, (2) the timing of 50th percentile precipitation within a year influences the spatial distribution of degree, (3) the timing is inversely related to elevation, and (4) the lower elevation greatly influences connectivity of the sites. The study highlights that edit distance could be a promising alternative to analyze event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
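The core idea of building a network from synchronized extremes can be sketched minimally. This toy illustration uses invented binary event series and a fixed time tolerance; the study's event synchronization measure is more refined (e.g., adaptive timing and, via edit distance, event amplitudes), so treat this only as the skeleton of the approach:

```python
# Toy sketch: event synchronization network from binary extreme-event series.
# Two events at sites i and j count as synchronized if they occur within
# +/- tau time steps; a network edge is drawn when the synchronization
# count reaches a threshold, and node degree summarizes connectivity.

def sync_count(events_a, events_b, tau=2):
    """Number of event pairs occurring within tau steps of each other."""
    times_a = [t for t, e in enumerate(events_a) if e]
    times_b = [t for t, e in enumerate(events_b) if e]
    return sum(1 for ta in times_a for tb in times_b if abs(ta - tb) <= tau)

def degrees(series, tau=2, threshold=2):
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if sync_count(series[i], series[j], tau) >= threshold:
                deg[i] += 1
                deg[j] += 1
    return deg

# Hypothetical extreme-rainfall indicators at four sites (1 = extreme day)
series = [
    [1, 0, 0, 1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 1],
]
deg = degrees(series)
```

Sites with high degree are strongly synchronized with many others, which is what makes them "essential sites" with potential prediction skill in the network view of the basin.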
A long-standing and profound problem in astronomy is the difficulty in obtaining deep near-infrared observations due to the extreme brightness and variability of the night sky at these wavelengths. A solution to this problem is crucial if we are to obtain the deepest possible observations of the early Universe, as redshifted starlight from distant galaxies appears at these wavelengths. The atmospheric emission between 1,000 and 1,800 nm arises almost entirely from a forest of extremely bright, very narrow hydroxyl emission lines that varies on timescales of minutes. The astronomical community has long envisaged the prospect of selectively removing these lines, while retaining high throughput between them. Here we demonstrate such a filter for the first time, presenting results from the first on-sky tests. Its use on current 8 m telescopes and future 30 m telescopes will open up many new research avenues in the years to come.
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
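Why "seemingly small" counting errors lead to substantial age uncertainty can be illustrated with a toy Monte Carlo. This is a schematic sketch with invented error rates, not the Bayesian method of the paper: each annual layer is occasionally missed or doubly counted, and these independent errors accumulate with depth:

```python
import random

# Toy Monte Carlo of layer-counting errors (invented rates, for illustration).
def simulate_age(n_layers, p_miss=0.01, p_double=0.01, rng=random):
    """Counted age (in layers) for a section whose true age is n_layers."""
    counted = 0
    for _ in range(n_layers):
        r = rng.random()
        if r < p_miss:
            continue                                   # layer missed entirely
        counted += 2 if r < p_miss + p_double else 1   # doubly counted or normal
    return counted

random.seed(42)
n_layers = 10_000                      # true age: 10,000 layers (years)
ages = [simulate_age(n_layers) for _ in range(200)]
mean_age = sum(ages) / len(ages)       # close to the true age on average
spread = max(ages) - min(ages)         # but individual counts scatter widely
```

Even with symmetric 1% miss/double-count rates the counted age of a 10,000-layer section scatters by many tens of years, and because the errors compound with depth, the absolute uncertainty keeps growing toward the older parts of a record, as the abstract reports for NGRIP.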
A competitive immunoassay to detect a hapten using an enzyme-labelled peptide mimotope as tracer
(2002)
Mimotope peptides - peptides that mimic the binding of a hapten to its corresponding monoclonal antibody - were conjugated to peroxidase and used in a competitive immunoassay. The established immunoassay was used to quantitatively determine the concentration of the hapten. As a model system in all the experiments described here, we used the binding of the monoclonal antibody B13-DE1 to fluorescein and the corresponding peptide mimotope.
Although the general development of mathematical abilities in primary school has been the focus of many researchers, the development of place value understanding has rarely been investigated to date. This is possibly due to the lack of conceptual approaches and empirical studies related to this topic. To fill this gap, a theory-driven and empirically validated model was developed that describes five sequential conceptual levels of place value understanding. The level sequence model gives us the ability to estimate general abilities and difficulties in primary school pupils in the development of a conceptual place value understanding. The level sequence model was tried and tested in Germany, and given that number words are very differently constructed in German and in the languages used in South African classrooms, this study aims to investigate whether this level sequence model can be transferred to South Africa. The findings based on the responses of 198 Grade 2-4 learners show that the English translation of the test items results in the same item level allocation as the original German test items, especially for the three basic levels. Educational implications are provided, in particular concrete suggestions on how place value might be taught according to the model and how to collect specific empirical data related to place value understanding.
Within the last decade, the role of the Creative Industries has grown to become an important part of the economic system. The increasing acceleration of new developments in media and ICT technologies has greatly affected the Creative Industries' dynamics, with a direct impact on the people working in this sector. Since only a few studies focus on competence needs, more or less in isolation from the trends within the industry, we address the topic of individual competence shifts in the turbulent environment of the Creative Industries. We investigated the trends regarding competence shifts and their implications, as well as the competences that are essential for creative professionals. We conducted a broad literature review as well as a qualitative study, which includes interviews and workshops with industry experts on trends within the Creative Industries and the corresponding dimensions of and demands for competences. We present four requirements that call for shifts in the education of competences. Based on the discussion of these requirements, we present a competence portfolio for the Creative Industries along the dimensions of professional, methodological, and personal-social competences. The portfolio clearly indicates which competences should be taken into consideration for the development of curricula and study programmes in the education of creative professionals. A generalization of these findings suggests new challenges for companies relying on creative professionals.
Multidirectional communicative interactions in social networks can have a profound effect on mate choice behavior. Males of the Atlantic molly Poecilia mexicana exhibit weaker mating preferences when an audience male is presented. This could be a male strategy to reduce sperm competition risk: interacting more equally with different females may be advantageous because rivals might copy mate choice decisions. In line with this hypothesis, a previous study found males to show a strong audience effect when being observed while exercising mate choice, but not when the rival was presented only before the choice tests. Audience effects on mate choice decisions have been quantified in poeciliid fishes using association preference designs, but it remained unknown whether patterns found from measuring association times translate into actual mating behavior. Thus, we created five audience treatments simulating different forms of perceived sperm competition risk and determined focal males' mating preferences by scoring pre-mating (nipping) and mating behavior (gonopodial thrusting). Nipping did not reflect the pattern found when association preferences were measured, while a very similar pattern was uncovered in thrusting behavior. The strongest response was observed when the audience could eavesdrop on the focal male's behavior. A reduction in the strength of focal males' preferences was also seen after the rival male had an opportunity to mate with the focal male's preferred mate. In comparison, the reduction of mating preferences in response to an audience was greater when measuring association times than actual mating behavior. Measuring direct sexual interactions between the focal male and both stimulus females reflects not only the male's motivational state but also the females' behavior, such as avoidance of male sexual harassment.