This article studies proactive work behavior from a within-person perspective. Building on the broaden-and-build model and the mood-as-information approach, we hypothesized that negative trait affect and positive state affect predict the relative time spent on proactive behavior. Furthermore, based on self-determination theory we argued that persons want to feel competent and that proactive behavior is one way to experience competence. In an experience-sampling study, 52 employees responded to surveys 3 times a day for 5 days. Hierarchical linear modeling confirmed the hypotheses on trait and state affect. Analyses furthermore showed that although a higher level of experienced competence at core task activities was associated with a subsequent increase in time spent on these activities, low experienced competence predicted an increase in time spent on proactive behavior.
This is an introduction to Wiener measure and the Feynman-Kac formula on general Riemannian manifolds for Riemannian geometers with little or no background in stochastics. We explain the construction of Wiener measure based on the heat kernel in full detail and we prove the Feynman-Kac formula for Schrödinger operators with bounded potentials. We also consider normal Riemannian coverings and show that projecting and lifting of paths are inverse operations which respect the Wiener measure.
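For readers new to the subject, the flat-space prototype of the formula proved in the text can be sketched as follows (a standard form; sign and scaling conventions for the Laplacian vary, and the manifold version replaces Euclidean Brownian paths by paths in M):

```latex
\left(e^{-t\left(\frac{1}{2}\Delta + V\right)} f\right)(x)
  \;=\; \int e^{-\int_0^t V(\omega(s))\,\mathrm{d}s}\, f(\omega(t))\,\mathrm{d}\mathbb{W}_x(\omega)
```

where W_x denotes Wiener measure on continuous paths starting at x and V is a bounded potential.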
An experimental and computational study on the impact of functional groups on the oxidation stability of higher acenes is presented. We synthesized anthracenes, tetracenes, and pentacenes with various substituents at the periphery, identified their photooxygenation products, and measured the kinetics. Furthermore, the products obtained from thermolysis and the kinetics of the thermolysis are investigated. Density functional theory is applied in order to predict reaction energies, frontier molecular orbital interactions, and radical stabilization energies. The combined results allow us to describe the mechanisms of the oxidations and the subsequent thermolysis. We found that the alkynyl group not only enhances the oxidation stability of acenes but also protects the resulting endoperoxides from thermal decomposition. Additionally, such substituents increase the regioselectivity of the photooxygenation of tetracenes and pentacenes. For the first time, we oxidized alkynylpentacenes by using chemically generated singlet oxygen (¹O₂) without irradiation and identified a 6,13-endoperoxide as the sole regioisomer. The bimolecular rate constant of this oxidation amounts to only 1 × 10⁵ M⁻¹ s⁻¹. This unexpectedly slow reaction is a result of a physical deactivation of ¹O₂. In contrast to unsubstituted or aryl-substituted acenes, photooxygenation of alkynyl-substituted acenes proceeds most likely by a concerted mechanism, while the thermolysis is well explained by the formation of radical intermediates. Our results should be important for the future design of oxidation-stable acene-based semiconductors.
The dissertation examines the use of performance information by public managers. “Use” is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but which have thus far been disregarded. The second part models performance data use by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if we include other sources of “unsystematic” feedback in our analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitude of the managers and of their immediate peers. 2) Surprisingly, regular users of performance data are not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take ownership of performance information at an early stage in the measurement process are also more likely to use this data when it is reported to them. 4) Performance reports are only one source of information among many. Public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports.
The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
Over the last two decades, macroecology (the analysis of large-scale, multi-species ecological patterns and processes) has established itself as a major line of biological research. Analyses of statistical links between environmental variables and biotic responses have long and successfully been employed as a main approach, but new developments remain to be exploited. Scanning the horizon of macroecology, we identified four challenges that will probably play a major role in the future. We support our claims by examples and bibliographic analyses. 1) Integrating the past into macroecological analyses, e.g. by using paleontological or phylogenetic information or by applying methods from historical biogeography, will sharpen our understanding of the underlying reasons for contemporary patterns. 2) Explicit consideration of the local processes that lead to the observed larger-scale patterns is necessary to understand the fine-grain variability found in nature, and will enable better prediction of future patterns (e.g. under environmental change conditions). 3) Macroecology is dependent on large-scale, high-quality data from a broad spectrum of taxa and regions. More available data sources need to be tapped and new, small-grain large-extent data need to be collected. 4) Although macroecology has already led to mainstreaming cutting-edge statistical analysis techniques, we find that more sophisticated methods are needed to account for the biases inherent in sampling at large scales. Bayesian methods may be particularly suitable to address these challenges. To continue the vigorous development of the macroecological research agenda, it is time to address these challenges and to avoid becoming too complacent with current achievements.
In the 1980s, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of the large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent from gravity. Our analysis confirms the existence of gravity undulations at large scale in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear, and associated with a conjugate direction at 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate and have never before been observed so clearly. This is due to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes.
Taking into account observations and models from other geophysical information, such as seismological tomography, convection and geochemical models, and electrical conductivity in the mantle, we find that all these inputs indicate a directional fabric of the mantle flows at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr's principle of complementarity, when applied to the paradigm of wave-particle dualism (that is, to Young's double-slit experiment), implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion, where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM01 pump mode. According to quantum field theory, the signal photon is then in a coherent superposition of two distinct wave vectors, giving rise to interference fringes analogous to two mechanical slits.
Random copolymers of 4-vinylbenzyl tri(oxyethylene) and tetra(oxyethylene) ethers, as well as alternating copolymers of 4-vinylbenzyl methoxytetra(oxyethylene) ether and a series of N-substituted maleimides, were synthesised by conventional free radical polymerisation, reversible addition fragmentation chain transfer (RAFT) and atom transfer radical polymerisation (ATRP). Their thermosensitive behaviour in aqueous solution was studied by turbidimetry and dynamic light scattering. Depending on the copolymer composition, a LCST type phase transition was observed in water. The transition temperature of the obtained random as well as alternating copolymers could be varied within a broad temperature window. In the case of the random copolymers, transition temperatures could be easily fine-tuned, as they showed a linear dependence on the copolymer composition, and were additionally modified by the nature of the polymer end-groups. Alternating copolymers were extremely versatile for implementing a broad range of variations of the phase transition temperatures. Further, while alternating copolymers derived from 4-vinylbenzyl methoxytetra(oxyethylene) ether and maleimides with small hydrophobic side chains underwent macroscopic phase separation when dissolved in water and heated above their cloud point, the incorporation of maleimides bearing larger hydrophobic substituents resulted in the formation of mesoglobules above the phase transition temperature, with hydrodynamic diameters of less than 100 nm.
Indoor mesocosm experiments were conducted to test for potential climate change effects on the spring succession of Baltic Sea plankton. Two different temperature (Δ0 °C and Δ6 °C) and three light scenarios (62, 57 and 49 % of the natural surface light intensity on sunny days), mimicking increasing cloudiness as predicted for warmer winters in the Baltic Sea region, were simulated. By combining experimental and modeling approaches, we were able to test for a potential dietary mismatch between phytoplankton and zooplankton. Two general predator-prey models, one representing the community as a tri-trophic food chain and one as a 5-guild food web, were applied to test for the consequences of different temperature sensitivities of heterotrophic components of the plankton. During the experiments, we observed reduced time-lags between the peaks of phytoplankton and protozoan biomass in response to warming. Microzooplankton peak biomass was reached 2.5 days °C⁻¹ earlier and occurred almost synchronously with biomass peaks of phytoplankton in the warm mesocosms (Δ6 °C). The peak magnitudes of microzooplankton biomass remained unaffected by temperature, and growth rates of microzooplankton were higher at Δ6 °C (μ(Δ0 °C) = 0.12 day⁻¹ and μ(Δ6 °C) = 0.25 day⁻¹). Furthermore, warming induced a shift in microzooplankton phenology leading to a faster species turnover and a shorter window of microzooplankton occurrence. Moderate differences in the light levels had no significant effect on the time-lags between autotrophic and heterotrophic biomass and on the timing, biomass maxima and growth rate of microzooplankton biomass. Both models predicted reduced time-lags between the biomass peaks of phytoplankton and its predators (both microzooplankton and copepods) with warming.
The reduction of time-lags increased with increasing Q₁₀ values of copepods and protozoans in the tri-trophic food chain. Indirect trophic effects modified this pattern in the 5-guild food web. Our study shows that instead of a mismatch, warming might lead to a stronger match between protist grazers and their prey, altering in turn the transfer of matter and energy toward higher trophic levels.
First language (L1) phonological categories strongly influence late learners' perception and production of second language (L2) categories. For learners who start learning an L2 early in life ("early learners"), this L1 influence appears to be substantially reduced or at least more variable. In this paper, we examine the age at which L1 vowel categories influence the acquisition of L2 vowels. We tested a child population with a very narrow range of age of first exposure, controlling for the use of L1 vs. L2, and various naturally produced contrasts that are not allophonic in the L1 of the children. An oddity discrimination task provided evidence that children who are native speakers of Turkish and began learning German as an L2 in kindergarten categorized difficult German contrasts differently from age-matched native speakers. Their vowel productions of these same contrasts (un-cued object naming) were mostly target-like.
Background: Given the huge impact of vitamin D deficiency on a broad spectrum of diseases such as rickets, osteoporosis, mineral bone disease-vascular calcification syndrome, infectious diseases, but also several types of cancer and CNS diseases, reliable and simple methods to analyze the vitamin D status are urgently needed.
Methods: We developed an easy technique to determine the 25-OH vitamin D status from dried blood samples on filter paper. This allows determination of the 25-OH vitamin D status independently of venous blood collection, since only capillary blood sampling is required for this new method. We compared the results of vitamin D measurements from venous blood of 96 healthy blood donors with those from capillary blood taken from the same individuals at the same time. The capillary blood was dried on filter paper using the D-Vital ID dry-blood collection system.
Results: 25-OH vitamin D concentration data from extracted dried capillary blood filters correlated very well with data obtained after direct measurement of venous blood samples of the same blood donor (R: 0.7936; p<0.0001). The correlation was linear over the whole range of 25-OH vitamin D concentrations seen in this study. A Bland-Altman plot revealed good agreement between both tests.
Conclusions: The D-Vital ID dry-blood collection system showed an excellent performance as compared to the classical way of 25-OH vitamin D measurement from venous blood. This new technique will facilitate easy and reliable measurement of vitamin D status, in particular in rural or isolated areas, developing countries, and field studies.
We report on sub-wavelength structuring of photosensitive azo-containing polymer films induced by a surface plasmon interference intensity pattern. The two surface plasmon waves generated at neighboring nano-slits in the metal layer during irradiation interfere constructively, resulting in an intensity pattern with a periodicity three times smaller than the wavelength of the incoming light. The near field pattern interacts with the photosensitive polymer film placed above it, leading to a topography change which follows the intensity pattern exactly, resulting in the formation of surface relief gratings of a size below the diffraction limit. We analyze numerically and experimentally how the depth of the nano-slit alters the interference pattern of surface plasmons and find that the sub-wavelength patterning of the polymer surface could be optimized by modifying the geometry and the size of the nano-slit.
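As a consistency check based on textbook surface-plasmon optics (not taken from the paper itself), two counter-propagating surface plasmon waves with effective index n_eff produce an interference pattern of period

```latex
\Lambda \;=\; \frac{\lambda_{\mathrm{SPP}}}{2}
        \;=\; \frac{\lambda_0}{2\,n_{\mathrm{eff}}},
\qquad
n_{\mathrm{eff}} \;=\; \operatorname{Re}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m+\varepsilon_d}}
```

so an effective index near 1.5 would account for a pattern period roughly three times smaller than the free-space wavelength λ0, as described above.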
Videos related to the maps (2012)
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
We report on very high energy (E > 100 GeV) gamma-ray observations of V407 Cygni, a symbiotic binary that underwent a nova outburst producing 0.1-10 GeV gamma rays during 2010 March 10-26. Observations were made with the Very Energetic Radiation Imaging Telescope Array System during 2010 March 19-26 at relatively large zenith angles due to the position of V407 Cyg. An improved reconstruction technique for large zenith angle observations is presented and used to analyze the data. We do not detect V407 Cygni and place a differential upper limit on the flux at 1.6 TeV of 2.3 × 10⁻¹² erg cm⁻² s⁻¹ (at the 95% confidence level). When considered jointly with data from Fermi-LAT, this result places limits on the acceleration of very high energy particles in the nova.
We report on VERITAS very high energy (VHE; E ≥ 100 GeV) observations of six blazars selected from the Fermi Large Area Telescope First Source Catalog (1FGL). The gamma-ray emission from 1FGL sources was extrapolated up to the VHE band, taking gamma-ray absorption by the extragalactic background light into account. This allowed the selection of six bright, hard-spectrum blazars that were good candidate TeV emitters. Spectroscopic redshift measurements were attempted with the Keck Telescope for the targets without Sloan Digital Sky Survey spectroscopic data. No VHE emission is detected during the observations of the six sources described here. Corresponding TeV upper limits are presented, along with contemporaneous Fermi observations and non-concurrent Swift UVOT and X-Ray Telescope data. The blazar broadband spectral energy distributions (SEDs) are assembled and modeled with a single-zone synchrotron self-Compton model. The SED built for each of the six blazars shows a synchrotron peak bordering between the intermediate- and high-spectrum-peak classifications, with four of the six resulting in particle-dominated emission regions.
VERITAS has been monitoring the very-high-energy (VHE; > 100 GeV) gamma-ray activity of the radio galaxy M87 since 2007. During 2008, flaring activity on a timescale of a few days was observed with a peak flux of (0.70 ± 0.16) × 10⁻¹¹ cm⁻² s⁻¹ at energies above 350 GeV. In 2010 April, VERITAS detected a flare from M87 with a peak flux of (2.71 ± 0.68) × 10⁻¹¹ cm⁻² s⁻¹ for E > 350 GeV. The source was observed for six consecutive nights during the flare, resulting in a total of 21 hr of good-quality data. The most rapid flux variation occurred on the trailing edge of the flare, with an exponential flux decay time of 0.90 (+0.22/-0.15) days. The shortest detected exponential rise time is three times as long, at 2.87 (+1.65/-0.99) days. The quality of the data sample is such that spectral analysis can be performed for three periods: rising flux, peak flux, and falling flux. The spectra obtained are consistent with power-law forms. The spectral index at the peak of the flare is equal to 2.19 ± 0.07. There is some indication that the spectrum is softer in the falling phase of the flare than in the peak phase, with a confidence level corresponding to 3.6 standard deviations. We discuss the implications of these results for the acceleration and cooling rates of VHE electrons in M87 and the constraints they provide on the physical size of the emitting region.
The VERITAS array of Cherenkov telescopes has carried out a deep observational program on the nearby dwarf spheroidal galaxy Segue 1. We report on the results of nearly 48 hours of good-quality selected data, taken between January 2010 and May 2011. No significant gamma-ray emission is detected at the nominal position of Segue 1, and upper limits on the integrated flux are derived. According to recent studies, Segue 1 is the most dark-matter-dominated dwarf spheroidal galaxy currently known. We derive stringent bounds on various annihilating and decaying dark matter particle models. The upper limits on the velocity-weighted annihilation cross-section are ⟨σv⟩ (95% CL) ≲ 10⁻²³ cm³ s⁻¹, improving our limits from previous observations of dwarf spheroidal galaxies by at least a factor of 2 for dark matter particle masses m_χ ≳ 300 GeV. The lower limits on the decay lifetime are at the level of τ (95% CL) ≳ 10²⁴ s. Finally, we address the interpretation of the cosmic-ray lepton anomalies measured by ATIC and PAMELA in terms of dark matter annihilation, and show that the VERITAS observations of Segue 1 disfavor such a scenario.
We report on a new three-color FRET system consisting of three fluorescent dyes: a carbostyril (= quinolin-2(1H)-one)-derived donor D, a (bathophenanthroline)ruthenium complex as a relay chromophore A1, and a Cy dye as A2 (FRET = Förster resonance energy transfer) (cf. Fig. 1). With their widely matching spectroscopic properties (cf. Fig. 2), the combination of these dyes yielded excellent FRET efficiencies. Furthermore, fluorescence lifetime measurements revealed that the long fluorescence lifetime of the Ru complex was transferred to the Cy dye, offering the possibility to measure the whole system in a time-resolved mode. The FRET system was established on double-stranded DNA (cf. Fig. 3), but it should also be generally applicable to other biomolecules.
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lian…dou. We show that these constructions have the same internal syntax as regular clefts and lian…dou sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lian…dou constructions, respectively, rather than as independent constructions. Additionally, we also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
Velocity and displacement correlation functions for fractional generalized Langevin equations (2012)
We study analytically a generalized fractional Langevin equation. General formulas for calculation of variances and the mean square displacement are derived. Cases with a three parameter Mittag-Leffler frictional memory kernel are considered. Exact results in terms of the Mittag-Leffler type functions for the relaxation functions, average velocity and average particle displacement are obtained. The mean square displacement and variances are investigated analytically. Asymptotic behaviors of the particle in the short and long time limit are found. The model considered in this paper may be used for modeling anomalous diffusive processes in complex media including phenomena similar to single file diffusion or possible generalizations thereof. We show the importance of the initial conditions on the anomalous diffusive behavior of the particle.
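For orientation, the generalized Langevin equation and the three-parameter (Prabhakar) Mittag-Leffler function referred to above have the standard forms

```latex
m\,\ddot{x}(t) + m\int_0^{t}\gamma(t-t')\,\dot{x}(t')\,\mathrm{d}t' = \xi(t),
\qquad
E_{\alpha,\beta}^{\delta}(z) = \sum_{k=0}^{\infty}\frac{(\delta)_k}{\Gamma(\alpha k+\beta)}\,\frac{z^k}{k!}
```

where (δ)_k is the Pochhammer symbol; a memory kernel proportional to t^(β-1) E^δ_{α,β}(-(t/τ)^α) is one common three-parameter choice, though the paper's exact kernel and normalization may differ.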
Objectives: The purpose of this study was to determine the dependence of breast tissue elasticity on the menstrual cycle of healthy volunteers by means of real-time sonoelastography.
Methods: Twenty-two healthy volunteers (aged 18-33 years) were examined once weekly during two consecutive menstrual cycles using sonoelastography. Group 1 (n = 10) was not taking hormonal medication; group 2 (n = 12) was taking oral contraceptives.
Results: The breast parenchyma appeared softer than the dermis and harder than the adipose tissue, and elasticity varied over the menstrual cycle and between groups. Group 1 (no hormone intake) showed continuously increasing elasticity with relatively soft breast parenchyma in the menstrual and follicular phases and harder parenchyma in the luteal phase (P = .012). Group 2 (oral contraceptives) showed no statistically significant changes in breast parenchymal elasticity according to sonoelastography. The parenchyma was generally softer in group 1 compared with group 2 throughout the menstrual cycle (P = .033). The dermis, the subcutaneous adipose tissue, and the pectoralis major muscle showed no changes in elasticity. Comparison of measurements made during the first and the second menstrual cycles showed similar patterns of elasticity in both groups.
Conclusions: Sonoelastography is a reproducible method that can be used to determine the dependence of breast parenchyma elasticity on the menstrual cycle and on the intake of hormonal contraceptives.
We study the dispersion interaction of the van der Waals and Casimir-Polder (vdW-CP) type between a neutral atom and the surface of a conductor by allowing for nonlocal electrodynamics, i.e. electron diffusion. We consider two models: (i) bulk diffusion, and (ii) diffusion in a surface charge layer. In both cases, we find that the transition to a semiconductor as a function of the conductivity is continuous, unlike the case of a local model. The relevant parameter is the electric screening length and depends on the carrier diffusion constant. We find that for distances comparable to the screening length, vdW-CP data can distinguish between bulk and surface diffusion, hence it can be a sensitive probe for surface states.
Correlation functions of a driven two-level system embedded in a photonic crystal are analyzed. The spectral density of the photonic bands near a gap makes this system non-Markovian. The equations of motion for two-time correlations are derived by two different methods, the quantum regression theorem and the fluctuation dissipation theorem, and found to be the same.
Background: Isokinetic measurements are widely used to assess strength capacity in a clinical or research context. Nevertheless, the validity of isokinetic measures for identifying strength deficits and the evaluation of therapeutic process regarding different pathologies is yet to be established. Therefore, the purpose of this review is to evaluate the validity of isokinetic measures in a specific case: that of muscular capacity in low back pain (LBP).
Methods: A literature search (PubMed; ISI Web of Knowledge; The Cochrane Library) covering the last 10 years was performed. Relevant papers regarding isokinetic trunk strength measures in healthy subjects and patients with low back pain (PLBP) were retrieved. Peak torque values [Nm] and peak torque normalized to body weight [Nm/kg BW] were extracted for healthy subjects and PLBP. Ranked mean values across studies were calculated for the concentric peak torque at 60 degrees/s as well as for the flexion/extension (F/E) ratio.
Results: 34 publications (31 flexion/extension; 3 rotation) were suitable for reporting detailed isokinetic strength measures in healthy subjects or patients with LBP (untrained adults, adolescents, athletes). Adolescents and athletes differed from untrained adults in terms of absolute trunk strength values and the F/E ratio. Furthermore, isokinetic measures evaluating the therapeutic process and isokinetic rehabilitation training were infrequent in the literature (8 studies).
Conclusion: Isokinetic measurements are valid for measuring trunk flexion/extension strength and F/E ratio in athletes, adolescents and (untrained) adults with/without LBP. The validity of trunk rotation is questionable due to a very small number of publications whereas no reliable source regarding lateral flexion could be traced. Therefore, isokinetic dynamometry may be utilized for identifying trunk strength deficits in healthy adults and PLBP.
Background. Despite considerable progress made in the past decade through salt iodization programs, over 2 billion people worldwide still have inadequate iodine intake, with devastating consequences for brain development and intellectual capacity. To optimize these programs with regard to salt iodine content, careful monitoring of salt iodine content is essential, but few methods are available to quantitatively measure iodine concentration in a simple, fast, and safe way.
Objective. We have validated a newly developed device that quantitatively measures the content of potassium iodate in salt in a simple, safe, and rapid way.
Methods. The linearity, determination and detection limit, and inter- and intra-assay variability of this colorimetric method were assessed and the method was compared with iodometric titration, using salt samples from several countries.
Results. Linearity of analysis ranged from 5 to 75 mg/kg iodine, with 1 mg/kg being the determination limit; the intra- and interassay imprecision was 0.9%, 0.5%, and 0.7% and 1.5%, 1.7%, and 2.5%, respectively, for salt samples with iodine contents of 17, 30, and 55 mg/kg; the interoperator imprecision for the same samples was 1.2%, 4.9%, and 4.7%, respectively. Comparison with the iodometric method showed high agreement between the methods (R² = 0.978; limits of agreement, -10.5 to 10.0 mg/kg).
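The imprecision figures above are coefficients of variation (sample standard deviation divided by the mean, in percent); a minimal sketch with hypothetical replicate readings, not the study's data:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (sample SD / mean) in percent."""
    mean = statistics.fmean(replicates)
    sd = statistics.stdev(replicates)  # sample SD (n - 1 denominator)
    return 100.0 * sd / mean

# Hypothetical replicate iodine readings (mg/kg) for one salt sample
readings = [29.7, 30.1, 30.4, 29.9, 30.2]
cv = cv_percent(readings)  # roughly 0.9 percent for these values
```

Intra-assay CV uses replicates from one run; inter-assay CV uses the same sample measured across runs.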
Conclusions. The device offers a field- and user-friendly solution to quantifying potassium iodate salt content reliably. For countries that use potassium iodide in salt iodization programs, further validation is required.
Background: beta-Carotene is an important precursor of vitamin A, and is associated with bovine fertility. beta-Carotene concentrations in plasma are used to optimize beta-carotene supplementation in cattle, but measurement requires specialized equipment to separate plasma and extract and measure beta-carotene, either using spectrophotometry or high performance liquid chromatography (HPLC).
Objective: The objective of this study was to validate a new 2-step point-of-care (POC) assay for measuring beta-carotene in whole blood and plasma.
Methods: beta-carotene concentrations in plasma from 166 cows were measured using HPLC and compared with results obtained using a POC assay, the iCheck-iEx-Carotene test kit. Whole blood samples from 23 of these cattle were also evaluated using the POC assay and compared with HPLC-plasma results from the same 23 animals. The POC assay includes an extraction vial (iEx Carotene) and hand-held photometer (iCheck Carotene).
Results: Concentrations of beta-carotene in plasma measured using the POC assay ranged from 0.40 to 15.84 mg/L (n = 166). No differences were observed between methods for assay of plasma (mean +/- SD; n = 166): HPLC-plasma 4.23 +/- 2.35 mg/L; POC-plasma 4.49 +/- 2.36 mg/L. Similarly good agreement was found when plasma analyzed using HPLC was compared with whole blood analyzed using the POC system (n = 23): HPLC-plasma 3.46 +/- 2.12 mg/L; POC-whole blood 3.67 +/- 2.29 mg/L.
Conclusions: Concentrations of beta-carotene can be measured in blood and plasma from cattle easily and rapidly using a POC assay, and results are comparable to those obtained by the highly sophisticated HPLC method. Immediate feedback regarding beta-carotene deficiency facilitates rapid and appropriate optimization of beta-carotene supplementation in feed.
The aims of this study were to identify areas of wind erosion and dust deposition and to quantify the effects of different grazing intensities on soil redistribution rates in grasslands based on the Cs-137 technique. Because the method uses a reference inventory as threshold for erosion or deposition, the classification of any other site as source or sink for dust depends on the accurate selection of this reference site.
Measurements of Cs-137 inventories and depth distributions were carried out at pasture sites dominated by Stipa grandis and Leymus chinensis and grazed at different intensities. Additional measurements were made at arable land, plant-covered sand dunes and alluvial plains. Wind-induced soil erosion and dust deposition rates were calculated from Cs-137 inventories by means of the "Profile-Distribution" and the "Mass Balance II" models.
The selection of the reference site was based on fluid dynamical and process-determining parameters. The chosen site should meet the following four conditions: (i) located at a summit position with obviously low deposition rates, (ii) sufficient vegetation cover to prevent wind erosion, (iii) plane to exclude water erosion and (iv) in the wind/dust shadow of a higher elevation. The measured reference inventory of Cs-137 was 1967 (+/- 102) Bq m(-2), located at a summit position of moderately grazed Leymus chinensis steppe. The Cs-137 inventories at other sites ranged from 1330 Bq m(-2) at heavily grazed sites to 5119 Bq m(-2) at river deposits, representing annual average soil losses of up to 130 t km(-2) and deposits of up to 540 t km(-2), respectively. The calculated annual averages of dust deposition at ungrazed Leymus chinensis sites were related to the dust storm frequencies of the last 50 years, yielding a description of the temporal variability of annual dust deposition, from about 154 t km(-2) in the 1960s to 26 t km(-2) in recent times. Based on this quantification, 80% of the total dust deposition can be attributed to the two decades between the 1960s and the end of the 1970s, and only 20% to the period between 1980 and 2001.
The Cs-137 technique is a promising method to assess the effect of grazing intensity and land use types on the spatial variability of wind-induced soil and dust redistribution processes in semi-arid grasslands. However, considerable effort is needed to identify a reliable reference site, because wind-induced erosion and deposition may occur at the same places. The combination of the dust deposition rates derived from Cs-137 profile data with the dust storm frequencies is helpful for a better reconstruction of the temporal variability of dust deposition and wind erosion in this region. The calculated recent deposition rates of about 20 t km(-2) are in good agreement with data from other authors.
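The first step of the Cs-137 approach described above, before erosion or deposition rates are derived from the conversion models, is to classify each site as a dust source or sink by comparing its inventory against the reference inventory. A minimal sketch of that classification, using the reference value of 1967 +/- 102 Bq m(-2) quoted in the abstract (the "summit" inventory below is illustrative, the two extremes are those reported):

```python
# Classify sites as erosion (source), deposition (sink) or indistinguishable
# from the reference, based on the Cs-137 inventory (Bq m^-2).
REF, REF_SD = 1967.0, 102.0  # reference inventory and its uncertainty

def classify(inventory, k=2.0):
    """Source if the inventory lies more than k*SD below the reference,
    sink if more than k*SD above it, otherwise statistically stable."""
    if inventory < REF - k * REF_SD:
        return "erosion"
    if inventory > REF + k * REF_SD:
        return "deposition"
    return "stable"

sites = {"heavily grazed": 1330.0, "river deposit": 5119.0, "summit": 1950.0}
for name, inventory in sites.items():
    print(name, "->", classify(inventory))
```

The tolerance factor k is an assumption here; the study's actual source/sink attribution rests on its reference-site selection criteria and on the Profile-Distribution and Mass Balance II conversion models.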
1. The polyunsaturated fatty acid eicosapentaenoic acid (EPA) plays an important role in aquatic food webs, in particular at the primary producer-consumer interface where keystone species such as daphnids may be constrained by its dietary availability. Such constraints and their seasonal and interannual changes may be detected by continuous measurements of EPA concentrations. However, such EPA measurements became common only during the last two decades, whereas long-term data sets on plankton biomass are available for many well-studied lakes. Here, we test whether it is possible to estimate EPA concentrations from abiotic variables (light and temperature) and the biomass of prey organisms (e.g. ciliates, diatoms and cryptophytes) that potentially provide EPA for consumers. 2. We used multiple linear regression to relate size- and taxonomically resolved plankton biomass data and measurements of temperature and light intensity to directly measured EPA concentrations in Lake Constance during a whole year. First, we tested the predictability of EPA concentrations from the biomass of EPA-rich organisms (diatoms, cryptophytes and ciliates). Secondly, we included the variables mean temperature and mean light intensity over the sampling depth (0-20 m) and depth (0-8 and 8-20 m) as factors in our model to check for large-scale seasonal- and depth-dependent effects on EPA concentrations. In a third step, we included the deviations of light and temperature from mean values in our model to allow for their potential influence on the biochemical composition of plankton organisms. We used the Akaike Information Criterion to determine the best models. 3. All approaches supported our proposition that the biomasses of specific plankton groups are variables from which seston EPA concentrations can be derived. 
The importance of ciliates as an EPA source in the seston was emphasised by their high weight in our models, although ciliates are neglected in most studies that link fatty acids to seston taxonomic composition. The large-scale seasonal variability of light intensity and its interaction with diatom biomass were significant predictors of EPA concentrations. The deviation of temperature from mean values, accounting for a depth-dependent effect on EPA concentrations, and its interaction with ciliate biomass were also variables with high predictive power. 4. The best models from the first and second approaches were validated with measurements of EPA concentrations from another year (1997). The estimation with the best model including only biomass explained 80%, and the best model from the second approach including mean temperature and depth explained 87% of the variability in EPA concentrations in 1997. 5. We show that it is possible to predict EPA concentrations reliably from plankton biomass, while the inclusion of abiotic factors led to results that were only partly consistent with expectations from laboratory studies. Our approach of including biotic predictors should be transferable to other systems and allow checking for biochemical constraints on primary consumers.
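The modelling workflow described above (fitting candidate multiple linear regressions and selecting among them with the Akaike Information Criterion) can be sketched as follows. The predictors and data are synthetic stand-ins, not the Lake Constance measurements, and the AIC formula used is the standard Gaussian-likelihood form.

```python
# Fit candidate OLS models for EPA concentration and rank them by AIC.
# Synthetic data: EPA depends on biomass predictors, noise added.
import numpy as np

rng = np.random.default_rng(0)
n = 60
diatoms = rng.uniform(0, 5, n)    # biomass proxies (arbitrary units)
ciliates = rng.uniform(0, 3, n)
temp = rng.uniform(4, 20, n)      # mean water temperature (deg C)
epa = 0.8 * diatoms + 1.5 * ciliates + rng.normal(0, 0.3, n)

def aic(X, y):
    """AIC of an ordinary least-squares fit: n*ln(RSS/n) + 2k."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

models = {
    "diatoms only": np.column_stack([diatoms]),
    "diatoms + ciliates": np.column_stack([diatoms, ciliates]),
    "diatoms + ciliates + temperature": np.column_stack([diatoms, ciliates, temp]),
}
for name, X in models.items():
    print(f"{name}: AIC = {aic(X, epa):.1f}")
```

AIC rewards goodness of fit (low residual sum of squares) while penalising each extra parameter by 2, so an uninformative predictor such as temperature here is only retained if it reduces the residuals enough to pay its penalty.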
Whereas the US President signed the Kyoto Protocol, the failure of the US Congress to ratify it seriously hampered subsequent international climate cooperation. This recent US trend, of signing environmental treaties but failing to ratify them, could thwart attempts to come to a future climate agreement. Two complementary explanations of this trend are proposed. First, the political system of the US has distinct institutional features that make it difficult for presidents to predict whether the Senate will give its advice and consent to multilateral environmental agreements (MEAs) and whether Congress will pass the required enabling legislation. Second, elected for a fixed term, US presidents might benefit politically from supporting MEAs even when knowing that legislative support is not forthcoming. Four policy implications are explored, concerning the scope for unilateral presidential action, the potential for bipartisan congressional support, the effectiveness of a treaty without the US, and the prospects for a deep, new climate treaty.
Policy relevance
Why does the failure of US ratification of multilateral environmental treaties occur? This article analyses the domestic political mechanisms involved in cases of failed US ratification. US non-participation in global environmental institutions often has serious ramifications. For example, it sharply limited Kyoto's effectiveness and seriously hampered international climate negotiations for years. Although at COP 17 in Durban the parties agreed to negotiate a new agreement by 2015, a new global climate treaty may well trigger a situation resembling the one President Clinton faced in 1997 when he signed Kyoto but never obtained support for it in the Senate. US failure to ratify could thwart future climate agreements.
The conformational analysis of the first representative of the Si-alkoxy substituted six-membered Si,N-heterocycles, 1,3-dimethyl-3-isopropoxy-3-silapiperidine, was performed by low-temperature 1H and 13C NMR spectroscopy and DFT theoretical calculations. In contrast to the expectations from the conformational energies of methyl and alkoxy substituents, the Me(ax)/i-PrO(eq) conformer was found to predominate in the conformational equilibrium, with a Me(ax)/i-PrO(eq) : Me(eq)/i-PrO(ax) ratio of ca. 2 : 1 according to the 1H and 13C NMR study. The thermodynamic parameters obtained by complete line shape analysis showed that the main contribution to the barrier to ring inversion originates from the entropy term of the free energy of activation.
This editorial introduces a set of papers on differential embodiment in spatial tasks. According to the theoretical notion of embodied cognition, our experiences of acting in the world, and the constraints of our sensory and motor systems, strongly shape our cognitive functions. In the current set of papers, the authors were asked to particularly consider idiosyncratic or differential embodied cognition in the context of spatial tasks and processes. In each contribution, differential embodiment is considered from one of two complementary perspectives: either by considering unusual individuals, who have atypical bodies or uncommon experiences of interacting with the world; or by exploring individual differences in the general population that reflect the naturally occurring variability in embodied processes. Our editorial summarizes the contributions to this special issue and discusses the insights they offer. We conclude from this collection of papers that exploring differences in the recruitment and involvement of embodied processes can be highly informative, and can add an extra dimension to our understanding of spatial cognitive functions. Taking a broader perspective, it can also shed light on important theoretical and empirical questions concerning the nature of embodied cognition per se.
Leaf senescence is an active process required for plant survival, and it is flexibly controlled, allowing plant adaptation to environmental conditions. Although senescence is largely an age-dependent process, it can be triggered by environmental signals and stresses. Leaf senescence coordinates the breakdown and turnover of many cellular components, allowing a massive remobilization and recycling of nutrients from senescing tissues to other organs (e.g., young leaves, roots, and seeds), thus enhancing the fitness of the plant. Such metabolic coordination requires a tight regulation of gene expression. One important mechanism for the regulation of gene expression is at the transcriptional level via transcription factors (TFs). The NAC TF family (NAM, ATAF, CUC) includes various members that show elevated expression during senescence, including ORE1 (ANAC092/AtNAC2) among others. ORE1 was first reported in a screen for mutants with delayed senescence (oresara1, 2, 3, and 11). It was named after the Korean word “oresara,” meaning “long-living,” and abbreviated to ORE1, 2, 3, and 11, respectively. Although the pivotal role of ORE1 in controlling leaf senescence has recently been demonstrated, the underlying molecular mechanisms and the pathways it regulates are still poorly understood. To unravel the signaling cascade through which ORE1 exerts its function, we analyzed particular features of regulatory pathways up-stream and down-stream of ORE1. We identified characteristic spatial and temporal expression patterns of ORE1 that are conserved in Arabidopsis thaliana and Nicotiana tabacum and that link ORE1 expression to senescence as well as to salt stress. We proved that ORE1 positively regulates natural and dark-induced senescence. Molecular characterization of the ORE1 promoter in silico and experimentally suggested a role of the 5’UTR in mediating ORE1 expression. ORE1 is a putative substrate of a calcium-dependent protein kinase named CKOR (unpublished data). 
Promising data revealed a positive regulation of putative ORE1 targets by CKOR, suggesting the phosphorylation of ORE1 as a requirement for its regulation. Additionally, as part of the ORE1 up-stream regulatory pathway, we identified the NAC TF ATAF1 which was able to transactivate the ORE1 promoter in vivo. Expression studies using chemically inducible ORE1 overexpression lines and transactivation assays employing leaf mesophyll cell protoplasts provided information on target genes whose expression was rapidly induced upon ORE1 induction. First, a set of target genes was established and referred to as early responding in the ORE1 regulatory network. The consensus binding site (BS) of ORE1 was characterized. Analysis of some putative targets revealed the presence of ORE1 BSs in their promoters and the in vitro and in vivo binding of ORE1 to their promoters. Among these putative target genes, BIFUNCTIONAL NUCLEASE I (BFN1) and VND-Interacting2 (VNI2) were further characterized. The expression of BFN1 was found to be dependent on the presence of ORE1. Our results provide convincing data which support a role for BFN1 as a direct target of ORE1. Characterization of VNI2 in age-dependent and stress-induced senescence revealed ORE1 as a key up-stream regulator since it can bind and activate VNI2 expression in vivo and in vitro. Furthermore, VNI2 was able to promote or delay senescence depending on the presence of an activation domain located in its C-terminal region. The plasticity of this gene might include alternative splicing (AS) to regulate its function in different organs and at different developmental stages, particularly during senescence. A model is proposed on the molecular mechanism governing the dual role of VNI2 during senescence.
Unique properties of eukaryote-type actin and profilin horizontally transferred to cyanobacteria
(2012)
A eukaryote-type actin and its binding protein profilin encoded on a genomic island in the cyanobacterium Microcystis aeruginosa PCC 7806 co-localize to form a hollow, spherical enclosure occupying a considerable intracellular space, as shown by in vivo fluorescence microscopy. Biochemical and biophysical characterization reveals key differences between these proteins and their eukaryotic homologs. Small-angle X-ray scattering shows that the actin assembles into elongated, filamentous polymers which can be visualized microscopically with fluorescent phalloidin. Whereas rabbit actin forms thin cylindrical filaments about 100 mu m in length, cyanobacterial actin polymers resemble a ribbon, arrest polymerization at 5-10 mu m and tend to form irregular multi-strand assemblies. While eukaryotic profilin is a specific actin monomer binding protein, cyanobacterial profilin shows the unprecedented property of decorating actin filaments. Electron micrographs show that cyanobacterial profilin stimulates actin filament bundling and stabilizes their lateral alignment into heteropolymeric sheets, from which the observed hollow enclosure may be formed. We hypothesize that adaptation to the confined space of a bacterial cell devoid of the binding proteins that usually regulate actin polymerization in eukaryotes has driven the co-evolution of cyanobacterial actin and profilin, giving rise to an intracellular entity.
In industrialized economies such as the European countries, unemployment rates are very responsive to the business cycle, and significant shares of the unemployed stay unemployed for more than one year. To fight cyclical and long-term unemployment, countries spend significant shares of their budgets on Active Labor Market Policies (ALMP). To improve the allocation and design of ALMP, it is essential for policy makers to have reliable evidence on the effectiveness of such programs. Although the number of studies has increased during the last decades, policy makers still lack evidence on innovative programs and on specific subgroups of the labor market. Using Germany as a case study, the dissertation contributes by providing new evidence on start-up subsidies, marginal employment and programs for unemployed youth. The idea behind start-up subsidies is to encourage unemployed individuals to exit unemployment by starting their own business. Compared to traditional ALMP programs, these have the advantage that participants not only escape unemployment themselves but may also generate additional jobs for other individuals. Considering two distinct start-up subsidy programs, the dissertation adds three substantial aspects to the literature: First, the programs are effective in improving the employment and income situation of participants compared to non-participants in the long run. Second, the analysis of effect heterogeneity reveals that the programs are particularly effective for disadvantaged groups in the labor market, such as low-educated or low-qualified individuals, and in regions with unfavorable economic conditions. Third, the analysis considers the effectiveness of start-up programs for women. Due to stronger preferences for flexible working hours and limited part-time jobs, unemployed women often face more difficulties integrating into dependent employment. 
Start-up subsidy programs prove very promising here, as self-employment gives unemployed women more flexibility to reconcile work and family. Overall, the results suggest that the promotion of self-employment among the unemployed is a sensible strategy to fight unemployment by removing labor market barriers for disadvantaged groups and sustainably integrating them into the labor market. The next chapter of the dissertation considers the impact of marginal employment on labor market outcomes of the unemployed. Unemployed individuals in Germany are allowed to earn additional income during unemployment without suffering a reduction in their unemployment benefits. Those additional earnings are usually obtained through so-called marginal employment, that is, employment below a certain income level that is subject to reduced payroll taxes (also known as a “mini-job”). The dissertation provides an empirical evaluation of the impact of marginal employment on unemployment duration and subsequent job quality. The results suggest that being marginally employed during unemployment has no significant effect on unemployment duration but extends subsequent employment duration. Moreover, taking up marginal employment proves particularly effective for the long-term unemployed, leading to higher job-finding probabilities and stronger job stability. Mini-jobs can thus be an effective instrument to help long-term unemployed individuals find (stable) jobs, which is particularly interesting given the persistently high shares of long-term unemployed in European countries. Finally, the dissertation provides an empirical evaluation of the effectiveness of ALMP programs in improving the labor market prospects of unemployed youth. Youth are generally considered a population at risk, as they have weaker search skills and little work experience compared to adults. 
This results in above-average turnover between jobs and unemployment for youth, which is particularly sensitive to economic fluctuations. Therefore, countries spend significant resources on ALMP programs to fight youth unemployment. However, so far only little is known about the effectiveness of ALMP for unemployed youth, and with respect to Germany no comprehensive quantitative analysis exists at all. Considering seven different ALMP programs, the results show an overall positive picture with respect to post-treatment employment probabilities for all measures under scrutiny except for job creation schemes. With respect to effect heterogeneity, it can be shown that almost all programs particularly improve the labor market prospects of youths with high levels of pre-treatment schooling. Furthermore, youths who are assigned to the most successful employment measures have much better characteristics in terms of their pre-treatment employment chances than non-participants. The program assignment process thus seems to favor individuals for whom the measures are most beneficial, indicating a lack of ALMP alternatives that could benefit low-educated youths.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and to time wasted developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
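The core idea can be sketched as a token-wise lookup: split a label on underscores and expand each few-letter token via a mapping mined from existing workflows. The dictionary below is a hand-made illustration, not the authors' learned model, which additionally ranks candidate expansions recommender-style.

```python
# Minimal sketch of schema decryption: expand each underscore-separated
# abbreviation using a mapping (here hard-coded; in the real approach it
# would be mined from mapped attribute labels in existing ETL workflows).
ABBREVIATIONS = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "ACC": "ACCOUNT",
    "NO": "NUMBER",
}

def decrypt(label: str) -> str:
    """Replace each token by its expansion; unknown tokens pass through."""
    return "_".join(ABBREVIATIONS.get(tok, tok) for tok in label.split("_"))

print(decrypt("UNP_PEN_INT"))  # -> UNPAID_PENALTY_INTEREST
```

The hard part, which this sketch omits, is building the mapping itself: the same abbreviation may have several plausible expansions, so the approach must score candidates by how often they co-occur in the repository's existing attribute mappings.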
Empirical species distribution models (SDMs) often constitute the tool of choice for the assessment of rapid climate change effects on species vulnerability. Conclusions regarding extinction risks might be misleading, however, because SDMs do not explicitly incorporate dispersal or other demographic processes. Here, we supplement SDMs with a dynamic population model 1) to predict climate-induced range dynamics for black grouse in Switzerland, 2) to compare direct and indirect measures of extinction risks, and 3) to quantify uncertainty in predictions as well as the sources of that uncertainty. To this end, we linked models of habitat suitability to a spatially explicit, individual-based model. In an extensive sensitivity analysis, we quantified uncertainty in various model outputs introduced by different SDM algorithms, by different climate scenarios and by demographic model parameters. Potentially suitable habitats were predicted to shift uphill and eastwards. By the end of the 21st century, abrupt habitat losses were predicted in the western Prealps for some climate scenarios. In contrast, population size and occupied area were primarily controlled by currently negative population growth and gradually declined from the beginning of the century across all climate scenarios and SDM algorithms. However, predictions of population dynamic features were highly variable across simulations. Results indicate that inferring extinction probabilities simply from the quantity of suitable habitat may underestimate extinction risks because this may ignore important interactions between life history traits and available habitat. Also, in dynamic range predictions uncertainty in SDM algorithms and climate scenarios can become secondary to uncertainty in dynamic model components. Our study emphasises the need for principal evaluation tools like sensitivity analysis in order to assess uncertainty and robustness in dynamic range predictions. 
A more direct benefit of such robustness analysis is an improved mechanistic understanding of dynamic species responses to climate change.
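The sensitivity-analysis idea, running a population model repeatedly under sampled demographic parameters and recording the spread of the outputs, can be sketched in miniature. The logistic growth model and parameter ranges below are illustrative placeholders, not the study's spatially explicit, individual-based model.

```python
# Toy sensitivity analysis: sample demographic parameters, simulate a simple
# population model, and summarise the variability of the final population size.
import random

def simulate(growth_rate, capacity, years=50, n0=200.0):
    """Deterministic logistic growth over a fixed horizon (illustrative)."""
    n = n0
    for _ in range(years):
        n += growth_rate * n * (1 - n / capacity)
        n = max(n, 0.0)  # population size cannot go negative
    return n

random.seed(1)
finals = []
for _ in range(200):
    r = random.uniform(-0.05, 0.05)  # sampled annual growth rate
    k = random.uniform(300, 800)     # sampled habitat capacity
    finals.append(simulate(r, k))

mean = sum(finals) / len(finals)
spread = max(finals) - min(finals)
print(f"mean final population {mean:.0f}, spread across parameter draws {spread:.0f}")
```

A wide spread across parameter draws signals that the output (here, final population size) is sensitive to the uncertain demographic inputs, which is exactly the kind of diagnosis the abstract argues should accompany dynamic range predictions.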
Ultrasound evaluation of the patellar tendon cross-sectional area and its relation to maximum force
(2012)
High topography in eastern Tibet is thought to have formed when deep crust beneath the central Tibetan Plateau flowed towards the plateau margin, causing crustal thickening and surface uplift(1,2). Rapid exhumation starting about 10-15 million years ago is inferred to mark the onset of surface uplift and fluvial incision(3-6). Although geophysical data are consistent with weak crust capable of flow(7,8), it is unclear how the timing(9) and amount of deformation adjacent to the Sichuan Basin during the Cenozoic era can be explained in this way(10,11). Here we use thermochronology to measure the cooling histories of rocks exposed in a section that stretches vertically over 3 km adjacent to the Sichuan Basin. Our thermal models of exhumation-driven cooling show that these rocks, and hence the plateau margin, were subject to slow, steady exhumation during early Cenozoic time, followed by two pulses of rapid exhumation, one beginning 30-25 million years ago and a second 10-15 million years ago that continues to present. Our findings imply that significant topographic relief existed adjacent to the Sichuan Basin before the Indo-Asian collision. Furthermore, the onset of Cenozoic mountain building probably pre-dated development of the weak lower crust, implying that early topography was instead formed during thickening of the upper crust along faults. We suggest that episodes of mountain building may reflect distinct geodynamic mechanisms of crustal thickening.
A transient two-dimensional model describing degenerate four-wave mixing inside saturable gain media is presented. The new model is compared to existing one-dimensional models with their qualitative results confirmed. Large quantitative differences with respect to peak reflectivity and optimum pump fluence are observed. Furthermore, the influence of the beam focus size, the transverse position and the crossing angle on the reflectivity of the grating is investigated using the improved model. It is demonstrated that the phase conjugate reflectivity depends sensitively on the transverse features of the interacting beams with a transverse shift in the position of the pump beams yielding a threefold improvement in reflectivity. (C) 2012 Optical Society of America