This paper explores questions surrounding corporeality and heavenly ascent, in texts ranging from 1 Enoch to the Hekhalot literature, including Philo’s works. It examines both descriptions of the heavenly realms and accounts of the ascent process. Despite his Platonic apophaticism, Philo superimposes cosmological and spiritual heavens, and draws upon the biblical imagery of dazzling glory. Although they do not express themselves in philosophical language, the heavenly ascent texts make it clear that human beings cannot ascend to heaven in their earthly bodies, and that God cannot be seen with terrestrial eyes. In terms of ideas they are not so far from the philosopher Philo as might at first appear.
In many near-surface geophysical studies it is now common practice to collect co-located disparate geophysical data sets to explore subsurface structures. Reconstruction of physical parameter distributions underlying the available geophysical data sets usually requires the use of tomographic reconstruction techniques. To improve the quality of the obtained models, the information content of all data sets should be considered during the model generation process, e.g., by employing joint or cooperative inversion approaches. Here, we extend the zonal cooperative inversion methodology based on fuzzy c-means cluster analysis and conventional single-input data set inversion algorithms for the cooperative inversion of data sets with partially co-located model areas. This is done by considering recent developments in fuzzy c-means cluster analysis. Additionally, we show how supplementary a priori information can be incorporated in an automated fashion into the zonal cooperative inversion approach to further constrain the inversion. The only requirement is that this a priori information can be expressed numerically; e.g., by physical parameters or indicator variables. We demonstrate the applicability of the modified zonal cooperative inversion approach using synthetic and field data examples. In these examples, we cooperatively invert S- and P-wave traveltime data sets with partially co-located model areas using water saturation information expressed by indicator variables as additional a priori information. The approach results in a zoned multi-parameter model, which is consistent with all available information given to the zonal cooperative inversion and outlines the major subsurface units. In our field example, we further compare the obtained zonal model to sparsely available borehole and direct-push logs. This comparison provides further confidence in our zonal cooperative inversion model because the borehole and direct-push logs indicate a similar zonation.
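The fuzzy c-means clustering that underpins the zonal cooperative inversion can be sketched with the standard alternating update equations. This is a minimal illustration of the generic algorithm, not the authors' modified variant; the fuzziness exponent, iteration count, and toy data below are assumptions for demonstration only.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships; each row sums to 1
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        # fuzzy centroids: membership-weighted means
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distance of every sample to every centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# toy demonstration: two well-separated point clouds (illustrative data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(10, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)
```

In the cooperative setting, the feature vectors would be the physical parameters of co-located model cells (plus any numerically coded a priori information), and the cluster memberships would define the zones.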
When the mind wanders, attention turns away from the external environment and cognitive processing is decoupled from perceptual information. Mind wandering is usually treated as a dichotomy (dichotomy-hypothesis), and is often measured using self-reports. Here, we propose the levels of inattention hypothesis, which postulates attentional decoupling to graded degrees at different hierarchical levels of cognitive processing. To measure graded levels of attentional decoupling during reading we introduce the sustained attention to stimulus task (SAST), which is based on psychophysics of error detection. Under experimental conditions likely to induce mind wandering, we found that subjects were less likely to notice errors that required high-level processing for their detection as opposed to errors that only required low-level processing. Eye tracking revealed that before errors were overlooked influences of high- and low-level linguistic variables on eye fixations were reduced in a graded fashion, indicating episodes of mindless reading at weak and deep levels. Individual fixation durations predicted overlooking of lexical errors 5 s before they occurred. Our findings support the levels of inattention hypothesis and suggest that different levels of mindless reading can be measured behaviorally in the SAST. Using eye tracking to detect mind wandering online represents a promising approach for the development of new techniques to study mind wandering and to ameliorate its negative consequences.
The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing.
Parameters of a formal working-memory model were estimated for verbal and spatial memory updating of children. The model proposes interference through feature overwriting and through confusion of whole elements as the primary cause of working-memory capacity limits. We tested 2 age groups, each containing 1 group of normal intelligence and 1 deficit group. For young children the deficit was developmental dyslexia; for older children it was a general learning difficulty. The interference model predicts less interference through overwriting but more through confusion of whole elements for the dyslexic children than for their age-matched controls. Older children exhibited less interference through confusion of whole elements and a higher processing rate than young children, but general learning difficulty was associated with slower processing than in the age-matched control group. Furthermore, the difference between verbal and spatial updating mapped onto several meaningful dissociations of model parameters.
Projected scenarios of climate change involve general predictions about the likely changes to the magnitude and frequency of landslides, particularly as a consequence of altered precipitation and temperature regimes. Whether such landslide response to contemporary or past climate change may be captured in differing scaling statistics of landslide size distributions and the erosion rates derived thereof remains debated. We test this notion with simple Monte Carlo and bootstrap simulations of statistical models commonly used to characterize empirical landslide size distributions. Our results show that significant changes to total volumes contained in such inventories may be masked by statistically indistinguishable scaling parameters, depending critically on, among other factors, the size of the largest landslides recorded. Conversely, comparable model parameter values may obscure significant, i.e. more than twofold, changes to landslide occurrence, and thus inferred rates of hillslope denudation and sediment delivery to drainage networks. A time series of some of Earth's largest mass movements reveals clustering near and partly before the last glacial-interglacial transition and a distinct step-over from white noise to temporal clustering around this period. However, elucidating whether this is a distinct signal of first-order climate-change impact on slope stability or simply coincides with a transition from short-term statistical noise to long-term steady-state conditions remains an important research challenge.
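The kind of Monte Carlo/bootstrap experiment described can be sketched as follows: draw a synthetic landslide-size inventory from a Pareto tail, estimate the scaling exponent by maximum likelihood, and bootstrap a confidence interval for it. The exponent, lower cutoff, and sample size are illustrative assumptions, not values from the study.

```python
import numpy as np

def sample_powerlaw(n, alpha, xmin, rng):
    """Inverse-CDF sampling from a Pareto tail p(x) ~ x^-alpha for x >= xmin."""
    u = rng.random(n)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def mle_exponent(x, xmin):
    """Hill/maximum-likelihood estimator of the power-law exponent."""
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def bootstrap_exponent(x, xmin, n_boot=500, seed=0):
    """Percentile bootstrap 95% confidence interval for the exponent."""
    rng = np.random.default_rng(seed)
    est = [mle_exponent(rng.choice(x, size=len(x), replace=True), xmin)
           for _ in range(n_boot)]
    return np.percentile(est, [2.5, 97.5])

# synthetic inventory with an assumed true exponent of 2.4
rng = np.random.default_rng(42)
areas = sample_powerlaw(2000, alpha=2.4, xmin=1.0, rng=rng)
alpha_hat = mle_exponent(areas, xmin=1.0)
lo, hi = bootstrap_exponent(areas, xmin=1.0)
```

Because the total volume of such an inventory is dominated by its few largest events, two inventories with statistically indistinguishable exponent intervals can still differ substantially in total volume, which is the masking effect the abstract describes.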
This article studies proactive work behavior from a within-person perspective. Building on the broaden-and-build model and the mood-as-information approach, we hypothesized that negative trait affect and positive state affect predict the relative time spent on proactive behavior. Furthermore, based on self-determination theory we argued that persons want to feel competent and that proactive behavior is one way to experience competence. In an experience-sampling study, 52 employees responded to surveys 3 times a day for 5 days. Hierarchical linear modeling confirmed the hypotheses on trait and state affect. Analyses furthermore showed that although a higher level of experienced competence at core task activities was associated with a subsequent increase in time spent on these activities, low experienced competence predicted an increase in time spent on proactive behavior.
This is an introduction to Wiener measure and the Feynman-Kac formula on general Riemannian manifolds for Riemannian geometers with little or no background in stochastics. We explain the construction of Wiener measure based on the heat kernel in full detail and we prove the Feynman-Kac formula for Schrödinger operators with bounded potentials. We also consider normal Riemannian coverings and show that projecting and lifting of paths are inverse operations which respect the Wiener measure.
An experimental and computational study on the impact of functional groups on the oxidation stability of higher acenes is presented. We synthesized anthracenes, tetracenes, and pentacenes with various substituents at the periphery, identified their photooxygenation products, and measured the kinetics. Furthermore, the products obtained from thermolysis and the kinetics of the thermolysis are investigated. Density functional theory is applied in order to predict reaction energies, frontier molecular orbital interactions, and radical stabilization energies. The combined results allow us to describe the mechanisms of the oxidations and the subsequent thermolysis. We found that the alkynyl group not only enhances the oxidation stability of acenes but also protects the resulting endoperoxides from thermal decomposition. Additionally, such substituents increase the regioselectivity of the photooxygenation of tetracenes and pentacenes. For the first time, we oxidized alkynylpentacenes by using chemically generated singlet oxygen (¹O₂) without irradiation and identified a 6,13-endoperoxide as the sole regioisomer. The bimolecular rate constant of this oxidation amounts to only 1 × 10⁵ M⁻¹ s⁻¹. This unexpectedly slow reaction is a result of a physical deactivation of ¹O₂. In contrast to unsubstituted or aryl-substituted acenes, photooxygenation of alkynyl-substituted acenes proceeds most likely by a concerted mechanism, while the thermolysis is well explained by the formation of radical intermediates. Our results should be important for the future design of oxidation-stable acene-based semiconductors.
The dissertation examines the use of performance information by public managers. "Use" is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but which have thus far been disregarded. The second part models performance data use by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if we include other sources of "unsystematic" feedback in our analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitude of the managers and of their immediate peers. 2) Regular users of performance data surprisingly are not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take on ownership of performance information at an early stage in the measurement process are also more likely to use this data when it is reported to them. 4) Performance reports are only one source of information among many. Public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports.
The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
Over the last two decades, macroecology (the analysis of large-scale, multi-species ecological patterns and processes) has established itself as a major line of biological research. Analyses of statistical links between environmental variables and biotic responses have long and successfully been employed as a main approach, but new developments are due to be utilized. Scanning the horizon of macroecology, we identified four challenges that will probably play a major role in the future. We support our claims by examples and bibliographic analyses. 1) Integrating the past into macroecological analyses, e.g. by using paleontological or phylogenetic information or by applying methods from historical biogeography, will sharpen our understanding of the underlying reasons for contemporary patterns. 2) Explicit consideration of the local processes that lead to the observed larger-scale patterns is necessary to understand the fine-grain variability found in nature, and will enable better prediction of future patterns (e.g. under environmental change conditions). 3) Macroecology is dependent on large-scale, high-quality data from a broad spectrum of taxa and regions. More available data sources need to be tapped, and new, small-grain, large-extent data need to be collected. 4) Although macroecology has already led to the mainstreaming of cutting-edge statistical analysis techniques, more sophisticated methods are needed to account for the biases inherent in sampling at large scales. Bayesian methods may be particularly suitable for addressing these challenges. To continue the vigorous development of the macroecological research agenda, it is time to address these challenges and to avoid becoming too complacent with current achievements.
In the eighties, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of the large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent from gravity. Our analysis confirms the existence of gravity undulations at large scale in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific ocean, where they are particularly clear and associated with a conjugate direction at the 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate and had never been so clearly observed previously. This is due to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes.
Taking into account observations and models from other geophysical sources, such as seismological tomography, convection and geochemical models, and electrical conductivity in the mantle, we conclude that all these inputs indicate a directional fabric of mantle flow at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr's principle of complementarity, when applied to the paradigm of wave-particle dualism, that is, to Young's double-slit experiment, implies that information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion, in which we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light through our special choice of the TEM01 pump mode. According to quantum field theory, the signal photon is then in a coherent superposition of two distinct wave vectors, giving rise to interference fringes analogous to those from two mechanical slits.
Random copolymers of 4-vinylbenzyl tri(oxyethylene) and tetra(oxyethylene) ethers, as well as alternating copolymers of 4-vinylbenzyl methoxytetra(oxyethylene) ether and a series of N-substituted maleimides, were synthesised by conventional free radical polymerisation, reversible addition fragmentation chain transfer (RAFT) and atom transfer radical polymerisation (ATRP). Their thermosensitive behaviour in aqueous solution was studied by turbidimetry and dynamic light scattering. Depending on the copolymer composition, a LCST type phase transition was observed in water. The transition temperature of the obtained random as well as alternating copolymers could be varied within a broad temperature window. In the case of the random copolymers, transition temperatures could be easily fine-tuned, as they showed a linear dependence on the copolymer composition, and were additionally modified by the nature of the polymer end-groups. Alternating copolymers were extremely versatile for implementing a broad range of variations of the phase transition temperatures. Further, while alternating copolymers derived from 4-vinylbenzyl methoxytetra(oxyethylene) ether and maleimides with small hydrophobic side chains underwent macroscopic phase separation when dissolved in water and heated above their cloud point, the incorporation of maleimides bearing larger hydrophobic substituents resulted in the formation of mesoglobules above the phase transition temperature, with hydrodynamic diameters of less than 100 nm.
Indoor mesocosm experiments were conducted to test for potential climate change effects on the spring succession of Baltic Sea plankton. Two different temperature (Δ0 °C and Δ6 °C) and three light scenarios (62, 57 and 49% of the natural surface light intensity on sunny days), mimicking increasing cloudiness as predicted for warmer winters in the Baltic Sea region, were simulated. By combining experimental and modeling approaches, we were able to test for a potential dietary mismatch between phytoplankton and zooplankton. Two general predator-prey models, one representing the community as a tri-trophic food chain and one as a 5-guild food web, were applied to test for the consequences of different temperature sensitivities of heterotrophic components of the plankton. During the experiments, we observed reduced time-lags between the peaks of phytoplankton and protozoan biomass in response to warming. Microzooplankton peak biomass was reached 2.5 days °C⁻¹ earlier and occurred almost synchronously with biomass peaks of phytoplankton in the warm mesocosms (Δ6 °C). The peak magnitudes of microzooplankton biomass remained unaffected by temperature, and growth rates of microzooplankton were higher at Δ6 °C (μ(Δ0 °C) = 0.12 day⁻¹ and μ(Δ6 °C) = 0.25 day⁻¹). Furthermore, warming induced a shift in microzooplankton phenology leading to a faster species turnover and a shorter window of microzooplankton occurrence. Moderate differences in the light levels had no significant effect on the time-lags between autotrophic and heterotrophic biomass and on the timing, biomass maxima and growth rate of microzooplankton biomass. Both models predicted reduced time-lags between the biomass peaks of phytoplankton and its predators (both microzooplankton and copepods) with warming.
The reduction of time-lags increased with increasing Q₁₀ values of copepods and protozoans in the tri-trophic food chain. Indirect trophic effects modified this pattern in the 5-guild food web. Our study shows that instead of a mismatch, warming might lead to a stronger match between protist grazers and their prey, altering in turn the transfer of matter and energy toward higher trophic levels.
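A tri-trophic chain with Q10-scaled heterotroph rates, of the general kind used in the study, can be sketched as a simple forward-Euler simulation. All rate constants, efficiencies, and initial biomasses below are illustrative assumptions, not the parameterization of the paper; the sketch only reproduces the qualitative effect that warming shortens the lag between the phytoplankton and microzooplankton biomass peaks.

```python
import numpy as np

def food_chain_step(state, dt, temp_offset, q10=2.0):
    """One Euler step of a Rosenzweig-MacArthur-style tri-trophic chain:
    phytoplankton P, microzooplankton Z, copepods C. Heterotroph rates
    are scaled by Q10**(dT/10); all parameter values are illustrative."""
    P, Z, C = state
    f = q10 ** (temp_offset / 10.0)   # temperature scaling of heterotrophs
    r, K = 1.0, 2.0                   # phytoplankton growth rate, capacity
    gz, gc = 1.0 * f, 0.3 * f         # maximum grazing rates
    ez, ec = 0.5, 0.3                 # assimilation efficiencies
    mz, mc = 0.2 * f, 0.05 * f        # mortality rates
    graz_z = gz * P / (1.0 + P) * Z   # Holling type-II grazing on P
    graz_c = gc * Z / (1.0 + Z) * C   # Holling type-II grazing on Z
    dP = r * P * (1.0 - P / K) - graz_z
    dZ = ez * graz_z - graz_c - mz * Z
    dC = ec * graz_c - mc * C
    return (P + dt * dP, Z + dt * dZ, C + dt * dC)

def peak_lag(temp_offset, days=80.0, dt=0.01):
    """Days between the phytoplankton and microzooplankton biomass peaks."""
    state = (0.2, 0.05, 0.02)
    traj = []
    for _ in range(int(days / dt)):
        state = food_chain_step(state, dt, temp_offset)
        traj.append(state)
    traj = np.array(traj)
    return (traj[:, 1].argmax() - traj[:, 0].argmax()) * dt
```

In this toy setting, `peak_lag(6.0)` comes out smaller than `peak_lag(0.0)`: warming speeds up the grazers relative to the phytoplankton, tightening the match between protist grazers and their prey as described above.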
First language (L1) phonological categories strongly influence late learners' perception and production of second language (L2) categories. For learners who start learning an L2 early in life ("early learners"), this L1 influence appears to be substantially reduced or at least more variable. In this paper, we examine the age at which L1 vowel categories influence the acquisition of L2 vowels. We tested a child population with a very narrow range of age of first exposure, controlling for the use of L1 vs. L2, and various naturally produced contrasts that are not allophonic in the L1 of the children. An oddity discrimination task provided evidence that children who are native speakers of Turkish and began learning German as an L2 in kindergarten categorized difficult German contrasts differently from age-matched native speakers. Their vowel productions of these same contrasts (un-cued object naming) were mostly target-like.
Background: Given the huge impact of vitamin D deficiency on a broad spectrum of diseases such as rickets, osteoporosis, mineral bone disease-vascular calcification syndrome, infectious diseases, but also several types of cancer and CNS diseases, reliable and simple methods to analyze the vitamin D status are urgently needed.
Methods: We developed an easy technique to determine the 25-OH vitamin D status from dried blood samples on filter paper. This allows determination of the 25-OH vitamin D status independent of venous blood sampling, since only capillary blood is required for this new method. We compared the results of vitamin D measurements from venous blood of 96 healthy blood donors with those from capillary blood taken from the same donors at the same time. The capillary blood was dried on filter paper using the D-Vital ID dry-blood collection system.
Results: 25-OH vitamin D concentration data from extracted dried capillary blood filters correlated very well with data obtained from direct measurement of venous blood samples of the same blood donor (R = 0.7936; p < 0.0001). The correlation was linear over the whole range of 25-OH vitamin D concentrations seen in this study. A Bland-Altman plot revealed good agreement between both tests.
Conclusions: The D-Vital ID dry-blood collection system showed excellent performance compared to the classical 25-OH vitamin D measurement from venous blood. This new technique will facilitate easy and reliable measurement of vitamin D status, in particular in rural or isolated areas, developing countries, and field studies.
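The agreement statistics mentioned above (Pearson correlation and a Bland-Altman analysis) can be computed in a few lines. The paired readings below are synthetic and purely illustrative, not the study's data.

```python
import numpy as np

def agreement_stats(venous, capillary):
    """Pearson correlation plus Bland-Altman bias and 95% limits of agreement."""
    venous = np.asarray(venous, dtype=float)
    capillary = np.asarray(capillary, dtype=float)
    r = np.corrcoef(venous, capillary)[0, 1]
    diff = capillary - venous          # method difference per subject
    bias = diff.mean()                 # Bland-Altman mean bias
    sd = diff.std(ddof=1)
    return r, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative paired 25-OH vitamin D readings (nmol/L), not study data
rng = np.random.default_rng(7)
venous = rng.uniform(20, 120, 96)
capillary = venous + rng.normal(0, 6, 96)   # small random disagreement
r, bias, (loa_lo, loa_hi) = agreement_stats(venous, capillary)
```

The Bland-Altman limits of agreement (bias ± 1.96 SD of the differences) are what "good agreement between both tests" refers to: most paired differences should fall inside them without systematic drift.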
We report on sub-wavelength structuring of photosensitive azo-containing polymer films induced by a surface plasmon interference intensity pattern. The two surface plasmon waves generated at neighboring nano-slits in the metal layer during irradiation interfere constructively, resulting in an intensity pattern with a periodicity three times smaller than the wavelength of the incoming light. The near field pattern interacts with the photosensitive polymer film placed above it, leading to a topography change which follows the intensity pattern exactly, resulting in the formation of surface relief gratings of a size below the diffraction limit. We analyze numerically and experimentally how the depth of the nano-slit alters the interference pattern of surface plasmons and find that the sub-wavelength patterning of the polymer surface could be optimized by modifying the geometry and the size of the nano-slit.
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
We report on very high energy (E > 100 GeV) gamma-ray observations of V407 Cygni, a symbiotic binary that underwent a nova outburst producing 0.1-10 GeV gamma rays during 2010 March 10-26. Observations were made with the Very Energetic Radiation Imaging Telescope Array System during 2010 March 19-26 at relatively large zenith angles due to the position of V407 Cyg. An improved reconstruction technique for large zenith angle observations is presented and used to analyze the data. We do not detect V407 Cygni and place a differential upper limit on the flux at 1.6 TeV of 2.3 × 10⁻¹² erg cm⁻² s⁻¹ (at the 95% confidence level). When considered jointly with data from Fermi-LAT, this result places limits on the acceleration of very high energy particles in the nova.
We report on VERITAS very high energy (VHE; E >= 100 GeV) observations of six blazars selected from the Fermi Large Area Telescope First Source Catalog (1FGL). The gamma-ray emission from 1FGL sources was extrapolated up to the VHE band, taking gamma-ray absorption by the extragalactic background light into account. This allowed the selection of six bright, hard-spectrum blazars that were good candidate TeV emitters. Spectroscopic redshift measurements were attempted with the Keck Telescope for the targets without Sloan Digital Sky Survey spectroscopic data. No VHE emission is detected during the observations of the six sources described here. Corresponding TeV upper limits are presented, along with contemporaneous Fermi observations and non-concurrent Swift UVOT and X-Ray Telescope data. The blazar broadband spectral energy distributions (SEDs) are assembled and modeled with a single-zone synchrotron self-Compton model. The SED built for each of the six blazars shows a synchrotron peak bordering between the intermediate-and high-spectrum-peak classifications, with four of the six resulting in particle-dominated emission regions.
VERITAS has been monitoring the very-high-energy (VHE; > 100 GeV) gamma-ray activity of the radio galaxy M87 since 2007. During 2008, flaring activity on a timescale of a few days was observed with a peak flux of (0.70 ± 0.16) × 10⁻¹¹ cm⁻² s⁻¹ at energies above 350 GeV. In 2010 April, VERITAS detected a flare from M87 with a peak flux of (2.71 ± 0.68) × 10⁻¹¹ cm⁻² s⁻¹ for E > 350 GeV. The source was observed for six consecutive nights during the flare, resulting in a total of 21 hr of good-quality data. The most rapid flux variation occurred on the trailing edge of the flare, with an exponential flux decay time of 0.90 (+0.22/−0.15) days. The shortest detected exponential rise time is three times as long, at 2.87 (+1.65/−0.99) days. The quality of the data sample is such that spectral analysis can be performed for three periods: rising flux, peak flux, and falling flux. The spectra obtained are consistent with power-law forms. The spectral index at the peak of the flare is 2.19 ± 0.07. There is some indication that the spectrum is softer in the falling phase of the flare than in the peak phase, with a confidence level corresponding to 3.6 standard deviations. We discuss the implications of these results for the acceleration and cooling rates of VHE electrons in M87 and the constraints they provide on the physical size of the emitting region.
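The quoted exponential decay time corresponds to fitting F(t) = F0·exp(−t/τ) to the falling edge of the light curve. Below is a minimal least-squares sketch on synthetic fluxes; the time grid, noise level, and flux normalization are illustrative assumptions, not VERITAS data.

```python
import numpy as np

def fit_decay_time(t, flux):
    """Estimate the e-folding decay time tau by a linear least-squares fit
    of log(flux) = log(F0) - t/tau."""
    slope, intercept = np.polyfit(t, np.log(flux), 1)
    return -1.0 / slope, np.exp(intercept)

# synthetic falling edge: true tau = 0.9 d, mild multiplicative noise
rng = np.random.default_rng(3)
t = np.linspace(0.0, 3.0, 12)                       # days after the peak
flux = 2.7e-11 * np.exp(-t / 0.9) * rng.normal(1.0, 0.02, t.size)
tau, f0 = fit_decay_time(t, flux)
```

Fitting in log space assumes roughly multiplicative errors; with the measured flux uncertainties, a weighted nonlinear fit would be used instead, which is how the asymmetric error bars on τ arise.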
The VERITAS array of Cherenkov telescopes has carried out a deep observational program on the nearby dwarf spheroidal galaxy Segue 1. We report on the results of nearly 48 hours of good-quality selected data, taken between January 2010 and May 2011. No significant gamma-ray emission is detected at the nominal position of Segue 1, and upper limits on the integrated flux are derived. According to recent studies, Segue 1 is the most dark-matter-dominated dwarf spheroidal galaxy currently known. We derive stringent bounds on various annihilating and decaying dark matter particle models. The upper limits on the velocity-weighted annihilation cross-section are ⟨σv⟩ (95% CL) ≲ 10⁻²³ cm³ s⁻¹, improving our limits from previous observations of dwarf spheroidal galaxies by at least a factor of 2 for dark matter particle masses m_χ ≳ 300 GeV. The lower limits on the decay lifetime are at the level of τ (95% CL) ≳ 10²⁴ s. Finally, we address the interpretation of the cosmic-ray lepton anomalies measured by ATIC and PAMELA in terms of dark matter annihilation, and show that the VERITAS observations of Segue 1 disfavor such a scenario.
We report on a new three-color FRET system consisting of three fluorescent dyes: a carbostyril (= quinolin-2(1H)-one)-derived donor D, a (bathophenanthroline)ruthenium complex as a relay chromophore A1, and a Cy dye as A2 (FRET = Förster resonance energy transfer) (cf. Fig. 1). With their widely matching spectroscopic properties (cf. Fig. 2), the combination of these dyes yielded excellent FRET efficiencies. Furthermore, fluorescence lifetime measurements revealed that the long fluorescence lifetime of the Ru complex was transferred to the Cy dye, offering the possibility of measuring the whole system in a time-resolved mode. The FRET system was established on double-stranded DNA (cf. Fig. 3), but it should also be generally applicable to other biomolecules.
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lian…dou. We show that these constructions have the same internal syntax as regular clefts and lian…dou sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lian…dou constructions, respectively, rather than as independent constructions. Additionally, we also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
Velocity and displacement correlation functions for fractional generalized Langevin equations
(2012)
We study analytically a generalized fractional Langevin equation. General formulas for the calculation of variances and the mean square displacement are derived. Cases with a three-parameter Mittag-Leffler frictional memory kernel are considered. Exact results in terms of Mittag-Leffler-type functions for the relaxation functions, average velocity and average particle displacement are obtained. The mean square displacement and variances are investigated analytically. Asymptotic behaviors of the particle in the short and long time limits are found. The model considered in this paper may be used for modeling anomalous diffusive processes in complex media, including phenomena similar to single-file diffusion or possible generalizations thereof. We show the importance of the initial conditions for the anomalous diffusive behavior of the particle.
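The setup summarized above can be written out explicitly. A common form of such a generalized Langevin equation with a three-parameter Mittag-Leffler (Prabhakar) memory kernel reads (the notation here is generic and not necessarily that of the paper):

```latex
\ddot{x}(t) + \int_0^t \gamma(t-t')\,\dot{x}(t')\,\mathrm{d}t' = \xi(t),
\qquad
\gamma(t) \propto t^{\beta-1}\,E_{\alpha,\beta}^{\delta}\!\left(-(t/\tau)^{\alpha}\right),
```

where the three-parameter Mittag-Leffler function is

```latex
E_{\alpha,\beta}^{\delta}(z) = \sum_{k=0}^{\infty}
\frac{(\delta)_k}{\Gamma(\alpha k+\beta)}\,\frac{z^k}{k!},
\qquad (\delta)_k = \frac{\Gamma(\delta+k)}{\Gamma(\delta)} .
```

Setting δ = 1 recovers the two-parameter Mittag-Leffler function E_{α,β}, and additionally β = 1 gives the classical one-parameter case E_α, so the three-parameter kernel interpolates between familiar fractional friction models.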
Objectives: The purpose of this study was to determine the dependence of breast tissue elasticity on the menstrual cycle of healthy volunteers by means of real-time sonoelastography.
Methods: Twenty-two healthy volunteers (aged 18-33 years) were examined once weekly during two consecutive menstrual cycles using sonoelastography. Group 1 (n = 10) was not taking hormonal medication; group 2 (n = 12) was taking oral contraceptives.
Results: The breast parenchyma appeared softer than the dermis and harder than the adipose tissue, and elasticity varied over the menstrual cycle and between groups. Group 1 (no hormone intake) showed continuously increasing elasticity, with relatively soft breast parenchyma in the menstrual and follicular phases and harder parenchyma in the luteal phase (P = .012). Group 2 (oral contraceptives) showed no statistically significant changes in breast parenchymal elasticity according to sonoelastography. The parenchyma was generally softer in group 1 than in group 2 throughout the menstrual cycle (P = .033). The dermis, the subcutaneous adipose tissue, and the pectoralis major muscle showed no changes in elasticity. Comparison of measurements made during the first and the second menstrual cycles showed similar patterns of elasticity in both groups.
Conclusions: Sonoelastography is a reproducible method that can be used to determine the dependence of breast parenchyma elasticity on the menstrual cycle and on the intake of hormonal contraceptives.
We study the dispersion interaction of the van der Waals and Casimir-Polder (vdW-CP) type between a neutral atom and the surface of a conductor by allowing for nonlocal electrodynamics, i.e. electron diffusion. We consider two models: (i) bulk diffusion, and (ii) diffusion in a surface charge layer. In both cases, we find that the transition to a semiconductor as a function of the conductivity is continuous, unlike the case of a local model. The relevant parameter is the electric screening length and depends on the carrier diffusion constant. We find that for distances comparable to the screening length, vdW-CP data can distinguish between bulk and surface diffusion, hence it can be a sensitive probe for surface states.
Correlation functions of a driven two-level system embedded in a photonic crystal are analyzed. The spectral density of the photonic bands near a gap makes this system non-Markovian. The equations of motion for two-time correlations are derived by two different methods, the quantum regression theorem and the fluctuation-dissipation theorem, and found to be the same.
Background: Isokinetic measurements are widely used to assess strength capacity in a clinical or research context. Nevertheless, the validity of isokinetic measures for identifying strength deficits and for evaluating therapeutic progress in different pathologies is yet to be established. Therefore, the purpose of this review is to evaluate the validity of isokinetic measures in a specific case: that of muscular capacity in low back pain (LBP).
Methods: A literature search (PubMed; ISI Web of Knowledge; The Cochrane Library) covering the last 10 years was performed. Relevant papers regarding isokinetic trunk strength measures in healthy subjects and patients with low back pain (PLBP) were retrieved. Peak torque values [Nm] and peak torque normalized to body weight [Nm/kg BW] were extracted for healthy subjects and PLBP. Ranked mean values across studies were calculated for the concentric peak torque at 60 degrees/s as well as for the flexion/extension (F/E) ratio.
Results: 34 publications (31 flexion/extension; 3 rotation) were suitable for reporting detailed isokinetic strength measures in healthy subjects or PLBP (untrained adults, adolescents, athletes). Adolescents and athletes differed from untrained adults in terms of absolute trunk strength values and the F/E ratio. Furthermore, isokinetic measures evaluating therapeutic progress and isokinetic rehabilitation training were infrequent in the literature (8 studies).
Conclusion: Isokinetic measurements are valid for measuring trunk flexion/extension strength and the F/E ratio in athletes, adolescents and (untrained) adults with or without LBP. The validity of trunk rotation measures is questionable due to the very small number of publications, whereas no reliable source regarding lateral flexion could be traced. Therefore, isokinetic dynamometry may be utilized for identifying trunk strength deficits in healthy adults and PLBP.
Background. Despite considerable progress made in the past decade through salt iodization programs, over 2 billion people worldwide still have inadequate iodine intake, with devastating consequences for brain development and intellectual capacity. To optimize these programs, careful monitoring of salt iodine content is essential, but few methods are available to measure iodine concentration quantitatively in a simple, fast, and safe way.
Objective. We have validated a newly developed device that quantitatively measures the content of potassium iodate in salt in a simple, safe, and rapid way.
Methods. The linearity, determination and detection limit, and inter- and intra-assay variability of this colorimetric method were assessed and the method was compared with iodometric titration, using salt samples from several countries.
Results. Linearity of analysis ranged from 5 to 75 mg/kg iodine, with 1 mg/kg being the determination limit; the intra- and interassay imprecision was 0.9%, 0.5%, and 0.7% and 1.5%, 1.7%, and 2.5% for salt samples with iodine contents of 17, 30, and 55 mg/kg, respectively; the interoperator imprecision for the same samples was 1.2%, 4.9%, and 4.7%, respectively. Comparison with the iodometric method showed high agreement between the methods (R^2 = 0.978; limits of agreement, -10.5 to 10.0 mg/kg).
Conclusions. The device offers a field- and user-friendly solution for reliably quantifying the potassium iodate content of salt. For countries that use potassium iodide in salt iodization programs, further validation is required.
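The "limits of agreement" quoted in the Results are the Bland-Altman 95% limits for a method comparison. A minimal sketch of that computation, using made-up paired iodine readings rather than the study data:

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two measurement methods.

    a, b : paired measurements (e.g. new device vs. iodometric titration)
    Returns (mean_difference, lower_limit, upper_limit).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n                                  # bias
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Hypothetical paired iodine readings (mg/kg), not the study's data
device    = [17.1, 29.5, 55.4, 33.0, 44.8]
titration = [16.8, 30.1, 54.9, 33.5, 44.2]
bias, lo, hi = limits_of_agreement(device, titration)
```

Roughly 95% of the between-method differences are expected to fall inside (lo, hi); the study's interval of -10.5 to 10.0 mg/kg was obtained in this way.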
Background: beta-Carotene is an important precursor of vitamin A, and is associated with bovine fertility. beta-Carotene concentrations in plasma are used to optimize beta-carotene supplementation in cattle, but measurement requires specialized equipment to separate plasma and extract and measure beta-carotene, either using spectrophotometry or high performance liquid chromatography (HPLC).
Objective: The objective of this study was to validate a new 2-step point-of-care (POC) assay for measuring beta-carotene in whole blood and plasma.
Methods: beta-carotene concentrations in plasma from 166 cows were measured using HPLC and compared with results obtained using a POC assay, the iCheck-iEx-Carotene test kit. Whole blood samples from 23 of these cattle were also evaluated using the POC assay and compared with HPLC-plasma results from the same 23 animals. The POC assay includes an extraction vial (iEx Carotene) and hand-held photometer (iCheck Carotene).
Results: Concentrations of beta-carotene in plasma measured using the POC assay ranged from 0.40 to 15.84 mg/L (n = 166). No differences were observed between methods for assay of plasma (mean +/- SD; n = 166): HPLC-plasma 4.23 +/- 2.35 mg/L; POC-plasma 4.49 +/- 2.36 mg/L. Similar good agreement was found when plasma analyzed using HPLC was compared with whole blood analyzed using the POC system (n = 23): HPLC-plasma 3.46 +/- 2.12 mg/L; POC-whole blood 3.67 +/- 2.29 mg/L.
Conclusions: Concentrations of beta-carotene can be measured in blood and plasma from cattle easily and rapidly using a POC assay, and results are comparable to those obtained by the highly sophisticated HPLC method. Immediate feedback regarding beta-carotene deficiency facilitates rapid and appropriate optimization of beta-carotene supplementation in feed.
The aims of this study were to identify areas of wind erosion and dust deposition and to quantify the effects of different grazing intensities on soil redistribution rates in grasslands based on the Cs-137 technique. Because the method uses a reference inventory as the threshold between erosion and deposition, the classification of any other site as a source or sink for dust depends on the accurate selection of this reference site.
Measurements of Cs-137 inventories and depth distributions were carried out at pasture sites with predominant species of Stipa grandis and Leymus chinensis which are grazed with different intensities. Additional measurements were made at arable land, plant-covered sand dunes and alluvial plains. Wind-induced soil erosion and dust deposition rates were calculated from Cs-137 inventories by means of the "Profile-Distribution" and the "Mass Balance II" models.
The selection of the reference site was based on fluid-dynamical and process-determining parameters. The chosen site should meet the following four conditions: (i) located at a summit position with obviously low deposition rates, (ii) sufficient vegetation cover to prevent wind erosion, (iii) plane, to exclude water erosion, and (iv) in the wind/dust shadow of a higher elevation. The measured reference inventory of Cs-137 was 1967 (± 102) Bq m^-2 at a summit position of moderately grazed Leymus chinensis steppe. The Cs-137 inventories at other sites ranged from 1330 Bq m^-2 at heavily grazed sites to 5119 Bq m^-2 at river deposits, representing annual average soil losses of up to 130 t km^-2 and deposits of up to 540 t km^-2, respectively. The calculated annual averages of dust deposition at ungrazed Leymus chinensis sites were related to the dust storm frequencies of the last 50 years, resulting in a description of the temporal variability of annual dust deposition from about 154 t km^-2 in the 1960s to 26 t km^-2 in recent times. Based on this quantification, 80% of the total dust deposition can be attributed to the roughly 20 years between the 1960s and the end of the 1970s, and only 20% to the period between 1980 and 2001.
The Cs-137 technique is a promising method to assess the effect of grazing intensity and land use type on the spatial variability of wind-induced soil and dust redistribution processes in semi-arid grasslands. However, considerable effort is needed to identify a reliable reference site, because wind-induced erosion and deposition may occur at the same places. The combination of the dust deposition rates derived from Cs-137 profile data with the dust storm frequencies is helpful for a better reconstruction of the temporal variability of dust deposition and wind erosion in this region. The calculated recent deposition rates of about 20 t km^-2 are in good agreement with data from other authors.
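For uncultivated sites, the "Profile-Distribution" model mentioned above converts a site's Cs-137 inventory deficit relative to the reference inventory into a mean annual soil loss. A sketch of that conversion (following the commonly used Walling-He formulation; the profile shape factor h0 and the example inventory pairing are assumptions for illustration, not values from the study):

```python
import math

def erosion_rate(inv, inv_ref, h0, year):
    """Profile-distribution model (after Walling & He) for uncultivated soil.

    inv     : measured Cs-137 inventory at the site (Bq m^-2)
    inv_ref : reference inventory (Bq m^-2)
    h0      : profile shape factor (kg m^-2), site-specific -- assumed here
    year    : sampling year (main fallout reference year is 1963)
    Returns mean annual soil loss in t ha^-1 yr^-1 (positive = erosion).
    Note: 1 t ha^-1 yr^-1 = 100 t km^-2 yr^-1.
    """
    x = 100.0 * (inv_ref - inv) / inv_ref        # percent inventory reduction
    return -10.0 / (year - 1963) * h0 * math.log(1.0 - x / 100.0)

# Hypothetical heavily grazed site: inventory 1330 Bq m^-2 vs. reference 1967,
# assumed shape factor h0 = 4 kg m^-2, sampled in 2001
loss = erosion_rate(1330.0, 1967.0, 4.0, 2001)
```

A positive result indicates net erosion; sites with inventories above the reference (e.g. the river deposits) would yield a negative value, i.e. net deposition.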
1. The polyunsaturated fatty acid eicosapentaenoic acid (EPA) plays an important role in aquatic food webs, in particular at the primary producer-consumer interface, where keystone species such as daphnids may be constrained by its dietary availability. Such constraints and their seasonal and interannual changes may be detected by continuous measurements of EPA concentrations. However, such EPA measurements became common only during the last two decades, whereas long-term data sets on plankton biomass are available for many well-studied lakes. Here, we test whether it is possible to estimate EPA concentrations from abiotic variables (light and temperature) and the biomass of prey organisms (e.g. ciliates, diatoms and cryptophytes) that potentially provide EPA for consumers. 2. We used multiple linear regression to relate size- and taxonomically resolved plankton biomass data and measurements of temperature and light intensity to directly measured EPA concentrations in Lake Constance during a whole year. First, we tested the predictability of EPA concentrations from the biomass of EPA-rich organisms (diatoms, cryptophytes and ciliates). Secondly, we included the variables mean temperature and mean light intensity over the sampling depth (0-20 m) and depth (0-8 and 8-20 m) as factors in our model to check for large-scale seasonal- and depth-dependent effects on EPA concentrations. In a third step, we included the deviations of light and temperature from mean values in our model to allow for their potential influence on the biochemical composition of plankton organisms. We used the Akaike Information Criterion to determine the best models. 3. All approaches supported our proposition that the biomasses of specific plankton groups are variables from which seston EPA concentrations can be derived.
The importance of ciliates as an EPA source in the seston was emphasised by their high weight in our models, although ciliates are neglected in most studies that link fatty acids to seston taxonomic composition. The large-scale seasonal variability of light intensity and its interaction with diatom biomass were significant predictors of EPA concentrations. The deviation of temperature from mean values, accounting for a depth-dependent effect on EPA concentrations, and its interaction with ciliate biomass were also variables with high predictive power. 4. The best models from the first and second approaches were validated with measurements of EPA concentrations from another year (1997). The estimation with the best model including only biomass explained 80%, and the best model from the second approach including mean temperature and depth explained 87% of the variability in EPA concentrations in 1997. 5. We show that it is possible to predict EPA concentrations reliably from plankton biomass, while the inclusion of abiotic factors led to results that were only partly consistent with expectations from laboratory studies. Our approach of including biotic predictors should be transferable to other systems and allow checking for biochemical constraints on primary consumers.
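The model-selection procedure described above, fitting candidate regressions and ranking them by the Akaike Information Criterion (AIC), can be sketched as follows, here with single-predictor models and invented data (the study used multiple predictors and real biomass measurements):

```python
import math

def ols_fit(x, y):
    """Simple least-squares fit y = a + b*x; returns (a, b, residual sum of squares)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    a = ym - b * xm
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

def aic(rss, n, k):
    """AIC for a Gaussian regression with k estimated parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical seston data: EPA concentration vs. two candidate predictors
epa      = [1.2, 2.5, 3.1, 4.0, 2.2, 3.6, 1.8, 2.9]
ciliates = [0.5, 1.3, 1.6, 2.1, 1.0, 1.9, 0.8, 1.5]   # biomass, tracks EPA closely
light    = [10., 40., 20., 35., 15., 30., 25., 12.]   # weak predictor here

models = {}
for name, pred in [("ciliates", ciliates), ("light", light)]:
    _, _, rss = ols_fit(pred, epa)
    models[name] = aic(rss, len(epa), k=3)  # intercept, slope, error variance
best = min(models, key=models.get)          # lowest AIC wins
```

Lower AIC indicates a better trade-off between fit and parameter count; in this toy example the ciliate-biomass model wins, mirroring the high weight of ciliates in the study's models.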
Whereas the US President signed the Kyoto Protocol, the failure of the US Congress to ratify it seriously hampered subsequent international climate cooperation. This recent US trend, of signing environmental treaties but failing to ratify them, could thwart attempts to come to a future climate agreement. Two complementary explanations of this trend are proposed. First, the political system of the US has distinct institutional features that make it difficult for presidents to predict whether the Senate will give its advice and consent to multilateral environmental agreements (MEAs) and whether Congress will pass the required enabling legislation. Second, elected for a fixed term, US presidents might benefit politically from supporting MEAs even when knowing that legislative support is not forthcoming. Four policy implications are explored, concerning the scope for unilateral presidential action, the potential for bipartisan congressional support, the effectiveness of a treaty without the US, and the prospects for a deep, new climate treaty.
Policy relevance
Why does the failure of US ratification of multilateral environmental treaties occur? This article analyses the domestic political mechanisms involved in cases of failed US ratification. US non-participation in global environmental institutions often has serious ramifications. For example, it sharply limited Kyoto's effectiveness and seriously hampered international climate negotiations for years. Although at COP 17 in Durban the parties agreed to negotiate a new agreement by 2015, a new global climate treaty may well trigger a situation resembling the one President Clinton faced in 1997 when he signed Kyoto but never obtained support for it in the Senate. US failure to ratify could thwart future climate agreements.
The conformational analysis of the first representative of the Si-alkoxy-substituted six-membered Si,N-heterocycles, 1,3-dimethyl-3-isopropoxy-3-silapiperidine, was performed by low-temperature 1H and 13C NMR spectroscopy and DFT calculations. In contrast to expectations based on the conformational energies of methyl and alkoxy substituents, the Me(ax)/i-PrO(eq) conformer was found to predominate in the conformational equilibrium, with a ratio Me(ax)/i-PrO(eq) : Me(eq)/i-PrO(ax) of ca. 2 : 1 according to the 1H and 13C NMR study. The thermodynamic parameters obtained by complete line-shape analysis showed that the main contribution to the barrier to ring inversion originates from the entropy term of the free energy of activation.
This editorial introduces a set of papers on differential embodiment in spatial tasks. According to the theoretical notion of embodied cognition, our experiences of acting in the world, and the constraints of our sensory and motor systems, strongly shape our cognitive functions. In the current set of papers, the authors were asked to particularly consider idiosyncratic or differential embodied cognition in the context of spatial tasks and processes. In each contribution, differential embodiment is considered from one of two complementary perspectives: either by considering unusual individuals, who have atypical bodies or uncommon experiences of interacting with the world; or by exploring individual differences in the general population that reflect the naturally occurring variability in embodied processes. Our editorial summarizes the contributions to this special issue and discusses the insights they offer. We conclude from this collection of papers that exploring differences in the recruitment and involvement of embodied processes can be highly informative, and can add an extra dimension to our understanding of spatial cognitive functions. Taking a broader perspective, it can also shed light on important theoretical and empirical questions concerning the nature of embodied cognition per se.
Leaf senescence is an active process required for plant survival, and it is flexibly controlled, allowing plant adaptation to environmental conditions. Although senescence is largely an age-dependent process, it can be triggered by environmental signals and stresses. Leaf senescence coordinates the breakdown and turnover of many cellular components, allowing a massive remobilization and recycling of nutrients from senescing tissues to other organs (e.g., young leaves, roots, and seeds), thus enhancing the fitness of the plant. Such metabolic coordination requires a tight regulation of gene expression. One important mechanism for the regulation of gene expression is at the transcriptional level via transcription factors (TFs). The NAC TF family (NAM, ATAF, CUC) includes various members that show elevated expression during senescence, including ORE1 (ANAC092/AtNAC2) among others. ORE1 was first reported in a screen for mutants with delayed senescence (oresara1, 2, 3, and 11). It was named after the Korean word “oresara,” meaning “long-living,” and abbreviated to ORE1, 2, 3, and 11, respectively. Although the pivotal role of ORE1 in controlling leaf senescence has recently been demonstrated, the underlying molecular mechanisms and the pathways it regulates are still poorly understood. To unravel the signaling cascade through which ORE1 exerts its function, we analyzed particular features of regulatory pathways up-stream and down-stream of ORE1. We identified characteristic spatial and temporal expression patterns of ORE1 that are conserved in Arabidopsis thaliana and Nicotiana tabacum and that link ORE1 expression to senescence as well as to salt stress. We proved that ORE1 positively regulates natural and dark-induced senescence. Molecular characterization of the ORE1 promoter in silico and experimentally suggested a role of the 5’UTR in mediating ORE1 expression. ORE1 is a putative substrate of a calcium-dependent protein kinase named CKOR (unpublished data). 
Promising data revealed a positive regulation of putative ORE1 targets by CKOR, suggesting the phosphorylation of ORE1 as a requirement for its regulation. Additionally, as part of the ORE1 up-stream regulatory pathway, we identified the NAC TF ATAF1 which was able to transactivate the ORE1 promoter in vivo. Expression studies using chemically inducible ORE1 overexpression lines and transactivation assays employing leaf mesophyll cell protoplasts provided information on target genes whose expression was rapidly induced upon ORE1 induction. First, a set of target genes was established and referred to as early responding in the ORE1 regulatory network. The consensus binding site (BS) of ORE1 was characterized. Analysis of some putative targets revealed the presence of ORE1 BSs in their promoters and the in vitro and in vivo binding of ORE1 to their promoters. Among these putative target genes, BIFUNCTIONAL NUCLEASE I (BFN1) and VND-Interacting2 (VNI2) were further characterized. The expression of BFN1 was found to be dependent on the presence of ORE1. Our results provide convincing data which support a role for BFN1 as a direct target of ORE1. Characterization of VNI2 in age-dependent and stress-induced senescence revealed ORE1 as a key up-stream regulator since it can bind and activate VNI2 expression in vivo and in vitro. Furthermore, VNI2 was able to promote or delay senescence depending on the presence of an activation domain located in its C-terminal region. The plasticity of this gene might include alternative splicing (AS) to regulate its function in different organs and at different developmental stages, particularly during senescence. A model is proposed on the molecular mechanism governing the dual role of VNI2 during senescence.
Unique properties of eukaryote-type actin and profilin horizontally transferred to cyanobacteria
(2012)
A eukaryote-type actin and its binding protein profilin encoded on a genomic island in the cyanobacterium Microcystis aeruginosa PCC 7806 co-localize to form a hollow, spherical enclosure occupying a considerable intracellular space, as shown by in vivo fluorescence microscopy. Biochemical and biophysical characterization reveals key differences between these proteins and their eukaryotic homologs. Small-angle X-ray scattering shows that the actin assembles into elongated, filamentous polymers which can be visualized microscopically with fluorescent phalloidin. Whereas rabbit actin forms thin cylindrical filaments about 100 μm in length, cyanobacterial actin polymers resemble a ribbon, arrest polymerization at 5-10 μm and tend to form irregular multi-strand assemblies. While eukaryotic profilin is a specific actin monomer-binding protein, cyanobacterial profilin shows the unprecedented property of decorating actin filaments. Electron micrographs show that cyanobacterial profilin stimulates actin filament bundling and stabilizes their lateral alignment into heteropolymeric sheets from which the observed hollow enclosure may be formed. We hypothesize that adaptation to the confined space of a bacterial cell devoid of the binding proteins that usually regulate actin polymerization in eukaryotes has driven the co-evolution of cyanobacterial actin and profilin, giving rise to an intracellular entity.
In industrialized economies such as the European countries, unemployment rates are very responsive to the business cycle, and significant shares of the unemployed stay unemployed for more than one year. To fight cyclical and long-term unemployment, countries spend significant shares of their budgets on Active Labor Market Policies (ALMP). To improve the allocation and design of ALMP, it is essential for policy makers to have reliable evidence on the effectiveness of such programs. Although the number of studies has increased during the last decades, policy makers still lack evidence on innovative programs and on specific subgroups of the labor market. Using Germany as a case study, the dissertation aims to contribute by providing new evidence on start-up subsidies, marginal employment and programs for unemployed youth. The idea behind start-up subsidies is to encourage unemployed individuals to exit unemployment by starting their own business. Compared to traditional ALMP programs, these have the advantage that the participant not only escapes unemployment but might also generate additional jobs for other individuals. Considering two distinct start-up subsidy programs, the dissertation adds three substantial aspects to the literature: First, the programs are effective in improving the employment and income situation of participants compared to non-participants in the long run. Second, the analysis of effect heterogeneity reveals that the programs are particularly effective for disadvantaged groups in the labor market, such as low-educated or low-qualified individuals, and in regions with unfavorable economic conditions. Third, the analysis considers the effectiveness of start-up programs for women. Due to stronger preferences for flexible working hours and a limited number of part-time jobs, unemployed women often face more difficulties integrating into dependent employment.
It can be shown that start-up subsidy programs are very promising, as unemployed women become self-employed, which gives them more flexibility to reconcile work and family. Overall, the results suggest that the promotion of self-employment among the unemployed is a sensible strategy to fight unemployment by abolishing labor market barriers for disadvantaged groups and sustainably integrating them into the labor market. The next chapter of the dissertation considers the impact of marginal employment on labor market outcomes of the unemployed. Unemployed individuals in Germany are allowed to earn additional income during unemployment without suffering a reduction in their unemployment benefits. These additional earnings are usually gained by taking up so-called marginal employment, that is, employment below a certain income level subject to reduced payroll taxes (also known as a "mini-job"). The dissertation provides an empirical evaluation of the impact of marginal employment on unemployment duration and subsequent job quality. The results suggest that being marginally employed during unemployment has no significant effect on unemployment duration but extends subsequent employment duration. Moreover, it can be shown that taking up marginal employment is particularly effective for the long-term unemployed, leading to higher job-finding probabilities and stronger job stability. It seems that mini-jobs can be an effective instrument to help long-term unemployed individuals find (stable) jobs, which is particularly interesting given the persistently high shares of long-term unemployed in European countries. Finally, the dissertation provides an empirical evaluation of the effectiveness of ALMP programs in improving the labor market prospects of unemployed youth. Youth are generally considered a population at risk, as they have lower search skills and little work experience compared to adults.
This results in above-average turnover rates between jobs and unemployment for youth, which are particularly sensitive to economic fluctuations. Therefore, countries spend significant resources on ALMP programs to fight youth unemployment. However, so far only little is known about the effectiveness of ALMP for unemployed youth, and with respect to Germany no comprehensive quantitative analysis exists at all. Considering seven different ALMP programs, the results show an overall positive picture with respect to post-treatment employment probabilities for all measures under scrutiny except job creation schemes. With respect to effect heterogeneity, it can be shown that almost all programs particularly improve the labor market prospects of youths with high levels of pre-treatment schooling. Furthermore, youths who are assigned to the most successful employment measures have much better characteristics in terms of their pre-treatment employment chances compared to non-participants. Therefore, the program assignment process seems to favor individuals for whom the measures are most beneficial, indicating a lack of ALMP alternatives that could benefit low-educated youths.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
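At its core, the decryption step splits a cryptic label into tokens and replaces each token with an expansion learned from mapped attribute pairs in existing workflows. A minimal sketch of that core (the abbreviation dictionary here is hypothetical; the paper's recommender-like approach additionally ranks candidate expansions, e.g. by frequency and context):

```python
def decrypt_label(label, expansions):
    """Expand a cryptic attribute label using abbreviation mappings
    mined from existing ETL workflows.

    Splits the label on underscores and replaces each token by its
    known expansion, falling back to the token itself when unknown.
    """
    tokens = label.split("_")
    return "_".join(expansions.get(tok, tok) for tok in tokens)

# Hypothetical abbreviation dictionary learned from mapped attribute pairs
expansions = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "AMT": "AMOUNT",
}
print(decrypt_label("UNP_PEN_INT", expansions))  # UNPAID_PENALTY_INTEREST
```

Unknown tokens pass through unchanged, so partially decrypted labels remain readable rather than being silently altered.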
Empirical species distribution models (SDMs) are often the tool of choice for assessing the effects of rapid climate change on species vulnerability. Conclusions regarding extinction risks might be misleading, however, because SDMs do not explicitly incorporate dispersal or other demographic processes. Here, we supplement SDMs with a dynamic population model 1) to predict climate-induced range dynamics for black grouse in Switzerland, 2) to compare direct and indirect measures of extinction risks, and 3) to quantify uncertainty in predictions as well as the sources of that uncertainty. To this end, we linked models of habitat suitability to a spatially explicit, individual-based model. In an extensive sensitivity analysis, we quantified uncertainty in various model outputs introduced by different SDM algorithms, by different climate scenarios and by demographic model parameters. Potentially suitable habitats were predicted to shift uphill and eastwards. By the end of the 21st century, abrupt habitat losses were predicted in the western Prealps for some climate scenarios. In contrast, population size and occupied area were primarily controlled by the currently negative population growth and gradually declined from the beginning of the century across all climate scenarios and SDM algorithms. However, predictions of population dynamic features were highly variable across simulations. The results indicate that inferring extinction probabilities simply from the quantity of suitable habitat may underestimate extinction risks, because this ignores important interactions between life history traits and available habitat. Also, in dynamic range predictions, uncertainty in SDM algorithms and climate scenarios can become secondary to uncertainty in dynamic model components. Our study emphasises the need for principled evaluation tools such as sensitivity analysis in order to assess uncertainty and robustness in dynamic range predictions.
A more direct benefit of such robustness analysis is an improved mechanistic understanding of dynamic species responses to climate change.
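The structure of such a sensitivity analysis can be sketched as follows. All parameter values, algorithm names, scenario labels and the simple multiplicative population model are invented placeholders for illustration, not values or code from the study.

```python
import random

# Toy sensitivity analysis: project population size under combinations of
# SDM algorithm, climate scenario (via a habitat suitability factor) and
# an uncertain demographic growth rate. Everything here is hypothetical.
random.seed(42)

def project_population(n0, growth_rate, habitat_factor, years=50):
    """Naive multiplicative projection of population size."""
    n = n0
    for _ in range(years):
        n *= growth_rate * habitat_factor
    return n

# Hypothetical habitat factors per (SDM algorithm, climate scenario).
habitat = {("GLM", "A1B"): 0.999, ("BRT", "A1B"): 0.995,
           ("GLM", "A2"): 0.990, ("BRT", "A2"): 0.985}

results = {}
for key, h in habitat.items():
    # demographic uncertainty: sample the (negative) growth rate
    sims = [project_population(10000, random.gauss(0.99, 0.005), h)
            for _ in range(200)]
    results[key] = sum(sims) / len(sims)
```

Comparing the spread of `results` across SDM/scenario combinations with the spread across growth-rate draws within one combination is the kind of decomposition that lets one judge which source of uncertainty dominates.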
Ultrasound evaluation of the patellar tendon cross-sectional area and its relation to maximum force
(2012)
High topography in eastern Tibet is thought to have formed when deep crust beneath the central Tibetan Plateau flowed towards the plateau margin, causing crustal thickening and surface uplift(1,2). Rapid exhumation starting about 10-15 million years ago is inferred to mark the onset of surface uplift and fluvial incision(3-6). Although geophysical data are consistent with weak crust capable of flow(7,8), it is unclear how the timing(9) and amount of deformation adjacent to the Sichuan Basin during the Cenozoic era can be explained in this way(10,11). Here we use thermochronology to measure the cooling histories of rocks exposed in a section that stretches vertically over 3 km adjacent to the Sichuan Basin. Our thermal models of exhumation-driven cooling show that these rocks, and hence the plateau margin, were subject to slow, steady exhumation during early Cenozoic time, followed by two pulses of rapid exhumation, one beginning 30-25 million years ago and a second 10-15 million years ago that continues to present. Our findings imply that significant topographic relief existed adjacent to the Sichuan Basin before the Indo-Asian collision. Furthermore, the onset of Cenozoic mountain building probably pre-dated development of the weak lower crust, implying that early topography was instead formed during thickening of the upper crust along faults. We suggest that episodes of mountain building may reflect distinct geodynamic mechanisms of crustal thickening.
A transient two-dimensional model describing degenerate four-wave mixing inside saturable gain media is presented. The new model is compared to existing one-dimensional models, and their qualitative results are confirmed. Large quantitative differences with respect to peak reflectivity and optimum pump fluence are observed. Furthermore, the influence of the beam focus size, the transverse position and the crossing angle on the reflectivity of the grating is investigated using the improved model. It is demonstrated that the phase-conjugate reflectivity depends sensitively on the transverse features of the interacting beams, with a transverse shift in the position of the pump beams yielding a threefold improvement in reflectivity. (C) 2012 Optical Society of America
Tectonic and geological processes on Earth often result in structural anisotropy of the subsurface, which can be imaged by various geophysical methods. In order to obtain appropriate and realistic Earth models for interpretation, inversion algorithms have to allow for an anisotropic subsurface. Within the framework of this thesis, I analyzed a magnetotelluric (MT) data set from the Cape Fold Belt in South Africa. This data set exhibited strong indications of crustal anisotropy, e.g. MT phases out of the expected quadrant, which cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this obstacle, I have developed a two-dimensional inversion method for reconstructing anisotropic electrical conductivity distributions. The MT inverse problem is in general a non-linear and ill-posed minimization problem with many degrees of freedom: an electrical conductivity value must be assigned to each cell of a large grid representing the Earth's subsurface. A grid with 100 x 50 cells, for example, results in 5000 unknown model parameters in the isotropic case; in an anisotropic scenario the number of parameters is six times larger, because the single electrical conductivity value becomes a symmetric, real-valued tensor while the number of data remains unchanged. In order to successfully invert for anisotropic conductivities and to overcome the non-uniqueness of the solution of the inverse problem, it is necessary to impose appropriate constraints on the class of allowed models. This becomes even more important as MT data are not equally sensitive to all anisotropic parameters. In this thesis, I have developed an algorithm in which the solution of the anisotropic inversion problem is calculated by minimizing a global penalty functional consisting of three terms: the data misfit, the model roughness constraint and the anisotropy constraint.
For comparison, in an isotropic approach only the first two terms are minimized. The newly defined anisotropy term is measured by the sum of the squared differences of the principal conductivity values of the model. The basic idea of this constraint is straightforward: if an isotropic model is already adequate to explain the data, there is no need to introduce electrical anisotropy at all. In order to ensure a successful inversion, appropriate trade-off parameters, also known as regularization parameters, have to be chosen for the different model constraints. Synthetic tests show that using fixed trade-off parameters usually causes the inversion to end up with either a smooth model with a large RMS error or a rough model with a small RMS error. Relaxing the regularization parameters after each successful inversion iteration yields a smoother inversion model and better convergence, and appears to be a sound strategy for selecting the trade-off parameters. In general, the proposed inversion method is adequate for resolving the principal conductivities defined in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the predefined strike direction, only the corresponding effective conductivities, i.e. the projections of the principal conductivities onto the model coordinate axes, can be resolved, and the information about the rotation angles is lost. Finally, the MT data from the Cape Fold Belt in South Africa were analyzed. The data exhibit an area (> 10 km) in which MT phases over 90 degrees occur. This part of the data cannot be modeled by standard isotropic modeling procedures and hence cannot be properly interpreted. The proposed inversion method, however, could not reproduce the anomalously large phases as desired, because the information about the rotation angles is lost.
MT phases outside the first quadrant are usually produced by anisotropic anomalies with an oblique anisotropy strike. In order to meet this challenge, the algorithm needs further development. However, forward modeling studies with the MT data have shown that highly conductive surface heterogeneities in combination with a mid-crustal electrically anisotropic zone are required to fit the data. Based on known geological and tectonic information, the mid-crustal zone is interpreted as a deep aquifer related to the fractured Table Mountain Group rocks in the Cape Fold Belt.
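The three-part penalty functional described above can be sketched in a few lines. This is a purely illustrative toy with hypothetical variable names and weights, not code from the thesis.

```python
# Sketch of a penalty functional: data misfit + weighted roughness
# + weighted anisotropy constraint. Names and weights are illustrative.
def penalty(data_misfit, model, lam_rough, lam_aniso):
    """model: list of cells, each a tuple of principal conductivities
    (sigma_x, sigma_y, sigma_z) in S/m."""
    # roughness: squared differences between neighbouring cells,
    # computed per principal direction
    rough = sum((a - b) ** 2
                for c1, c2 in zip(model, model[1:])
                for a, b in zip(c1, c2))
    # anisotropy constraint: squared differences between the principal
    # conductivities within each cell -- zero for an isotropic model
    aniso = sum((s[i] - s[j]) ** 2
                for s in model
                for i in range(3) for j in range(i + 1, 3))
    return data_misfit + lam_rough * rough + lam_aniso * aniso
```

For a fully isotropic model the anisotropy term vanishes, which is exactly the intended behaviour of the constraint: anisotropy is introduced only where reducing the data misfit demands it.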
A membrane of a dielectric elastomer coated with compliant electrodes may form wrinkles as the applied voltage is ramped up. We present a combination of experiment and theory to investigate the transition to wrinkles using a clamped membrane subject to a constant force and a voltage ramp. Two types of transitions are identified. In the type-I transition, the voltage-stretch curve is N-shaped, and flat and wrinkled regions coexist in separate areas of the membrane. The type-I transition progresses by nucleation of small wrinkled regions, followed by the growth of the wrinkled regions at the expense of the flat regions, until the entire membrane is wrinkled. By contrast, in the type-II transition, the voltage-stretch curve is monotonic, and the entire flat membrane becomes wrinkled with no nucleation barrier. The two types of transitions are analogous to first- and second-order phase transitions. While the type-I transition is accompanied by a jump in the vertical displacement, the type-II transition is accompanied by a continuous change in the vertical displacement. Such transitions may enable applications in muscle-like actuation and energy harvesting, where large deformation and large energy of conversion are desired.
Parenchyma cells from tubers of Solanum tuberosum L. convert several externally supplied sugars to starch, but the rates vary greatly. Conversion of glucose 1-phosphate to starch is exceptionally efficient. In this communication, tuber slices were incubated with one of four solutions containing equimolar [U-¹⁴C]glucose 1-phosphate, [U-¹⁴C]sucrose, [U-¹⁴C]glucose 1-phosphate plus unlabelled equimolar sucrose, or [U-¹⁴C]sucrose plus unlabelled equimolar glucose 1-phosphate. ¹⁴C incorporation into starch was monitored. In slices from freshly harvested tubers, each unlabelled compound strongly enhanced ¹⁴C incorporation into starch, indicating closely interacting paths of starch biosynthesis. However, the enhancement disappeared when the tubers were stored. The two paths (and, consequently, the mutual enhancement effect) differ in temperature dependence: at lower temperatures the glucose 1-phosphate-dependent path is functional, reaching maximal activity at approximately 20 °C, whereas the flux of the sucrose-dependent route strongly increases above 20 °C. The results are confirmed by in vitro experiments using [U-¹⁴C]glucose 1-phosphate or adenosine-[U-¹⁴C]glucose and by quantitative zymograms of starch synthase or phosphorylase activity. In mutants almost completely lacking the plastidial phosphorylase isozyme(s), the glucose 1-phosphate-dependent path is largely impeded. Irrespective of the size of the granules, glucose 1-phosphate-dependent incorporation per granule surface area is essentially equal. Furthermore, within the granules no preference for distinct glucosyl acceptor sites was detectable. Thus, the path is integrated into the entire granule biosynthesis. In vitro ¹⁴C incorporation into starch granules mediated by the recombinant plastidial phosphorylase isozyme clearly differed from the in situ results.
Taken together, the data clearly demonstrate that two closely but flexibly interacting general paths of starch biosynthesis are functional in potato tuber cells.
We analyze a general class of difference operators involving a multi-well potential and a small parameter. We decouple the wells by introducing certain Dirichlet operators on regions containing only one potential well, and we treat the eigenvalue problem as a small perturbation of these comparison problems. We describe tunneling by a certain interaction matrix, similar to the analysis for the Schrödinger operator [see Helffer and Sjöstrand in Commun Partial Differ Equ 9:337-408, 1984], and estimate the remainder, which is exponentially small and roughly quadratic compared with the interaction matrix.
Experimental evidence reveals a strong willingness to trust and to act in both positively and negatively reciprocal ways. So far, however, it has rarely been analyzed whether these variables of social cognition influence everyday decision-making behavior. We focus on entrepreneurs, who permanently face exchange processes in the interplay with investors, sellers, and buyers, and who need to trust others and reciprocate within their networks. We base our analysis on the German Socio-Economic Panel, with its recently introduced questions about trust, positive reciprocity, and negative reciprocity, to examine the extent to which these variables influence entrepreneurial decision processes. More specifically, we analyze whether (i) the willingness to trust other people influences the probability of starting a business; (ii) trust, positive reciprocity, and negative reciprocity influence the exit probability of entrepreneurs; and (iii) the willingness to trust and to act reciprocally influences the probability of being an entrepreneur versus an employee or a manager. Our findings reveal that trust in particular impacts entrepreneurial development. Interestingly, entrepreneurs are more trustful than employees, but much less trustful than managers.
Clusters of codons pairing to low-abundance tRNAs synchronize translation with the co-translational folding of single domains in multidomain proteins. Although proven for some examples, the impact of ribosomal speed on folding and solubility at a global, cell-wide level remains elusive. Here we show that upregulation of three low-abundance tRNAs in Escherichia coli increased the aggregation propensity of several cellular proteins as a result of an accelerated elongation rate. Intriguingly, alterations in the concentration of the natural tRNA pool compromised the solubility of various chaperones, thereby also reducing the solubility of some chaperone-dependent proteins.
Triassic Latemar cycle tops - Subaerial exposure of platform carbonates under tropical arid climate
(2012)
The Triassic Latemar platform in the Dolomites, Italy, is the site of several ongoing controversies. Perhaps the most interesting debate focuses on the apparently cyclic deposition within the Latemar platform, whose nature and duration are still open to debate. Further disagreement concerns the lack of meteoric diagenesis-related isotope shifts at cycle tops that bear circumstantial petrographic evidence of subaerial emergence. Here, an evaluation of the nature of Latemar cycle tops is presented, combining evidence from previous work with new field, petrographic and geochemical data. Cycle tops are ranked according to increasing exposure duration and spatial extent: type I surfaces, lacking unequivocal evidence of prolonged supratidal conditions; type II dolomite caps, formed in warm, evaporitic, intertidal lagoonal waters followed by exposure of perhaps intermediate duration; type III clastic-rich, red calcareous horizons, some of platform-wide extent, representing prolonged supratidal conditions; and type IV discontinuities in tepee belts, genetically related to type II and III surfaces but likely representing shorter-lived exposure stages. Petrographic and geochemical criteria indicate that most diagenesis occurred in the shallow marine and burial domains, whilst an extensive meteoric overprint of cycle tops is lacking. This is underlined by the scarcity of meteoric diagenetic fabrics such as gravitational cements, which, where present, are here interpreted as marine-vadose in origin. The scarcity of carbon and oxygen isotope signatures commonly assigned to subaerial exposure stages is best explained in the context of the mid-Triassic climate. The low-latitude, tropical but arid setting of the Latemar, situated in the western extension of the Tethys ocean, its isolation from nearby continental areas and the overall short-term emergence episodes are in agreement with a limited degree of meteoric alteration of most cycle tops.
High amounts of aeolian clastic material beneath some cycle tops, along with high Fe and Mn elemental abundances, argue for intermittent subaerial conditions. This study proposes an enhancement of the classical Allan and Matthews (1982) isotope model for subaerial exposure under strongly arid climates. As the subaerial exposure nature of Latemar cycle tops, and therefore eustasy as the cause of cyclicity, has previously been challenged due to the lack of meteorically induced isotopic signatures, the outcome of this study is of significance for the ongoing Latemar stratigraphic controversy.
Ferroelectrets have been fabricated from low-density polyethylene (LDPE) films by means of template-based lamination. The temperature dependence of the piezoelectric d33 coefficient has been investigated. It was found that low-density polyethylene ferroelectrets have rather low thermal stability, with the piezoelectric coefficient decaying almost to zero already at 100 °C. This behavior is attributed to the poor electret properties of the polyethylene films used for the fabrication of the ferroelectrets. In order to improve charge trapping and the thermal stability of electret charge and piezoelectricity, the LDPE ferroelectrets were treated with orthophosphoric acid. The treatment resulted in considerable improvements of the charge stability in LDPE films and in ferroelectret systems made from them. For example, the charge and piezoelectric-coefficient decay curves shifted to higher temperatures by 60 K and 40 K, respectively. It is shown that the decay of the piezoelectric coefficient in LDPE ferroelectrets is governed by the relaxation of less stable positive charges. The treatment also leads to noticeable changes in the chemical composition of the LDPE surface: infrared spectroscopy reveals absorption bands attributed to phosphorus-containing structures, while scanning electron microscopy shows new island-like structures, 50-200 nm in diameter, on the modified surface.
A Bose-Hubbard model on a dynamical lattice was introduced in previous work as a spin system analogue of emergent geometry and gravity. Graphs with regions of high connectivity in the lattice were identified as candidate analogues of spacetime geometries that contain trapped surfaces. We carry out a detailed study of these systems and show explicitly that the highly connected subgraphs trap matter. We do this by solving the model in the limit of no back-reaction of the matter on the lattice, and for states with certain symmetries that are natural for our problem. We find that in this case the problem reduces to a one-dimensional Hubbard model on a lattice with variable vertex degree and multiple edges between the same two vertices. In addition, we obtain a (discrete) differential equation for the evolution of the probability density of particles which is closed in the classical regime. This is a wave equation in which the vertex degree is related to the local speed of propagation of probability. This allows an interpretation of the probability density of particles similar to that in analogue gravity systems: matter inside this analogue system sees a curved spacetime. We verify our analytic results by numerical simulations. Finally, we analyze the dependence of localization on a gradual, rather than abrupt, falloff of the vertex degree on the boundary of the highly connected region and find that matter is localized in and around that region.
OBJECTIVE: BMI and albumin are commonly accepted parameters for recognizing wasting in dialysis patients and are powerful predictors of morbidity and mortality. However, both parameters have limitations and may not cover the entire range of patients with wasting. The visceral protein transthyretin (TTR) may help to close this diagnostic and prognostic gap. The aim of this study was therefore to assess the association of TTR with morbidity and mortality in hemodialysis patients.

RESEARCH DESIGN AND METHODS: The TTR concentration was determined in plasma samples of 1,177 hemodialysis patients with type 2 diabetes. Cox regression analyses were used to determine hazard ratios (HRs) for the risk of cardiovascular end points (CVEs) and mortality according to quartiles of TTR concentration for the total study cohort and the subgroups BMI ≥ 23 kg/m², albumin concentration ≥ 3.8 g/dL, and a combination of both.

RESULTS: A low TTR concentration was associated with an increased risk of CVE for the total study cohort (HR 1.65 [95% CI 1.27-2.14]), patients with BMI ≥ 23 kg/m² (1.70 [1.22-2.37]), albumin ≥ 3.8 g/dL (1.68 [1.17-2.42]), and the combination of both (1.69 [1.13-2.53]). Additionally, a low TTR concentration predicted mortality for the total study cohort (1.79 [1.43-2.24]) and for patients with BMI ≥ 23 kg/m² (1.46 [1.09-1.95]).

CONCLUSIONS: The current study demonstrates that TTR is a useful predictor of cardiovascular outcome and mortality in diabetic hemodialysis patients. TTR was particularly useful in patients not identified as at risk by BMI or albumin status.
Translatome and metabolome effects triggered by gibberellins during rosette growth in Arabidopsis
(2012)
Although gibberellins (GAs) are well known for their growth-control function, little is known about their effects on primary metabolism. Here, the modulation of gene expression and metabolic adjustment in response to changes in plant (Arabidopsis thaliana) growth imposed by varying the gibberellin regime were evaluated. Polysomal mRNA populations were profiled following treatment of plants with paclobutrazol (PAC), an inhibitor of GA biosynthesis, and gibberellic acid (GA3) to monitor translational regulation of mRNAs globally. Gibberellin levels did not affect the levels of carbohydrates in plants treated with PAC and/or GA3. However, the tricarboxylic acid cycle intermediates malate and fumarate, two alternative carbon storage molecules, accumulated upon PAC treatment. Moreover, an increase in nitrate and in amino acid levels was observed in plants grown under a low GA regime. Only minor changes in amino acid levels were detected in plants treated with GA3 alone, or with PAC plus GA3. Comparison of the molecular changes at the transcript and metabolite levels demonstrated that a low GA level mainly affects growth by uncoupling growth from carbon availability. These observations, together with the translatome changes, reveal an interaction between energy metabolism and GA-mediated growth control that coordinates cell wall extension, secondary metabolism, and lipid metabolism.
Earthquake-triggered landslide dams are potentially dangerous disrupters of water and sediment flux in mountain rivers, capable of releasing catastrophic outburst flows to downstream areas. We analyze an inventory of 828 landslide dams in the Longmen Shan mountains, China, triggered by the Mw 7.9 2008 Wenchuan earthquake. This database is unique in that it is the largest of its kind attributable to a single regional-scale triggering event: 501 of the spatially clustered landslides fully blocked rivers, while the remainder only partially obstructed or diverted channels in steep watersheds of the hanging wall of the Yingxiu-Beichuan Fault Zone. The size distributions of the earthquake-triggered landslides, landslide dams, and associated lakes (a) can be modeled by an inverse gamma distribution; (b) show that moderate-size slope failures caused the majority of blockages; and (c) allow a detailed assessment of seismically induced river-blockage effects on regional water and sediment storage. Monte Carlo simulations based on volumetric scaling relationships for soil and bedrock failures, respectively, indicate that 14% (18%) of the estimated total coseismic landslide volume of 6.4 (14.6) × 10⁹ m³ was contained in landslide dams, representing only 1.4% of the >60,000 slope failures attributed to the earthquake. These dams created a storage capacity of ~0.6 × 10⁹ m³ for incoming water and sediment. About 25% of the dams, containing 2% of the total river-blocking debris volume, had failed one week after the earthquake; these figures had risen to 60% (~20%) within one month and to >90% (>90%) within one year, thus also emptying ~92% of the total potential water and sediment storage behind these dams within one year of the earthquake. Currently, only ~0.08 × 10⁹ m³ remain available as natural reservoirs for storing water and sediment, while ~0.19 × 10⁹ m³, i.e. about a third of the total river-blocking debris volume, has been eroded by rivers. Dam volume and upstream catchment area control, to first order, the longevity of the barriers, and bivariate domain plots are consistent with the observation that most earthquake-triggered landslide dams were ephemeral. We conclude that the river-blocking portion of coseismic slope failures disproportionately modulates the post-seismic sediment flux in the Longmen Shan on annual to decadal timescales.
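The flavour of such a Monte Carlo volume estimate can be sketched as follows. The scaling coefficients and the synthetic area sample are hypothetical placeholders, not the calibrated values or data of the study.

```python
import random

# Toy Monte Carlo estimate of a total landslide volume from landslide
# areas via a volume-area scaling law V = alpha * A**gamma, propagating
# uncertainty in alpha. All numbers are invented for illustration.
random.seed(1)

def sample_total_volume(areas_m2, alpha, gamma, alpha_sd, n_draws=1000):
    """Return the median total volume over n_draws coefficient draws."""
    totals = []
    for _ in range(n_draws):
        a = max(random.gauss(alpha, alpha_sd), 1e-6)  # keep coefficient positive
        totals.append(sum(a * area ** gamma for area in areas_m2))
    totals.sort()
    return totals[len(totals) // 2]

# synthetic heavy-tailed sample of landslide areas (m^2)
areas = [10 ** random.uniform(2, 5) for _ in range(500)]
median_volume = sample_total_volume(areas, alpha=0.05, gamma=1.3, alpha_sd=0.01)
```

In the study, separate scaling relationships for soil and bedrock failures would be applied to their respective inventories and the resulting volume distributions compared; the sketch collapses this to a single relationship.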
Transcription factor OsHsfC1b regulates salt tolerance and development in Oryza sativa ssp. japonica
(2012)
Background and aims: Salt stress leads to attenuated growth and productivity in rice. Transcription factors such as heat shock factors (HSFs) represent central regulators of stress adaptation. Heat shock factors of classes A and B are well established as regulators of thermal and non-thermal stress responses in plants; however, the role of class C HSFs is unknown. Here we characterized the function of the OsHsfC1b (Os01g53220) transcription factor from rice.

Methodology: We analysed the expression of OsHsfC1b in the rice japonica cultivars Dongjin and Nipponbare exposed to salt stress as well as after mannitol, abscisic acid (ABA) and H2O2 treatment. For functional characterization of OsHsfC1b, we analysed the physiological response of a T-DNA insertion line (hsfc1b) and two artificial micro-RNA (amiRNA) knock-down lines to salt, mannitol and ABA treatment. In addition, we quantified the expression of small heat shock protein (sHSP) genes and of genes related to signalling and ion homeostasis by quantitative real-time polymerase chain reaction in roots exposed to salt. The subcellular localization of OsHsfC1b protein fused to green fluorescent protein (GFP) was determined in Arabidopsis mesophyll cell protoplasts.

Principal results: Expression of OsHsfC1b was induced by salt, mannitol and ABA, but not by H2O2. Impaired function of OsHsfC1b in the hsfc1b mutant and the amiRNA lines led to decreased salt and osmotic stress tolerance, increased sensitivity to ABA, and temporal misregulation of salt-responsive genes involved in signalling and ion homeostasis. Furthermore, sHSP genes showed enhanced expression in knock-down plants under salt stress. We observed retarded growth of hsfc1b and knock-down lines in comparison with control plants under non-stress conditions. Transient expression of OsHsfC1b fused to GFP in protoplasts revealed nuclear localization of the transcription factor.

Conclusions: OsHsfC1b plays a role in ABA-mediated salt stress tolerance in rice. Furthermore, OsHsfC1b is involved in the response to osmotic stress and is required for plant growth under non-stress conditions.
During natural reading, a parafoveal preview of the upcoming word facilitates its subsequent recognition (e.g., shorter fixation durations compared to a masked preview), but nothing is known about the neural correlates of this so-called preview benefit. Furthermore, while the evidence is strong that readers preprocess orthographic features of upcoming words, it is controversial whether word meaning can also be accessed parafoveally. We investigated the timing, scope, and electrophysiological correlates of parafoveal information use in reading by simultaneously recording eye movements and fixation-related brain potentials (FRPs) while participants read word lists fluently from left to right. For one word, the target (e.g., "blade"), parafoveal information was manipulated by showing an identical ("blade"), semantically related ("knife"), or unrelated ("sugar") word as preview. In boundary trials, the preview was shown parafoveally but changed to the correct target word during the incoming saccade. Replicating classic findings, target words were fixated for shorter durations after identical previews. In the EEG, this benefit was reflected in an occipitotemporal preview positivity between 200 and 280 ms. In contrast, there was no facilitation from related previews. In parafoveal-on-foveal trials, preview and target were embedded at neighboring list positions without a display change. Consecutive fixation of two related words produced N400 priming effects, but only shortly (160 ms) after the second word was directly fixated. The results demonstrate that neural responses to words are substantially altered by parafoveal preprocessing under normal reading conditions. We found no evidence that word meaning contributes to these effects. Saccade-contingent display manipulations can thus be combined with EEG recordings to study extrafoveal perception in vision.
Silvicultural practices lead to changes in forest composition and structure and may impact species diversity from the overall regional species pool down to stand-level species occurrence. We explored to what extent fine-scale occupancy patterns in differently managed forest stands are driven by environment and ecological traits in three regions in Germany using a multi-species hierarchical model. We tested for the possible impact of environmental variables and ecological traits on occupancy dynamics in a joint modelling exercise while taking possible variation in coefficient estimates over years and plots into account. Bird species richness differed across regions and years, and trends in species richness across years differed among the three regions. On the species level, forest management affected the occupancy of species in all regions, but only 35% of the total assemblage-level variation in occurrence probability was explained by forest type and successional stage, and <1% by forest edge. On the assemblage level, bird occurrence decreased with body mass in all regions. Species with smaller breeding ranges had lower occurrence probabilities in one region, while later spring arrival decreased occurrence probabilities in the two other regions. Spatial variation in the effect size of trait covariates such as species phylogeny and breeding strata showed that variation in patch occupancy due to fine-scale differences in forest management is, to some extent, predictable from ecological traits. Our results show that environmental factors and ecological traits jointly predict variation in bird occupancy patterns and their response to forest management. Observations at the fine scale of forest stands, at which conservation efforts can be arranged along with forest management practices in heterogeneous environments, have been shown to provide meaningful insights despite the difficulties involved in monitoring mobile organisms such as birds at the plot level.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation, through a consideration of domain-specific models (DSMs) and model operations (e.g. model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are basically caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprised of the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity is concerned with applying MDE for actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, are required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but instead have inherent interdependencies, reflecting (partial) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity. 
Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, the approach constitutes a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions: data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations to be composed do not need to address any composition concerns.
The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
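A data-flow composition, in which model operations are coupled only through the DSMs they read and write, can be sketched as follows; the operation names and model identifiers are hypothetical, and the real approach operates on megamodels rather than plain dictionaries:

```python
from graphlib import TopologicalSorter

# Hypothetical megamodel content: each model operation declares only the
# models it reads and writes; operations never reference each other.
operations = {
    "uml2java":   {"reads": {"uml.model"},              "writes": {"src.java"}},
    "uml2schema": {"reads": {"uml.model"},              "writes": {"db.schema"}},
    "mergeCfg":   {"reads": {"src.java", "db.schema"},  "writes": {"app.cfg"}},
}

# Data-flow composition: derive operation-level dependencies purely from
# shared input/output models (op depends on every producer of its inputs).
deps = {
    op: {
        other
        for other, o in operations.items()
        if o["writes"] & spec["reads"]
    }
    for op, spec in operations.items()
}

# A valid (re-)execution order places producers before consumers.
order = list(TopologicalSorter(deps).static_order())
print(order)  # the merge operation comes after both producers
```

Because dependencies are derived from shared models rather than declared between operations, the operation implementations themselves remain free of composition concerns, which is the decoupling property emphasized above.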
Aim Biotic interactions within guilds or across trophic levels have widely been ignored in species distribution models (SDMs). This synthesis outlines the development of species interaction distribution models (SIDMs), which aim to incorporate multispecies interactions at large spatial extents using interaction matrices. Location Local to global. Methods We review recent approaches for extending classical SDMs to incorporate biotic interactions, and identify some methodological and conceptual limitations. To illustrate possible directions for conceptual advancement we explore three principal ways of modelling multispecies interactions using interaction matrices: simple qualitative linkages between species, quantitative interaction coefficients reflecting interaction strengths, and interactions mediated by interaction currencies. We explain methodological advancements for static interaction data and multispecies time series, and outline methods to reduce complexity when modelling multispecies interactions. Results Classical SDMs ignore biotic interactions and recent SDM extensions only include the unidirectional influence of one or a few species. However, novel methods using error matrices in multivariate regression models allow interactions between multiple species to be modelled explicitly with spatial co-occurrence data. If time series are available, multivariate versions of population dynamic models can be applied that account for the effects and relative importance of species interactions and environmental drivers. These methods need to be extended by incorporating the non-stationarity in interaction coefficients across space and time, and are challenged by the limited empirical knowledge on spatio-temporal variation in the existence and strength of species interactions. 
Model complexity may be reduced by: (1) using prior ecological knowledge to set a subset of interaction coefficients to zero, (2) modelling guilds and functional groups rather than individual species, and (3) modelling interaction currencies and species effect and response traits. Main conclusions There is great potential for developing novel approaches that incorporate multispecies interactions into the projection of species distributions and community structure at large spatial extents. Progress can be made by: (1) developing statistical models with interaction matrices for multispecies co-occurrence datasets across large-scale environmental gradients, (2) testing the potential and limitations of methods for complexity reduction, and (3) sampling and monitoring comprehensive spatio-temporal data on biotic interactions in multispecies communities.
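As a schematic illustration of a quantitative interaction matrix and of complexity-reduction step (1), consider a toy multivariate Ricker model in which coefficients lacking prior ecological support are set to zero; the community size, coefficient values, and threshold below are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_species = 5

# Quantitative interaction matrix A: A[i, j] is the per-capita effect of
# species j on species i (negative = competition, positive = facilitation).
A = rng.normal(0.0, 0.05, (n_species, n_species))
np.fill_diagonal(A, -0.1)  # intraspecific density dependence

# Complexity reduction (1): keep only interactions with prior ecological
# support; here a hypothetical "documented link" criterion zeroes the rest.
known_links = np.abs(A) > 0.02
A_sparse = np.where(known_links, A, 0.0)

# Multivariate Ricker dynamics: growth driven by intrinsic rates plus the
# summed effects of the interaction matrix on log population growth.
r = rng.uniform(0.5, 1.0, n_species)   # intrinsic growth rates
N = np.full(n_species, 5.0)
for _ in range(200):
    N = N * np.exp(r + A_sparse @ N)
    N = np.clip(N, 1e-6, 1e6)          # keep the toy model bounded

print(N.round(2))  # abundances after 200 steps under the sparsified matrix
```

Multivariate population-dynamic SIDMs fit matrices of this kind to time series; the sparsification step is exactly what keeps the number of free interaction coefficients tractable as species numbers grow.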
The Maule earthquake of 27 February 2010 (Mw = 8.8) affected ~500 km of the Nazca-South America plate boundary in south-central Chile, producing spectacular crustal deformation. Here, we present a detailed estimate of static coseismic surface offsets as measured by survey and continuous GPS, both in near- and far-field regions. Earthquake slip along the megathrust has been inferred from a joint inversion of our new data together with published GPS, InSAR, and land-level change data, using Green's functions generated by a spherical finite-element model with realistic subduction zone geometry. The combination of the data sets provided good resolution, indicating that most of the slip was well resolved. Coseismic slip was concentrated north of the epicenter with up to 16 m of slip, whereas to the south it reached over 10 m within two minor patches. A comparison of coseismic slip with the slip deficit accumulated since the last great earthquake in 1835 suggests that the 2010 event closed a mature seismic gap. The slip deficit distribution shows an apparent local overshoot that highlights cycle-to-cycle variability, which has to be taken into account when anticipating future events from interseismic observations. Rupture propagation was evidently not affected by bathymetric features of the incoming plate. Instead, splay faults in the upper plate seem to have limited rupture propagation in the updip and along-strike directions. Additionally, we found that along-strike gradients in slip are spatially correlated with geometrical inflections of the megathrust. Our study suggests that persistent tectonic features may control strain accumulation and release along subduction megathrusts.
The Seismic Hazard Harmonization in Europe (SHARE) project, which began in June 2009, aims at establishing new standards for probabilistic seismic hazard assessment in the Euro-Mediterranean region. In this context, a logic tree for ground-motion prediction in Europe has been constructed. Ground-motion prediction equations (GMPEs) and weights have been determined so that the logic tree captures epistemic uncertainty in ground-motion prediction for six different tectonic regimes in Europe. Here we present the strategy that we adopted to build such a logic tree. This strategy has the particularity of combining two complementary and independent approaches: expert judgment and data testing. A panel of six experts was asked to weight pre-selected GMPEs, while the ability of these GMPEs to predict available data was evaluated with the method of Scherbaum et al. (Bull Seismol Soc Am 99:3234-3247, 2009). The results of both approaches were taken into account to jointly select the smallest set of GMPEs that captures the uncertainty in ground-motion prediction in Europe. For stable continental regions, two models, both from eastern North America, have been selected for shields, and three GMPEs from active shallow crustal regions have been added for continental crust. For subduction zones, four models, all non-European, have been chosen. Finally, for active shallow crustal regions, we selected four models, each from a different host region, but only two of them were kept for long periods. In most cases, common agreement was also reached on the weights. In cases of divergence, a sensitivity analysis of the weights on the seismic hazard was conducted, showing that once the GMPEs have been selected, the associated set of weights has a smaller influence on the hazard.
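Schematically, once GMPEs and weights are fixed, the hazard estimate at a site is a weighted mixture over logic-tree branches; the medians, sigmas, and weights below are invented placeholders, not the SHARE values:

```python
import math

# Hypothetical logic-tree branches for one tectonic regime:
# (median PGA in g, log-standard deviation, expert/data-driven weight).
branches = [
    (0.20, 0.60, 0.40),
    (0.25, 0.55, 0.35),
    (0.15, 0.65, 0.25),
]
assert abs(sum(w for _, _, w in branches) - 1.0) < 1e-9  # weights sum to 1

def prob_exceed(pga, median, sigma_ln):
    """P(PGA > pga) for a lognormal ground-motion distribution."""
    z = (math.log(pga) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

target = 0.3  # g
# Epistemic mixture: weight each branch's exceedance probability.
p_mix = sum(w * prob_exceed(target, m, s) for m, s, w in branches)
print(round(p_mix, 4))
```

The sensitivity result reported in the abstract corresponds to perturbing the weights in this mixture: once the branch GMPEs are fixed, moderate changes to the weights shift `p_mix` far less than swapping GMPEs would.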
Enantioselective total syntheses of both enantiomers of the recently isolated decanolide natural product seimatopolide A are described. The C2-symmetric building blocks (R,R)-hexa-1,5-diene-3,4-diol (derived from D-mannitol) and its enantiomer (derived from L-(+)-tartrate) serve as key starting materials, which are elaborated in a bidirectional way using a selective mono-cross-metathesis, regio- and stereoselective epoxidation, and regioselective reductive epoxide opening to furnish the first fragment. Both enantiomers of the second fragment, 3-hydroxypent-4-enoic acid, were conveniently obtained through a lipase-catalyzed kinetic resolution and merged with the first fragment via Shiina esterification. An E-selective ring-closing metathesis was used to access the 10-membered lactone. A comparison of the specific optical rotations of the synthetic seimatopolides with those reported for the natural product suggests that the originally assigned (3R,6R,7R,9S)-configuration should be corrected to (3S,6S,7S,9R).
Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task.
Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed.
Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced.
Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.
We employ the ultrafast response of a 15.4 nm thin SrRuO3 layer grown epitaxially on a SrTiO3 substrate to perform time-domain sampling of an x-ray pulse emitted from a synchrotron storage ring. Excitation of the sample with an ultrashort laser pulse triggers coherent expansion and compression waves in the thin layer, which turn the diffraction efficiency on and off at a fixed Bragg angle within 5 ps. This is significantly shorter than the 100 ps duration of the synchrotron x-ray pulse. Cross-correlation measurements of the ultrafast sample response and the synchrotron x-ray pulse make it possible to reconstruct the x-ray pulse shape.
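A minimal numerical sketch of this sampling scheme: a ~5 ps diffraction gate scanned across a ~100 ps Gaussian x-ray pulse yields a cross-correlation that is only marginally broader than the pulse itself (all pulse parameters below are illustrative, not the measured profiles):

```python
import numpy as np

t = np.arange(-300.0, 300.0, 0.5)   # pump-probe delay axis in ps

def gauss(t, fwhm):
    s = fwhm / 2.3548               # FWHM -> standard deviation
    return np.exp(-0.5 * (t / s) ** 2)

xray = gauss(t, 100.0)              # synchrotron pulse, ~100 ps FWHM
gate = gauss(t, 5.0)                # ultrafast diffraction switch, ~5 ps
gate /= gate.sum()                  # normalise the gate

# Measured signal vs. delay as the short gate is scanned across the pulse.
signal = np.correlate(xray, gate, mode="same")

def fwhm(y, t):
    half = y >= 0.5 * y.max()
    return t[half][-1] - t[half][0]

# For Gaussians the cross-correlation width is sqrt(100^2 + 5^2) ~ 100.1 ps,
# so the gate broadens the recovered pulse shape only negligibly.
print(fwhm(signal, t))
```

This is why a 5 ps switching window suffices: deconvolving (or, for such a short gate, simply reading off) the cross-correlation recovers the 100 ps pulse shape.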
Time-dependent escape of cosmic rays from supernova remnants, and their interaction with dense media
(2012)
Context. Supernova remnants (SNRs) are thought to be the main source of Galactic cosmic rays (CRs) up to the "knee" in the CR spectrum. During the evolution of a SNR, the bulk of the CRs are confined inside the SNR shell. The highest-energy particles leave the system continuously, while the remaining adiabatically cooled particles are released once the SNR has expanded sufficiently and decelerated so that the magnetic field at the shock is no longer able to confine them. Particles escaping from the parent system may interact with nearby molecular clouds, producing gamma-rays in the process via pion decay. The soft gamma-ray spectra observed for a number of SNRs interacting with molecular clouds, however, challenge current theories of non-linear particle acceleration, which predict harder spectra.
Aims. We study how the spectrum of escaped particles depends on the time-dependent acceleration history in both Type Ia and core-collapse SNRs, as well as on different assumptions about the diffusion coefficient in the vicinity of the SNR.
Methods. We solve the CR transport equation in a test-particle approach combined with numerical simulations of SNR evolution.
Results. We extend our method for calculating the CR acceleration in SNRs to trace the escaped particles in a large volume around SNRs. We calculate the evolution of the spectra of CRs that have escaped from a SNR into a molecular cloud or dense shell for two diffusion models. We find a strong confinement of CRs in a close region around the SNR, and a strong dilution effect for CRs that were able to propagate out as far as a few SNR radii.
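The confinement and dilution behaviour summarised above can be mimicked with a deliberately simple 1D diffusion toy model in which the diffusion coefficient is suppressed near the remnant; the geometry, coefficients, and boundary conditions below are invented and far cruder than the test-particle calculations of the paper:

```python
import numpy as np

# 1D slab toy model: escaped CRs injected at x = 0 diffuse outward with a
# diffusion coefficient that is suppressed within a few SNR radii.
nx, dx = 400, 0.05                  # grid, x in units of the SNR radius
x = (np.arange(nx) + 0.5) * dx
D = np.where(x < 2.0, 0.01, 1.0)    # suppressed D within 2 SNR radii

n = np.zeros(nx)
n[0] = 1.0                          # CRs released at the shock

dt = 0.4 * dx**2 / D.max()          # explicit-scheme stability limit
for _ in range(5000):
    # Flux-conservative update: F_{i+1/2} = -D_{i+1/2} (n_{i+1} - n_i) / dx
    D_half = 0.5 * (D[:-1] + D[1:])
    flux = -D_half * np.diff(n) / dx
    n[1:-1] -= dt * np.diff(flux) / dx
    n[0] -= dt * flux[0] / dx       # reflecting inner boundary
    # outermost cell is held at zero (free escape at large distance)

print(n[x < 2.0].sum() / n.sum())   # fraction still confined near the SNR
```

Even this crude sketch reproduces the qualitative result: suppressed diffusion close to the remnant keeps most escaped particles confined nearby, while the few that reach the fast-diffusion region are strongly diluted.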
Context. The true mass-loss rates from massive stars are important for many branches of astrophysics. For the correct modeling of the resonance lines, which are among the key diagnostics of stellar mass-loss, the stellar wind clumping has been found to be very important. To incorporate clumping into a radiative transfer calculation, three-dimensional (3D) models are required. Various properties of the clumps may have a strong impact on the resonance line formation and, therefore, on the determination of empirical mass-loss rates.
Aims. We incorporate the 3D nature of the stellar wind clumping into radiative transfer calculations and investigate how different model parameters influence the resonance line formation.
Methods. We develop a full 3D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. The number density of clumps follows from mass conservation. For the first time, we use realistic 3D models that describe the dense as well as the tenuous wind components to model the formation of resonance lines in a clumped stellar wind. At the same time, we account for non-monotonic velocity fields.
Results. The 3D density and velocity inhomogeneities of the wind have a very strong impact on the resonance line formation. The different parameters describing the clumping and the velocity field result in different line strengths and profiles. We present a set of representative models for various sets of model parameters and investigate how the resonance lines are affected. Our 3D models show that the line opacity is lower for a larger clump separation and for shallower velocity gradients within the clumps.
Conclusions. Our model demonstrates that to obtain empirically correct mass-loss rates from the UV resonance lines, the wind clumping and its 3D nature must be taken into account.
Ocean Drilling Program Site 1085 provides a continuous marine sediment record off southern South West Africa for at least the last three and a half million years. The n-alkane delta C-13 record from this site documents changes in past vegetation and provides an indication of the moisture availability of SW Africa during this time period. Very little variation, and no apparent trend, is observed in the n-alkane delta C-13 record, suggesting stable long-term conditions despite significant changes in East African tectonics and global climate. Slightly higher n-alkane delta C-13 values occur between 3.5 and 2.7 Ma, suggesting slightly drier conditions than today. Between 2.7 and 2.5 Ma there is a shift to more negative n-alkane delta C-13 values, suggesting slightly wetter conditions during a ~0.2 Myr episode that coincides with the intensification of Northern Hemisphere Glaciation (iNHG). From 2.5 to 0.4 Ma the n-alkane delta C-13 values are very consistent, varying by less than +/- 0.5 parts per thousand and suggesting little or no long-term change in the moisture availability of South West Africa over the last 2.5 million years. This is in contrast to the long-term drying trend observed further north offshore from the Namib Desert and in East Africa. A comparison of the climate history of these regions suggests that Southern Africa may have been an area of long-term climatic stability over the last 3.5 Myr.
We use substituted polyanilines for the construction of new polymer electrodes for interaction studies with the redox protein cytochrome c (cyt c) and the enzyme sulfite oxidase (SO). For these purposes, four different polyaniline copolymers were chemically synthesized. Three of them contain 2-methoxyaniline-5-sulfonic acid with variable ratios of aniline; the fourth copolymer consists of 3-aminobenzoic acid and aniline. The results show that all polymers are suitable for immobilization as thin, stable films on gold wire and indium tin oxide (ITO) electrode surfaces from DMSO solution, as demonstrated by cyclic voltammetry and UV-Vis spectroscopy measurements. Moreover, cyt c can be electrochemically detected not only in solution, but also immobilized on top of the polymer films. Furthermore, a significant catalytic current has been demonstrated for the sulfonated polyanilines when the polymer-coated protein electrode is measured upon addition of sulfite oxidase, confirming the establishment of a bioanalytical signal chain. The best results were obtained for the polymer with the highest degree of sulfonation. The redox switching of the polymer by the enzymatic reaction can also be analyzed by following the spectral properties of the polymer electrode.
A series of symmetrical, thermo-responsive triblock copolymers was prepared by reversible addition fragmentation chain transfer (RAFT) polymerization, and studied in aqueous solution with respect to their ability to form hydrogels. Triblock copolymers were composed of two identical, permanently hydrophobic outer blocks, made of low molar mass polystyrene, and of a hydrophilic inner block of variable length, consisting of poly(methoxy diethylene glycol acrylate) PMDEGA. The polymers exhibited a LCST-type phase transition in the range of 20-40 degrees C, which markedly depended on molar mass and concentration. Accordingly, the triblock copolymers behaved as amphiphiles at low temperatures, but became water-insoluble at high temperatures. The temperature dependent self-assembly of the amphiphilic block copolymers in aqueous solution was studied by turbidimetry and rheology at concentrations up to 30 wt %, to elucidate the impact of the inner thermoresponsive block on the gel properties. Additionally, small-angle X-ray scattering (SAXS) was performed to access the structural changes in the gel with temperature. For all polymers a gel phase was obtained at low temperatures, which underwent a gel-sol transition at intermediate temperatures, well below the cloud point where phase separation occurred. With increasing length of the PMDEGA inner block, the gel-sol transition shifts to markedly lower concentrations, as well as to higher transition temperatures. For the longest PMDEGA block studied (DPn about 450), gels had already formed at 3.5 wt % at low temperatures. The gel-sol transition of the hydrogels and the LCST-type phase transition of the hydrophilic inner block were found to be independent of each other.
Thermomechanical model reconciles contradictory geophysical observations at the Dead Sea Basin
(2012)
The Dead Sea Transform (DST) forms the boundary between the African and Arabian plates. During the last 15-20 m.y., more than 100 km of left-lateral transform displacement has accumulated on the DST, and the about 10 km thick Dead Sea Basin (DSB) formed in the central part of the DST. Widespread igneous activity since some 20 Ma, and especially during the last 5 m.y., a thin (60-80 km) lithosphere constrained by seismic data, and the absence of seismicity below the Moho all seem quite natural for this tectonically active plate boundary. However, surface heat flow values of less than 50-60 mW/m(2) and deep seismicity in the lower crust (deeper than 20 km) reported for this region are apparently inconsistent with the tectonic setting of an active continental plate boundary and with the crustal structure of the DSB. To address these inconsistencies, which together comprise what we call the "DST heat-flow paradox," we have developed a numerical model that assumes an erosion of an initially thick and cold lithosphere just before or during the active faulting at the DST. The optimal initial conditions for the model are defined using transient thermal analysis. From the results of our numerical experiments we conclude that the entire set of observations for the DSB can be explained within the classical pull-apart model, assuming that the lithosphere was thermally eroded at about 20 Ma and that the uppermost mantle in the region has a relatively weak rheology consistent with experimental data for wet olivine or pyroxenite.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. 
This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. 
A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, with a total study area of 5,800 km², found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a given region.