This reference paper describes the sampling and contents of the IZA Evaluation Dataset Survey and outlines its vast potential for research in labor economics. The data have been part of a unique IZA project to link administrative data from the German Federal Employment Agency with innovative survey data to study the mobility of individuals out of unemployment into work. This study makes the survey available to the research community as a Scientific Use File by explaining the development, structure, and access to the data. Furthermore, it also summarizes previous findings based on the survey data.
Mathematical modeling of biological systems is a powerful tool to systematically investigate the functions of biological processes and their relationship with the environment. To obtain accurate and biologically interpretable predictions, a modeling framework has to be devised whose assumptions best approximate the examined scenario and which balances the trade-off in the complexity of the underlying mathematical description between attention to detail and high coverage. Correspondingly, the system can be examined in detail on a smaller scale or in a simplified manner on a larger scale. In this thesis, the role of photosynthesis and its related biochemical processes in the context of plant metabolism was dissected by employing modeling approaches ranging from kinetic to stoichiometric models. The Calvin-Benson cycle, as the primary pathway of carbon fixation in C3 plants, is the initial step for producing starch and sucrose, necessary for plant growth. Based on an integrative analysis for model ranking applied to the largest compendium of (kinetic) models for the Calvin-Benson cycle, those suitable for the development of metabolic engineering strategies were identified. Driven by the question of why starch rather than sucrose is the predominant transitory carbon storage in higher plants, the metabolic costs for their synthesis were examined. The incorporation of the maintenance costs for the involved enzymes provided model-based support for the preference of starch as transitory carbon storage, by only exploiting the stoichiometry of the synthesis pathways. Many photosynthetic organisms have to cope with processes which compete with carbon fixation, such as photorespiration, whose impact on plant metabolism is still controversial. A systematic model-oriented review provided a detailed assessment of the role of this pathway in inhibiting the rate of carbon fixation, bridging carbon and nitrogen metabolism, shaping the C1 metabolism, and influencing redox signal transduction.
The demand for understanding photosynthesis in its metabolic context calls for the examination of the related processes of the primary carbon metabolism. To this end, the Arabidopsis core model was assembled via a bottom-up approach. This large-scale model can be used to simulate photoautotrophic biomass production, as an indicator for plant growth, under so-called optimal, carbon-limiting and nitrogen-limiting growth conditions. Finally, the introduced model was employed to investigate the effects of the environment, in particular, nitrogen, carbon and energy sources, on the metabolic behavior. This resulted in a purely stoichiometry-based explanation for the experimentally observed preference for simultaneous acquisition of nitrogen in both forms, as nitrate and ammonium, for optimal growth in various plant species. The findings presented in this thesis provide new insights into the behavior of plant systems, further support existing views for which experimental evidence is mounting, and posit novel hypotheses for further directed large-scale experiments.
The photosynthetic carbon metabolism, including the Calvin-Benson cycle, is the primary pathway in C3 plants, producing starch and sucrose from CO2. Understanding the interplay between regulation and efficiency of this pathway requires the development of mathematical models which would explain the observed dynamics of metabolic transformations. Here, we address this question by casting the existing models of the Calvin-Benson cycle and the end-product processes into an analysis framework which not only facilitates the comparison of the different models, but also allows for their ranking with respect to chosen criteria, including stability, sensitivity, robustness and/or compliance with experimental data. The importance of the photosynthetic carbon metabolism for the increase of plant biomass has resulted in many models with various levels of detail. We provide the largest compendium of 15 existing, well-investigated models together with a comprehensive classification as well as a ranking framework to determine the best-performing models for metabolic engineering and planning of in silico experiments. The classification can additionally be used, based on the model structure, as a tool to identify the models which best match the experimental design. The provided ranking is just one alternative to score models and, by changing the weighting factors, the framework could also be applied to other criteria of interest.
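The ranking idea described above, scoring each model against weighted criteria, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the criterion names, scores, and weights are hypothetical.

```python
# Sketch: weighted multi-criteria model ranking.
# Each criterion is min-max normalized across models so that criteria on
# different scales contribute comparably; a weighted sum then scores each model.

def rank_models(scores, weights):
    """scores: {model: {criterion: value}}, higher = better per criterion.
    weights: {criterion: weight}. Returns model names, best first."""
    criteria = list(weights)
    lo = {c: min(s[c] for s in scores.values()) for c in criteria}
    hi = {c: max(s[c] for s in scores.values()) for c in criteria}

    def norm(c, v):
        # degenerate criterion (all models equal) contributes nothing
        return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

    total = {
        m: sum(weights[c] * norm(c, s[c]) for c in criteria)
        for m, s in scores.items()
    }
    return sorted(total, key=total.get, reverse=True)

# Hypothetical criterion scores for three candidate models.
models = {
    "model_A": {"stability": 0.9, "fit": 0.4},
    "model_B": {"stability": 0.5, "fit": 0.9},
    "model_C": {"stability": 0.2, "fit": 0.3},
}
ranking = rank_models(models, {"stability": 0.5, "fit": 0.5})
```

Changing the weights, as the abstract notes, shifts the ranking toward whichever criteria matter for the application at hand.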
Giant planets helped to shape the conditions we see in the Solar System today and they account for more than 99% of the mass of the Sun's planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system consisting of a system of inner natural satellites and a complex ring system, five major natural icy satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System. These have clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics in Uranus' atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency's call for science themes for its large-class mission programme in 2013.
The stem bark extract of Schizozygia coffaeoides (Apocynaceae) showed moderate antiplasmodial activity (IC50 = 8–12 µg/mL) against the chloroquine-sensitive (D6) and chloroquine-resistant (W2) strains of Plasmodium falciparum. Chromatographic separation of the extract led to the isolation of a new schizozygane indoline alkaloid, named 3-oxo-14α,15α-epoxyschizozygine. In addition, two dimeric anthraquinones, cassiamin A and cassiamin B, were identified for the first time in the family Apocynaceae. The structures of the isolated compounds were deduced on the basis of spectroscopic evidence. The schizozygane indole alkaloids showed good to moderate antiplasmodial activities (IC50 = 13–52 µM). © 2014 Phytochemical Society of Europe. Published by Elsevier B.V. All rights reserved.
Dissolved organic carbon (DOC) concentrations - mainly of terrestrial origin - are increasing worldwide in inland waters. Heterotrophic bacteria are the main consumers of DOC and thus determine DOC temporal dynamics and availability for higher trophic levels. Our aim was to study bacterial carbon (C) turnover with respect to DOC quantity and chemical quality using both allochthonous and autochthonous DOC sources. We incubated a natural bacterial community with allochthonous C (¹³C-labeled beech leachate) and increased concentrations and pulses (intermittent occurrence of organic matter input) of autochthonous C (phytoplankton lysate). We then determined bacterial C consumption, activities, and community composition together with the C flow through bacteria using stable C isotopes. The chemical analysis of single sources revealed differences in aromaticity and low- and high-molecular-weight substance fractions (LMWS and HMWS, respectively) between allochthonous and autochthonous C sources. Both DOC sources (allochthonous and autochthonous DOC) were metabolized at a high bacterial growth efficiency (BGE) of around 50%. In treatments with mixed sources, rising concentrations of added autochthonous DOC resulted in a further, significant increase in bacterial DOC consumption of up to 68% when nutrients were not limiting. This rise was accompanied by a decrease in the humic substance (HS) fraction and an increase in bacterial biomass. Changes in DOC concentration and consumption in mixed treatments did not affect bacterial community composition (BCC), but BCC differed in single vs. mixed incubations. Our study highlights that DOC quantity affects bacterial C consumption but not BCC in nutrient-rich aquatic systems. BCC shifted when a mixture of allochthonous and autochthonous C was provided simultaneously to the bacterial community. Our results indicate that chemical quality rather than source of DOC per se (allochthonous vs. autochthonous) determines bacterial DOC turnover.
Confusion about model validation is one of the main challenges in using ecological models for decision support, such as the regulation of pesticides. Decision makers need to know whether a model is a sufficiently good representation of its real counterpart and what criteria can be used to answer this question. Unclear terminology is one of the main obstacles to a good understanding of what model validation is, how it works, and what it can deliver. Therefore, we performed a literature review and derived a standard set of terms. 'Validation' was identified as a catch-all term, which is thus useless for any practical purpose. We introduce the term 'evaludation', a fusion of 'evaluation' and 'validation', to describe the entire process of assessing a model's quality and reliability. Considering the iterative nature of model development, the modelling cycle, we identified six essential elements of evaludation: (i) 'data evaluation' for scrutinising the quality of numerical and qualitative data used for model development and testing; (ii) 'conceptual model evaluation' for examining the simplifying assumptions underlying a model's design; (iii) 'implementation verification' for testing the model's implementation in equations and as a computer programme; (iv) 'model output verification' for comparing model output to data and patterns that guided model design and were possibly used for calibration; (v) 'model analysis' for exploring the model's sensitivity to changes in parameters and process formulations to make sure that the mechanistic basis of main behaviours of the model has been well understood; and (vi) 'model output corroboration' for comparing model output to new data and patterns that were not used for model development and parameterisation. Currently, most decision makers require 'validating' a model by testing its predictions with new experiments or data. 
Despite being desirable, this is neither sufficient nor necessary for a model to be useful for decision support. We believe that the proposed set of terms and its relation to the modelling cycle can help to make quality assessments and reality checks of ecological models more comprehensive and transparent. © 2013 Elsevier B.V. All rights reserved.
There is robust evidence showing a link between executive function (EF) and theory of mind (ToM) in 3- to 5-year-olds. However, it is unclear whether this relationship extends to middle childhood. In addition, there has been much discussion about the nature of this relationship. Whereas some authors claim that ToM is needed for EF, others argue that ToM requires EF. To date, however, studies examining the longitudinal relationship between distinct subcomponents of EF [i.e., attention shifting, working memory (WM) updating, inhibition] and ToM in middle childhood are rare. The present study examined (1) the relationship between three EF subcomponents (attention shifting, WM updating, inhibition) and ToM in middle childhood, and (2) the longitudinal reciprocal relationships between the EF subcomponents and ToM across a 1-year period. EF and ToM measures were assessed experimentally in a sample of 1,657 children (aged 6-11 years) at time point one (t1) and 1 year later at time point two (t2). Results showed that the concurrent relationships between all three EF subcomponents and ToM pertained in middle childhood at t1 and t2, respectively, even when age, gender, and fluid intelligence were partialled out. Moreover, cross-lagged structural equation modeling (again, controlling for age, gender, and fluid intelligence, as well as for the earlier levels of the target variables) revealed partial support for the view that early ToM predicts later EF, but stronger evidence for the assumption that early EF predicts later ToM. The latter was found for attention shifting and WM updating, but not for inhibition. This reveals the importance of studying the exact interplay of ToM and EF across childhood development, especially with regard to different EF subcomponents. Most likely, understanding others' mental states at different levels of perspective-taking requires specific EF subcomponents, suggesting developmental change in the relations between EF and ToM across childhood.
The paper argues that structural case assignment properties of English and German reduced comparative subclauses arise from syntactic requirements as well as processes holding at the syntax-phonology interface. I show that constructions involving both an adjectival and a verbal predicate require the subject remnant of the adjectival predicate to be marked for the accusative case both in English and German, which cannot be explained by the notion of default accusative case, especially because German has no default accusative case. I argue that a phonologically defective subclause is reanalysed as part of the matrix clausal object, and hence receives accusative morphological case.
Adopting a minimalist framework, the dissertation provides an analysis for the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered as idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degree expressions. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP) and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in these layers. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences and cross-linguistic variation is not conditioned by way of postulating an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant – [+compr] and [+rel] – features. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by way of relating it to the regular mechanism of Comparative Deletion.
I assume that Attributive Comparative Deletion is not a universal phenomenon, and its presence in English can be conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how the changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles, and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions and hence the elimination of the quantified expression from the subclause; nevertheless, they are shown to be in interaction with the mechanisms underlying Comparative Deletion or the absence thereof.
This paper examines cyclical changes in comparative subclauses, showing how operators are reanalysed as complementisers via the general mechanism of the relative cycle, and how this is related to whether certain lexical elements have to be deleted at the left periphery. I also show that only operators appearing without a lexical XP can be grammaticalised, which follows from the nature of the formal features associated with the various operator elements. Though the main focus is on Hungarian historical data, the framework can be applied to other languages too, such as German and Italian, since the changes stem from general principles of economy.
Biosensors for the detection of benzaldehyde and γ-aminobutyric acid (GABA) are reported using the aldehyde oxidoreductase PaoABC from Escherichia coli immobilized in a polymer containing bound low-potential osmium redox complexes. The electrically connected enzyme already electrooxidizes benzaldehyde at potentials below −0.15 V (vs. Ag|AgCl, 1 M KCl). The pH dependence of benzaldehyde oxidation can be strongly influenced by the ionic strength. The effect is similar with the soluble osmium redox complex and therefore indicates a clear electrostatic effect on the bioelectrocatalytic efficiency of PaoABC in the osmium-containing redox polymer. At lower ionic strength, the pH optimum is high and can be switched to low pH values at high ionic strength. This offers biosensing at high and low pH values. A “reagentless” biosensor has been formed with the enzyme wired onto a screen-printed electrode in a flow-cell device. The response time to the addition of benzaldehyde is 30 s, the measuring range is 10–150 µM, and the detection limit is 5 µM of benzaldehyde (signal-to-noise ratio 3:1). The relative standard deviation in a series (n = 13) for 200 µM benzaldehyde is 1.9%. For the biosensor, a response to succinic semialdehyde was also identified. Based on this response and the ability to work at high pH, a biosensor for GABA is proposed by coimmobilizing GABA-aminotransferase (GABA-T) and PaoABC in the osmium-containing redox polymer.
The inverse problem of determining the flow at the Earth's core-mantle boundary according to an outer core magnetic field and secular variation model has been investigated through a Bayesian formalism. To circumvent the issue arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter on the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field in order to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we numerically implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flow evaluated under the compressible quasi-geostrophy assumption, although this hypothesis was not considered in our study. In addition, through the sampling of the velocity field posterior distribution, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the choice of the prior constraints we applied to the velocity field.
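The posterior-exploration step rests on Markov chain Monte Carlo sampling. A minimal sketch of the general idea (not the paper's algorithm) is a random-walk Metropolis sampler; the one-dimensional Gaussian log-posterior below is an invented stand-in for the high-dimensional flow posterior.

```python
# Sketch: random-walk Metropolis sampling of a posterior known up to a constant.
import math
import random

def log_post(x, mu=2.0, sigma=0.5):
    # unnormalized Gaussian log-density (illustrative target)
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis(log_p, x0, n_steps, step=0.3, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_p(prop)
        # accept with probability min(1, p(prop)/p(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

draws = metropolis(log_post, x0=0.0, n_steps=20000)
mean = sum(draws[5000:]) / len(draws[5000:])  # posterior mean after burn-in
```

As in the abstract, the payoff of sampling (rather than point estimation) is that the spread of the draws quantifies the reliability of the recovered quantity.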
The new N-heterocyclic carbene (NHC) complex [PdCl2{(CN)2IMes}(PPh3)] (2) ({(CN)2IMes}: 4,5-dicyano-1,3-dimesitylimidazol-2-ylidene) and the NHC palladacycle [PdCl(dmba){(CN)2IMes}] (3) (dmba: N,N-dimethylbenzylamine) have been synthesized by thermolysis of 4,5-dicyano-1,3-dimesityl-2-(pentafluorophenyl)imidazoline (1) in the presence of suitable palladium(II) precursors. The acyclic complex 2 was formed by ligand exchange using the mononuclear precursor [PdCl2(PPh3)2] and the palladacycle 3 was formed by cleavage of the dinuclear chloro-bridged precursor [Pd(µ-Cl)(dmba)]2. The new NHC precursor 1-benzyl-4,5-dicyano-2-(pentafluorophenyl)-3-picolylimidazoline (5) was formed by condensation of pentafluorobenzaldehyde with N-benzyl-N'-picolyldiaminomaleonitrile (4). The NHC palladacycle [PdCl2{(CN)2IBzPic}] (6) ({(CN)2IBzPic}: 1-benzyl-4,5-dicyano-3-picolylimidazol-2-ylidene) was prepared by in situ thermolysis of 5 in the presence of [PdCl2(PhCN)2]. The three palladium(II) complexes were characterized by NMR and IR spectroscopy, mass spectrometry and elemental analysis. In addition, the molecular structures of 2 and 3 were determined by X-ray diffraction. The π-acidity of (CN)2IBzPic was compared with (CN)2IMes and previously reported π-acidic imidazol-2-ylidenes by NBO analysis. The Mizoroki-Heck (MH) reactions of various aryl halides with n-butyl acrylate were performed in the presence of complexes 2, 3 and 6. The new precatalysts showed high activity in the MH reactions, giving good-to-excellent product yields with 0.1 mol-% precatalyst. The nature of the catalytically active species of 2, 3 and 6 was investigated by poisoning experiments with mercury and transmission electron microscopy. It was found that palladium nanoparticles formed from the precatalysts were involved in the catalytic process.
While the maturity of process mining algorithms increases and more process mining tools enter the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Current approaches for event log abstraction try to abstract from the events in an automated way that does not capture the required domain knowledge to fit business activities. This can lead to misinterpretation of discovered process models. We developed an approach that aims to abstract an event log to the same abstraction level that is needed by the business. We use domain knowledge extracted from existing process documentation to semi-automatically match events and activities. Our abstraction approach is able to deal with n:m relations between events and activities and also supports concurrency. We evaluated our approach in two case studies with a German IT outsourcing company. © 2014 Elsevier Ltd. All rights reserved.
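The core matching idea, abstracting low-level events to business activities via a mapping with n:m relations, can be sketched roughly as follows. The event and activity names are invented, and the actual approach is semi-automated and considerably more involved (handling concurrency and documentation extraction).

```python
# Sketch: abstracting a low-level event trace to business activities.
# The mapping supports n:m relations: one event may indicate several
# activities, and one activity may be evidenced by several events.
# All names below are hypothetical illustrations.

EVENT_TO_ACTIVITIES = {
    "ticket_created":  ["Register Incident"],
    "ticket_assigned": ["Register Incident", "Dispatch"],
    "patch_deployed":  ["Resolve Incident"],
    "ticket_closed":   ["Resolve Incident"],
}

def abstract_trace(events):
    """Collapse a low-level event trace into an activity trace,
    merging immediate repetitions of the same activity."""
    activities = []
    for ev in events:
        for act in EVENT_TO_ACTIVITIES.get(ev, []):
            if not activities or activities[-1] != act:
                activities.append(act)
    return activities

trace = abstract_trace(
    ["ticket_created", "ticket_assigned", "patch_deployed", "ticket_closed"]
)
```

The point of grounding the mapping in process documentation, rather than inferring it automatically, is that the resulting activity-level model matches the vocabulary the business actually uses.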
Leaf senescence is a developmentally controlled process, which is additionally modulated by a number of adverse environmental conditions. Nitrogen shortage is a well-known trigger of precocious senescence in many plant species including crops, generally limiting biomass and seed yield. However, leaf senescence induced by nitrogen starvation may be reversed when nitrogen is resupplied at the onset of senescence. Here, the transcriptomic, hormonal, and global metabolic rearrangements occurring during nitrogen resupply-induced reversal of senescence in Arabidopsis thaliana were analysed. The changes induced by senescence were essentially in keeping with those previously described; however, these could, by and large, be reversed. The data thus indicate that plants undergoing senescence retain the capacity to sense and respond to the availability of nitrogen nutrition. The combined data are discussed in the context of the reversibility of the senescence programme and the evolutionary benefit afforded thereby. Future prospects for understanding and manipulating this process in both Arabidopsis and crop plants are postulated.
DNA origami nanostructures allow for the arrangement of different functionalities such as proteins, specific DNA structures, nanoparticles, and various chemical modifications with unprecedented precision. The arranged functional entities can be visualized by atomic force microscopy (AFM) which enables the study of molecular processes at a single-molecular level. Examples comprise the investigation of chemical reactions, electron-induced bond breaking, enzymatic binding and cleavage events, and conformational transitions in DNA. In this paper, we provide an overview of the advances achieved in the field of single-molecule investigations by applying atomic force microscopy to functionalized DNA origami substrates.
On the role of fluoro-substituted nucleosides in DNA radiosensitization for tumor radiation therapy
(2014)
Gemcitabine (2′,2′-difluorocytidine) is a well-known radiosensitizer routinely applied in concomitant chemoradiotherapy. During irradiation of biological media with high-energy radiation secondary low-energy (<10 eV) electrons are produced that can directly induce chemical bond breakage in DNA by dissociative electron attachment (DEA). Here, we investigate and compare DEA to the three molecules 2′-deoxycytidine, 2′-deoxy-5-fluorocytidine, and gemcitabine. Fluorination at specific molecular sites, i.e., nucleobase or sugar moiety, is found to control electron attachment and subsequent dissociation pathways. The presence of two fluorine atoms at the sugar ring results in more efficient electron attachment to the sugar moiety and subsequent bond cleavage. For the formation of the dehydrogenated nucleobase anion, we obtain an enhancement factor of 2.8 upon fluorination of the sugar, whereas the enhancement factor is 5.5 when the nucleobase is fluorinated. The observed fragmentation reactions suggest enhanced DNA strand breakage induced by secondary electrons when gemcitabine is incorporated into DNA.
The southern foreland basin of the Alborz Mountains of northern Iran is characterized by an approximately 7.3-km-thick sequence of Miocene sedimentary rocks, constituting three basin-wide coarsening-upward units spanning periods of 10^6 years. We assess available magnetostratigraphy, paleoclimatic reconstructions, stratal architecture, records of depositional environments, and sediment-provenance data to characterize the relationships between tectonically generated accommodation space (A) and sediment supply (S). Our analysis allows an inversion of the stratigraphy for particular forcing mechanisms, documenting causal relationships, and providing a basis to decipher the relative contributions of tectonics and climate (inferred changes in precipitation) in controlling sediment supply to the foreland basin. Specifically, A/S>1, typical of each basal unit (17.5-16.0, 13.8-13.1 and 10.3-9.6 Ma), is associated with sharp facies retrogradation and reflects substantial tectonic subsidence. Within these time intervals, arid climatic conditions, changes in sediment provenance, and accelerated exhumation in the orogen suggest that sediment supply was most likely driven by high uplift rates. Conversely, A/S<1 (13.8 and 13.8-11 Ma, units 1 and 2) reflects facies progradation during a sharp decline in tectonic subsidence caused by localized intra-basinal uplift. During these time intervals, climate continued to be arid and exhumation active, suggesting that sediment supply was again controlled by tectonics. A/S<1, at 11-10.3 Ma and 9.6-7.6 Ma (and possibly 6.2; top of units 2 and 3), is also associated with two episodes of extensive progradation, but during wetter phases. The first episode appears to have been linked to a pulse in sediment supply driven by an increase in precipitation.
The second episode reflects a balance between a climatically-induced increase in sediment supply and a reduction of subsidence through the incorporation of the proximal foreland into the orogenic wedge. This in turn caused an expansion of the catchment and a consequent further increase in sediment supply.
The H.E.S.S. array is a third-generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very high energy (VHE) gamma rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high-energy gamma-ray radiation. During its most recent flare, which was detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights. The results of these observations are discussed in detail in the course of this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but participated in the data taking for one night. To be able to reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used by the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, developed by de Naurois and Rolland, the template-based model analysis, compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained using a semi-analytical model. To find the best-fitting image, and therefore the relevant parameters that best describe the air shower, a pixel-wise log-likelihood fit is performed. The adaptation of this advanced shower reconstruction technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance on Monte Carlo simulations, as well as its application to real data will be described.
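The template-matching idea behind such a fit can be illustrated with a toy sketch. This is not the H.E.S.S. model analysis itself, which fits continuous shower parameters against a semi-analytical model; here a discrete library of Gaussian "shower images" and all function names are illustrative assumptions. The key ingredient is the same: a Poisson log-likelihood summed pixel by pixel over the camera image.

```python
import numpy as np

def poisson_loglike(image, template):
    """Pixel-wise Poisson log-likelihood of the observed `image` given the
    expected counts `template` (data-independent constant terms dropped)."""
    t = np.clip(template, 1e-12, None)       # guard against log(0)
    return float(np.sum(image * np.log(t) - t))

def best_template(image, templates):
    """Index of the template maximising the summed pixel log-likelihood."""
    return int(np.argmax([poisson_loglike(image, t) for t in templates]))

# Toy library: Gaussian "shower images" of different widths on a 32x32 camera.
xg, yg = np.meshgrid(np.arange(32), np.arange(32))
def blob(sigma, amp=50.0):
    return amp * np.exp(-((xg - 16) ** 2 + (yg - 16) ** 2) / (2 * sigma ** 2))

templates = [blob(s) for s in (1.5, 3.0, 6.0)]
rng = np.random.default_rng(1)
observed = rng.poisson(templates[1])         # noisy image of the sigma=3 blob
```

In the real analysis the maximisation runs over continuous shower parameters via a numerical minimiser rather than an argmax over a fixed library, but the likelihood being maximised has this pixel-wise Poisson form.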
Planetary research is often user-based and requires considerable skill, time, and effort. Unfortunately, self-defined boundary conditions, definitions, and rules are often not documented or are difficult to comprehend due to the complexity of the research. This makes comparisons with other studies, or extensions of existing research, complicated. Comparisons are often distorted, because results rely on different, poorly defined, or even unknown boundary conditions. The purpose of this research is to develop a standardized analysis method for planetary surfaces that is adaptable to several research topics. The method provides a consistent quality of results. This also includes achieving reliable and comparable results and reducing the time and effort of conducting such studies. A standardized analysis method is provided by automated analysis tools that focus on statistical parameters. Specific key parameters and boundary conditions are defined for the tool application. The analysis relies on a database in which all key parameters are stored. These databases can easily be updated and adapted to various research questions. This increases the flexibility, reproducibility, and comparability of the research. However, the quality of the database and the reliability of the definitions directly influence the results. To ensure a high quality of results, the rules and definitions need to be well defined and based on previously conducted case studies. The tools then produce parameters obtained by defined geostatistical techniques (measurements, calculations, classifications). The idea of an automated statistical analysis is tested to demonstrate the benefits, but also the potential problems, of this method. In this study, I adapt automated tools for floor-fractured craters (FFCs) on Mars. These impact craters show a variety of surface features, occur in different Martian environments, and have different fracturing origins.
They provide a complex morphological and geological field of application. 433 FFCs are classified by the analysis tools according to their fracturing process. Spatial data, environmental context, and crater-interior data are analyzed to distinguish between the processes involved in floor fracturing. Related geologic processes, such as glacial and fluvial activity, are too similar to be classified separately by the automated tools; glacial and fluvial fracturing processes are therefore merged for the classification. The automated tools provide probability values for each origin model. To guarantee the quality and reliability of the results, the classification tools need to achieve an origin probability above 50 %. This analysis method shows that 15 % of the FFCs are fractured by intrusive volcanism, 20 % by tectonic activity, and 43 % by water- and ice-related processes. In total, 75 % of the FFCs are classified to an origin type. The remaining craters may reflect a combination of origin models, superposition or erosion of key parameters, or an unknown fracturing model; those features have to be analyzed manually in detail. Another possibility would be the improvement of the key parameters and rules for the classification. This research shows that it is possible to conduct an automated statistical analysis of morphologic and geologic features based on analysis tools. The analysis tools provide additional information to the user and are therefore considered assistance systems.
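The thresholding logic described above (assign an origin only when its probability exceeds 50 %, otherwise defer the crater to manual analysis) can be sketched in a few lines. The function and the origin-model names are hypothetical illustrations, not taken from the thesis's tools:

```python
def classify(probabilities, threshold=0.5):
    """Assign an origin type only when one model's probability exceeds the
    threshold; otherwise return None to flag the crater for manual analysis."""
    origin = max(probabilities, key=probabilities.get)
    return origin if probabilities[origin] > threshold else None

# Hypothetical probability values produced by the classification tools:
assert classify({"volcanic": 0.70, "tectonic": 0.20, "water_ice": 0.10}) == "volcanic"
assert classify({"volcanic": 0.40, "tectonic": 0.35, "water_ice": 0.25}) is None
```

A crater that returns None corresponds to the unclassified fraction discussed above: competing origin models, eroded key parameters, or an unknown fracturing process.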
Floor-fractured craters (FFCs) represent an impact crater type whose infilling is separated by cracks into knobs of different sizes and shapes. This work focuses on the possible processes that form FFCs, in order to understand the relationship between location and geological environment. We generated a global distribution map using new High Resolution Stereo Camera and Context Camera images. Four hundred and twenty-one potential FFCs have been identified on Mars. A strong link exists among floor fracturing, chaotic terrain, outflow channels, and the dichotomy boundary. However, FFCs are also found in the Martian highlands. Additionally, two very different craters are used as a case study and compared with regard to the appearance of their surface units, chronology, and geological processes. Five potential models of floor fracturing are presented and discussed here. The analyses suggest an origin due to volcanic activity, groundwater migration, or tensile stresses; subsurface ice reservoirs and tectonic activity are also taken into account. Furthermore, the origin of fracturing differs according to the location on Mars. (C) 2013 Elsevier Ltd. All rights reserved.
This paper investigates subject–verb agreement in Turkish, with particular focus on the role the animacy of plural subjects plays in verbal number marking. Previous descriptive grammars of Turkish (e.g., Sezer, 1978) report an asymmetry in number marking for plural subjects: if the plural subject denotes an animate entity, both plural and singular verbs are possible, whereas only singular verbs are possible when the plural subject denotes an inanimate entity. Using the magnitude estimation method, we measured the well-formedness of simple Turkish sentences consisting of a plural subject and a verb in two groups of participants that differ only in age (mean ages: 28 and 43 years). The overall results provide an empirical validation of the proposed split between animate and inanimate subjects and suggest that the acceptability of plural agreement is sensitive to even more fine-grained distinctions of animacy. In particular, the plural dispreference was reduced for inanimates with a teleological capacity (in the sense of Folli and Harley, 2008) and for body parts, in comparison to true inanimates (e.g., furniture and clothes). Accordingly, we propose an animacy hierarchy for Turkish that is in line with typological observations (e.g., Corbett, 2000, 2006) and augment it with a further distinction between quasi-animates and inanimates.
Although less pronounced in sentences with animate subjects, we observed a higher preference for singular verbs over plural verbs across all conditions. This suggests that the singular marking on the verb, which is zero-marked in Turkish, is the default. Furthermore, we find a significant effect of age: in the older group, the singular preference is less pronounced across the conditions and almost absent in sentences with an animate subject. Moreover, the older participants made finer distinctions in the animacy hierarchy, further differentiating between two types of quasi-animates (teleologically capable entities vs. entities with inherited animacy). The two generations in our study share the animate–inanimate split as well as the sharp contrast between singular and plural agreement in sentences with inanimate subjects; they differ, however, in the degree of optionality. Altogether, these results suggest a decrease in the degree of optionality across generations. As in research on language attrition and bilingualism (Hulk and Muller, 2000; Muller and Hulk, 2001; Sorace, 2011), the results accord with the idea that interface phenomena are vulnerable to change; however, non-convergence between generations in our study stemmed from areas that yield gradient rather than categorical results. (C) 2014 Elsevier B.V. All rights reserved.
The quantum dynamics of the muonic molecular ions ddμ and dtμ excited by super-intense laser pulses linearly polarized along the molecular z-axis is studied beyond the Born–Oppenheimer approximation by numerical solution of the time-dependent Schrödinger equation within a three-dimensional model, including the internuclear distance R and the muon coordinates z and ρ. The peak intensity of the super-intense laser pulses used in our simulations is I₀ = 3.51 × 10²² W/cm² and the wavelength is λ = 5 nm. In both ddμ and dtμ, the muon expectation values ⟨z⟩ and ⟨ρ⟩ demonstrate "post-laser-pulse" oscillations after the ends of the laser pulses. In ddμ, post-laser-pulse z-oscillations appear as shaped non-overlapping "echo pulses". In dtμ, post-laser-pulse muonic z-oscillations appear as comparatively slow large-amplitude oscillations modulated by small-amplitude pulsations. The post-laser-pulse ρ-oscillations in both ddμ and dtμ appear, for the most part, as overlapping "echo pulses". The post-laser-pulse oscillations do not occur if the Born–Oppenheimer approximation is employed. Power spectra generated by muonic motion along both the optically active z and the optically passive ρ degrees of freedom are calculated. The fusion probability in dtμ can be increased more than 11-fold by making use of three sequential super-intense laser pulses. The energy released from dt fusion in dtμ can exceed, by more than 20 GeV, the energy required to produce a usable muon plus the energy of the laser pulses used to enhance the fusion. The possibility of power production from laser-enhanced muon-catalyzed fusion is discussed.
The present study proposes a General Probabilistic Framework (GPF) for uncertainty and global sensitivity analysis of deterministic models in which, in addition to scalar inputs, non-scalar and correlated inputs can be considered as well. The analysis is conducted with the variance-based approach of Sobol/Saltelli, where first and total sensitivity indices are estimated. The results of the framework can be used in a loop for model improvement, parameter estimation, or model simplification. The framework is applied to SWAP, a hydrological model for the transport of water, solutes, and heat in unsaturated and saturated soils. The sources of uncertainty are grouped in five main classes: model structure (soil discretization), input (weather data), time-varying (crop) parameters, scalar parameters (soil properties), and observations (measured soil moisture). For each source of uncertainty, different realizations are created based on direct monitoring activities. Uncertainty of evapotranspiration, soil moisture in the root zone, and bottom fluxes below the root zone is considered in the analysis. The results show that the sources of uncertainty differ for each output considered, and that it is necessary to consider multiple output variables for a proper assessment of the model. Improvements in the performance of the model can be achieved by reducing the uncertainty in the observations, in the soil parameters, and in the weather data. Overall, the study shows the capability of the GPF to quantify the relative contributions of the different sources of uncertainty and to identify the priorities required to improve the performance of the model. The proposed framework can be extended to a wide variety of modelling applications, also when direct measurements of model output are not available.
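The variance-based Sobol/Saltelli approach estimates first-order indices S1 (the fraction of output variance explained by one input alone) and total indices ST (including all interactions involving that input). A minimal sketch with the Saltelli (2010) and Jansen estimators on a toy additive model follows; it covers only independent scalar inputs, not the grouped, non-scalar, correlated inputs the GPF itself handles:

```python
import numpy as np

def sobol_indices(model, n_params, n_samples=8192, seed=0):
    """Estimate first-order (S1) and total (ST) Sobol indices with the
    Saltelli (2010) and Jansen estimators.  `model` maps an (N, k) input
    array of uniform [0, 1) samples to an (N,) output array."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(n_params), np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only column i
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen estimator
    return S1, ST

# Toy additive model y = x0 + 2*x1 (x2 inert): analytically S1 = (0.2, 0.8, 0).
S1, ST = sobol_indices(lambda X: X[:, 0] + 2 * X[:, 1], 3)
```

Because the toy model is additive, ST should approximately equal S1 for each input; a gap between ST and S1 in a real model such as SWAP signals interaction effects.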
We report the results of our investigations on the catchment area, surface sediments, and hydrology of the monsoonal Lonar Lake, central India. Our results indicate that the lake is currently stratified with an anoxic bottom layer, and that there is spatial heterogeneity in the sensitivity of sediment parameters to different environmental processes. In the shallow (0-5 m) near-shore oxic-suboxic environments the lithogenic and terrestrial organic content is high and spatially variable, and the organics show degradation in the oxic part. Due to aerial exposure resulting from lake-level changes of at least 3 m, the evaporitic carbonates are not completely preserved. In the deep-water (>5 m) anoxic environment the lithogenics are uniformly distributed, and δ¹³C is an indicator not only of aquatic vs. terrestrial plant input but also of lake pH and salinity. The isotopic composition of the evaporites depends not only on the isotopic composition of the source water (monsoon rainfall and stream inflow) and on evaporation, but is also influenced by proximity to the isotopically depleted stream inflow. We conclude that in the deep-water environment the lithogenic content and the isotopic composition of organic matter can be used for palaeoenvironmental reconstruction.
Treating creationism as a controversial topic within the science and religion issue in the science classroom has been widely discussed in the recent literature. Some researchers have proposed that this topic is best addressed by focusing on sociocognitive conflict. To prepare new learning opportunities for this approach, it is necessary to know the concrete arguments that students use in their discussions on this issue. Therefore, this study aimed to provide a systematic description of these arguments. For this purpose, upper secondary students (N=43) argued for either the acceptance of evolutionary theory or faith in Genesis in a written speech. The study was conducted during their regular biology and religious education classes. Generated arguments were analysed by qualitative content analysis. Three dimensions of the arguments were described: the content (science or religion), the valuation of the argument (positive or negative), and whether the argument consisted of a descriptive or normative argumentation. The results indicate that students found it easier to generate arguments about the scientific side of the issue; however, these arguments were negatively constructed. The results are discussed with regard to implications for educational approaches for teaching controversial issues at the high-school level.
Surface displacement at volcanic edifices is related to subsurface processes associated with magma movements, fluid transfer within the volcanic edifice, and gravity-driven deformation. Understanding the associated ground displacements is important for the assessment of volcanic hazards. For example, volcanic unrest is often preceded by surface uplift, caused by magma intrusion, and followed by subsidence after the withdrawal of magma. Continuous monitoring of surface displacement at volcanoes might therefore allow upcoming eruptions to be forecast to some extent. In geophysics, the measured surface displacements allow the parameters of possible deformation sources to be estimated through analytical or numerical modeling. This is one way to improve the understanding of subsurface processes acting at volcanoes. Although the monitoring of volcanoes has improved significantly in recent decades (in terms of technical advancements and the number of monitored volcanoes), the forecasting of volcanic eruptions remains puzzling. In this work I contribute towards the understanding of subsurface processes at volcanoes and thus to the improvement of volcano eruption forecasting. I have investigated the displacement field of Llaima volcano in Chile and of Tendürek volcano in eastern Turkey by using synthetic aperture radar interferometry (InSAR). Through modeling of the deformation sources with the extracted displacement data, it was possible to gain insights into potential subsurface processes at these two volcanoes, which had barely been studied before. The two volcanoes, although very different in origin, composition, and geometry, both show a complexity of interacting deformation sources. At Llaima volcano, the InSAR technique was difficult to apply due to the large decorrelation of the radar signal between image acquisitions.
I developed a model-based unwrapping scheme, which allows the production of reliable displacement maps of the volcano that I then used for deformation source modeling. The modeling results show significant differences in pre- and post-eruptive magmatic deformation source parameters. I therefore conjecture that two magma chambers exist below Llaima volcano: a deep post-eruptive one and a shallow one, possibly due to the pre-eruptive ascent of magma. Similar reservoir depths at Llaima have been confirmed by independent petrologic studies. These reservoirs are interpreted to be temporally coupled. At Tendürek volcano I found long-term subsidence of the volcanic edifice, which can be described by a large, magmatic, sill-like source that is subject to cooling contraction. The displacement data in conjunction with high-resolution optical images, however, reveal arcuate fractures on the eastern and western flanks of the volcano. These are most likely the surface expressions of concentric ring-faults around the volcanic edifice that show low magnitudes of slip over a long time. This might be an alternative mechanism for the development of large caldera structures, which have so far been assumed to be generated during large catastrophic collapse events. To investigate the potential subsurface geometry and relation of the two proposed interacting sources at Tendürek, a sill-like magmatic source and ring-faults, I performed a more sophisticated numerical modeling approach. The optimum source geometries show that the size of the sill-like source was overestimated in the simple models and that it is difficult to determine the dip angle of the ring-faults from surface displacement data alone. However, considering physical and geological criteria, a combination of outward-dipping reverse faults in the west and inward-dipping normal faults in the east seems most likely.
Consequently, the underground structure at Tendürek volcano consists of a small, sill-like, contracting, magmatic source below the western summit crater that causes trapdoor-like faulting along the ring-faults around the volcanic edifice. The magmatic source and the ring-faults are therefore also interpreted to be temporally coupled. In addition, a method for data reduction has been improved. The modeling of subsurface deformation sources requires only a relatively small number of well-distributed InSAR observations at the Earth's surface. Satellite radar images, however, consist of several million such observations. The large amount of data therefore needs to be reduced by several orders of magnitude for source modeling, to save computation time and increase model flexibility. I have introduced a model-based subsampling approach, in particular for heterogeneously distributed observations. It allows a fast calculation of the data-error variance-covariance matrix, also supports the modeling of time-dependent displacement data, and is therefore an alternative to existing methods.
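For context, a widely used baseline for this kind of InSAR data reduction is variance-based quadtree subsampling, which keeps small cells where the displacement signal varies strongly and large cells over the quiet background; the model-based scheme above is proposed as an alternative to such approaches. A minimal, assumed implementation (not the thesis's method):

```python
import numpy as np

def quadtree(z, thresh, min_size=2):
    """Recursively split a displacement grid until the variance inside each
    cell drops below `thresh`; every kept cell is reduced to its mean.
    Returns a list of (row_centre, col_centre, mean_displacement) samples."""
    samples = []
    def split(r0, r1, c0, c1):
        cell = z[r0:r1, c0:c1]
        if r1 - r0 <= min_size or c1 - c0 <= min_size or np.var(cell) <= thresh:
            samples.append(((r0 + r1) / 2, (c0 + c1) / 2, float(np.mean(cell))))
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        split(r0, rm, c0, cm); split(r0, rm, cm, c1)
        split(rm, r1, c0, cm); split(rm, r1, cm, c1)
    split(0, z.shape[0], 0, z.shape[1])
    return samples

# Smooth far field plus a sharp central "deformation" signal: small cells
# survive only near the signal, large cells cover the quiet background.
yy, xx = np.mgrid[0:64, 0:64]
disp = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
reduced = quadtree(disp, thresh=1e-4)
```

The output is far smaller than the 64 × 64 input grid while preserving detail near the deformation centre, which is exactly the compression that makes source-model inversions tractable.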
The concept that diversity promotes reliability of ecosystem function depends on the pattern that community-level biomass shows lower temporal variability than species-level biomasses. However, this pattern is not universal, as it relies on compensatory or independent species dynamics. When, in contrast, within-trophic-level synchronization occurs, the variability of community biomass approaches population-level variability. Current knowledge fails to integrate how species richness, functional distance between species, and the relative importance of predation and competition combine to drive synchronization at different trophic levels. Here we clarify these mechanisms. Intense competition promotes compensatory dynamics in prey, but predators may at the same time increasingly synchronize under increasing species richness and functional similarity. In contrast, predators and prey both show perfect synchronization under strong top-down control, which is promoted by a combination of low functional distance and high net growth potential of predators. Under such conditions, community-level biomass variability peaks, with major negative consequences for the reliability of ecosystem function.
Diffusion of finite-size particles in two-dimensional channels with random wall configurations
(2014)
Diffusion of chemicals or tracer molecules through complex systems containing irregularly shaped channels is important in many applications. Most theoretical studies based on the famed Fick–Jacobs equation focus on the idealised case of infinitely small particles and reflecting boundaries. In this study we use numerical simulations to consider the transport of finite-size particles through asymmetrical two-dimensional channels. Additionally, we examine transient binding of the molecules to the channel walls by applying sticky boundary conditions. We consider an ensemble of particles diffusing in independent channels, which are characterised by common structural parameters. We compare our results for the long-time effective diffusion coefficient with a recent theoretical formula obtained by Dagdug and Pineda [J. Chem. Phys., 2012, 137, 024107].
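The setting can be illustrated with a minimal Brownian-dynamics sketch. The cosine-wall channel below is an assumed stand-in for the paper's random wall configurations, wall collisions are handled by simple move rejection rather than exact reflection, and finite particle size is treated by the common slowly-varying-wall approximation of shrinking the accessible half-width by the particle radius:

```python
import numpy as np

def effective_diffusion(radius, n_part=2000, n_steps=4000, dt=1e-3, seed=2):
    """Brownian dynamics of disks of the given radius in a periodic 2D channel
    with half-width w(x) = 1 + 0.5*cos(2*pi*x).  Finite particle size is
    approximated by shrinking the accessible half-width to w(x) - radius;
    moves that would cross the wall are rejected.  Returns D_eff/D_0
    estimated from the x-direction mean-squared displacement."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_part)                   # start anywhere along the axis...
    y = np.zeros(n_part)                     # ...on the channel centreline
    x0 = x.copy()
    step = np.sqrt(2 * dt)                   # free diffusion constant D0 = 1
    for _ in range(n_steps):
        xn = x + step * rng.standard_normal(n_part)
        yn = y + step * rng.standard_normal(n_part)
        w = 1.0 + 0.5 * np.cos(2 * np.pi * xn) - radius
        ok = np.abs(yn) < w                  # reject moves crossing the wall
        x[ok], y[ok] = xn[ok], yn[ok]
    return np.mean((x - x0) ** 2) / (2 * n_steps * dt)

d_point = effective_diffusion(0.0)           # point tracer
d_disk = effective_diffusion(0.3)            # finite-size disk
```

Both ratios should come out below 1 (the entropic slowdown of corrugated channels), with the finite-size disk slower than the point particle because the same bottleneck is relatively tighter for it.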
We show that moderate energy relaxation in the formation of dark matter halos invariably leads to profiles that match those observed in the central regions of galaxies. The density profile of the central region is universal and insensitive to either the seed perturbation shape or the details of the relaxation process. The profile has a central core; the product of the central density and the core radius is almost independent of the halo mass, in accordance with observations. In the core area the density distribution behaves as an Einasto profile with low index (n ≈ 0.5); it has an extensive region with ρ ∝ r⁻² at larger distances. This is exactly the shape that observations suggest for the central region of galaxies. On the other hand, this shape does not fit galaxy cluster profiles. A possible explanation of this fact is that the relaxation is violent in the case of galaxy clusters, but not violent enough when galaxies or smaller dark matter structures are considered. We discuss the reasons for this.
While N-body simulations testify to a cuspy profile of the central region of dark matter halos, observations favor a shallow, cored density profile of the central region of at least some spiral galaxies and dwarf spheroidals. We show that a central profile very close to the observed one inevitably forms in the center of dark matter halos under the supposition of moderate energy relaxation of the system during halo formation. If we assume the energy exchange between dark matter particles during the halo collapse is not too intensive, the profile is universal: it depends hardly at all on the properties of the initial perturbation and is very akin, but not identical, to the Einasto profile with a small Einasto index n ≈ 0.5. We estimate the size of the "central core" of the distribution, i.e., the extent of the very central region with a relatively gentle profile, and show that cusp formation is unlikely, even if the dark matter is cold. The obtained profile is in good agreement with observational data for at least some types of galaxies but clearly disagrees with N-body simulations.
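The two properties claimed for such profiles, a flat core and a ρ ∝ r⁻² region, can be checked directly on the Einasto form ρ(r) = ρ_s · exp(−2n[(r/r_s)^(1/n) − 1]), whose logarithmic slope is d ln ρ / d ln r = −2 (r/r_s)^(1/n): the slope tends to 0 at small r (a core) and equals −2, the isothermal value, at r = r_s regardless of n. A short numerical check (parameter values illustrative):

```python
import numpy as np

def einasto(r, n=0.5, r_s=1.0, rho_s=1.0):
    """Einasto density profile rho(r) = rho_s * exp(-2n[(r/r_s)^(1/n) - 1])."""
    return rho_s * np.exp(-2 * n * ((r / r_s) ** (1 / n) - 1))

def log_slope(r, n=0.5, r_s=1.0):
    """Logarithmic slope d ln(rho) / d ln(r) = -2 * (r/r_s)^(1/n)."""
    return -2 * (r / r_s) ** (1 / n)

# For n = 0.5 the slope is -2*(r/r_s)^2: essentially flat (core-like) well
# inside r_s, and exactly -2 (isothermal-like) at r = r_s.
```

With a small index such as n ≈ 0.5 the transition from flat core to steep fall-off is rapid, which is why the profile exhibits an extended region with slope near −2 around r_s.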