Zooplankton carcasses are ubiquitous in marine and freshwater systems, pointing to the importance of non-predatory mortality, yet both are often overlooked in ecological studies compared with predatory mortality. The development of several microscopic methods allows the distinction between live and dead zooplankton in field samples, and the reported percentages of dead zooplankton average 11.6% (minimum) to 59.8% (maximum) in marine environments, and 7.4% (minimum) to 47.6% (maximum) in fresh and inland waters. Common causes of non-predatory mortality among zooplankton include senescence, temperature change, physical and chemical stresses, parasitism and food-related factors. Carcasses resulting from non-predatory mortality may undergo decomposition, leading to an increase in microbial production and a shift in microbial composition in the water column. Alternatively, sinking carcasses may contribute significantly to vertical carbon flux, especially outside the phytoplankton growth seasons, and become a food source for the benthos. Global climate change is already altering freshwater ecosystems on multiple levels, and will likely have significant positive or negative effects on zooplankton non-predatory mortality. Better spatial and temporal studies of zooplankton carcasses and non-predatory mortality rates will improve our understanding of this important but under-appreciated topic.
Let M be a closed connected spin manifold of dimension 2 or 3 with a fixed orientation and a fixed spin structure. We prove that for a generic Riemannian metric on M the non-harmonic eigenspinors of the Dirac operator are nowhere zero. The proof is based on a transversality theorem and the unique continuation property of the Dirac operator.
World market governance
(2014)
Democratic capitalism or liberal democracy, as the successful marriage of convenience between market liberalism and democracy is sometimes called, is in trouble. The market economy system has become global and there is a growing mismatch with the territoriality of the nation-states. The functional global networks and inter-governmental order can no longer keep pace with the rapid development of the global market economy, and regulatory capture is all too common. Concepts like de-globalization, self-regulation, and global government are floated in the debate. These alternatives are analysed and found to be improper, inadequate or plainly impossible. The proposed route is instead to accept that the global market economy has developed into an independent fundamental societal system that needs its own governance. The suggestion is World Market Governance based on the Rule of Law, in order to shape the fitness environment for the global market economy and strengthen the nation-states so that they can regain the sovereignty to decide upon the social and cultural conditions in each country. Elements in the proposed Rule of Law are international legislation decided by an Assembly supported by a Council, and an independent Judiciary. Existing international organisations would function as executors. The need for broad, sustained demand for regulations in the common interest is identified.
Purpose: Work-related anxieties are frequent, have a negative effect on patients' occupational performance, and contribute to absence due to sickness. Most important is workplace phobia, that is, panic when approaching or even thinking of the workplace. This study is the first to estimate the prevalence of workplace phobia among primary care patients suffering from chronic mental disorders and to describe which illness-related or workplace-specific context factors are associated with workplace phobia.
Methods: A convenience sample of 288 primary care patients with chronic mental disorders (70% women) seen by 40 primary care clinicians in Germany was assessed using a standardized diagnostic interview about mental disorders and workplace problems. Workplace phobia was assessed by the Workplace Phobia Scale and a structured Diagnostic and Statistical Manual of Mental Disorders-based diagnostic interview. In addition, capacity and participation restrictions, illness severity, and sick leave were assessed.
Results: Workplace phobia was found in 10% of patients with chronic mental disorders, that is, approximately 3% of all general practice patients. Patients with workplace phobia had longer durations of sick leave than patients without workplace phobia and were impaired to a higher degree in work-relevant capacities. They also had a higher degree of restrictions in participation in other areas of life.
Conclusions: Workplace phobia seems to be a frequent problem in primary care. It may behoove primary care clinicians to consider workplace-related anxiety, including phobia, particularly when patients ask for a work excuse for nonspecific somatic complaints.
Working memory load-dependent brain response predicts behavioral training gains in older adults
(2014)
In the domain of working memory (WM), a sigmoid-shaped relationship between WM load and brain activation patterns has been demonstrated in younger adults. It has been suggested that age-related alterations of this pattern are associated with changes in neural efficiency and capacity. At the same time, WM training studies have shown that some older adults are able to increase their WM performance through training. In this study, functional magnetic resonance imaging during an n-back WM task at different WM load levels was applied to compare blood oxygen level-dependent (BOLD) responses between younger and older participants and to predict gains in WM performance after a subsequent 12-session WM training procedure in older adults. We show that increased neural efficiency and capacity, as reflected by more "youth-like" brain response patterns in regions of interest of the frontoparietal WM network, were associated with better behavioral training outcome beyond the effects of age, sex, education, gray matter volume, and baseline WM performance. Furthermore, at low difficulty levels, decreases in BOLD response were found after WM training. Results indicate that both neural efficiency (i.e., decreased activation at comparable performance levels) and capacity (i.e., increasing activation with increasing WM load) of a WM-related network predict plasticity of the WM system, whereas WM training may specifically increase neural efficiency in older adults.
Analyses of metagenomes in life sciences present new opportunities as well as challenges to the scientific community and call for advanced computational methods and workflows. The large amount of data collected from samples via next-generation sequencing (NGS) technologies render manual approaches to sequence comparison and annotation unsuitable. Rather, fast and efficient computational pipelines are needed to provide comprehensive statistics and summaries and enable the researcher to choose appropriate tools for more specific analyses. The workflow presented here builds upon previous pipelines designed for automated clustering and annotation of raw sequence reads obtained from next-generation sequencing technologies such as 454 and Illumina. Employing specialized algorithms, the sequence reads are processed at three different levels. First, raw reads are clustered at high similarity cutoff to yield clusters which can be exported as multifasta files for further analyses. Independently, open reading frames (ORFs) are predicted from raw reads and clustered at two strictness levels to yield sets of non-redundant sequences and ORF families. Furthermore, single ORFs are annotated by performing searches against the Pfam database
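The first processing level described above — clustering raw reads at a high similarity cutoff — can be illustrated with a toy sketch. This is a deliberately simplified assumption-laden illustration of the cutoff idea on short strings, not the pipeline's implementation; production pipelines use specialized algorithms (CD-HIT-style greedy clustering and similar) tuned for millions of reads.

```python
# Toy sketch of greedy read clustering at a similarity cutoff.
# The reads and the cutoff value below are invented for illustration.
from difflib import SequenceMatcher

def cluster_reads(reads, cutoff=0.9):
    """Assign each read to the first cluster whose representative
    (the cluster's first read) matches it at or above the cutoff."""
    clusters = []
    for read in reads:
        for cluster in clusters:
            if SequenceMatcher(None, read, cluster[0]).ratio() >= cutoff:
                cluster.append(read)
                break
        else:
            clusters.append([read])  # read becomes a new representative
    return clusters

reads = ["ACGTACGTAC", "ACGTACGTAA", "TTTTGGGGCC"]
clusters = cluster_reads(reads, cutoff=0.8)
# the first two reads differ by one base and share a cluster;
# the third read is dissimilar and forms its own cluster
```

Exporting each cluster as a multifasta file, as the workflow does, would then be a matter of writing each cluster's members under their own headers.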
Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm was previously used to study influences of related versus unrelated primes on the target word. Here, we use this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low than in the high frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after visibility of the prime and for prime durations of 32 ms yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading.
Wood is used for many applications because of its excellent mechanical properties, its relative abundance and because it is a renewable resource. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence and/or durability, thus increasing the potential service-life of wood materials. However, current treatments are limited because it is difficult to introduce and fix such modifications deep inside the tissue and cell wall. Within the scope of this thesis, novel chemical modification methods of wood cell walls were developed to improve both the dimensional stability and the water repellence of wood material. These methods were partly inspired by heartwood formation in living trees, a process that, for some species, results in the insertion of hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis a chemistry to modify wood cell walls was used which was inspired by the natural process of heartwood formation. Commercially available hydrophobic flavonoid molecules were effectively inserted in the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment to obtain “artificial heartwood”. Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellency and increased hardness. This approach differs markedly from established modifications, which mainly address hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied, since adhesion between hydrophobic polymers and hydrophilic cell wall components is known to be weak.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls for further polymerization to form polystyrene in the cell walls, which increased the dimensional stability of the bulk wood material and considerably reduced the water uptake of the cell walls when compared to controls. In the third part of the work, grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), in the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. Results indicated that polycaprolactone attached inside the cell walls and caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. A fully biodegradable and hydrophobized wood material was obtained with this method, which reduces the disposal problem of modified wood materials and has improved properties that extend the material's service-life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and supported this with in-situ styrene and ɛ-caprolactone polymerization inside the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
The 2002 Mw 7.9 Denali Fault earthquake, Alaska, provides an unparalleled opportunity to investigate in quantitative detail the regional hillslope mass-wasting response to strong seismic shaking in glacierized terrain. We present the first detailed inventory of ~1580 coseismic slope failures, of which some 20% occurred above large valley glaciers, based on mapping from multi-temporal remote sensing data. We find that the Denali earthquake produced at least one order of magnitude fewer landslides in a much narrower corridor along the fault ruptures than empirical predictions for an M 8 earthquake would suggest, despite the availability of sufficiently steep and dissected mountainous topography prone to frequent slope failure. In order to explore potential controls on the reduced extent of regional coseismic landsliding we compare our data with inventories that we compiled for two recent earthquakes in periglacial and formerly glaciated terrain, i.e. at Yushu, Tibet (Mw 6.9, 2010), and Aysen Fjord, Chile (Mw 6.2, 2007). Fault movement during these events was, similarly to that of the Denali earthquake, dominated by strike-slip offsets along near-vertical faults. Our comparison returns very similar coseismic landslide patterns that are consistent with the idea that fault type, geometry, and dynamic rupture process rather than widespread glacier cover were among the first-order controls on the regional hillslope erosional response in these earthquakes. We conclude that estimating the amount of coseismic hillslope sediment input to the sediment cascade from earthquake magnitude alone remains highly problematic, particularly if glacierized terrain is involved.
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu3+ and Tb3+ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning of the chromaticity to the emission of white light of the quality of a three-color emitter. Organic-based fluorescence processes of the MOF backbone as well as metal-based luminescence of the dopants are combined into one homogeneous single-source emitter while retaining the MOF's porosity. The lanthanide ions Eu3+ and Tb3+ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework without a structure-directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT), due to the reduction of the organic emission at higher temperatures. The study further illustrates the influence of the amount of luminescent ions on the porosity and sorption properties of the MOF and proves the intercalation of luminescence centers into the pore system by low-temperature site-selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the limit of homogeneous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from homogeneous co-doping to a two-phase composite system can be used beneficially to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) by doping with just one lanthanide, Eu3+ or Tb3+.
Soil in a changing world is subject to both anthropogenic and environmental stresses. Soil monitoring is essential to assess the magnitude of changes in soil variables and how they affect ecosystem processes and human livelihoods. However, we cannot always be sure which sampling design is best for a given monitoring task. We employed a rotational stratified simple random sampling (rotStRS) for the estimation of temporal changes in the spatial mean of saturated hydraulic conductivity (Ks) at three sites in central Panama in 2009, 2010 and 2011. To assess this design's efficiency we compared the resulting estimates of the spatial mean and variance for 2009 with those gained from stratified simple random sampling (StRS), which was effectively the data obtained on the first sampling time, and with an equivalent unexecuted simple random sampling (SRS). The poor performance of geometrical stratification and the weak predictive relationship between measurements of successive years yielded no advantage of sampling designs more complex than SRS. The failure of stratification may be attributed to the small large-scale variability of Ks. Revisiting previously sampled locations was not beneficial because of the large small-scale variability in combination with destructive sampling, resulting in poor consistency between revisited samples. We conclude that for our Ks monitoring scheme, repeated SRS is equally effective as rotStRS. Some problems of small-scale variability might be overcome by collecting several samples at close range to reduce the effect of small-scale variation. Finally, we give recommendations on the key factors to consider when deciding whether to use stratification and rotation in a soil monitoring scheme.
Many perceptual and cognitive tasks permit or require the integrated cooperation of specialized sensory channels, detectors, or other functionally separate units. In compound detection or discrimination tasks, 1 prominent general mechanism to model the combination of the output of different processing channels is probability summation. The classical example is the binocular summation model of Pirenne (1943), according to which a weak visual stimulus is detected if at least 1 of the 2 eyes detects this stimulus; as we review briefly, exactly the same reasoning is applied in numerous other fields. It is generally accepted that this mechanism necessarily predicts performance based on 2 (or more) channels to be superior to single channel performance, because 2 separate channels provide "2 chances" to succeed with the task. We argue that this reasoning is misleading because it neglects the increased opportunity with 2 channels not just for hits but also for false alarms and that there may well be no redundancy gain at all when performance is measured in terms of receiver operating characteristic curves. We illustrate and support these arguments with a visual detection experiment involving different spatial uncertainty conditions. Our arguments and findings have important implications for all models that, in one way or another, rest on, or incorporate, the notion of probability summation for the analysis of detection tasks, 2-alternative forced-choice tasks, and psychometric functions.
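The argument above hinges on the closed form of probability summation for independent channels: the probability that at least one channel responds is one minus the probability that neither does. The sketch below, with made-up per-channel probabilities, shows the point that the same rule that inflates hits also inflates false alarms.

```python
# Probability summation across two independent channels.
# The probabilities below are illustrative, not experimental values.

def prob_summation(p1, p2):
    """Probability that at least one of two independent channels responds:
    1 - P(neither responds)."""
    return 1 - (1 - p1) * (1 - p2)

# "Two chances" to detect a weak stimulus...
hit_rate = prob_summation(0.6, 0.6)    # 0.84, higher than either channel alone
# ...but equally two chances to respond to noise:
false_alarms = prob_summation(0.1, 0.1)  # 0.19, nearly double a single channel
```

Whether the hit-rate gain survives once the inflated false-alarm rate is taken into account is exactly the ROC-based question the abstract raises.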
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptional and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model or the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access of Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is eligible both for processing and provisioning the corresponding information.
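As a purely illustrative sketch of what identifying "structural relationships" between SPARQL queries can involve (the thesis's own algorithms are not reproduced here), one simple approach is to abstract each query's triple patterns, normalizing away variable names, and compare the resulting sets. All query strings and the normalization scheme below are assumptions for the example.

```python
# Illustrative structural comparison of SPARQL queries via abstracted
# triple patterns. This crude parser assumes one flat WHERE block.
import re

def triple_signature(query):
    """Extract triple patterns and normalize every variable to '?v'."""
    body = query[query.index("{") + 1 : query.rindex("}")]
    triples = [t.strip() for t in body.split(".") if t.strip()]
    return {re.sub(r"\?\w+", "?v", t) for t in triples}

def jaccard(q1, q2):
    """Set overlap of abstracted triple patterns (0 = disjoint, 1 = identical)."""
    s1, s2 = triple_signature(q1), triple_signature(q2)
    return len(s1 & s2) / len(s1 | s2)

q_a = "SELECT ?x WHERE { ?x a <Person> . ?x <name> ?n }"
q_b = "SELECT ?p WHERE { ?p a <Person> }"
similarity = jaccard(q_a, q_b)  # 0.5: one shared pattern out of two distinct ones
```

A similarity measure of this kind is also what makes prefetching plausible: structurally close queries tend to touch overlapping result sets.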
Morphological systems are constrained in how they interact with each other. One case that has been widely studied in the psycholinguistic literature is the avoidance of plurals inside compounds (e.g. *rats eater vs. rat eater) in English and other languages, the so-called plurals-in-compounds effect. Several previous studies have shown that both adult and child speakers are sensitive to this contrast, but the question of whether semantic, morphological, or surface-form constraints are responsible for the plurals-in-compounds effect remains controversial. The present study provides new empirical evidence from adult and child English to resolve this controversy. Graded linguistic judgments were obtained from 96 children (age range: 7;06 to 12;08) and 32 adults. In the task, participants were asked to rate compounds containing different kinds of singular or plural modifiers. The results indicated that both children and adults disliked regular plurals inside compounds, whereas irregular plurals were rated as marginal and singulars as fully acceptable. Furthermore, acceptability ratings were found not to be affected by the phonological surface form of a compound-internal modifier. We conclude that semantic and morphological (rather than surface-form) constraints are responsible for the plurals-in-compounds effect, in both children and adults.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor for the vegetation degradation within the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High influx of Pediastrum (a green alga), high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Geometric generalization is a fundamental concept in the digital mapping process. An increasing amount of spatial data is provided on the web as well as a range of tools to process it. This jABC workflow is used for the automatic testing of web-based generalization services like mapshaper.org by executing its functionality, overlaying both datasets before and after the transformation and displaying them visually in a .tif file. Mostly Web Services and command line tools are used to build an environment where ESRI shapefiles can be uploaded, processed through a chosen generalization service and finally visualized in Irfanview.
The Dansgaard-Oeschger oscillations and Heinrich events described in North Atlantic sediments and Greenland ice are expressed in the climate of the tropics, for example, as documented in Arabian Sea sediments. Given the strength of this teleconnection, we seek to reconstruct its range of environmental impacts. We present geochemical and sedimentological data from core SO130-289KL from the Indus submarine slope spanning the last ~80 kyr. Elemental and grain size analyses consistently indicate that interstadials are characterized by an increased contribution of fluvial suspension from the Indus River. In contrast, stadials are characterized by an increased contribution of aeolian dust from the Arabian Peninsula. Decadal-scale shifts at climate transitions, such as onsets of interstadials, were coeval with changes in productivity-related proxies. Heinrich events stand out as especially dry and dusty events, indicating a dramatically weakened Indian summer monsoon, potentially increased winter monsoon circulation, and increased aridity on the Arabian Peninsula. This finding is consistent with other paleoclimate evidence for continental aridity in the northern tropics during these events. Our results strengthen the evidence that circum-North Atlantic temperature variations translate to hydrological shifts in the tropics, with major impacts on regional environmental conditions such as rainfall, river discharge, aeolian dust transport, and ocean margin anoxia.
Context. It is not yet clear whether magnetic fields play an essential role in shaping planetary nebulae (PNe), or whether stellar rotation alone and/or a close binary companion, stellar or substellar, can account for the variety of the observed nebular morphologies.
Aims. In a quest for empirical evidence verifying or disproving the role of magnetic fields in shaping planetary nebulae, we follow up on previous attempts to measure the magnetic field in a representative sample of PN central stars.
Methods. We obtained low-resolution polarimetric spectra with FORS2 installed on the Antu telescope of the VLT for a sample of 12 bright central stars of PNe with different morphologies, including two round nebulae, seven elliptical nebulae, and three bipolar nebulae. Two targets are Wolf-Rayet type central stars.
Results. For the majority of the observed central stars, we do not find any significant evidence for the existence of surface magnetic fields. However, our measurements may indicate the presence of weak mean longitudinal magnetic fields of the order of 100 Gauss in the central star of the young elliptical planetary nebula IC 418 as well as in the Wolf-Rayet type central star of the bipolar nebula Hen 2-113 and the weak emission line central star of the elliptical nebula Hen 2-131. A clear detection of a 250 G mean longitudinal field is achieved for the A-type companion of the central star of NGC 1514. Some of the central stars show a moderate night-to-night spectrum variability, which may be the signature of a variable stellar wind and/or rotational modulation due to magnetic features.
Conclusions. Since our analysis indicates only weak fields, if any, in a few targets of our sample, we conclude that strong magnetic fields of the order of kG are not widespread among PNe central stars. Nevertheless, simple estimates based on a theoretical model of magnetized wind bubbles suggest that even weak magnetic fields below the current detection limit of the order of 100 G may well be sufficient to contribute to the shaping of the surrounding nebulae throughout their evolution. Our current sample is too small to draw conclusions about a correlation between nebular morphology and the presence of stellar magnetic fields.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions represented as object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking which currently only considers control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models although, in real-world process repositories, often underspecified models are found. Coping with these issues, we introduce the concept of synchronized object life cycles and we define a mapping of data constraints of a process model to Petri nets extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in or can reach the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
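For a single data object, the notion of weak conformance sketched above reduces to a reachability question over the object's life cycle: when an activity needs the object in state s, the object must be in s or still be able to reach s. The following toy sketch captures that check; the life cycle, its state names, and the dictionary encoding are invented for illustration and are not the paper's Petri-net-based formalization.

```python
# Weak-conformance check for one data object as life-cycle reachability.
# A life cycle is encoded as {state: [successor states]}; names are made up.
from collections import deque

def can_reach(life_cycle, current, wanted):
    """BFS over the object life cycle: is `wanted` equal to or reachable
    from `current` via state transitions?"""
    seen, queue = {current}, deque([current])
    while queue:
        state = queue.popleft()
        if state == wanted:
            return True
        for nxt in life_cycle.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical order life cycle: created -> approved -> shipped
order = {"created": ["approved"], "approved": ["shipped"]}
can_reach(order, "created", "shipped")   # an activity reading "shipped" conforms weakly
can_reach(order, "shipped", "approved")  # violation: "approved" is no longer reachable
```

The paper's verification works on synchronized life cycles of multiple objects mapped to Petri nets, so the real check is correspondingly richer than this single-object reachability test.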
The economic impact analysis contained in this book shows how irrigation farming is particularly susceptible when applying certain water management policies in the Australian Murray-Darling Basin, one of the world's largest river basins and Australia's most fertile region. By comparing different pricing and non-pricing water management policies with the help of the Water Integrated Market Model, it is found that the impact of water-demand-reducing policies is most severe on crops that need to be intensively irrigated and are at the same time less water productive. A combination of increasingly frequent and severe droughts and the application of policies that decrease agricultural water demand, in the same region, will create a situation in which the highly water-dependent crops rice and cotton cannot be cultivated at all.
Leaching of dissolved C in arable hummocky ground moraine soil landscapes is characterized by a spatial continuum of more or less erosion-affected Luvisols, Calcaric Regosols at exposed positions, and Colluvic Regosols in depressions. Our objective was to estimate the fluxes of dissolved C in four differently eroded soils as affected by erosion-induced pedological and soil structural alterations. In this model study, we considered landscape position effects by adapting the water table as the bottom boundary condition and erosion effects by using pedon-specific soil hydraulic properties. The one-dimensional vertical water movement was described with the Richards equation using HYDRUS-1D. Solute fluxes were obtained by combining calculated water fluxes with concentrations of dissolved organic and inorganic C (DOC and DIC, respectively) measured from soil solution extracted by suction cups at biweekly intervals. In the 3-yr period (2010-2012), DOC fluxes in the 2-m soil depth were similar at the three non-colluvic locations with -0.8 +/- 0.1 g m^-2 yr^-1 (i.e., outflow) but were 0.4 g m^-2 yr^-1 (i.e., input) in the depression. The DIC fluxes ranged from -10.2 g m^-2 yr^-1 for the eroded Luvisol, -9.2 g m^-2 yr^-1 for the Luvisol, and -6.1 g m^-2 yr^-1 for the Calcaric Regosol to 3.2 g m^-2 yr^-1 for the Colluvic Regosol. The temporal variations in DOC and DIC fluxes were controlled by water fluxes. The spatially distributed leaching results corroborate the hypothesis that the effects of soil erosion influence fluxes through modified hydraulic and transport properties and terrain-dependent boundary conditions.
The Kv-like (potassium voltage-dependent) K+ channels at the plasma membrane, including the inward-rectifying KAT1 K+ channel of Arabidopsis (Arabidopsis thaliana), are important targets for manipulating K+ homeostasis in plants. Gating modification, especially, has been identified as a promising means by which to engineer plants with improved characteristics in mineral and water use. Understanding plant K+ channel gating poses several challenges, despite many similarities to that of mammalian Kv and Shaker channel models. We have used site-directed mutagenesis to explore residues that are thought to form two electrostatic countercharge centers on either side of a conserved phenylalanine (Phe) residue within the S2 and S3 alpha-helices of the voltage sensor domain (VSD) of Kv channels. Consistent with molecular dynamic simulations of KAT1, we show that the voltage dependence of the channel gate is highly sensitive to manipulations affecting these residues. Mutations of the central Phe residue favored the closed KAT1 channel, whereas mutations affecting the countercharge centers favored the open channel. Modeling of the macroscopic current kinetics also highlighted a substantial difference between the two sets of mutations. We interpret these findings in the context of the effects on hydration of amino acid residues within the VSD and with an inherent bias of the VSD, when hydrated around a central Phe residue, to the closed state of the channel.
After more than a decade of multidisciplinary studies of the Central American subduction zone, mainly in the framework of two large research programmes, the US MARGINS program and the German Collaborative Research Center SFB 574, we here review and interpret the data pertinent to quantifying the cycling of mineral-bound volatiles (H2O, CO2, Cl, S) through this subduction system. For input-flux calculations, we divide the Middle America Trench into four segments differing in convergence rate and slab lithological profiles, use the latest evidence for mantle serpentinization of the Cocos slab approaching the trench, and for the first time explicitly include subduction erosion of forearc basement. Resulting input fluxes are 40-62 (53) Tg/Ma/m H2O, 7.8-11.4 (9.3) Tg/Ma/m CO2, 1.3-1.9 (1.6) Tg/Ma/m Cl, and 1.3-2.1 (1.6) Tg/Ma/m S (bracketed values are means for the entire trench length). Output by cold seeps on the forearc amounts to 0.625-1.25 Tg/Ma/m H2O, partly derived from the slab sediments as determined by geochemical analyses of fluids and carbonates. The major volatile output occurs at the Central American volcanic arc, which is divided into ten arc segments by dextral strike-slip tectonics. Based on volcanic edifice and widespread tephra volumes as well as calculated parental magma masses needed to form observed evolved compositions, we determine long-term (10^5 years) average magma and K2O fluxes for each of the ten segments as 32-242 (106) Tg/Ma/m magma and 0.28-2.91 (1.38) Tg/Ma/m K2O (bracketed values are means for the entire Central American volcanic arc length). Volatile/K2O concentration ratios derived from melt inclusion analyses and petrologic modelling then allow us to calculate volatile fluxes as 1.02-14.3 (6.2) Tg/Ma/m H2O, 0.02-0.45 (0.17) Tg/Ma/m CO2, and 0.07-0.34 (0.22) Tg/Ma/m Cl. The same approach yields long-term sulfur fluxes of 0.12-1.08 (0.54) Tg/Ma/m, while present-day open-vent SO2-flux monitoring yields 0.06-2.37 (0.83) Tg/Ma/m S.
Input-output comparisons show that the arc water fluxes account for only up to 40 % of the input, even if we include an "invisible" plutonic component constrained by crustal growth. With 20-30 % of the H2O input transferred into the deeper mantle as suggested by petrologic modeling, there remains a deficiency of roughly 30-40 % in the water budget. At least some of this water is transferred into two upper-plate regions of low seismic velocity and electrical resistivity whose sizes vary along the arc: one region widely envelops the melt ascent paths from the slab top to the arc, and the other extends obliquely from the slab below the forearc to below the arc. Whether these reservoirs are transient or steady remains unknown.
VMP1-deficient Chlamydomonas exhibits severely aberrant cell morphology and disrupted cytokinesis
Background: The versatile Vacuole Membrane Protein 1 (VMP1) has been previously investigated in six species. It has been shown to be essential in macroautophagy, where it takes part in autophagy initiation. In addition, VMP1 has been implicated in organellar biogenesis; endo-, exo- and phagocytosis, and protein secretion; apoptosis; and cell adhesion. These roles underlie its proven involvement in pancreatitis, diabetes and cancer in humans.
Results: In this study we analyzed a VMP1 homologue from the green alga Chlamydomonas reinhardtii. CrVMP1 knockdown lines showed severe phenotypes, mainly affecting cell division as well as the morphology of cells and organelles. We also provide several pieces of evidence for its involvement in macroautophagy.
With the growth of virtualization and cloud computing, more and more forensic investigations rely on being able to perform live forensics on a virtual machine using virtual machine introspection (VMI). Inspecting a virtual machine through its hypervisor enables investigation without risking contamination of the evidence, crashing the computer, etc. To make these techniques more accessible to investigators and researchers, we have developed a new VMI monitoring language. This language is based on a review of the most commonly used VMI techniques to date, and it enables the user to monitor the virtual machine's memory, events and data streams. A prototype of our monitoring system was implemented in KVM, though implementation on any hypervisor that uses the common x86 virtualization hardware assistance support should be straightforward. Our prototype outperforms the proprietary VMware VProbes in many cases, with a maximum performance loss of 18% for a realistic test case, which we consider acceptable. Our implementation is freely available under a liberal software distribution license. (C) 2014 Digital Forensics Research Workshop. Published by Elsevier Ltd. All rights reserved.
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the (preferably automated) means to extract and evaluate information from their artifacts such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means for exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature.
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
A workflow for visualizing server connections using the Google Maps API was built in the jABC. It makes use of three basic services: an XML-based IP address geolocation web service, a command line tool, and the Static Maps API. The result of the workflow is a URL leading to an image file of a map showing server connections between a client and a target host.
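The final step of such a workflow, turning two geolocated endpoints into a Static Maps URL, can be sketched as follows; the coordinates, image size, function name, and API key are placeholders for illustration, not values from the original jABC workflow:

```python
from urllib.parse import urlencode

def static_map_url(client, target, size="640x400", key="YOUR_API_KEY"):
    """Build a Static Maps URL with markers on, and a path between,
    two (lat, lon) points. All argument values are placeholders."""
    base = "https://maps.googleapis.com/maps/api/staticmap"
    points = f"{client[0]},{client[1]}|{target[0]},{target[1]}"
    params = {"size": size, "markers": points, "path": points, "key": key}
    return f"{base}?{urlencode(params)}"

url = static_map_url((52.40, 13.06), (37.42, -122.08))
```

Fetching the resulting URL (with a valid key) returns the map image directly, which is why the workflow can end by simply emitting the URL.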
Using density functional theory and ab initio molecular dynamics with electronic friction (AIMDEF), we study the adsorption and dissipative vibrational dynamics of hydrogen atoms chemisorbed on free-standing lead films of increasing thickness. Lead films are known for the oscillatory behaviour of certain properties with increasing thickness; e.g., energy and electron spill-out change in a discontinuous manner due to quantum size effects [G. Materzanini, P. Saalfrank, and P.J.D. Lindan, Phys. Rev. B 63, 235405 (2001)]. Here, we demonstrate that oscillatory features arise also for hydrogen when chemisorbed on lead films. Besides stationary properties of the adsorbate, we concentrate on finite vibrational lifetimes of H-surface vibrations. As shown by AIMDEF, damping via vibration-electron hole pair coupling clearly dominates over the vibration-phonon channel, in particular for high-frequency modes. Vibrational relaxation times are a characteristic function of layer thickness due to the oscillating behaviour of the embedding surface electronic density. Implications derived from AIMDEF for frictional many-atom dynamics and for physisorbed species are also discussed. (C) 2014 AIP Publishing LLC.
The Galactic center is an interesting region for high-energy (0.1-100 GeV) and very-high-energy (E > 100 GeV) gamma-ray observations. Potential sources of GeV/TeV gamma-ray emission have been suggested, e.g., the accretion of matter onto the supermassive black hole, cosmic rays from a nearby supernova remnant (e.g., Sgr A East), particle acceleration in a plerion, or the annihilation of dark matter particles. The Galactic center has been detected by EGRET and by Fermi/LAT in the MeV/GeV energy band. At TeV energies, the Galactic center was detected with moderate significance by the CANGAROO and Whipple 10 m telescopes and with high significance by H.E.S.S., MAGIC, and VERITAS. We present the results from three years of VERITAS observations conducted at large zenith angles, resulting in a detection of the Galactic center at the level of 18 standard deviations at energies above ~2.5 TeV. The energy spectrum is derived and is found to be compatible with hadronic, leptonic, and hybrid emission models discussed in the literature. Future, more detailed measurements of the high-energy cutoff and better constraints on the high-energy flux variability will help to refine and/or disentangle the individual models.
Vertical radar profiling (VRP) is a single-borehole geophysical technique, in which the receiver antenna is located within a borehole and the transmitter antenna is placed at one or various offsets from the borehole. Today, VRP surveying is primarily used to derive 1D velocity models by inverting the arrival times of direct waves. Using field data collected at a well-constrained test site in Germany, we evaluated a VRP workflow relying on the analysis of direct-arrival traveltimes and amplitudes as well as on imaging reflection events. To invert our VRP traveltime data, we used a global inversion strategy resulting in an ensemble of acceptable velocity models, which allowed us to appraise uncertainty issues in the estimated velocities as well as in porosity models derived via petrophysical translations. In addition to traveltime inversion, the analysis of direct-wave amplitudes and reflection events provided further valuable information regarding subsurface properties and architecture. The VRP amplitude preprocessing and inversion procedures used were adapted from ray-based crosshole ground-penetrating radar (GPR) attenuation tomography and resulted in an attenuation model, which can be used to estimate variations in electrical resistivity. Our VRP reflection imaging approach relied on corridor stacking, which is a well-established processing sequence in vertical seismic profiling. The resulting reflection image outlines bounding layers and can be directly compared to surface-based GPR reflection profiling. Our results from the combined analysis of VRP traveltimes, amplitudes, and reflections were consistent with independent core and borehole logs as well as GPR reflection profiles, which enabled us to derive a detailed hydro-stratigraphic model as needed, for example, to understand and model groundwater flow and transport.
Crustal earthquake swarms are an expression of intensive cracking and rock damaging over periods of days, weeks or months in a small source region in the crust. They are caused by longer-lasting stress changes in the source region. Often, the localized stressing of the crust is associated with fluid or gas migration, possibly in combination with pre-existing zones of weakness. However, verifying and quantifying localized fluid movement at depth remains difficult since the area affected is small and geophysical prospecting methods often cannot reach the required resolution.
We apply a simple and robust method to estimate the velocity ratio between compressional (P) and shear (S) waves (the vP/vS ratio) in the source region of an earthquake swarm. The vP/vS ratio may be unusually small if the swarm is related to gas in a porous or fractured rock. The method uses arrival time differences between P and S waves observed at surface seismic stations, and the associated double differences between pairs of earthquakes. An advantage is that earthquake locations are not required, and the method seems less dependent on unknown velocity variations in the crust outside the source region. It is thus suited for monitoring purposes.
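The double-difference idea can be condensed into a minimal sketch: for two events recorded at the same stations, the station-wise differential S arrival times scale with the differential P times by the factor vP/vS, which a least-squares slope recovers without locating either event. The synthetic arrival times below are purely illustrative and not data from the study:

```python
import numpy as np

def vp_vs_ratio(tp, ts):
    """Estimate vP/vS for an event pair from P and S arrival times at
    common stations (rows: stations, columns: the two events). The
    differential S times scale with the differential P times by vP/vS;
    demeaning removes origin-time differences before the slope fit."""
    dtp = tp[:, 0] - tp[:, 1]
    dts = ts[:, 0] - ts[:, 1]
    dtp = dtp - dtp.mean()
    dts = dts - dts.mean()
    return float(np.dot(dtp, dts) / np.dot(dtp, dtp))

# synthetic arrival times constructed so that the true ratio is 1.68
rng = np.random.default_rng(0)
tp_synth = rng.normal(5.0, 0.2, size=(8, 2))
ratio = vp_vs_ratio(tp_synth, 1.68 * tp_synth)
```

Because only differential times enter, absolute station and event locations drop out, which is what makes the approach attractive for monitoring.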
Applications comprise three natural, mid-crustal (8-10 km) earthquake swarms between 1997 and 2008 from the NW-Bohemia swarm region. We resolve a strong temporal decrease of vP/vS before and during the main activity of the swarms, and a recovery of vP/vS to background levels at the end of the swarms. The anomalies are interpreted in terms of the Biot-Gassmann equations, assuming the presence of oversaturated fluids degassing during the initial phase of swarm activity.
We assessed tropical montane cloud forest (TMCF) sensitivity to natural disturbance by drought, fire, and dieback with a 7300-year-long paleorecord. We analyzed pollen assemblages, charcoal accumulation rates, and higher plant biomarker compounds (average chain length [ACL] of n-alkanes) in sediments from Wai'anapanapa, a small lake near the upper forest limit and the mean trade wind inversion (TWI) in Hawai'i. The paleorecord of ACL suggests increased drought frequency and a lower TWI elevation from 2555-1323 cal yr B.P. and 606-334 cal yr B.P. Charcoal began to accumulate and a novel fire regime was initiated ca. 880 cal yr B.P., followed by a decreased fire return interval at ca. 550 cal yr B.P. Diebacks occurred at 2931, 2161, 1162, and 306 cal yr B.P., and two of these were independent of drought or fire. Pollen assemblages indicate that on average species composition changed only 2.8% per decade. These dynamics, though slight, were significantly associated with disturbance. The direction of species composition change varied with disturbance type. Drought was associated with significantly more vines and lianas; fire was associated with an increase in the tree fern Sadleria and indicators of open, disturbed landscapes at the expense of epiphytic ferns; whereas stand-scale dieback was associated with an increase in the tree fern Cibotium. Though this cloud forest was dynamic in response to past disturbance, it has recovered, suggesting a resilient TMCF with no evidence of state change in vegetation type (e.g., grassland or shrubland).
Question: Does eutrophication drive vegetation change in pine forests on nutrient-deficient sites and thus lead to the homogenization of understorey species composition?
Location: Forest area (1600 ha) in the Lower Spreewald, Brandenburg, Germany.
Methods: Resurvey of 77 semi-permanent plots after 45 yr, including vascular plants, bryophytes and ground lichens. We applied multidimensional ordination of species composition, dissimilarity indices, mean Ellenberg indicator values and the concept of winner/loser species to identify vegetation change between years. Differential responses along a gradient of nutrient availability were analysed on the basis of initial vegetation type, reflecting topsoil N availability of plots.
Results: Species composition changed strongly and overall shifted towards higher N and slightly lower light availability. Differences in vegetation change were related to initial vegetation type, with the strongest compositional changes in the oligotrophic forest type but the strongest increase of nitrophilous species in the mesotrophic forest type. Despite an overall increase in species number, species composition was homogenized between study years due to the loss of species (mainly ground lichens) on the most oligotrophic sites.
Conclusions: The response to N enrichment is confounded by canopy closure on the N-richest sites and probably by water limitation on the N-poorest sites. The relative importance of atmospheric N deposition in the eutrophication effect is difficult to disentangle from natural humus accumulation after historical litter raking. However, the profound differences in species composition between study years across all forest types suggest that atmospheric N deposition contributes to the eutrophication, which drives understorey vegetation change and biotic homogenization in Central European Scots pine forests on nutrient-deficient sites.
We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, ν), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.
Two opposing viewpoints have been advanced to account for morphological productivity, one according to which some knowledge is couched in the form of operations over variables, and another in which morphological generalization is primarily determined by similarity. We investigated this controversy by examining the generalization of Portuguese verb stems, which fall into one of three conjugation classes. In Study 1, an elicited production task revealed that the generalization of 2nd and 3rd conjugation stems is influenced by the degree of phonological similarity between novel roots and existing verbs, whereas the 1st conjugation generalizes beyond similarity. In Study 2, we directly contrasted two distinct computational implementations of conjugation class assignment in how well they matched the human data: a similarity-driven model that captures phonological similarities, and a dual-mechanism model that implements an explicit distinction between context-free and similarity-based generalizations. The similarity-driven model consistently underestimated 1st conjugation responses and overestimated proportions of 2nd and 3rd conjugation responses, especially for novel verbs that are highly similar to existing verbs of those classes. In contrast, the expected proportions produced by the dual-mechanism model were statistically indistinguishable from human responses. We conclude that both context-free and context-sensitive processes determine the generalization of conjugations in Portuguese, and that similarity-based algorithms of morphological acquisition are insufficient to exhibit default-like generalization. (C) 2014 Elsevier Inc. All rights reserved.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals for tongue height for /ɪ/-/e/ that were also reflected in acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match features descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.
The International Association for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam.
Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video study in investigating mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam, drawing lessons from the video studies
Hopefully, this dissertation will be a useful reference material for pre-service teachers at education universities to understand the nature of teaching and develop their teaching career.
Sedimentary proxies used to reconstruct marine productivity suffer from variable preservation and are sensitive to factors other than productivity. Therefore, proxy calibration is warranted. Here we map the spatial patterns of two paleoproductivity proxies, biogenic opal and barium fluxes, from a set of core-top sediments recovered in the Subarctic North Pacific. Comparisons of the proxy data with independent estimates of primary and export production, surface water macronutrient concentrations, and biological pCO2 drawdown indicate that neither proxy shows a significant correlation with primary or export productivity for the entire region. Biogenic opal fluxes, when corrected for preservation using Th-230-normalized accumulation rates, show a good correlation with primary productivity along the volcanic arcs (tau = 0.71, p = 0.0024) and with export productivity throughout the western Subarctic North Pacific (tau = 0.71, p = 0.0107). Moderate and good correlations of biogenic barium flux with export production (tau = 0.57, p = 0.0022) and with surface water silicate concentrations (tau = 0.70, p = 0.0002) are observed for the central and eastern Subarctic North Pacific. For reasons unknown, however, no correlation is found in the western Subarctic North Pacific between biogenic barium flux and the reference data. Nonetheless, we show that barite saturation, uncertainty in the lithogenic barium corrections, and problems with the reference data sets are not responsible for the lack of a significant correlation between biogenic barium flux and the reference data. Further studies evaluating the factors controlling the variability of the biogenic constituents in the sediments are desirable in this region.
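The tau statistics quoted above are rank correlations. Assuming they are Kendall's tau, the statistic can be sketched in a few lines of pure Python (tau-a, i.e. without tie handling); the proxy and reference values below are hypothetical, not data from the study:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    Assumes no tied values."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical opal fluxes vs. export-production estimates
opal_flux = [1.2, 3.4, 2.1, 5.0, 4.2, 6.1, 3.9, 7.3]
export_prod = [0.8, 2.9, 1.7, 4.1, 3.6, 5.2, 4.4, 6.5]
tau = kendall_tau(opal_flux, export_prod)
```

Because only the rank ordering enters, tau is robust to the nonlinear preservation effects that complicate absolute flux comparisons; in practice a library routine with tie handling (tau-b) would be used instead.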
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of administration of combinations of substances, complementary administration of substances to treat side effects, and use of stimulants to promote loss of body fat was common.
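The effect size reported here is Hedges's g, the pooled-SD standardized mean difference with a small-sample bias correction. A minimal sketch follows; the D-score samples are made up for illustration and are not the study's data:

```python
from statistics import mean, variance

def hedges_g(a, b):
    """Hedges's g: Cohen's d with the small-sample correction
    J = 1 - 3 / (4*df - 1), where df = n_a + n_b - 2."""
    na, nb = len(a), len(b)
    df = na + nb - 2
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / df
    d = (mean(a) - mean(b)) / pooled_var ** 0.5
    return (1 - 3 / (4 * df - 1)) * d

# hypothetical BIAT D-scores (more negative = more lenient to doping)
dopers = [-0.9, -0.7, -0.5, -0.8]
non_dopers = [-0.2, 0.1, -0.1, 0.0]
g = hedges_g(dopers, non_dopers)
```

A negative g here means the doper group's D-scores are lower (more lenient) than those of the non-dopers, matching the sign convention of the reported g = -0.76.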
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
Due to increasing demands and competition for high-quality groundwater resources in many parts of the world, there is an urgent need for efficient methods that shed light on the interplay between complex natural settings and anthropogenic impacts. Thus, a new approach is introduced that aims to identify and quantify the predominant processes or factors of influence that drive groundwater and lake water dynamics on a catchment scale. The approach involves a non-linear dimension reduction method called Isometric feature mapping (Isomap). This method is applied to time series of groundwater head and lake water level data from a complex geological setting in Northeastern Germany. Two factors explaining more than 95% of the observed spatial variations are identified: (1) the anthropogenic impact of a waterworks in the study area and (2) natural groundwater recharge with different degrees of dampening at the respective sites of observation. The approach enables a presumption-free assessment to be made of the existing geological conception of the catchment, leading to an extension of that conception. Previously unknown hydraulic connections between two aquifers are identified, as are connections between surface water bodies and groundwater. (C) 2014 Elsevier B.V. All rights reserved.
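The dimension-reduction step described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's records: the well count, mixing coefficients, and Isomap settings are invented, and each observation well is treated as one high-dimensional point (its entire head time series) to be embedded into a low-dimensional factor space.

```python
# Sketch of Isomap applied to groundwater head time series, one row per
# observation well (synthetic data; all parameters are illustrative).
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
recharge = np.sin(2 * np.pi * t)           # natural recharge signal
pumping = np.clip(np.cos(0.5 * t), 0, 1)   # waterworks drawdown signal

# Each well mixes the two drivers with a different dampening factor.
wells = np.array([a * recharge + b * pumping
                  + 0.05 * rng.standard_normal(t.size)
                  for a, b in rng.uniform(0.2, 1.0, size=(30, 2))])

# Embed the 30 wells (each a 500-dimensional point) into 2 components.
embedding = Isomap(n_neighbors=5, n_components=2).fit_transform(wells)
print(embedding.shape)  # (30, 2): one coordinate pair per well
```

In this toy setup the two recovered coordinates play the role of the study's two identified factors (waterworks impact and dampened recharge).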
Aims: Contrast media-induced nephropathy (CIN) is associated with increased morbidity and mortality. The renal endothelin (ET) system has been associated with disease progression in various acute and chronic renal diseases. However, robust data from adequately powered prospective clinical studies analyzing the short- and long-term impacts of the renal ET system in patients with CIN have so far been missing. We thus performed a prospective study addressing this topic.
Main methods: We included 327 patients with diabetes or renal impairment undergoing coronary angiography. Blood and spot urine were collected before and 24 h after contrast media (CM) application. Patients were followed for 90 days for major clinical events such as the need for dialysis, unplanned rehospitalization or death.
Key findings: The concentration of ET-1 and the urinary ET-1/creatinine ratio decreased in spot urine after CM application (ET-1 concentration: 0.91 +/- 1.23 pg/ml versus 0.63 +/- 1.03 pg/ml, p<0.001; ET-1/creatinine ratio: 0.14 +/- 0.23 versus 0.09 +/- 0.19, p<0.001). The urinary ET-1 concentrations in patients with CIN decreased significantly more than in patients without CIN (non-CIN: -0.26 +/- 1.42 pg/ml vs. CIN: -0.79 +/- 1.69 pg/ml, p=0.041), whereas the decrease of the urinary ET-1/creatinine ratio was not significantly different (non-CIN patients: -0.05 +/- 0.30; CIN patients: -0.11 +/- 0.21, p=0.223). Urinary ET-1 concentrations as well as the urinary ET-1/creatinine ratio were not associated with clinical events (need for dialysis, rehospitalization or death) during the 90-day follow-up after contrast media exposure. However, the urinary ET-1 concentration and the urinary ET-1/creatinine ratio after CM application were higher in those patients who had a decrease in GFR of at least 25% after 90 days of follow-up.
Significance: In general, the ET-1 system in the kidney seems to be down-regulated after contrast media application in patients with moderate CIN risk. Major long-term complications of CIN (need for dialysis, rehospitalization or death) are not associated with the renal ET system. (C) 2014 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
Inorganic arsenicals are environmental toxins that have been connected with neuropathies and impaired cognitive functions. To investigate whether such substances accumulate in brain astrocytes and affect their viability and glutathione metabolism, we exposed cultured primary astrocytes to arsenite or arsenate. Both arsenicals compromised the viability of astrocytes in a time- and concentration-dependent manner. However, the earlier onset of cell toxicity in arsenite-treated astrocytes revealed the higher toxic potential of arsenite compared with arsenate. The concentrations of arsenite and arsenate that caused half-maximal release of the cytosolic enzyme lactate dehydrogenase within 24 h were around 0.3 mM and 10 mM, respectively. The cellular arsenic contents of astrocytes increased rapidly upon exposure to arsenite or arsenate and reached almost constant steady-state levels after 4 h of incubation. These levels were about three times higher in astrocytes that had been exposed to a given concentration of arsenite compared with the respective arsenate condition. Analysis of the intracellular arsenic species revealed that almost exclusively arsenite was present in viable astrocytes that had been exposed to either arsenate or arsenite. The emerging toxicity of arsenite 4 h after exposure was accompanied by a loss in cellular total glutathione and by an increase in the cellular glutathione disulfide content. These data suggest that the high arsenite content of astrocytes exposed to inorganic arsenicals causes an increase in the ratio of glutathione disulfide to glutathione, which contributes to the toxic potential of these substances.
Herein, we report the use of upconversion agents to modify graphitic carbon nitride (g-C3N4) by direct thermal condensation of a mixture of ErCl3·6H2O and the supramolecular precursor cyanuric acid-melamine. We show the enhancement of g-C3N4 photoactivity after Er3+ doping by monitoring the photodegradation of Rhodamine B dye under visible light. The contribution of the upconversion agent is demonstrated by measurements using only a red laser. The Er3+ doping alters both the electronic and the chemical properties of g-C3N4: it reduces the emission intensity and lifetime, indicating the formation of new, nonradiative deactivation pathways, probably involving charge-transfer processes.
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, going from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies permeate into the intergalactic medium (IGM) and gradually ionize the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere-scheme, which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code and computes the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore we have updated the 3D Monte-Carlo radiative transfer pCRASH, enabling detailed reionization simulations which take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the influence of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and the dust in the interstellar medium (ISM) on the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy is not broken even when we compare the simulated with the observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
Inferring the internal interaction patterns of a complex dynamical system is a challenging problem. Traditional methods often rely on examining the correlations among the dynamical units. However, in systems such as transcription networks, one unit's variable is also correlated with the rate of change of another unit's variable. Inspired by this, we introduce the concept of derivative-variable correlation, and use it to design a new method of reconstructing complex systems (networks) from dynamical time series. Using a tunable observable as a parameter, the reconstruction of any system with known interaction functions is formulated via a simple matrix equation. We suggest a procedure aimed at optimizing the reconstruction from the time series of length comparable to the characteristic dynamical time scale. Our method also provides a reliable precision estimate. We illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model error and observation error.
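The core of the method described above can be illustrated for the simplest case of linear dynamics. In the sketch below the interaction functions are assumed known and linear (dx/dt = A x), the system states and the interaction matrix are invented for demonstration, and the reconstruction reduces to solving a single matrix equation relating the derivative-variable correlation matrix to the variable-variable correlation matrix; the paper's general formulation covers arbitrary known interaction functions and adds an optimization and precision estimate not shown here.

```python
# Minimal sketch of network reconstruction from derivative-variable
# correlations, for assumed linear dynamics dx/dt = A x (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 5
A_true = rng.standard_normal((n, n)) * 0.3   # hypothetical interaction matrix

# Sample system states and evaluate the known interaction functions
# to obtain the observed derivatives.
X = rng.standard_normal((1000, n))
dX = X @ A_true.T                            # dx/dt = A x for each state

# Derivative-variable and variable-variable correlation matrices.
C_dx = dX.T @ X / len(X)
C_xx = X.T @ X / len(X)

# The matrix equation C_dx = A_est @ C_xx yields the network.
A_est = C_dx @ np.linalg.inv(C_xx)
print(np.allclose(A_est, A_true))  # True
```

With noise-free derivatives the recovery is exact up to floating-point error; with noisy, finite-length time series the conditioning of C_xx governs the attainable precision, which is what the paper's precision estimate quantifies.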
A detailed knowledge of cell wall heterogeneity and complexity is crucial for understanding plant growth and development. One key challenge is to establish links between polysaccharide-rich cell walls and their phenotypic characteristics. It is of particular interest for some plant material, like cotton fibers, which are of both biological and industrial importance. To this end, we attempted to study cotton fiber characteristics together with glycan arrays using regression based approaches. Taking advantage of the comprehensive microarray polymer profiling technique (CoMPP), 32 cotton lines from different cotton species were studied. The glycan array was generated by sequential extraction of cell wall polysaccharides from mature cotton fibers and screening samples against eleven extensively characterized cell wall probes. Also, phenotypic characteristics of cotton fibers such as length, strength, elongation and micronaire were measured. The relationship between the two datasets was established in an integrative manner using linear regression methods. In the conducted analysis, we demonstrated the usefulness of regression based approaches in establishing a relationship between glycan measurements and phenotypic traits. In addition, the analysis also identified specific polysaccharides which may play a major role during fiber development for the final fiber characteristics. Three different regression methods identified a negative correlation between micronaire and the xyloglucan and homogalacturonan probes. Moreover, homogalacturonan and callose were shown to be significant predictors for fiber length. The role of these polysaccharides was already pointed out in previous cell wall elongation studies. Additional relationships were predicted for fiber strength and elongation which will need further experimental validation.
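The regression step relating probe signals to fiber traits can be sketched as below. Everything here is synthetic and invented for demonstration (the probe roles, the effect size, and the noise level are not the study's data); only the line count of 32 follows the text.

```python
# Illustrative sketch of the regression step: relating synthetic "glycan
# probe" signals to a fiber trait across cotton lines.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_lines = 32                                    # cotton lines, as in the study
probes = rng.uniform(0, 1, size=(n_lines, 3))   # e.g. xyloglucan, HG, callose

# Synthetic trait: negatively driven by the first probe, plus noise.
trait = 2.0 - 1.5 * probes[:, 0] + 0.1 * rng.standard_normal(n_lines)

model = LinearRegression().fit(probes, trait)
print(model.coef_)  # first coefficient recovered as clearly negative
```

The sign and magnitude of the fitted coefficients are what license statements like the negative micronaire-xyloglucan correlation; in practice one would compare several regression methods, as the study does, before trusting any single coefficient.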
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences that contain verbs with past time reference, which question words and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
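The URM's core prediction can be illustrated with a toy simulation. The duration distribution and its parameters below are invented for demonstration; the only assumption carried over from the model is that structure-building durations vary from trial to trial and that the first parse to finish wins.

```python
# Toy race simulation: with two permissible structures, the faster of two
# racers finishes sooner on average than a single racer (illustrative
# log-normal durations; parameters are invented).
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
t1 = rng.lognormal(mean=6.0, sigma=0.4, size=n)   # duration of parse 1 (ms)
t2 = rng.lognormal(mean=6.0, sigma=0.4, size=n)   # duration of parse 2 (ms)

unambiguous = t1.mean()                  # only one structure is permissible
ambiguous = np.minimum(t1, t2).mean()    # first finished parse wins
print(ambiguous < unambiguous)  # True: the ambiguity advantage
```

The advantage follows purely from taking the minimum of two random durations, which is why the URM predicts it regardless of task demands.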
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentences.
In this thesis, these two models of the parser's actions at choice-points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for that fact. Finally, a third experiment is presented which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task demands.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
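The recurrence idea behind the event detection can be sketched in a simplified scalar form. The thesis works on sequences of probability distributions; the version below uses a plain time series with an invented periodic signal and an artificial excursion, and flags the lowest recurrence rate as an anomalous episode.

```python
# Generic sketch of recurrence-based event detection on a scalar series
# (simplified illustration; the thesis operates on proxy probability
# distributions, not raw scalars).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(900)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)
series[400:450] += 2.0              # an abrupt "dry event"-like excursion

dist = np.abs(series[:, None] - series[None, :])   # pairwise distances
R = dist < 0.3                                     # recurrence matrix

# Recurrence rate per time step; low values mark atypical dynamics.
rr = R.mean(axis=1)
event = int(np.argmin(rr))
print(400 <= event < 450)  # the flagged step lies in the perturbed window
```

States visited during the excursion recur with almost no other part of the record, so their recurrence rate drops sharply, which is the signature the event detection exploits.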
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of the influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
Ultraschall Berlin
(2014)
A new concept for shortening hard X-ray pulses emitted from a third-generation synchrotron source down to a few picoseconds is presented. The device, called the PicoSwitch, exploits the dynamics of coherent acoustic phonons in a photo-excited thin film. A characterization of the structure demonstrates switching times of ≤5 ps and a peak reflectivity of ~10^-3. The device is tested in a real synchrotron-based pump-probe experiment and reveals features of coherent phonon propagation in a second thin-film sample, thus demonstrating the potential to significantly improve the temporal resolution at existing synchrotron facilities.
Mainly ferromagnetic (FM) materials are currently used for data processing and storage. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities and more energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often produce spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have become interesting. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation.
In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should occur faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft X-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
Using ultrafast X-ray diffraction, we study the coherent picosecond lattice dynamics of photoexcited thin films in the two limiting cases, where the photoinduced stress profile decays on a length scale larger and smaller than the film thickness. We solve a unifying analytical model of the strain propagation for acoustic impedance-matched opaque films on a semi-infinite transparent substrate, showing that the lattice dynamics essentially depend on two parameters: One for the spatial profile and one for the amplitude of the strain. We illustrate the results by comparison with high-quality ultrafast X-ray diffraction data of SrRuO3 films on SrTiO3 substrates. (C) 2014 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
The UDKM1DSIM toolbox is a collection of MATLAB (MathWorks Inc.) classes and routines to simulate the structural dynamics and the corresponding X-ray diffraction response of one-dimensional crystalline sample structures upon an arbitrary time-dependent external stimulus, e.g. an ultrashort laser pulse. The toolbox provides the capabilities to define arbitrary layered structures on the atomic level, including a rich database of corresponding element-specific physical properties. The excitation of ultrafast dynamics is represented by an N-temperature model, which is commonly applied for ultrafast optical excitations. Structural dynamics due to thermal stress are calculated by a linear-chain model of masses and springs. The resulting X-ray diffraction response is computed by dynamical X-ray theory. The UDKM1DSIM toolbox is highly modular and allows for introducing user-defined results at any step in the simulation procedure.
Program summary
Program title: udkm1Dsim
Catalogue identifier: AERH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERH_v1_0.html
Licensing provisions: BSD
No. of lines in distributed program, including test data, etc.: 130221
No. of bytes in distributed program, including test data, etc.: 2746036
Distribution format: tar.gz
Programming language: Matlab (MathWorks Inc.).
Computer: PC/Workstation.
Operating system: Running Matlab installation required (tested on MS Win XP -7, Ubuntu Linux 11.04-13.04).
Has the code been vectorized or parallelized?: Parallelization for dynamical XRD computations. Number of processors used: 1-12 for Matlab Parallel Computing Toolbox; 1 - infinity for Matlab Distributed Computing Toolbox
External routines:
Optional: Matlab Parallel Computing Toolbox, Matlab Distributed Computing Toolbox
Required (included in the package): mtimesx (Fast Matrix Multiply for Matlab) by James Tursa, xml_io_tools by Jaroslaw Tuszynski, textprogressbar by Paul Proteus
Nature of problem:
Simulate the lattice dynamics of 1D crystalline sample structures due to an ultrafast excitation including thermal transport and compute the corresponding transient X-ray diffraction pattern.
Solution method:
The laser excitation is described by an N-temperature model; the resulting structural dynamics due to thermal stress are calculated by a linear-chain model of masses and springs, and the corresponding transient X-ray diffraction response is computed by dynamical X-ray theory.
Restrictions:
The program is restricted to 1D sample structures and is further limited to longitudinal acoustic phonon modes and symmetrical X-ray diffraction geometries.
Unusual features: The program is highly modular and allows the inclusion of user-defined inputs at any time of the simulation procedure.
Running time: The running time depends strongly on the number of unit cells in the sample structure and on other simulation parameters such as the time span or the angular grid for X-ray diffraction computations. However, the example files are each computed in approx. 1-5 min on an 8-core processor with 16 GB RAM.
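The linear-chain model of masses and springs mentioned in the summary can be sketched in a few lines. This is a Python re-expression of the general idea with invented parameters (cell count, masses, spring constants, stress amplitude), not code from the MATLAB toolbox: a quasi-static stress in the laser-excited top layers launches a strain front that travels into the chain.

```python
# Sketch of a linear-chain (masses and springs) strain simulation with a
# stress applied to the top layers (illustrative units and parameters).
import numpy as np

N = 100                  # unit cells in the chain
m, k = 1.0, 1.0          # mass and spring constant (arbitrary units)
dt, steps = 0.05, 400    # symplectic-Euler time stepping

u = np.zeros(N)          # cell displacements
v = np.zeros(N)          # cell velocities
F_ext = np.zeros(N)
F_ext[:10] = 0.01        # stress in the laser-excited top layers

for _ in range(steps):
    # Nearest-neighbour spring forces plus the external (thermal) stress.
    F = k * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) + F_ext
    F[0] = k * (u[1] - u[0]) + F_ext[0]    # free surface at the top
    F[-1] = k * (u[-2] - u[-1])            # free surface at the bottom
    v += F / m * dt                        # semi-implicit (symplectic) Euler
    u += v * dt

strain = np.diff(u)      # inter-cell strain: a sound front travels inward
print(strain[:3])
```

In the toolbox this transient strain field is then fed into dynamical X-ray theory to obtain the diffraction response; the sketch stops at the structural step.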
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
Current chemical risk assessment procedures may result in imprecise estimates of risk due to sometimes arbitrary simplifying assumptions. As a way to incorporate ecological complexity and improve risk estimates, mechanistic effect models have been recommended. However, effect modeling has not yet been extensively used for regulatory purposes, one of the main reasons being uncertainty about which model type to use to answer specific regulatory questions. We took an individual-based model (IBM), which was developed for risk assessment of soil invertebrates and includes avoidance of highly contaminated areas, and contrasted it with a simpler, more standardized model, based on the generic metapopulation matrix model RAMAS. In the latter the individuals within a sub-population are no longer treated as separate entities and the spatial resolution is lower. We explored consequences of model aggregation in terms of assessing population-level effects for different spatial distributions of a toxic chemical. For homogeneous contamination of the soil, we found good agreement between the two models, whereas for heterogeneous contamination, at different concentrations and percentages of contaminated area, RAMAS results were alternately similar to IBM results with and without avoidance and at different food levels. This inconsistency is explained on the basis of behavioral responses that are included in the IBM but not in RAMAS. Overall, RAMAS was less sensitive than the IBM in detecting population-level effects of different spatial patterns of exposure. We conclude that choosing the right model type for risk assessment of chemicals depends on whether or not population-level effects of small-scale heterogeneity in exposure need to be detected. We recommend that if in doubt, both model types should be used and compared. Describing both models following the same standard format, the ODD protocol, makes them equally transparent and understandable.
The simpler model helps to build trust in the more complex model and can be used for more homogeneous exposure patterns. The more complex model helps detect and understand the limitations of the simpler model and is needed to ensure ecological realism for more complex exposure scenarios. (C) 2013 Elsevier B.V. All rights reserved.
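The aggregation step described above can be illustrated with a toy two-patch projection in the spirit of a metapopulation matrix model. This is a minimal sketch with hypothetical parameter values, not the study's RAMAS parameterization: patch abundances are projected as aggregate numbers, so there is no counterpart to the IBM's individual-level avoidance behaviour.

```python
# Illustrative 2-patch projection, loosely in the spirit of a metapopulation
# matrix model such as RAMAS (hypothetical parameter values, not the study's).
# Only aggregate patch abundances are tracked, never individuals.
GROWTH_CLEAN = 1.10         # local growth rate in the uncontaminated patch
GROWTH_CONTAMINATED = 0.80  # growth rate reduced by the toxicant
DISPERSAL = 0.05            # fraction exchanged between patches per step

def project(n_clean, n_contaminated, steps):
    """Project patch abundances forward; returns final (clean, contaminated)."""
    for _ in range(steps):
        n_clean, n_contaminated = (
            GROWTH_CLEAN * n_clean + DISPERSAL * n_contaminated,
            DISPERSAL * n_clean + GROWTH_CONTAMINATED * n_contaminated,
        )
    return n_clean, n_contaminated

n1, n2 = project(100.0, 100.0, 10)
# The uncontaminated patch grows while the contaminated patch declines;
# individuals cannot "avoid" the contaminated patch, which is exactly the
# behavioural response the abstract notes is missing from the matrix model.
```

Because the matrix cannot redirect individuals away from contamination, heterogeneous exposure is where the two model types diverge, consistent with the comparison reported above.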
Two of a kind?
(2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses have provided new insights into offense and offender characteristics. Less is known about attacks in institutes of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. The scientific literature is divided on whether to distinguish or unify these two groups, and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutes of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
In Germany, active bat rabies surveillance was conducted between 1993 and 2012. A total of 4546 oropharyngeal swab samples from 18 bat species were screened for the presence of EBLV-1-, EBLV-2- and BBLV-specific RNA. Overall, 0.15% of oropharyngeal swab samples tested EBLV-1 positive, with the majority originating from Eptesicus serotinus. Interestingly, out of seven RT-PCR-positive oropharyngeal swabs subjected to virus isolation, viable virus was isolated from a single serotine bat (E. serotinus). Additionally, about 1226 blood samples were tested serologically, and varying virus-neutralizing antibody titres were found in at least eight different bat species. The detection of viral RNA and seroconversion in repeatedly sampled serotine bats indicates long-term circulation of the virus in a particular bat colony. The limitations of random-based active bat rabies surveillance compared with passive bat rabies surveillance, and the possible application of targeted approaches for future research on bat lyssavirus dynamics and maintenance, are discussed.
Animal personalities are by definition stable over time, but to what extent they may change during development and in adulthood to adjust to environmental change is unclear. Animals of temperate environments have evolved physiological and behavioural adaptations to cope with cyclic seasonal changes. This may also result in changes in personality: suites of behavioural and physiological traits that vary consistently among individuals. Winter, typically the adverse season challenging survival, may require individuals to have a shy/cautious personality, whereas during summer, energetically favourable to reproduction, individuals may benefit from a bold/risk-taking personality. To test the effects of seasonal changes in early life and in adulthood on behaviours (activity, exploration and anxiety), body mass and stress response, we manipulated the photoperiod and quality of food in two experiments to simulate the conditions of winter and summer. We used common voles (Microtus arvalis), as they have been shown to display personality based on behavioural consistency over time and across contexts. Summer-born voles allocated to winter conditions at weaning had lower body mass, a higher corticosterone increase after stress and a less active, more cautious behavioural phenotype in adulthood compared to voles born in and allocated to summer conditions. In contrast, adult females only showed plasticity in stress-induced corticosterone levels, which were higher in animals transferred to winter conditions than in those staying in summer conditions. These results suggest a sensitive period for season-related behavioural plasticity in which juveniles shift over the bold-shy axis.
Zinc oxide (ZnO) is regarded as a promising alternative material for transparent conductive electrodes in optoelectronic devices. However, ZnO suffers from poor chemical stability. ZnO also has a moderate work function (WF), which results in substantial charge injection barriers into common (organic) semiconductors that constitute the active layer in a device. Controlling and tuning the ZnO WF is therefore necessary but challenging. Here, a variety of phosphonic acid based self-assembled monolayers (SAMs) deposited on ZnO surfaces are investigated. It is demonstrated that they allow tuning of the WF over a wide range of more than 1.5 eV, thus enabling the use of ZnO as both the hole-injecting and electron-injecting contact. The modified ZnO surfaces are characterized using a number of complementary techniques, demonstrating that the preparation protocol yields dense, well-defined molecular monolayers.
Mueller, J, Mueller, S, Stoll, J, Baur, H, and Mayer, F. Trunk extensor and flexor strength capacity in healthy young elite athletes aged 11-15 years. J Strength Cond Res 28(5): 1328-1334, 2014. Differences in trunk strength capacity because of gender and sports are well documented in adults. In contrast, data concerning young athletes are sparse. The purpose of this study was to assess the maximum trunk strength of adolescent athletes and to investigate differences between genders and age groups. A total of 520 young athletes were recruited. Finally, 377 (n = 233/144 M/F; 13 ± 1 years; 1.62 ± 0.11 m height; 51 ± 12 kg mass; training: 4.5 ± 2.6 years; training sessions/week: 4.3 ± 3.0; various sports) young athletes were included in the final data analysis. Furthermore, 5 age groups were differentiated (age groups: 11, 12, 13, 14, and 15 years; n = 90, 150, 42, 43, and 52, respectively). Maximum strength of trunk flexors (Flex) and extensors (Ext) was assessed in all subjects during isokinetic concentric measurements (60°·s⁻¹; 5 repetitions; range of motion: 55°). Maximum strength was characterized by absolute peak torque (Flex(abs), Ext(abs); N·m), peak torque normalized to body weight (Flex(norm), Ext(norm); N·m·kg⁻¹ BW), and the Flex(abs)/Ext(abs) ratio (RKquot). Descriptive data analysis (mean ± SD) was completed, followed by analysis of variance (α = 0.05; post hoc test [Tukey-Kramer]). Mean maximum strength for all athletes was 97 ± 34 N·m in Flex(abs) and 140 ± 50 N·m in Ext(abs) (Flex(norm) = 1.9 ± 0.3 N·m·kg⁻¹ BW, Ext(norm) = 2.8 ± 0.6 N·m·kg⁻¹ BW). Males showed statistically significantly higher absolute and normalized values compared with females (p < 0.001). Flex(abs) and Ext(abs) rose with increasing age almost 2-fold for males and females (Flex(abs), Ext(abs): p < 0.001).
Flex(norm) and Ext(norm) increased with age for males (p < 0.001) but not for females (Flex(norm): p = 0.26; Ext(norm): p = 0.20). RKquot (mean ± SD: 0.71 ± 0.16) did not reveal any differences regarding age (p = 0.87) or gender (p = 0.43). In adolescent athletes, maximum trunk strength must be discussed in a gender- and age-specific context. The Flex(abs)/Ext(abs) ratio revealed extensor dominance, which seems to be independent of age and gender. The values assessed may serve as a basis to evaluate and discuss trunk strength in athletes.
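The normalized torques and the flexion/extension ratio reported above can be cross-checked directly from the group means given in the abstract. Note that a ratio of group means will only approximate the reported mean of individual ratios, so small discrepancies are expected.

```python
# Cross-check of the reported group means (values taken from the abstract).
flex_abs = 97.0   # N·m, trunk flexors (group mean)
ext_abs = 140.0   # N·m, trunk extensors (group mean)
body_mass = 51.0  # kg (group mean)

flex_norm = flex_abs / body_mass     # ≈ 1.9 N·m·kg⁻¹ BW, matching the report
ext_norm = ext_abs / body_mass       # ≈ 2.7, close to the reported 2.8
flex_ext_ratio = flex_abs / ext_abs  # ≈ 0.69, close to the reported 0.71
```

The residual gaps (2.7 vs. 2.8; 0.69 vs. 0.71) are consistent with the study averaging per-athlete ratios rather than dividing group means.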
Bats are important components of tropical mammal assemblages. Unravelling the mechanisms allowing multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on the component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to indirectly examine whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4‰ in δ15N and was structured into two trophic levels, with phytophagous Pteropodidae as primary consumers (c. 3‰ enriched over plants) and different insectivorous bats as secondary consumers (c. 4‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of δ13C-δ15N bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
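The two-level trophic structure inferred above rests on a standard calculation: trophic position follows from the δ15N offset between a consumer and its dietary baseline, divided by the per-step enrichment. A minimal sketch, with the enrichment value as an explicit parameter since the abstract reports roughly 3‰ (plants to pteropodids) and 4‰ (primary to secondary consumers) rather than a single literature constant:

```python
def trophic_position(d15n_consumer, d15n_base, enrichment=3.4, base_level=1.0):
    """Estimate trophic position from nitrogen isotope values (in per mil).

    The 3.4 per-mil default is a commonly assumed per-step enrichment;
    the Ankarana data above suggest ~3 per mil for the plant-to-herbivore
    step and ~4 per mil for the herbivore-to-insectivore step.
    """
    return base_level + (d15n_consumer - d15n_base) / enrichment

# A pteropodid ~3 per mil above local plants sits at trophic level ~2
# (primary consumer), consistent with the structure described above.
herbivore_level = trophic_position(3.0, 0.0, enrichment=3.0)
```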
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as estimated GFR <60 ml/min/1.73 m(2).
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD (59 versus 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD compared with non-CKD less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) were improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
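The abstract names but does not spell out the Cockcroft-Gault formula used to estimate GFR. A minimal sketch of the standard textbook form follows; note the study's CKD threshold is stated as eGFR <60 ml/min/1.73 m², which implies an additional body-surface-area normalization not shown here.

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Cockcroft-Gault estimated creatinine clearance in ml/min.

    Standard textbook form (illustrative; the abstract does not give the
    formula, and the <60 ml/min/1.73 m^2 threshold used in the study
    additionally assumes normalization to body surface area).
    """
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# e.g. a hypothetical 72-year-old, 70 kg male with serum creatinine 1.2 mg/dl:
# cockcroft_gault_crcl(72, 70, 1.2, female=False) ≈ 55 ml/min, i.e. below
# the 60 ml/min cut-off, matching the older age profile of the CKD group.
```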
Background: Travel-related conditions have an impact on the quality of oral anticoagulation therapy (OAT) with vitamin K antagonists. No predictors of travel activity or of travel-associated haemorrhagic or thromboembolic complications in patients on OAT are known.
Methods: A standardised questionnaire was sent to 2500 patients on long-term OAT in Austria, Switzerland and Germany. 997 questionnaires were received (responder rate 39.9%). Ordinal or logistic regression models with travel activity before and after onset of OAT or travel-associated haemorrhages and thromboembolic complications as outcome measures were applied.
Results: 43.4% changed their travel habits since the onset of OAT, with 24.9% and 18.5% reporting decreased or increased travel activity, respectively. Long-distance worldwide travel before OAT or having suffered from thromboembolic complications was associated with reduced travel activity. Increased travel activity was associated with more intensive travel experience, increased duration of OAT, higher education, or performing patient self-management (PSM). Travel-associated haemorrhages or thromboembolic complications were reported by 6.5% and 0.9% of the patients, respectively. Former thromboembolic complications, former bleedings and PSM were significant predictors of travel-associated complications.
Conclusions: OAT also increases travel intensity. Specific medical advice prior to travelling to prevent complications should be given especially to patients with former bleedings or thromboembolic complications and to those performing PSM. (C) 2014 Elsevier Ltd. All rights reserved.
Injection of nanoscale zero-valent iron (nZVI) has recently gained great interest as an emerging technology for in-situ remediation of chlorinated organic compounds in groundwater systems. Zero-valent iron (ZVI) is able to reduce organic compounds and convert them into less harmful substances. The use of nanoscale particles instead of granular or microscale particles can increase dechlorination rates by orders of magnitude due to their high surface area. However, classical nZVI appears to be hampered in its environmental application by its limited mobility. One approach is colloid-supported transport of nZVI, where the nZVI is transported by a mobile colloid. In this study, the transport properties of activated carbon colloid supported nZVI (c-nZVI; d(50) = 2.4 μm) were investigated in column tests using columns of 40 cm length filled with porous media. A suspension was pumped through the column under different physicochemical conditions (addition of a polyanionic stabilizer and changes in pH and ionic strength). The highest observed breakthrough was 62% of the injected concentration in glass beads with addition of stabilizer. Addition of mono- and bivalent salt, e.g. more than 0.5 mmol/L CaCl2, can decrease mobility, and changes in pH to values below six can inhibit mobility entirely. Measurements of colloid sizes and zeta potentials show changes in the mean particle size by a factor of ten and a shift of the zeta potential from -62 mV to -80 mV during the transport experiment. However, the results suggest potential applicability of c-nZVI under field conditions. (C) 2014 Elsevier B.V. All rights reserved.
In this opinion article we propose a scenario detailing how two crucial components have evolved simultaneously to ensure the transition of glycogen to starch in the cytosol of the Archaeplastida last common ancestor: (i) the recruitment of an enzyme from intracellular Chlamydiae pathogens to facilitate crystallization of alpha-glucan chains; and (ii) the evolution of novel types of polysaccharide (de)phosphorylating enzymes from preexisting glycogen (de)phosphorylation host pathways to allow the turnover of such crystals. We speculate that the transition to starch benefitted Archaeplastida in three ways: more carbon could be packed into osmotically inert material; the host could resume control of carbon assimilation from the chlamydial pathogen that triggered plastid endosymbiosis; and cyanobacterial photosynthate export could be integrated in the emerging Archaeplastida.
The final size of an organism, or of single organs within an organism, depends on an intricate coordination of cell proliferation and cell expansion. Although organism size is of fundamental importance, the molecular and genetic mechanisms that control it remain far from understood. Here we identify a transcription factor, KUODA1 (KUA1), which specifically controls cell expansion during leaf development in Arabidopsis thaliana. We show that KUA1 expression is circadian regulated and depends on an intact clock. Furthermore, KUA1 directly represses the expression of a set of genes encoding for peroxidases that control reactive oxygen species (ROS) homeostasis in the apoplast. Disruption of KUA1 results in increased peroxidase activity and smaller leaf cells. Chemical or genetic interference with the ROS balance or peroxidase activity affects cell size in a manner consistent with the identified KUA1 function. Thus, KUA1 modulates leaf cell expansion and final organ size by controlling ROS homeostasis.
In ecology, biodiversity-ecosystem functioning (BEF) research has seen a shift in perspective from taxonomy to function over the last two decades, with successful application of trait-based approaches. This shift offers opportunities for a deeper mechanistic understanding of the role of biodiversity in maintaining multiple ecosystem processes and services. In this paper, we highlight studies that have focused on the BEF of microbial communities, with an emphasis on integrating trait-based approaches into microbial ecology. In doing so, we explore some of the inherent challenges and opportunities of understanding BEF using microbial systems. For example, microbial biologists characterize communities using gene phylogenies that are often unable to resolve functional traits. Additionally, the experimental designs of existing microbial BEF studies are often inadequate to unravel BEF relationships. We argue that combining eco-physiological studies with contemporary molecular tools in a trait-based framework can reinforce our ability to link microbial diversity to ecosystem processes. We conclude that such trait-based approaches are a promising framework to increase the understanding of microbial BEF relationships and thus generate systematic principles in microbial ecology and, more generally, in ecology.
The mid- to late Holocene interval is characterised by a highly variable climate in response to a gradual change in orbital insolation. The seasonal impact of these changes on the Eifel Maar region is not yet well documented, largely due to uncertainties about the completeness of this archive ("missing varves" in the well-known Lake Holzmaar) and a limited understanding of the factors (e.g. temperature, precipitation) influencing the seasonality archived within the laminations/varves. In this study we approach these challenges from a different perspective. Using detailed microfacies investigations we: (1) demonstrate that the ambiguity about the "missing varves" is related to the climate-induced complex biotic and abiotic laminations that led to mis-identification of varves; (2) use a combination of detailed microfacies investigations (varve structure, seasonality of biotic and abiotic signals), lamination quality, varve counts on multiple cores, and published and new radiocarbon dates to develop a continuous master chronology based on a Bayesian modelling approach. The dates of major climatic, volcanic, and archaeological events determined using our model are in good agreement with the independently determined ages of the same events from other archives, confirming the accuracy of our age model; (3) test the sensitivity of the seasonal proxies to the available data on mid-Holocene changes in temperature and precipitation; (4) demonstrate that the changes in lake eutrophicity are correlative with temperature changes in NW Europe and probably triggered by solar variability; and (5) show that the early Iron Age onset of eutrophication in Lake Holzmaar was climate-induced and began several decades before the impact of anthropogenic activity was seen in the form of intensified detrital erosion in the catchment area. Our work has implications for understanding the impact of climate change and anthropogenic activities on limnological systems. (C) 2014 Elsevier B.V. All rights reserved.
The tropical warm pool waters surrounding Indonesia are one of the equatorial heat and moisture sources that are considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia with profound societal and economic impacts on the populations living in the world's fourth most populated country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific with global effects in the 21st century and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models on the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, like tree rings or varved lake sediments, provides insights into the natural climate variability of the past, and thus helps improve and validate simulations of future climate changes. Centennial tree-ring stable isotope records | Within this doctoral thesis the main goal was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with tree-ring proxies (ring-width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events.
Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques which facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure for tree-ring dissection from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The herein presented novel sampling techniques improve stable isotope analyses for tree-ring studies in terms of precision, efficiency and quality. The UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision. II) A guideline for a modified method of cellulose extraction from whole-wood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to sub-annual resolution. III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. High-resolution stable oxygen values, furthermore, allow distinguishing between dry and rainy season rainfall. IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics. The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) for the Indo-Pacific region.
Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
Recultivation of disturbed oil sand mining areas is an issue of increasing importance. Nevertheless, little is known about the fate of organic matter, cell abundances and microbial community structures during oil sand processing, tailings management and initial soil development on reclamation sites. Thus, the focus of this work is on the biogeochemical changes of mined oil sands through the entire process chain until their use as substratum for newly developing soils on reclamation sites. Therefore, oil sand, mature fine tailings (MFTs) from tailings ponds and drying cells, and tailings sand covered with peat-mineral mix (PMM) as part of land reclamation were analyzed. The sample set was selected to address the question of whether changes in the above-mentioned biogeochemical parameters can be related to oil sand processing or biological processes, and how these changes influence microbial activities and soil development.
GC-MS analyses of oil-derived biomarkers reveal that these compounds remain unaffected by oil sand processing and biological activity. In contrast, changes in polycyclic aromatic hydrocarbon (PAH) abundance and pattern can be observed along the process chain. Especially naphthalenes, phenanthrenes and chrysenes are altered or absent on reclamation sites. Furthermore, root-bearing horizons on reclamation sites exhibit cell abundances at least ten times higher (10(8) to 10(9) cells g(-1)) than in oil sand and MFT samples (10(7) cells g(-1)) and show a higher diversity in their microbial community structure. Nitrate in the pore water and roots derived from the PMM seem to be the most important stimulants for microbial growth. The combined data show that the observed compositional changes are mostly related to biological activity and the addition of exogenous organic components (PMM), whereas oil extraction, tailings dewatering and compaction do not have significant influences on the evaluated compounds. The microbial community composition remains relatively stable through the entire process chain. (C) 2014 Elsevier B.V. All rights reserved.
This study aims to further the mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. Therefore, long-term incubation studies in cultured cells were carried out to display chronically attained changes, which cannot be observed in the generally applied in vitro short-term incubation studies. In particular, the cytotoxic, genotoxic and epigenetic effects of an up to 21-day incubation of human urothelial (UROtsa) cells with pico- to nanomolar concentrations of iAs(III) and its metabolite thio-DMA(V) were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAs(III) and might partly be due to glutathione depletion and genotoxic effects on the chromosomal level. These results are in strong contrast to cells exposed to thio-DMA(V). Thus, cells seemed to be able to adapt to this arsenical, as indicated among others by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAs(III) and thio-DMA(V) caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects are reported for thio-DMA(V); iAs(III)-induced epigenetic effects occur at concentrations at least 8000-fold lower than reported in vitro before. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which certainly warrants further investigation in future studies.
Two lines of research are combined in this study: first, the development of tools for the temporal disaggregation of precipitation, and second, some newer results on the exponential scaling of heavy short-term precipitation with temperature, roughly following the Clausius-Clapeyron (CC) relation. The traditional disaggregation schemes, having no explicit temperature dependence, are shown to lack this crucial CC-type behaviour. The authors introduce a proof-of-concept adjustment of an existing disaggregation tool, the multiplicative cascade model of Olsson, and show that, in principle, it is possible to include temperature dependence in the disaggregation step, resulting in a fairly realistic temperature dependence of the CC type. They conclude by outlining the main calibration steps necessary to develop a full-fledged CC disaggregation scheme and discuss possible applications.
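The cascade idea above can be sketched as follows: a minimal two-branch multiplicative cascade whose weight spread grows with temperature to mimic a CC-type (roughly 7 %-per-kelvin) intensification of short-term extremes. The weight function is a hypothetical illustration, not the authors' calibrated adjustment of Olsson's model.

```python
import random

def cascade_disaggregate(total, temperature, levels=4, seed=0):
    """Disaggregate a precipitation total into 2**levels sub-intervals
    with a two-branch multiplicative cascade.  Mass is conserved at
    every branching (the two child weights sum to one).

    The temperature-dependent spread below is an ASSUMED toy form:
    it widens the weight distribution by ~7 % per kelvin above 10 degC
    to mimic Clausius-Clapeyron scaling; it is not a calibrated scheme.
    """
    rng = random.Random(seed)
    spread = min(0.49, 0.2 * (1.07 ** (temperature - 10.0)))
    values = [total]
    for _ in range(levels):
        nxt = []
        for v in values:
            w = 0.5 + rng.uniform(-spread, spread)  # left-branch weight
            nxt.extend([v * w, v * (1.0 - w)])
        values = nxt
    return values

# Disaggregate a 24 mm daily total at 20 degC into 16 sub-intervals:
hourly = cascade_disaggregate(24.0, temperature=20.0)
```

At higher temperatures the spread parameter grows, so individual sub-intervals can concentrate more of the daily total, which is the qualitative CC-type behaviour the study aims for.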
The submission and management of computational jobs is a traditional part of utility computing environments. End users and developers of domain-specific software abstractions often have to deal with the heterogeneity of such batch processing systems. This led to a number of application-programming-interface and job-description standards in the past, which are implemented and established for cluster and Grid systems. With the recent rise of cloud computing as a new utility computing paradigm, standardized access to batch processing facilities operated on cloud resources becomes an important issue. Furthermore, the design of such a standard has to consider a trade-off between feature completeness and the achievable level of interoperability. The article discusses this general challenge and presents some existing standards with a traditional cluster and Grid computing background that may be applicable to cloud environments. We present OCCI-DRMAA as one approach to standardized access to batch processing facilities hosted in a cloud.
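The DRMAA-style access pattern the article builds on (open a session, submit a job, wait for completion) can be sketched with a hypothetical local stand-in. Real DRMAA bindings talk to a cluster or cloud scheduler; the class and method names here are illustrative assumptions, not the OCCI-DRMAA interface itself.

```python
import subprocess
import sys
import uuid

class LocalDrmaaLikeSession:
    """Hypothetical, minimal DRMAA-style session that runs 'jobs' as
    local subprocesses.  It only illustrates the interface shape
    (submit -> job id -> wait -> exit status) that batch-processing
    standards expose; production implementations bind to schedulers."""

    def __init__(self):
        self._jobs = {}

    def run_job(self, command, args=()):
        """Submit a job and return an opaque job identifier."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = subprocess.Popen(
            [command, *args], stdout=subprocess.DEVNULL)
        return job_id

    def wait(self, job_id):
        """Block until the job finishes; return its exit status."""
        return self._jobs[job_id].wait()

session = LocalDrmaaLikeSession()
jid = session.run_job(sys.executable, ["-c", "print('hello batch')"])
status = session.wait(jid)  # 0 on success
```

The point of such a thin, scheduler-agnostic layer is exactly the trade-off the article discusses: the narrower the interface, the more back ends (cluster, Grid, cloud) can implement it interoperably.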
The International Union of Geological Sciences (IUGS) is evaluating whether there are additional geoscientific activities that would be beneficial in helping mitigate the impacts of tsunami. Public concerns about poor decisions and inaction, and advances in computing power and data mining, call for new scientific approaches. Three fundamental requirements for mitigating the impacts of natural hazards are defined: (1) improvement of process-oriented understanding, (2) adequate monitoring and optimal use of data, and (3) generation of advice based on scientific, technical and socio-economic expertise. International leadership and coordination are also important.
To increase the capacity to predict and mitigate the impacts of tsunami and other natural hazards, a broad consensus is needed. The main needs include: integration of systematic geological inputs, i.e. identifying and studying paleo-tsunami deposits for all subduction zones; optimised coverage and coordination of geodetic and seismic monitoring networks; underpinning of decision making at national and international scales by developing appropriate mechanisms for gathering, managing and communicating authoritative scientific and technical advice; and international leadership for coordination and authoritative statements of best approaches. All these suggestions are reflected in the Sendai Agreement, the collective views of the experts at the International Workshop on Natural Hazards, presented later in this volume.
The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its structure and more specific guidance for its use are needed. The updated TRACE format follows the recently developed framework of model 'evaludation': the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. We introduce the new structure and revised terminology of TRACE and provide examples.
Miniaturized analytical chip devices like biosensors nowadays provide assistance in highly diverse fields of application such as point-of-care diagnostics and industrial bioprocess engineering. However, upon contact with fluids, the sensor requires a protective shell for its electrical components that simultaneously offers controlled access for the target analytes to the measuring units. We therefore developed a capsule that comprises a permeable and a sealed compartment consisting of variable polymers, such as biocompatible and biodegradable polylactic acid (PLA) for medical applications or more economical polyvinyl chloride (PVC) and polystyrene (PS) polymers for bioengineering applications. Production of the sealed capsule compartments was performed by heat pressing of polymer pellets placed in individually designable molds. Controlled permeability of the opposite compartments was achieved by inclusion of NaCl inside the polymer matrix during heat pressing, followed by its subsequent release in aqueous solution. The corresponding diffusion rates through the resulting permeable capsule compartments were quantified for preselected model analytes: glucose, peroxidase, and polystyrene beads of three different diameters (1.4 µm, 4.2 µm, and 20.0 µm). In summary, the presented capsule system turned out to provide sufficient shelter for small-sized electronic devices and gives insight into its potential permeability for defined substances of analytical interest.
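As a rough illustration of how such diffusion rates relate to the properties of a salt-leached porous wall, one can apply steady-state Fick's law across the permeable compartment. All parameter values below (porosity, wall thickness, effective area, concentration step) are assumed for the sketch and are not measurements from the study.

```python
def fickian_flux(D, area, porosity, thickness, delta_c):
    """Steady-state Fickian flux (mol/s) of an analyte through a
    porous polymer wall, approximated as
        J = D * (porosity * area) * delta_c / thickness.
    A sketch only: it ignores tortuosity and pore-size exclusion,
    which matter for large analytes such as micrometer-sized beads.
    """
    return D * porosity * area * delta_c / thickness

# Glucose in water: D ~ 6.7e-10 m^2/s (literature order of magnitude).
# Geometry and concentrations below are ASSUMED illustrative values.
flux = fickian_flux(D=6.7e-10,      # m^2/s
                    area=1e-4,      # m^2 exposed wall area
                    porosity=0.3,   # void fraction left by leached NaCl
                    thickness=5e-4, # m wall thickness
                    delta_c=5.0)    # mol/m^3 concentration difference
```

Such an estimate only bounds the transport of small solutes like glucose; for peroxidase and polystyrene beads, pore geometry rather than free diffusion dominates, consistent with the size-dependent permeability the study probes.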
Charged cellular polypropylene foams (i.e., ferro- or piezoelectrets) demonstrate high piezoelectric activity upon being electrically charged. When an external electric field is applied, dielectric barrier discharges (DBDs) occur, resulting in a separation of charges which are subsequently deposited on the dielectric surfaces of internal micrometer-sized voids. This deposited space charge is responsible for the piezoelectric activity of the material. Previous studies have indicated charging fields larger than predicted by Townsend's model of Paschen breakdown applied to a multilayered electromechanical model; a discrepancy which prompted the present study. The actual breakdown fields for micrometer-sized voids were determined by constructing single-cell voids using polypropylene spacers with heights ranging from 8 to 75 µm, "sandwiched" between two polypropylene dielectric barriers and glass slides with semi-transparent electrodes. Subsequently, a bipolar triangular charging waveform with a peak voltage of 6 kV was applied to the samples. The breakdown fields were determined by monitoring the emission of light due to the onset of DBDs using an electron-multiplying CCD camera. The breakdown fields at absolute pressures from 101 to 251 kPa were found to be in good agreement with the standard Paschen curves. Additionally, the magnitude of the light emission was found to scale linearly with the amount of gas, i.e., the height of the voids. Emissions were homogeneous over the observed regions of the voids for voids with heights of 25 µm or less and increasingly inhomogeneous for void heights greater than 40 µm at high electric fields.
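The standard Paschen curve that the measured breakdown fields were compared against follows from Townsend's breakdown condition. A minimal evaluation is sketched below; the constants A and B are textbook order-of-magnitude values for air, and the secondary-emission coefficient gamma is an assumed value, not fitted to the study's samples.

```python
import math

def paschen_voltage(p_torr, d_cm, A=15.0, B=365.0, gamma=0.01):
    """Paschen breakdown voltage (V) for a uniform gap from
    Townsend's criterion:
        V = B*p*d / ln( A*p*d / ln(1 + 1/gamma) ).
    A [1/(cm*Torr)] and B [V/(cm*Torr)] are textbook constants for
    air; gamma is an ASSUMED secondary-emission coefficient.
    """
    pd = p_torr * d_cm  # pressure-distance product, Torr*cm
    denom = math.log(A * pd / math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        raise ValueError("pd product below the valid Paschen range")
    return B * pd / denom

# A 50 um air gap at atmospheric pressure (760 Torr):
v_50um = paschen_voltage(760.0, 50e-4)  # a few hundred volts
```

For void heights of tens of micrometers at around atmospheric pressure, the pd product sits near the Paschen minimum, which is why breakdown voltages of only a few hundred volts suffice to ignite the internal DBDs despite the kilovolt-scale external charging waveform.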