Leaching of dissolved C in arable hummocky ground moraine soil landscapes is characterized by a spatial continuum of more or less erosion-affected Luvisols, Calcaric Regosols at exposed positions, and Colluvic Regosols in depressions. Our objective was to estimate the fluxes of dissolved C in four differently eroded soils as affected by erosion-induced pedological and soil structural alterations. In this model study, we considered landscape position effects by adapting the water table as the bottom boundary condition and erosion effects by using pedon-specific soil hydraulic properties. The one-dimensional vertical water movement was described with the Richards equation using HYDRUS-1D. Solute fluxes were obtained by combining calculated water fluxes with concentrations of dissolved organic and inorganic C (DOC and DIC, respectively) measured from soil solution extracted by suction cups at biweekly intervals. In the 3-yr period (2010-2012), DOC fluxes at 2-m soil depth were similar at the three non-colluvic locations with -0.8 ± 0.1 g m⁻² yr⁻¹ (i.e., outflow) but were 0.4 g m⁻² yr⁻¹ (i.e., input) in the depression. The DIC fluxes ranged from -10.2 g m⁻² yr⁻¹ for the eroded Luvisol, -9.2 g m⁻² yr⁻¹ for the Luvisol, and -6.1 g m⁻² yr⁻¹ for the Calcaric Regosol to 3.2 g m⁻² yr⁻¹ for the Colluvic Regosol. The temporal variations in DOC and DIC fluxes were controlled by water fluxes. The spatially distributed leaching results corroborate the hypothesis that the effects of soil erosion influence fluxes through modified hydraulic and transport properties and terrain-dependent boundary conditions.
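The flux-combination step described above (water fluxes from a Richards-equation solver multiplied by measured solute concentrations) can be sketched as follows. This is an illustrative sketch only, not the study's code, and the function name, units, and numbers are assumptions for demonstration.

```python
# Sketch of combining simulated water fluxes with measured DOC concentrations
# into an annual solute flux (hypothetical helper; not from the study).

def annual_solute_flux(water_fluxes_cm, concentrations_mg_per_l):
    """Combine per-interval water fluxes (cm of water, negative = downward
    outflow) with DOC concentrations (mg/L) into an annual flux in g/m^2.

    1 cm of water over 1 m^2 is 10 L; converting mg to g divides by 1000,
    so each (cm x mg/L) pair contributes 0.01 g/m^2.
    """
    return sum(q * c * 0.01 for q, c in zip(water_fluxes_cm, concentrations_mg_per_l))

# Illustrative numbers only (not data from the study):
flux = annual_solute_flux([-2.0, -1.5, 0.5], [8.0, 10.0, 12.0])  # -0.25 g/m^2
```

A negative result indicates net leaching out of the profile, matching the sign convention used in the abstract.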
The economic impact analysis contained in this book shows how irrigation farming is particularly susceptible to certain water management policies in the Australian Murray-Darling Basin, one of the world's largest river basins and Australia's most fertile region. By comparing different pricing and non-pricing water management policies with the help of the Water Integrated Market Model, it is found that policies reducing water demand hit hardest those crops that must be intensively irrigated yet are less water productive. A combination of increasingly frequent and severe droughts and policies that decrease agricultural water demand in the same region will create a situation in which the highly water-dependent crops rice and cotton cannot be cultivated at all.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions, represented as an object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking, which currently considers only control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models, although real-world process repositories often contain underspecified models. To cope with these issues, we introduce the concept of synchronized object life cycles, and we define a mapping of the data constraints of a process model to Petri nets, extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether, each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in, or can reach, the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
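The core intuition of weak conformance ("the data object is in, or can reach, the expected state") can be sketched with a toy reachability check. This is a minimal illustration with hypothetical data structures, not the paper's Petri-net-based formalization: an object life cycle is modeled here as a plain directed graph of state transitions.

```python
# Minimal sketch of the weak-conformance idea (hypothetical representation;
# the paper formalizes this via a mapping to Petri nets and soundness checking).

def reachable(life_cycle, start):
    """All states reachable from `start` (including `start` itself)."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(life_cycle.get(s, []))
    return seen

def weakly_conformant(life_cycle, current_state, required_state):
    """An access is weakly conformant if the required state is the current
    state or can still be reached from it."""
    return required_state in reachable(life_cycle, current_state)

# Toy life cycle of an "Order" object: created -> paid -> shipped
order_lc = {"created": ["paid"], "paid": ["shipped"], "shipped": []}
ok = weakly_conformant(order_lc, "created", "shipped")   # True: still reachable
bad = weakly_conformant(order_lc, "shipped", "created")  # False: cannot go back
```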
Context. It is not yet clear whether magnetic fields play an essential role in shaping planetary nebulae (PNe), or whether stellar rotation alone and/or a close binary companion, stellar or substellar, can account for the variety of the observed nebular morphologies.
Aims. In a quest for empirical evidence verifying or disproving the role of magnetic fields in shaping planetary nebulae, we follow up on previous attempts to measure the magnetic field in a representative sample of PN central stars.
Methods. We obtained low-resolution polarimetric spectra with FORS2 installed on the Antu telescope of the VLT for a sample of 12 bright central stars of PNe with different morphologies, including two round nebulae, seven elliptical nebulae, and three bipolar nebulae. Two targets are Wolf-Rayet type central stars.
Results. For the majority of the observed central stars, we do not find any significant evidence for the existence of surface magnetic fields. However, our measurements may indicate the presence of weak mean longitudinal magnetic fields of the order of 100 G in the central star of the young elliptical planetary nebula IC 418 as well as in the Wolf-Rayet type central star of the bipolar nebula Hen 2-113 and the weak emission line central star of the elliptical nebula Hen 2-131. A clear detection of a 250 G mean longitudinal field is achieved for the A-type companion of the central star of NGC 1514. Some of the central stars show a moderate night-to-night spectrum variability, which may be the signature of a variable stellar wind and/or rotational modulation due to magnetic features.
Conclusions. Since our analysis indicates only weak fields, if any, in a few targets of our sample, we conclude that strong magnetic fields of the order of kG are not widespread among PNe central stars. Nevertheless, simple estimates based on a theoretical model of magnetized wind bubbles suggest that even weak magnetic fields below the current detection limit of the order of 100 G may well be sufficient to contribute to the shaping of the surrounding nebulae throughout their evolution. Our current sample is too small to draw conclusions about a correlation between nebular morphology and the presence of stellar magnetic fields.
The Dansgaard-Oeschger oscillations and Heinrich events described in North Atlantic sediments and Greenland ice are expressed in the climate of the tropics, for example, as documented in Arabian Sea sediments. Given the strength of this teleconnection, we seek to reconstruct its range of environmental impacts. We present geochemical and sedimentological data from core SO130-289KL from the Indus submarine slope spanning the last ~80 kyr. Elemental and grain size analyses consistently indicate that interstadials are characterized by an increased contribution of fluvial suspension from the Indus River. In contrast, stadials are characterized by an increased contribution of aeolian dust from the Arabian Peninsula. Decadal-scale shifts at climate transitions, such as onsets of interstadials, were coeval with changes in productivity-related proxies. Heinrich events stand out as especially dry and dusty events, indicating a dramatically weakened Indian summer monsoon, potentially increased winter monsoon circulation, and increased aridity on the Arabian Peninsula. This finding is consistent with other paleoclimate evidence for continental aridity in the northern tropics during these events. Our results strengthen the evidence that circum-North Atlantic temperature variations translate to hydrological shifts in the tropics, with major impacts on regional environmental conditions such as rainfall, river discharge, aeolian dust transport, and ocean margin anoxia.
Geometric generalization is a fundamental concept in the digital mapping process. An increasing amount of spatial data is provided on the web, along with a range of tools to process it. This jABC workflow automatically tests web-based generalization services such as mapshaper.org by executing their functionality, overlaying the datasets from before and after the transformation, and rendering the result to a .tif file. Mostly web services and command line tools are used to build an environment in which ESRI shapefiles can be uploaded, processed through a chosen generalization service, and finally visualized in IrfanView.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that highest abundances of planted Poaceae and highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor for the vegetation degradation within the past decades as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ¹³Corg and δ¹⁵N since about 1977 but particularly after 1992 indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Morphological systems are constrained in how they interact with each other. One case that has been widely studied in the psycholinguistic literature is the avoidance of plurals inside compounds (e.g. *rats eater vs. rat eater) in English and other languages, the so-called plurals-in-compounds effect. Several previous studies have shown that both adult and child speakers are sensitive to this contrast, but the question of whether semantic, morphological, or surface-form constraints are responsible for the plurals-in-compounds effect remains controversial. The present study provides new empirical evidence from adult and child English to resolve this controversy. Graded linguistic judgments were obtained from 96 children (age range: 7;06 to 12;08) and 32 adults. In the task, participants were asked to rate compounds containing different kinds of singular or plural modifiers. The results indicated that both children and adults disliked regular plurals inside compounds, whereas irregular plurals were rated as marginal and singulars as fully acceptable. Furthermore, acceptability ratings were found not to be affected by the phonological surface form of a compound-internal modifier. We conclude that semantic and morphological (rather than surface-form) constraints are responsible for the plurals-in-compounds effect, in both children and adults.
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptional and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model or the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access of Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is eligible both for processing and provisioning the corresponding information.
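The idea of identifying structural relationships between SPARQL queries can be illustrated with a deliberately naive similarity measure. The thesis's two algorithms are not reproduced here; this sketch only conveys the general notion of comparing queries by the overlap of their triple patterns, using a crude regex-based extraction that is an assumption of this example.

```python
# Illustrative sketch: structural similarity of two SPARQL queries via the
# Jaccard overlap of naively extracted triple patterns (toy approach, not
# the thesis's algorithms).
import re

def triple_patterns(query):
    """Very rough extraction of triple patterns from a WHERE clause."""
    body = re.search(r"WHERE\s*\{(.*)\}", query, re.S | re.I)
    if not body:
        return set()
    return {p.strip() for p in body.group(1).split(".") if p.strip()}

def jaccard(q1, q2):
    a, b = triple_patterns(q1), triple_patterns(q2)
    return len(a & b) / len(a | b) if a | b else 0.0

q1 = "SELECT ?s WHERE { ?s a ?type . ?s rdfs:label ?l }"
q2 = "SELECT ?s WHERE { ?s a ?type }"
sim = jaccard(q1, q2)  # 0.5: one shared pattern out of two distinct patterns
```

A prefetching strategy of the kind described above could use such a score to decide which previously seen queries are likely precursors of the next request.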
Many perceptual and cognitive tasks permit or require the integrated cooperation of specialized sensory channels, detectors, or other functionally separate units. In compound detection or discrimination tasks, 1 prominent general mechanism to model the combination of the output of different processing channels is probability summation. The classical example is the binocular summation model of Pirenne (1943), according to which a weak visual stimulus is detected if at least 1 of the 2 eyes detects this stimulus; as we review briefly, exactly the same reasoning is applied in numerous other fields. It is generally accepted that this mechanism necessarily predicts performance based on 2 (or more) channels to be superior to single channel performance, because 2 separate channels provide "2 chances" to succeed with the task. We argue that this reasoning is misleading because it neglects the increased opportunity with 2 channels not just for hits but also for false alarms and that there may well be no redundancy gain at all when performance is measured in terms of receiver operating characteristic curves. We illustrate and support these arguments with a visual detection experiment involving different spatial uncertainty conditions. Our arguments and findings have important implications for all models that, in one way or another, rest on, or incorporate, the notion of probability summation for the analysis of detection tasks, 2-alternative forced-choice tasks, and psychometric functions.
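The probability-summation argument above reduces to simple arithmetic: an "at least one channel responds" rule raises the hit rate, but by exactly the same mechanism it raises the false-alarm rate. The sketch below uses illustrative numbers, not data from the experiment.

```python
# Probability summation across two independent channels: the same OR-rule
# that boosts hits also boosts false alarms (illustrative numbers only).

def prob_summation(p1, p2):
    """P(at least one of two independent channels responds)."""
    return 1 - (1 - p1) * (1 - p2)

hit_single, fa_single = 0.6, 0.1
hit_dual = prob_summation(hit_single, hit_single)  # 0.84: more hits...
fa_dual = prob_summation(fa_single, fa_single)     # 0.19: ...but more false alarms
```

This is why, as the abstract argues, dual-channel performance need not be superior once hits and false alarms are considered jointly, e.g. on receiver operating characteristic curves.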
Soil in a changing world is subject to both anthropogenic and environmental stresses. Soil monitoring is essential to assess the magnitude of changes in soil variables and how they affect ecosystem processes and human livelihoods. However, we cannot always be sure which sampling design is best for a given monitoring task. We employed a rotational stratified simple random sampling (rotStRS) for the estimation of temporal changes in the spatial mean of saturated hydraulic conductivity (Kₛ) at three sites in central Panama in 2009, 2010 and 2011. To assess this design's efficiency we compared the resulting estimates of the spatial mean and variance for 2009 with those gained from stratified simple random sampling (StRS), which was effectively the data obtained on the first sampling time, and with an equivalent unexecuted simple random sampling (SRS). The poor performance of geometrical stratification and the weak predictive relationship between measurements of successive years yielded no advantage of sampling designs more complex than SRS. The failure of stratification may be attributed to the small large-scale variability of Kₛ. Revisiting previously sampled locations was not beneficial because of the large small-scale variability in combination with destructive sampling, resulting in poor consistency between revisited samples. We conclude that for our Kₛ monitoring scheme, repeated SRS is equally effective as rotStRS. Some problems of small-scale variability might be overcome by collecting several samples at close range to reduce the effect of small-scale variation. Finally, we give recommendations on the key factors to consider when deciding whether to use stratification and rotation in a soil monitoring scheme.
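The stratified estimator at the heart of the comparison above weights each stratum's sample mean by the stratum's share of the area. A toy sketch with hypothetical numbers (not the study's data or its rotational design):

```python
# Toy stratified estimate of a spatial mean: weight each stratum's sample
# mean by the stratum's areal share (hypothetical data, not from the study).

def stratified_mean(strata):
    """strata: list of (weight, samples) pairs with weights summing to 1."""
    return sum(w * (sum(xs) / len(xs)) for w, xs in strata)

# Two strata covering 40% and 60% of a site:
est = stratified_mean([(0.4, [10.0, 12.0]), (0.6, [20.0, 22.0, 24.0])])
# 0.4 * 11 + 0.6 * 22 = 17.6
```

Stratification pays off only when the between-stratum variance is large relative to the within-stratum variance; the study's finding that geometrical stratification performed poorly reflects exactly that condition failing.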
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu³⁺ and Tb³⁺ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning of the chromaticity to the emission of white light of the quality of a three color emitter. Organic based fluorescence processes of the MOF backbone as well as metal based luminescence of the dopants are combined into one homogeneous single source emitter while retaining the MOF's porosity. The lanthanide ions Eu³⁺ and Tb³⁺ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework without a structure directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT) due to the reduction of the organic emission at higher temperatures. The study further illustrates the dependence of the amount of luminescent ions on the porosity and sorption properties of the MOF and proves the intercalation of luminescence centers into the pore system by low-temperature site selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the border of homogeneous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from a homogeneous co-doping to a two-phase composite system can be beneficially used to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) by doping with just one lanthanide Eu³⁺ or Tb³⁺.
The 2002 Mw 7.9 Denali Fault earthquake, Alaska, provides an unparalleled opportunity to investigate in quantitative detail the regional hillslope mass-wasting response to strong seismic shaking in glacierized terrain. We present the first detailed inventory of ~1580 coseismic slope failures, of which some 20% occurred above large valley glaciers, based on mapping from multi-temporal remote sensing data. We find that the Denali earthquake produced at least one order of magnitude fewer landslides in a much narrower corridor along the fault ruptures than empirical predictions for an M 8 earthquake would suggest, despite the availability of sufficiently steep and dissected mountainous topography prone to frequent slope failure. In order to explore potential controls on the reduced extent of regional coseismic landsliding, we compare our data with inventories that we compiled for two recent earthquakes in periglacial and formerly glaciated terrain, i.e. at Yushu, Tibet (Mw 6.9, 2010), and Aysen Fjord, Chile (Mw 6.2, 2007). Fault movement during these events was, similarly to that of the Denali earthquake, dominated by strike-slip offsets along near-vertical faults. Our comparison returns very similar coseismic landslide patterns that are consistent with the idea that fault type, geometry, and dynamic rupture process rather than widespread glacier cover were among the first-order controls on regional hillslope erosional response in these earthquakes. We conclude that estimating the amount of coseismic hillslope sediment input to the sediment cascade from earthquake magnitude alone remains highly problematic, particularly if glacierized terrain is involved.
Wood is used for many applications because of its excellent mechanical properties, its relative abundance, and its renewability. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence and/or durability, thus increasing the potential service-life of wood materials. However, current treatments are limited because it is difficult to introduce and fix such modifications deep inside the tissue and cell wall. Within the scope of this thesis, novel chemical modification methods of wood cell walls were developed to improve both dimensional stability and water repellence of wood material. These methods were partly inspired by heartwood formation in living trees, a process that, for some species, results in the insertion of hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell wall modification chemistry inspired by the natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted in the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment to obtain “artificial heartwood”. Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellency and increased hardness. This approach differs markedly from established modifications, which mainly address hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied. It is known that there is a weak adhesion between hydrophobic polymers and hydrophilic cell wall components.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls for further polymerization to form polystyrene in the cell walls, which increased the dimensional stability of the bulk wood material and reduced water uptake of the cell walls considerably when compared to controls. In the third part of the work, grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), in the wood cell walls by ring opening polymerization of ɛ-caprolactone was studied at mild temperatures. Results indicated that polycaprolactone attached to the cell walls and caused permanent swelling of the cell walls of up to 5%. Dimensional stability of the bulk wood material increased by 40%, and water absorption was reduced by more than 35%. A fully biodegradable and hydrophobized wood material was obtained with this method, which reduces the disposal problem of modified wood materials and has improved properties that extend the material's service-life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules in the cell walls and supported this finding with in-situ styrene and ɛ-caprolactone polymerization in the cell walls. It was shown in this thesis that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm was previously used to study influences of related versus unrelated primes on the target word. Here, we use this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low than in the high frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after visibility of the prime and for prime durations of 32 ms yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading.
Analyses of metagenomes in life sciences present new opportunities as well as challenges to the scientific community and call for advanced computational methods and workflows. The large amount of data collected from samples via next-generation sequencing (NGS) technologies render manual approaches to sequence comparison and annotation unsuitable. Rather, fast and efficient computational pipelines are needed to provide comprehensive statistics and summaries and enable the researcher to choose appropriate tools for more specific analyses. The workflow presented here builds upon previous pipelines designed for automated clustering and annotation of raw sequence reads obtained from next-generation sequencing technologies such as 454 and Illumina. Employing specialized algorithms, the sequence reads are processed at three different levels. First, raw reads are clustered at high similarity cutoff to yield clusters which can be exported as multifasta files for further analyses. Independently, open reading frames (ORFs) are predicted from raw reads and clustered at two strictness levels to yield sets of non-redundant sequences and ORF families. Furthermore, single ORFs are annotated by performing searches against the Pfam database
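The first processing level described above, clustering raw reads at a high similarity cutoff, can be sketched with a greedy scheme. This is a highly simplified toy illustration with an assumed position-wise identity metric; real pipelines of this kind typically delegate clustering to dedicated tools such as CD-HIT.

```python
# Toy greedy sequence clustering at a similarity cutoff: each read joins the
# first cluster whose representative it matches at or above the cutoff,
# otherwise it founds a new cluster (illustrative only).

def identity(a, b):
    """Fraction of matching positions over the longer sequence (toy metric)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def greedy_cluster(reads, cutoff=0.9):
    clusters = []  # list of (representative, members) pairs
    for read in reads:
        for rep, members in clusters:
            if identity(rep, read) >= cutoff:
                members.append(read)
                break
        else:
            clusters.append((read, [read]))
    return clusters

reads = ["ACGTACGTAC", "ACGTACGTAT", "TTTTTTTTTT"]
clusters = greedy_cluster(reads, cutoff=0.9)  # two clusters
```

Exporting each cluster's `members` as a multifasta file would correspond to the export step mentioned in the workflow description.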