Refine
Year of publication
- 2014 (1681)
Document Type
- Article (1122)
- Doctoral Thesis (202)
- Postprint (102)
- Monograph/Edited Volume (55)
- Review (54)
- Preprint (47)
- Conference Proceeding (42)
- Part of a Book (19)
- Part of Periodical (14)
- Other (11)
Language
Is part of the Bibliography
- yes (1681)
Keywords
- anomalous diffusion (14)
- prevention (10)
- radiation mechanisms: non-thermal (9)
- Eye movements (8)
- Holocene (8)
- gamma rays: galaxies (8)
- living cells (8)
- Earthquake source observations (7)
- gamma rays: general (7)
- violence (7)
Institute
- Institut für Biochemie und Biologie (246)
- Institut für Physik und Astronomie (237)
- Institut für Geowissenschaften (226)
- Institut für Chemie (188)
- Department Psychologie (90)
- Wirtschaftswissenschaften (59)
- Institut für Ernährungswissenschaft (58)
- Sozialwissenschaften (55)
- Institut für Mathematik (53)
- Historisches Institut (50)
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach for the tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated as a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS receivers and accelerometers. The experimental results show that the baseline shifts of the accelerometer are corrected automatically and that high-precision coseismic information on strong ground motion can be obtained in real time. Additionally, the convergence and precision of the PPP are improved by the combined solution.
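For illustration, the following minimal sketch (our own, not the authors' implementation) shows how a slowly varying accelerometer baseline shift can be carried as a random-walk state in a simple Kalman filter fusing GPS-derived displacements with accelerations; all parameter values and variable names are hypothetical.

```python
import numpy as np

# Illustrative sketch (our own, not the authors' implementation): a slowly
# varying accelerometer baseline shift carried as a random-walk state in a
# small Kalman filter fusing GPS-derived displacement with acceleration.
# State vector: [position, velocity, baseline_shift].
def make_filter(dt, q_vel=1e-2, q_baseline=1e-4):
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])   # baseline shift evolves as a random walk
    Q = np.diag([0.0, q_vel * dt, q_baseline * dt])  # process noise
    return F, Q

def step(x, P, F, Q, z_gps, a_meas, dt, r_gps=1e-4):
    # Predict: propagate state, integrating the bias-corrected acceleration.
    x = F @ x
    x[1] += (a_meas - x[2]) * dt      # velocity from corrected acceleration
    P = F @ P @ F.T + Q
    # Update with the GPS-derived displacement measurement.
    H = np.array([[1.0, 0.0, 0.0]])
    y = z_gps - H @ x                 # innovation
    S = H @ P @ H.T + r_gps
    K = P @ H.T / S                   # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

F, Q = make_filter(dt=1.0)
x, P = np.zeros(3), np.eye(3)
x, P = step(x, P, F, Q, z_gps=0.01, a_meas=0.1, dt=1.0)
```

Only the random-walk treatment of the baseline shift mirrors the abstract; the actual PPP estimator works on carrier-phase observations rather than directly on displacements.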
In quantum mechanics the temporal decay of certain resonance states is associated with an effective time evolution e^{-ith(κ)}, where h(·) is an analytic family of non-self-adjoint matrices. In general the corresponding resonance states do not decay exponentially in time. Using analytic perturbation theory, we derive asymptotic expansions for e^{-ith(κ)}, simultaneously in the limits κ → 0 and t → ∞, where the corrections with respect to pure exponential decay have uniform bounds in one complex variable κ²t.
In the Appendix we briefly review analytic perturbation theory, replacing the classical reference to the 1920 book of Knopp [Funktionentheorie II, Anwendungen und Weiterführung der allgemeinen Theorie, Sammlung Göschen, Vereinigung wissenschaftlicher Verleger Walter de Gruyter, 1920] and its terminology by standard modern references. This might be of independent interest.
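To make the role of the joint variable κ²t concrete, here is a standard heuristic (our gloss, not a statement from the paper): in analytic perturbation theory the imaginary part of a resonance eigenvalue typically first appears at second order in κ,

```latex
\lambda(\kappa) = \lambda_0 + \kappa\,\lambda_1 + \kappa^2 \lambda_2 + O(\kappa^3),
\qquad \operatorname{Im}\lambda_0 = \operatorname{Im}\lambda_1 = 0,\quad \operatorname{Im}\lambda_2 < 0,
\qquad\Longrightarrow\qquad
\bigl|e^{-it\lambda(\kappa)}\bigr| = e^{\,t\,\operatorname{Im}\lambda(\kappa)}
\approx e^{-|\operatorname{Im}\lambda_2|\,\kappa^2 t},
```

so the decay is exponential precisely on the timescale set by κ²t, which is why the corrections are naturally bounded in that variable.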
Aims: The goal of this study is twofold. First, it aims to untangle tense problems from problems with past time reference through verb morphology in people with aphasia. Second, this study aims to compare the production of time reference inflection by people with agrammatic and fluent aphasia.
Methods & Procedures: A sentence completion task was used to elicit reference to the non-past and past in Dutch. Reference to the past was tested through (1) a simple verb in past tense and (2) a verb complex with an auxiliary in present tense + participle (the present perfect). Reference to the non-past was tested through a simple verb in present tense. Fourteen agrammatic aphasic speakers, sixteen fluent aphasic speakers, and twenty non-brain-damaged speakers (NBDs) took part in this study. Data were analysed quantitatively and qualitatively.
Outcomes & Results: NBDs scored at ceiling and significantly higher than the aphasic participants. Agrammatic speakers performed worse than fluent speakers, but the pattern of performance in both aphasic groups was similar. Reference to the past through past tense and [present tense auxiliary + participle] was more impaired than reference to the non-past. An error analysis revealed differences between the two groups.
Conclusions: People with agrammatic and fluent aphasia experience problems with expressing reference to the past through verb inflection. This past time reference deficit holds irrespective of the tense employed. The error patterns of the two groups reveal different underlying problems.
Context. During their evolution massive stars can reach the phase of critical rotation when a further increase in rotational speed is no longer possible. Direct centrifugal ejection from a critically or near-critically rotating surface forms a gaseous equatorial decretion disk. Anomalous viscosity provides the efficient mechanism for transporting the angular momentum outwards. The outer part of the disk can extend up to a very large distance from the parent star.
Aims. We study the evolution of density, radial and azimuthal velocity, and angular momentum loss rate of equatorial decretion disks out to very distant regions. We investigate how the physical characteristics of the disk depend on the distribution of temperature and viscosity.
Methods. We calculated stationary models using the Newton-Raphson method. For time-dependent hydrodynamic modeling we developed a numerical code based on an explicit finite difference scheme on an Eulerian grid including full Navier-Stokes shear viscosity.
Results. The sonic point distance and the maximum angular momentum loss rate depend strongly on the temperature profile and are almost independent of viscosity. The rotational velocity at large radii drops rapidly, following the temperature and viscosity distribution. The total disk mass and the disk angular momentum increase with decreasing temperature and viscosity.
Conclusions. The time-dependent one-dimensional models basically confirm the results obtained with the stationary models as well as the assumptions of the analytical approximations. By including the full Navier-Stokes viscosity we systematically avoid the change of sign of the rotational velocity at large radii. The unphysical drop of the rotational velocity and angular momentum loss at large radii (present in some models) can be avoided in models with decreasing temperature and viscosity.
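The Methods above mention the Newton-Raphson method for the stationary models; as a reminder of the scheme (a textbook sketch, not the authors' solver), the iteration for a nonlinear system F(x) = 0 reads:

```python
import numpy as np

# Generic Newton-Raphson iteration for a nonlinear system F(x) = 0,
# of the kind used for stationary disk models; a textbook sketch,
# not the authors' code.
def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))  # solve J(x) dx = -F(x)
        x = x + dx
        if np.linalg.norm(dx) < tol * (1.0 + np.linalg.norm(x)):
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy example: solve x^2 + y^2 = 1 with x = y  ->  x = y = 1/sqrt(2).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_raphson(F, J, [1.0, 0.5]))
```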
Time-Travel-Treasures
(2014)
Previous studies suggest that there are special timing relations in syllable onsets. The consonants are assumed to be timed, on the one hand, with the vocalic nucleus and, on the other hand, with each other. These competing timing relations result in the C-center effect. However, the C-center effect has not consistently been found in languages with complex onsets. Moreover, it has occasionally been found in languages disallowing complex onsets. The present study investigates onset timing in German while discussing alternative explanations (not related to bonding) for the timing patterns observed. Six German speakers were recorded via Electromagnetic Articulography. The corpus contained items with four clusters (/sk/, /kv/, /gl/, and /pl/). The clusters occur in word-initial position, word-medial position, and across a word boundary preceding different vowels. The results suggest that segmental properties (i.e., oral-laryngeal coordination, coarticulatory resistance) determine the observed timing patterns, and specifically the absence or presence of the C-center effect.
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used for label-free analyses of the molecular lateral distribution of two different epithelial cell membranes (PANC-1 and UROtsa). The goal of the research was to enhance the ion yield of specific membrane molecules in order to improve the membrane imaging capability of ToF-SIMS on the nanoscale lateral dimension. For this task, a special silicon wafer sandwich preparation technique was optimized using different wafer materials, spacers, and washing procedures. Under optimized preparation conditions, the yield could be enhanced significantly, allowing imaging of the inhomogeneous distribution of phosphocholine (the common head group of phosphatidylcholine and sphingomyelin) in a PANC-1 cell membrane's outer lipid layer with a lateral resolution of less than 200 nm.
This paper employs a complex network approach to determine the topology and evolution of the network of extreme precipitation that governs the organization of extreme rainfall before, during, and after the Indian Summer Monsoon (ISM) season. We construct networks of extreme rainfall events during the ISM (June-September), post-monsoon (October-December), and pre-monsoon (March-May) periods from satellite-derived (Tropical Rainfall Measurement Mission, TRMM) and rain-gauge interpolated (Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources, APHRODITE) data sets. The structure of the networks is determined by the level of synchronization of extreme rainfall events between different grid cells throughout the Indian subcontinent. Through the analysis of various complex-network metrics, we describe typical repetitive patterns in North Pakistan (NP), the Eastern Ghats (EG), and the Tibetan Plateau (TP). These patterns appear during the pre-monsoon season, evolve during the ISM, and disappear during the post-monsoon season. These are important meteorological features that need further attention and that may be useful in ISM timing and strength prediction.
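As a toy illustration of the network construction (our simplified stand-in for the event-synchronization measure used in such studies; the window and threshold values are hypothetical), two grid cells are linked when their extreme-rainfall days co-occur within a short time window sufficiently often:

```python
import numpy as np

# Illustrative sketch of building an extreme-rainfall network: two grid
# cells are linked if their extreme events co-occur within a small time
# window more often than a threshold. A simplified stand-in for event
# synchronization; window/threshold are hypothetical.
def event_sync(e1, e2, window=3):
    """Count events in e1 matched by an event in e2 within +/- window days."""
    return sum(np.any(np.abs(e2 - t) <= window) for t in e1)

def build_network(event_times, window=3, threshold=5):
    n = len(event_times)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if event_sync(event_times[i], event_times[j], window) >= threshold:
                A[i, j] = A[j, i] = 1   # undirected link
    return A

# Toy example: three grid cells with extreme-event days in one season.
cells = [np.array([10, 40, 75, 110]),
         np.array([11, 42, 76, 112]),
         np.array([5, 60, 95])]
print(build_network(cells, window=3, threshold=3))  # links cells 0 and 1
```

Network metrics (degree, betweenness, etc.) are then computed on the resulting adjacency matrix to reveal regional patterns such as those reported for NP, EG, and TP.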
The electronic coupling between redox sites in mixed-valence systems has attracted the interest of the chemistry community for a long time. Many computational studies have focused on trying to determine its magnitude as a function of the nature of the redox sites and of the bridge(s) between them. However, in most instances, the quantum-chemical methodologies that have been employed suffer from intrinsic errors that lead to either an overlocalized or an overdelocalized character of the electronic structure. These deficiencies prevent an accurate depiction of the degree of charge (de)localization in the system and, as a result, of the extent of electronic coupling. Here we use nonempirically tuned long-range corrected density functional theory and show that it provides a robust, efficient approach to characterize organic mixed-valence systems. We first demonstrate the performance of this approach via a study of representative Robin-Day class-II (localized) and class-III (delocalized) complexes. We then examine a borderline class-II/class-III complex, which had proven difficult to describe accurately with standard density functional theory and Hartree-Fock methods.
When state-of-the-art bulk heterojunction organic solar cells with ideal morphology are exposed to prolonged storage or operation at elevated temperatures, a thermally induced disruption of the active layer blend can occur, in the form of a separation of donor and acceptor domains, leading to diminished photovoltaic performance. Toward the long-term use of organic solar cells in real-life conditions, an important challenge is, therefore, the development of devices with a thermally stable active layer morphology. Several routes are being explored, ranging from the use of high glass transition temperature, cross-linkable and/or side-chain functionalized donor and acceptor materials, to light-induced dimerization of the fullerene acceptor. A better fundamental understanding of the nature and underlying mechanisms of the phase separation and stabilization effects has been obtained through a variety of analytical, thermal analysis, and electro-optical techniques. Accelerated aging systems have been used to study the degradation kinetics of bulk heterojunction solar cells in situ at various temperatures to obtain aging models predicting solar cell lifetime. The following contribution gives an overview of the current insights regarding the intrinsic thermally induced aging effects and the proposed solutions, illustrated by examples from our own research groups.
Along with the rise of the now popular 'open' paradigm in innovation management, networks have become a common approach to practicing innovation. Foresight could benefit greatly from the resources that become available when the knowledge base increases through networks. This article investigates how innovation networks and foresight are related, to what extent networked foresight activities exist, and how they are practiced. For the former, the Cyclic Innovation Model (CIM) is utilized as the analytical framework and applied to three cases. The foresight activities are analyzed in terms of type, scope, and role.
The cases are a collaboration between government agencies and a research organization and two inter-organizational networks of different sizes. 'Networked foresight' is clearly observable in all three cases. Indeed, a networked approach to foresight seems to strengthen the various roles of foresight. However, the rooting and openness of foresight activities in the three networks varies significantly. The advantages that 'networked foresight' entails, e.g., the broad resource base and the large pool of people with diverse backgrounds, could be exploited to a much higher degree by the networks themselves. Furthermore, effective instruments for the reintegration of knowledge into the networks' partner organizations are needed.
Charged cellular polypropylene foams (i.e., ferro- or piezoelectrets) demonstrate high piezoelectric activity upon being electrically charged. When an external electric field is applied, dielectric barrier discharges (DBDs) occur, resulting in a separation of charges which are subsequently deposited on the dielectric surfaces of internal micrometer-sized voids. This deposited space charge is responsible for the piezoelectric activity of the material. Previous studies have indicated charging fields larger than predicted by Townsend's model of Paschen breakdown applied to a multilayered electromechanical model; a discrepancy which prompted the present study. The actual breakdown fields for micrometer-sized voids were determined by constructing single-cell voids using polypropylene spacers with heights ranging from 8 to 75 μm, "sandwiched" between two polypropylene dielectric barriers and glass slides with semi-transparent electrodes. Subsequently, a bipolar triangular charging waveform with a peak voltage of 6 kV was applied to the samples. The breakdown fields were determined by monitoring the emission of light due to the onset of DBDs using an electron multiplying CCD camera. The breakdown fields at absolute pressures from 101 to 251 kPa were found to be in good agreement with the standard Paschen curves. Additionally, the magnitude of the light emission was found to scale linearly with the amount of gas, i.e., the height of the voids. Emissions were homogeneous over the observed regions of the voids for voids with heights of 25 μm or less and increasingly inhomogeneous for void heights greater than 40 μm at high electric fields.
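For reference, the standard Paschen curves the measurements were compared with follow the textbook breakdown law (our addition, in its usual form):

```latex
V_b(pd) = \frac{B \, pd}{\ln(A \, pd) - \ln\!\left[\ln\!\left(1 + 1/\gamma_{\mathrm{se}}\right)\right]},
\qquad E_b = V_b / d,
```

where p is the gas pressure, d the gap (void) height, A and B gas-specific constants, and γ_se the secondary-electron emission coefficient; E_b is the corresponding breakdown field compared against the observed DBD onset.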
Miniaturized analytical chip devices like biosensors nowadays provide assistance in highly diverse fields of application such as point-of-care diagnostics and industrial bioprocess engineering. However, upon contact with fluids, the sensor requires a protective shell for its electrical components that simultaneously offers controlled access for the target analytes to the measuring units. We therefore developed a capsule that comprises a permeable and a sealed compartment consisting of variable polymers such as biocompatible and biodegradable polylactic acid (PLA) for medical applications or more economical polyvinyl chloride (PVC) and polystyrene (PS) polymers for bioengineering applications. The sealed capsule compartments were produced by heat pressing of polymer pellets placed in individually designable molds. Controlled permeability of the opposite compartments was achieved by inclusion of NaCl inside the polymer matrix during heat pressing, followed by its subsequent release in aqueous solution. The corresponding diffusion rates through the capsule compartments made permeable in this way were quantified for preselected model analytes: glucose, peroxidase, and polystyrene beads of three different diameters (1.4 μm, 4.2 μm, and 20.0 μm). In summary, the presented capsule system turned out to provide sufficient shelter for small-sized electronic devices and gives insight into its potential permeability for defined substances of analytical interest.
The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its structure and more specific guidance for its use are needed. The updated TRACE format follows the recently developed framework of model 'evaludation': the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. We introduce the new structure and revised terminology of TRACE and provide examples.
The International Union of Geological Sciences (IUGS) is evaluating whether there are additional geoscientific activities that would be beneficial in helping mitigate the impacts of tsunami. Public concerns about poor decisions and inaction, and advances in computing power and data mining call for new scientific approaches. Three fundamental requirements for mitigating impacts of natural hazards are defined. These are: (1) improvement of process-oriented understanding, (2) adequate monitoring and optimal use of data, and (3) generation of advice based on scientific, technical and socio-economic expertise. International leadership/coordination is also important.
To increase the capacity to predict and mitigate the impacts of tsunami and other natural hazards, a broad consensus is needed. The main needs include: the integration of systematic geological inputs, i.e. identifying and studying paleo-tsunami deposits for all subduction zones; optimising the coverage and coordination of geodetic and seismic monitoring networks; underpinning decision making at national and international scales by developing appropriate mechanisms for gathering, managing and communicating authoritative scientific and technical advice and information; and international leadership for coordination and authoritative statements of best approaches. All these suggestions are reflected in the Sendai Agreement, the collective views of the experts at the International Workshop on Natural Hazards, presented later in this volume.
The submission and management of computational jobs is a traditional part of utility computing environments. End users and developers of domain-specific software abstractions often have to deal with the heterogeneity of such batch processing systems. This led to a number of application programming interface and job description standards in the past, which are implemented and established for cluster and Grid systems. With the recent rise of cloud computing as a new utility computing paradigm, standardized access to batch processing facilities operated on cloud resources becomes an important issue. Furthermore, the design of such a standard has to consider a trade-off between feature completeness and the achievable level of interoperability. The article discusses this general challenge and presents some existing standards with a traditional cluster and Grid computing background that may be applicable to cloud environments. We present OCCI-DRMAA as one approach to standardized access to batch processing facilities hosted in a cloud.
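For flavour, a minimal job submission through the DRMAA API looks roughly like the following sketch, which uses the drmaa-python bindings and assumes a DRMAA-capable scheduler (e.g. Grid Engine) is configured; it illustrates the programming model of the existing DRMAA standard, not OCCI-DRMAA itself.

```python
import drmaa

# Submit a single batch job through the DRMAA API and wait for completion.
# Assumes a DRMAA-capable scheduler and its DRMAA library are installed.
with drmaa.Session() as s:
    jt = s.createJobTemplate()
    jt.remoteCommand = '/bin/sleep'   # executable to run on the cluster
    jt.args = ['10']                  # its command-line arguments
    job_id = s.runJob(jt)
    print('submitted job', job_id)
    info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print('job finished with exit status', info.exitStatus)
    s.deleteJobTemplate(jt)
```

The appeal of such a standard is exactly what the abstract argues: the same client code runs against any scheduler that implements the interface, whether cluster-, Grid-, or cloud-hosted.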
Two lines of research are combined in this study: first, the development of tools for the temporal disaggregation of precipitation, and second, newer results on the exponential scaling of heavy short-term precipitation with temperature, roughly following the Clausius-Clapeyron (CC) relation. Traditional disaggregation schemes contain no explicit temperature dependence and are shown to lack the crucial CC-type scaling. The authors introduce a proof-of-concept adjustment of an existing disaggregation tool, the multiplicative cascade model of Olsson, and show that, in principle, it is possible to include temperature dependence in the disaggregation step, resulting in a fairly realistic temperature dependence of the CC type. They conclude by outlining the main calibration steps necessary to develop a full-fledged CC disaggregation scheme and discuss possible applications.
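The following sketch illustrates the idea in miniature (our own proof-of-concept illustration, not Olsson's calibrated cascade): a multiplicative random cascade splits a daily total into sub-daily amounts, and a hypothetical temperature-dependent parameter alpha(T) makes the splits more uneven at higher temperatures, mimicking CC-type intensification of short bursts.

```python
import numpy as np

# Proof-of-concept sketch (our illustration, not Olsson's calibrated model):
# a multiplicative random cascade disaggregates a daily rainfall total into
# sub-daily amounts; the splitting weights are sharpened at higher
# temperatures to mimic Clausius-Clapeyron-type intensification of short
# bursts. The alpha(T) dependence is hypothetical.
rng = np.random.default_rng(42)

def cascade(total, levels, temperature):
    # Lower alpha -> more uneven splits -> heavier short-term extremes.
    alpha = max(0.2, 1.5 - 0.05 * (temperature - 10.0))   # hypothetical
    amounts = np.array([total])
    for _ in range(levels):
        w = rng.beta(alpha, alpha, size=len(amounts))     # split weights
        amounts = np.concatenate([amounts * w, amounts * (1 - w)])
    return amounts   # mass is conserved at every level

# Disaggregate a 32 mm day into 2^5 = 32 intervals (45 min each).
for T in (10.0, 25.0):
    x = cascade(32.0, levels=5, temperature=T)
    print(f"T={T:.0f}C  max interval = {x.max():.2f} mm")
```

A real scheme would additionally calibrate alpha(T) against observed sub-daily records, which is exactly the calibration step the authors outline.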
This study aims at a further mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. Therefore, long-term incubation studies in cultured cells were carried out to reveal chronically acquired changes that cannot be observed in the commonly applied short-term in vitro incubation studies. Specifically, the cytotoxic, genotoxic and epigenetic effects of incubating human urothelial (UROtsa) cells for up to 21 days with pico- to nanomolar concentrations of iAs(III) and its metabolite thio-DMA(V) were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAs(III) and might partly be due to glutathione depletion and genotoxic effects on the chromosomal level. These results are in strong contrast to cells exposed to thio-DMA(V): the cells seemed able to adapt to this arsenical, as indicated, among other observations, by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAs(III) and thio-DMA(V) caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects have been reported for thio-DMA(V); the iAs(III)-induced epigenetic effects occur at concentrations at least 8000-fold lower than reported in vitro before. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which certainly has to be investigated further in future studies.
Recultivation of disturbed oil sand mining areas is an issue of increasing importance. Nevertheless, little is known about the fate of organic matter, cell abundances and microbial community structures during oil sand processing, tailings management and initial soil development on reclamation sites. The focus of this work is therefore on the biogeochemical changes of mined oil sands through the entire process chain, up to their use as a substratum for newly developing soils on reclamation sites. To this end, oil sand, mature fine tailings (MFTs) from tailings ponds and drying cells, and tailings sand covered with peat-mineral mix (PMM) as part of land reclamation were analyzed. The sample set was selected to address the question of whether changes in the above-mentioned biogeochemical parameters can be related to oil sand processing or to biological processes, and how these changes influence microbial activities and soil development.
GC-MS analyses of oil-derived biomarkers reveal that these compounds remain unaffected by oil sand processing and biological activity. In contrast, changes in polycyclic aromatic hydrocarbon (PAH) abundance and pattern can be observed along the process chain. Especially naphthalenes, phenanthrenes and chrysenes are altered or absent on reclamation sites. Furthermore, root-bearing horizons on reclamation sites exhibit cell abundances at least ten times higher (10^8 to 10^9 cells g^-1) than in oil sand and MFT samples (10^7 cells g^-1) and show a higher diversity in their microbial community structure. Nitrate in the pore water and roots derived from the PMM seem to be the most important stimulants for microbial growth. The combined data show that the observed compositional changes are mostly related to biological activity and the addition of exogenous organic components (PMM), whereas oil extraction, tailings dewatering and compaction do not have a significant influence on the evaluated compounds. Microbial community composition remains relatively stable through the entire process chain.
The tropical warm pool waters surrounding Indonesia are one of the equatorial heat and moisture sources considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia, with profound societal and economic impacts on the populations of the world's fourth most populated country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific with global effects in the 21st century, and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models on the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, like tree rings or varved lake sediments, provides insights into the natural climate variability of the past and thus helps improve and validate simulations of future climate changes.
Centennial tree-ring stable isotope records | The main goal of this doctoral thesis was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with the tree-ring proxies (ring width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events.
Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques which facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure for tree-ring dissection from thin cellulose laths for stable isotope analysis was designed.
The most important findings of this thesis are:
I) The novel sampling techniques presented here improve stable isotope analyses for tree-ring studies in terms of precision, efficiency and quality. UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision.
II) A guideline for a modified method of cellulose extraction from wholewood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to sub-annual resolution.
III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. High-resolution stable oxygen values, furthermore, allow distinguishing between dry and rainy season rainfall.
IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics.
The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) for the Indo-Pacific region. Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
The mid- to late Holocene interval is characterised by a highly variable climate in response to a gradual change in orbital insolation. The seasonal impact of these changes on the Eifel Maar region is not yet well documented, largely due to uncertainties about the completeness of this archive ("missing varves" in the well-known Lake Holzmaar) and a limited understanding of the factors (e.g. temperature, precipitation) influencing the seasonality archived within the laminations/varves. In this study we approach these challenges from a different perspective. Using detailed microfacies investigations we: (1) demonstrate that the ambiguity about the "missing varves" is related to the climate-induced complex biotic and abiotic laminations that led to mis-identification of varves; (2) use a combination of detailed microfacies investigations (varve structure, seasonality of biotic and abiotic signals), lamination quality, varve counts on multiple cores, and published and new radiocarbon dates to develop a continuous master chronology based on the Bayesian modelling approach. The dates of major climate, volcanic, and archaeological events determined using our model are in good agreement with the independently determined ages of the same events from other archives, confirming the accuracy of our age model; (3) test the sensitivity of the seasonal proxies to the available data on mid-Holocene changes in temperature and precipitation; (4) demonstrate that the changes in lake eutrophicity are correlative with temperature changes in NW Europe and probably triggered by solar variability; and (5) show that the early Iron Age onset of eutrophication in Lake Holzmaar was climate-induced and began several decades before the impact of anthropogenic activity was seen in the form of intensified detrital erosion in the catchment area. Our work has implications for understanding the impact of climate change and anthropogenic activities on limnological systems.
In ecology, biodiversity-ecosystem functioning (BEF) research has seen a shift in perspective from taxonomy to function in the last two decades, with successful application of trait-based approaches. This shift offers opportunities for a deeper mechanistic understanding of the role of biodiversity in maintaining multiple ecosystem processes and services. In this paper, we highlight studies that have focused on the BEF of microbial communities, with an emphasis on integrating trait-based approaches into microbial ecology. In doing so, we explore some of the inherent challenges and opportunities of understanding BEF using microbial systems. For example, microbial biologists characterize communities using gene phylogenies that are often unable to resolve functional traits. Additionally, the experimental designs of existing microbial BEF studies are often inadequate to unravel BEF relationships. We argue that combining eco-physiological studies with contemporary molecular tools in a trait-based framework can reinforce our ability to link microbial diversity to ecosystem processes. We conclude that such trait-based approaches are a promising framework to increase the understanding of microbial BEF relationships and thus to generate systematic principles in microbial ecology and, more generally, in ecology.
The final size of an organism, or of single organs within an organism, depends on an intricate coordination of cell proliferation and cell expansion. Although organism size is of fundamental importance, the molecular and genetic mechanisms that control it remain far from understood. Here we identify a transcription factor, KUODA1 (KUA1), which specifically controls cell expansion during leaf development in Arabidopsis thaliana. We show that KUA1 expression is circadian regulated and depends on an intact clock. Furthermore, KUA1 directly represses the expression of a set of genes encoding for peroxidases that control reactive oxygen species (ROS) homeostasis in the apoplast. Disruption of KUA1 results in increased peroxidase activity and smaller leaf cells. Chemical or genetic interference with the ROS balance or peroxidase activity affects cell size in a manner consistent with the identified KUA1 function. Thus, KUA1 modulates leaf cell expansion and final organ size by controlling ROS homeostasis.
In this opinion article we propose a scenario detailing how two crucial components have evolved simultaneously to ensure the transition of glycogen to starch in the cytosol of the Archaeplastida last common ancestor: (i) the recruitment of an enzyme from intracellular Chlamydiae pathogens to facilitate crystallization of alpha-glucan chains; and (ii) the evolution of novel types of polysaccharide (de)phosphorylating enzymes from preexisting glycogen (de)phosphorylation host pathways to allow the turnover of such crystals. We speculate that the transition to starch benefitted Archaeplastida in three ways: more carbon could be packed into osmotically inert material; the host could resume control of carbon assimilation from the chlamydial pathogen that triggered plastid endosymbiosis; and cyanobacterial photosynthate export could be integrated in the emerging Archaeplastida.
Injection of nanoscale zero-valent iron (nZVI) has recently gained great interest as an emerging technology for the in-situ remediation of chlorinated organic compounds in groundwater systems. Zero-valent iron (ZVI) is able to reduce organic compounds and convert them into less harmful substances. The use of nanoscale particles instead of granular or microscale particles can increase dechlorination rates by orders of magnitude due to the high surface area. However, classical nZVI appears to be hampered in its environmental application by its limited mobility. One approach is colloid-supported transport of nZVI, where the nZVI is transported by a mobile colloid. In this study, the transport properties of activated carbon colloid supported nZVI (c-nZVI; d50 = 2.4 μm) are investigated in column tests using columns of 40 cm length filled with porous media. A suspension was pumped through the column under different physicochemical conditions (addition of a polyanionic stabilizer and changes in pH and ionic strength). The highest observed breakthrough was 62% of the injected concentration, in glass beads with addition of the stabilizer. Addition of mono- and bivalent salt, e.g. more than 0.5 mmol/L CaCl2, can decrease mobility, and changes in pH to values below six can inhibit mobility entirely. Measurements of colloid sizes and zeta potentials show changes in the mean particle size by a factor of ten and an increase in the magnitude of the zeta potential from -62 mV to -80 mV during the transport experiment. Overall, the results suggest potential applicability of c-nZVI under field conditions.
Background: Travel-related conditions have an impact on the quality of oral anticoagulation therapy (OAT) with vitamin K antagonists. No predictors of travel activity or of travel-associated haemorrhagic or thromboembolic complications in patients on OAT are known.
Methods: A standardised questionnaire was sent to 2500 patients on long-term OAT in Austria, Switzerland and Germany. 997 questionnaires were received (responder rate 39.9%). Ordinal or logistic regression models with travel activity before and after onset of OAT or travel-associated haemorrhages and thromboembolic complications as outcome measures were applied.
Results: 43.4% changed their travel habits since the onset of OAT, with 24.9% and 18.5% reporting decreased or increased travel activity, respectively. Long-distance worldwide travel before OAT or having suffered from thromboembolic complications was associated with reduced travel activity. Increased travel activity was associated with more intensive travel experience, increased duration of OAT, higher education, or performing patient self-management (PSM). Travel-associated haemorrhages or thromboembolic complications were reported by 6.5% and 0.9% of the patients, respectively. Former thromboembolic complications, former bleedings and PSM were significant predictors of travel-associated complications.
Conclusions: OAT also increases travel intensity. Specific medical advice prior to travelling, aimed at preventing complications, should be given especially to patients with former bleedings or thromboembolic complications and to those performing PSM.
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as estimated GFR <60 ml/min/1.73 m(2).
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD (59 versus 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD compared with non-CKD less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
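For reference, the Cockcroft-Gault estimate used above to classify CKD has a simple closed form; the sketch below shows the standard textbook formula (note that it yields ml/min, which the study then interpreted against a GFR threshold normalized to 1.73 m² body surface area).

```python
# Cockcroft-Gault estimate of creatinine clearance (ml/min) in its
# standard textbook form; the study's CKD threshold additionally
# normalizes to 1.73 m^2 body surface area.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 15% reduction for women

# Example: a 72-year-old woman, 70 kg, serum creatinine 1.2 mg/dl.
print(f"{cockcroft_gault(72, 70, 1.2, female=True):.0f} ml/min")  # ~47 ml/min
```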
Bats are important components of tropical mammal assemblages. Unravelling the mechanisms that allow multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on the component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to examine indirectly whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4‰ in δ15N and was structured into two trophic levels, with phytophagous Pteropodidae as primary consumers (c. 3‰ enriched over plants) and the various insectivorous bats as secondary consumers (c. 4‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of δ13C-δ15N bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
Mueller, J, Mueller, S, Stoll, J, Baur, H, and Mayer, F. Trunk extensor and flexor strength capacity in healthy young elite athletes aged 11-15 years. J Strength Cond Res 28(5): 1328-1334, 2014.
Differences in trunk strength capacity because of gender and sports are well documented in adults. In contrast, data concerning young athletes are sparse. The purpose of this study was to assess the maximum trunk strength of adolescent athletes and to investigate differences between genders and age groups. A total of 520 young athletes were recruited. Finally, 377 (n = 233/144 M/F; 13 ± 1 years; 1.62 ± 0.11 m height; 51 ± 12 kg mass; training: 4.5 ± 2.6 years; training sessions/week: 4.3 ± 3.0; various sports) young athletes were included in the final data analysis. Furthermore, 5 age groups were differentiated (age groups: 11, 12, 13, 14, and 15 years; n = 90, 150, 42, 43, and 52, respectively). Maximum strength of trunk flexors (Flex) and extensors (Ext) was assessed in all subjects during isokinetic concentric measurements (60°·s⁻¹; 5 repetitions; range of motion: 55°). Maximum strength was characterized by absolute peak torque (Flex(abs), Ext(abs); N·m), peak torque normalized to body weight (Flex(norm), Ext(norm); N·m·kg⁻¹ BW), and the Flex(abs)/Ext(abs) ratio (RKquot). Descriptive data analysis (mean ± SD) was completed, followed by analysis of variance (α = 0.05; post hoc test [Tukey-Kramer]). Mean maximum strength for all athletes was 97 ± 34 N·m in Flex(abs) and 140 ± 50 N·m in Ext(abs) (Flex(norm) = 1.9 ± 0.3 N·m·kg⁻¹ BW, Ext(norm) = 2.8 ± 0.6 N·m·kg⁻¹ BW). Males showed statistically significantly higher absolute and normalized values compared with females (p < 0.001). Flex(abs) and Ext(abs) rose with increasing age almost 2-fold for males and females (Flex(abs), Ext(abs): p < 0.001). Flex(norm) and Ext(norm) increased with age for males (p < 0.001), however, not for females (Flex(norm): p = 0.26; Ext(norm): p = 0.20). RKquot (mean ± SD: 0.71 ± 0.16) did not reveal any differences regarding age (p = 0.87) or gender (p = 0.43). In adolescent athletes, maximum trunk strength must be discussed in a gender- and age-specific context. The Flex(abs)/Ext(abs) ratio revealed extensor dominance, which seems to be independent of age and gender. The values assessed may serve as a basis to evaluate and discuss trunk strength in athletes.
Zinc oxide (ZnO) is regarded as a promising alternative material for transparent conductive electrodes in optoelectronic devices. However, ZnO suffers from poor chemical stability. ZnO also has a moderate work function (WF), which results in substantial charge injection barriers into the common (organic) semiconductors that constitute the active layer in a device. Controlling and tuning the ZnO WF is therefore necessary but challenging. Here, a variety of phosphonic acid based self-assembled monolayers (SAMs) deposited on ZnO surfaces are investigated. It is demonstrated that they allow tuning of the WF over a wide range of more than 1.5 eV, thus enabling the use of ZnO as both the hole-injecting and the electron-injecting contact. The modified ZnO surfaces are characterized using a number of complementary techniques, demonstrating that the preparation protocol yields dense, well-defined molecular monolayers.
This article examines and discusses aspects of the acquisition of Turkish literacy in the minority context in Germany. After describing the particular sociolinguistic and language contact situation of Turkish in Germany, the article focuses on two empirical aspects of the acquisition of Turkish literacy within this situation. First, the development of noun phrase complexity is analyzed in a pseudo-longitudinal approach investigating Turkish texts of German-Turkish bilingual pupils of different grades. Second, strategies of literacy are analyzed in the investigation of Turkish texts from bilingual high school pupils of the 12th grade.
Animal personalities are by definition stable over time, but to what extent they may change during development and in adulthood to adjust to environmental change is unclear. Animals of temperate environments have evolved physiological and behavioural adaptations to cope with cyclic seasonal changes. This may also result in changes in personality: suites of behavioural and physiological traits that vary consistently among individuals. Winter, typically the adverse season challenging survival, may require individuals to have a shy/cautious personality, whereas during summer, energetically favourable to reproduction, individuals may benefit from a bold/risk-taking personality. To test the effects of seasonal changes in early life and in adulthood on behaviours (activity, exploration and anxiety), body mass and stress response, we manipulated the photoperiod and quality of food in two experiments to simulate the conditions of winter and summer. We used common voles (Microtus arvalis), as they have been shown to display personality based on behavioural consistency over time and across contexts. Summer-born voles allocated to winter conditions at weaning had lower body mass, a higher corticosterone increase after stress and a less active, more cautious behavioural phenotype in adulthood compared to voles born in and allocated to summer conditions. In contrast, adult females only showed plasticity in stress-induced corticosterone levels, which were higher in the animals that were transferred to winter conditions than in those staying in summer conditions. These results suggest a sensitive period for season-related behavioural plasticity in which juveniles shift along the bold-shy axis.
In Germany, active bat rabies surveillance was conducted between 1993 and 2012. A total of 4546 oropharyngeal swab samples from 18 bat species were screened for the presence of EBLV-1-, EBLV-2- and BBLV-specific RNA. Overall, 0.15% of oropharyngeal swab samples tested EBLV-1 positive, with the majority originating from Eptesicus serotinus. Interestingly, out of seven RT-PCR-positive oropharyngeal swabs subjected to virus isolation, viable virus was isolated from a single serotine bat (E. serotinus). Additionally, about 1226 blood samples were tested serologically, and varying virus-neutralizing antibody titres were found in at least eight different bat species. The detection of viral RNA and seroconversion in repeatedly sampled serotine bats indicates long-term circulation of the virus in a particular bat colony. The limitations of random-based active bat rabies surveillance compared with passive surveillance, and the possible application of targeted approaches in future research on bat lyssavirus dynamics and maintenance, are discussed.
Two of a kind?
(2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses have provided new insights into offense and offender characteristics. Less is known about attacks at institutions of higher education (e.g., universities). It is therefore questionable whether the term "school attack" should be limited to institutions of general education or could be extended to institutions of higher education. The scientific literature is divided on whether to distinguish or unify these two groups, and reports similarities as well as differences. We researched 232 school attacks and 45 attacks at institutions of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender characteristics (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders' age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Current chemical risk assessment procedures may result in imprecise estimates of risk due to sometimes arbitrary simplifying assumptions. As a way to incorporate ecological complexity and improve risk estimates, mechanistic effect models have been recommended. However, effect modeling has not yet been used extensively for regulatory purposes, one of the main reasons being uncertainty about which model type to use to answer specific regulatory questions. We took an individual-based model (IBM), which was developed for risk assessment of soil invertebrates and includes avoidance of highly contaminated areas, and contrasted it with a simpler, more standardized model based on the generic metapopulation matrix model RAMAS. In the latter, the individuals within a sub-population are no longer treated as separate entities and the spatial resolution is lower. We explored the consequences of model aggregation in terms of assessing population-level effects for different spatial distributions of a toxic chemical. For homogeneous contamination of the soil, we found good agreement between the two models, whereas for heterogeneous contamination, at different concentrations and percentages of contaminated area, the RAMAS results matched the IBM results either with or without avoidance, depending also on the food level. This inconsistency is explained on the basis of behavioural responses that are included in the IBM but not in RAMAS. Overall, RAMAS was less sensitive than the IBM in detecting population-level effects of different spatial patterns of exposure. We conclude that choosing the right model type for risk assessment of chemicals depends on whether or not population-level effects of small-scale heterogeneity in exposure need to be detected. We recommend that, if in doubt, both model types should be used and compared. Describing both models following the same standard format, the ODD protocol, makes them equally transparent and understandable. The simpler model helps to build up trust for the more complex model and can be used for more homogeneous exposure patterns. The more complex model helps to detect and understand the limitations of the simpler model and is needed to ensure ecological realism for more complex exposure scenarios.
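To make the contrast concrete, the following generic sketch (our illustration; all parameter values are hypothetical and this is not the RAMAS engine) shows the kind of stage-structured matrix projection that underlies metapopulation matrix models, where individuals are aggregated into classes rather than simulated one by one as in an IBM.

```python
import numpy as np

# Minimal stage-structured matrix projection of the kind underlying
# metapopulation models such as RAMAS: individuals in a sub-population
# are aggregated into stage classes and projected with one matrix per
# time step. Toxicant effects enter as reduced vital rates; the numbers
# below are purely illustrative.
def project(A, n0, steps):
    n = np.asarray(n0, dtype=float)
    for _ in range(steps):
        n = A @ n
    return n

# Two stages (juvenile, adult): fecundity 4, juvenile survival 0.3, adult 0.8.
A_control = np.array([[0.0, 4.0],
                      [0.3, 0.8]])
# Hypothetical exposure scenario: survival rates reduced by 30%.
A_exposed = np.array([[0.0, 4.0],
                      [0.21, 0.56]])

n0 = [50, 50]
for name, A in (("control", A_control), ("exposed", A_exposed)):
    lam = max(abs(np.linalg.eigvals(A)))   # asymptotic growth rate
    print(name, "lambda =", round(float(lam), 3),
          "N(10) =", project(A, n0, 10).round(1))
```

Behavioural responses such as avoidance of contaminated patches have no natural place in such a matrix, which is exactly the source of the discrepancy the abstract describes.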
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
The UDKM1DSIM toolbox is a collection of MATLAB (MathWorks Inc.) classes and routines to simulate the structural dynamics and the corresponding X-ray diffraction response of one-dimensional crystalline sample structures upon an arbitrary time-dependent external stimulus, e.g. an ultrashort laser pulse. The toolbox provides the capabilities to define arbitrary layered structures on the atomic level, including a rich database of corresponding element-specific physical properties. The excitation of ultrafast dynamics is represented by an N-temperature model, which is commonly applied for ultrafast optical excitations. Structural dynamics due to thermal stress are calculated by a linear-chain model of masses and springs. The resulting X-ray diffraction response is computed by dynamical X-ray theory. The UDKM1DSIM toolbox is highly modular and allows user-defined results to be introduced at any step of the simulation procedure.
Program summary
Program title: udkm1Dsim
Catalogue identifier: AERH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERH_v1_0.html
Licensing provisions: BSD
No. of lines in distributed program, including test data, etc.: 130221
No. of bytes in distributed program, including test data, etc.: 2746036
Distribution format: tar.gz
Programming language: Matlab (MathWorks Inc.).
Computer: PC/Workstation.
Operating system: Running Matlab installation required (tested on MS Win XP -7, Ubuntu Linux 11.04-13.04).
Has the code been vectorized or parallelized?: Parallelization for dynamical XRD computations. Number of processors used: 1-12 for Matlab Parallel Computing Toolbox; 1 - infinity for Matlab Distributed Computing Toolbox
External routines:
Optional: Matlab Parallel Computing Toolbox, Matlab Distributed Computing Toolbox. Required (included in the package): mtimesx Fast Matrix Multiply for Matlab by James Tursa, xml_io_tools by Jaroslaw Tuszynski, textprogressbar by Paul Proteus.
Nature of problem:
Simulate the lattice dynamics of 1D crystalline sample structures due to an ultrafast excitation including thermal transport and compute the corresponding transient X-ray diffraction pattern.
Solution method:
The laser excitation is described by an N-temperature model (coupled heat diffusion equations); the resulting structural dynamics are computed from a linear-chain model of masses and springs; the transient X-ray diffraction response is obtained from dynamical X-ray theory.
Restrictions:
The program is restricted to 1D sample structures and is further limited to longitudinal acoustic phonon modes and symmetrical X-ray diffraction geometries.
Unusual features: The program is highly modular and allows the inclusion of user-defined inputs at any time of the simulation procedure.
Running time: The running time depends strongly on the number of unit cells in the sample structure and on other simulation parameters such as the time span or the angular grid for X-ray diffraction computations. However, the example files each compute in approx. 1-5 min on an 8-core processor with 16 GB RAM.
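As a rough illustration of the linear-chain model of masses and springs mentioned in the abstract above, the following toy sketch propagates an assumed initial expansion of the topmost unit cells through a homogeneous chain. Masses, spring constants and the excitation profile are placeholder values, not toolbox defaults, and the full toolbox additionally couples this to the N-temperature model and dynamical X-ray theory.

```python
import numpy as np

# Toy version of a linear chain of masses and springs: N unit cells, with an
# instantaneous laser-induced stress mimicked by an initial expansive strain
# in the top cells. All parameters are illustrative assumptions.
N = 100
m = 1.0                     # mass per unit cell (arbitrary units)
k = 1.0                     # spring constant between neighbours
x = np.zeros(N)             # displacements from equilibrium
v = np.zeros(N)
x[:10] = np.linspace(0.1, 0.0, 10)   # assumed initial expansion of top cells

dt = 0.05
for step in range(2000):
    # force on cell i from its left and right springs (free ends at 0 and N-1)
    f = np.zeros(N)
    f[1:] += k * (x[:-1] - x[1:])
    f[:-1] += k * (x[1:] - x[:-1])
    v += f / m * dt          # simple Euler-Cromer update
    x += v * dt

strain = np.diff(x)          # inter-cell strain, the quantity that would
                             # enter the X-ray diffraction response
```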
Using ultrafast X-ray diffraction, we study the coherent picosecond lattice dynamics of photoexcited thin films in the two limiting cases where the photoinduced stress profile decays on a length scale larger or smaller than the film thickness. We solve a unifying analytical model of the strain propagation for acoustic impedance-matched opaque films on a semi-infinite transparent substrate, showing that the lattice dynamics essentially depend on two parameters: one for the spatial profile and one for the amplitude of the strain. We illustrate the results by comparison with high-quality ultrafast X-ray diffraction data of SrRuO3 films on SrTiO3 substrates. (C) 2014 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
In processing and data storage, mainly ferromagnetic (FM) materials are used. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities and greater energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials.

In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often lead to spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors.

Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation. In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should occur faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than FM order in comparable metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction.

The results presented here were obtained with time-resolved resonant soft X-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
A new concept for shortening hard X-ray pulses emitted from a third-generation synchrotron source down to a few picoseconds is presented. The device, called the PicoSwitch, exploits the dynamics of coherent acoustic phonons in a photo-excited thin film. A characterization of the structure demonstrates switching times of ≤ 5 ps and a peak reflectivity of ~10^-3. The device is tested in a real synchrotron-based pump-probe experiment and reveals features of coherent phonon propagation in a second thin film sample, thus demonstrating the potential to significantly improve the temporal resolution at existing synchrotron facilities.
Ultraschall Berlin
(2014)
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a subsequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on 'proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known 'dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need of a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
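A minimal Monte Carlo sketch of the core idea, turning an uncertain age model into an ensemble of proxy time series, i.e. a sequence of distributions rather than a single record. The depths, ages, proxy values and error magnitudes below are synthetic assumptions; the thesis develops a full Bayesian treatment rather than this simple sampling scheme.

```python
import numpy as np

# Synthetic example: propagate dating uncertainty into an ensemble of proxy
# records. All numbers are assumed for illustration.
rng = np.random.default_rng(0)
depth = np.linspace(0, 10, 50)                    # m
age_mean = 900 * depth                            # yr BP, assumed age-depth model
age_err = 50 + 10 * depth                         # dating uncertainty grows with depth
proxy = np.sin(depth) + 0.1 * rng.standard_normal(50)

t_grid = np.linspace(0, 9000, 200)
ensemble = []
for _ in range(1000):
    ages = age_mean + age_err * rng.standard_normal(50)
    ages = np.maximum.accumulate(ages)            # enforce stratigraphic order
    ensemble.append(np.interp(t_grid, ages, proxy))
ensemble = np.array(ensemble)

# The record is now a time-ordered sequence of distributions:
median = np.median(ensemble, axis=0)
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
```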
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
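The core prediction can be checked with a toy simulation: if each structure-building process has a stochastic duration and the fastest wins, then offering two admissible structures lowers the expected finishing time. The lognormal durations and their parameters below are illustrative assumptions, not a claim about the model's exact form.

```python
import numpy as np

# Toy race in the spirit of the URM: each candidate parse is built by a
# stochastic process; the first to finish is adopted.
rng = np.random.default_rng(1)
n = 100_000
t_a = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # build time, parse A
t_b = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # build time, parse B

unambiguous = t_a                    # only parse A is grammatical
ambiguous = np.minimum(t_a, t_b)     # either parse is acceptable

print(unambiguous.mean(), ambiguous.mean())
# The minimum of two race times is systematically faster, reproducing the
# ambiguity advantage without any notion of underspecification.
```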
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments readers were not required to fully understand the sentence.
In this thesis, these two models of the parser’s actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.’s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Background: Agrammatic speakers have problems with grammatical encoding and decoding. However, not all syntactic processes are equally problematic: present time reference, who questions, and reflexives can be processed by narrow syntax alone and are relatively spared compared to past time reference, which questions, and personal pronouns, respectively. The latter need additional access to discourse and information structures to link to their referent outside the clause (Avrutin, 2006). Linguistic processing that requires discourse-linking is difficult for agrammatic individuals: verb morphology with reference to the past is more difficult than with reference to the present (Bastiaanse et al., 2011). The same holds for which questions compared to who questions and for pronouns compared to reflexives (Avrutin, 2006). These results have been reported independently for different populations in different languages. The current study, for the first time, tested all conditions within the same population.
Aims: We had two aims with the current study. First, we wanted to investigate whether discourse-linking is the common denominator of the deficits in time reference, wh questions, and object pronouns. Second, we aimed to compare the comprehension of discourse-linked elements in people with agrammatic and fluent aphasia.
Methods and procedures: Three sentence-picture-matching tasks were administered to 10 agrammatic, 10 fluent aphasic, and 10 non-brain-damaged Russian speakers (NBDs): (1) the Test for Assessing Reference of Time (TART) for present imperfective (reference to present) and past perfective (reference to past), (2) the Wh Extraction Assessment Tool (WHEAT) for which and who subject questions, and (3) the Reflexive-Pronoun Test (RePro) for reflexive and pronominal reference.
Outcomes and results: NBDs scored at ceiling and significantly higher than the aphasic participants. We found an overall effect of discourse-linking in the TART and WHEAT for the agrammatic speakers, and in all three tests for the fluent speakers. Scores on the RePro were at ceiling.
Conclusions: The results are in line with the prediction that the problems that individuals with agrammatic and fluent aphasia experience when comprehending sentences containing verbs with past time reference, which-question words, and pronouns are caused by the fact that these elements involve discourse linking. The effect is not specific to agrammatism, although it may result from different underlying disorders in agrammatic and fluent aphasia.
A detailed knowledge of cell wall heterogeneity and complexity is crucial for understanding plant growth and development. One key challenge is to establish links between polysaccharide-rich cell walls and their phenotypic characteristics. This is of particular interest for plant materials, like cotton fibers, that are of both biological and industrial importance. To this end, we studied cotton fiber characteristics together with glycan arrays using regression-based approaches. Taking advantage of the comprehensive microarray polymer profiling technique (CoMPP), 32 cotton lines from different cotton species were studied. The glycan array was generated by sequential extraction of cell wall polysaccharides from mature cotton fibers and screening of the samples against eleven extensively characterized cell wall probes. In addition, phenotypic characteristics of cotton fibers such as length, strength, elongation and micronaire were measured. The relationship between the two datasets was established in an integrative manner using linear regression methods. In the analysis, we demonstrate the usefulness of regression-based approaches in establishing a relationship between glycan measurements and phenotypic traits. The analysis also identified specific polysaccharides that may play a major role during fiber development for the final fiber characteristics. Three different regression methods identified a negative correlation between micronaire and the xyloglucan and homogalacturonan probes. Moreover, homogalacturonan and callose were shown to be significant predictors of fiber length. The role of these polysaccharides had already been pointed out in previous cell wall elongation studies. Additional relationships were predicted for fiber strength and elongation, which will need further experimental validation.
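A minimal sketch of the kind of regression analysis described: one fiber trait is regressed on glycan-array signals across lines. The data shapes mirror the study (32 lines, 11 probes) but the values are synthetic; the specific models shown (ordinary least squares and a lasso) are illustrative choices, not necessarily the three methods used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the CoMPP glycan array: 32 lines x 11 probes.
rng = np.random.default_rng(2)
X = rng.random((32, 11))
y = X @ rng.normal(size=11) + 0.1 * rng.standard_normal(32)  # e.g. fiber length

for model in (LinearRegression(), Lasso(alpha=0.01)):
    r2 = cross_val_score(model, X, y, cv=4, scoring="r2").mean()
    model.fit(X, y)
    print(type(model).__name__, "CV R^2:", round(r2, 2))
    # Coefficient signs point to candidate polysaccharides; a consistently
    # negative weight would mirror the reported negative micronaire
    # correlations for the xyloglucan and homogalacturonan probes.
```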
Unfälle der Sprache
(2014)
The term »catastrophe« is booming in our everyday and media language. What is labelled a »catastrophe« in the succession of wars, assassinations, earthquakes, volcanic eruptions and tsunamis calls for a pointed analysis. In literary studies, the term is used to denote the terrible misfortune with which a tragedy ends. The »strophe« originally refers to the bodily turn with which the chorus in ancient tragedy accompanied its song before something new begins ...
Although the EU treaties to this day contain no provision on the liability of member states for decisions of their courts, the Court of Justice of the European Union (CJEU) has developed and refined such liability in a series of decisions. The present work analyses this case law in detail, together with the multifaceted legal questions arising from it. The first chapter is devoted to the historical development of state liability under EU law in general, starting from the well-known Francovich judgment of 1991. The second chapter then presents the decisions in the cases Köbler and Traghetti, which are fundamental to liability for judicial wrongs. The third chapter examines the legal character of state liability under EU law, including the question of the subsidiarity of the EU law claim vis-à-vis existing national state liability claims. The fourth chapter addresses whether state liability under EU law for judicial wrongs should be recognized in principle, discussing and assessing in detail the main arguments for and against such liability. The fifth chapter examines in detail the problems connected with the EU law conditions of liability for decisions of courts of last instance. At the same time, it considers whether liability for erroneous decisions of lower courts should be endorsed. The sixth chapter deals with the implementation by the member states of EU law state liability for decisions of courts of last instance, including a position on the applicability of the German liability privileges for judicial wrongs to the EU law state liability claim. The final chapter examines whether the CJEU had the competence to create state liability for decisions of courts of last instance at all. In conclusion, the most important results of the work are presented and an outlook is given on possible further effects and developments of state liability under EU law for judicial wrongs.
A view that homogenizes the theories of language of the seventeenth and eighteenth centuries can capture only in broad outline the reality of the conceptions of language embraced in this period. The recognition of a Cartesian theory of language as the undifferentiated explanation of the developments following the transition from rationalist visions to conceptions oriented towards the senses is the result of such homogenization, a process that captures reality only in part.
Linguistic thought was marked by a mixture of narrative and conceptual-rational forms of reflection that complemented each other. While the conceptual approach attempted to identify the fundamental properties of language and order them rationally, the narrative forms of linguistic reflection did not address language as a conceptual object. Rather, they represented it as an object to be understood. Narrative and conceptual approaches to language entail discursive differences in theoretical-linguistic positions. The character of theoretical-linguistic thought also contributes, through differing traditions, to the multiplicity of theoretical-linguistic views. By traditions we mean dominant positions in metalinguistic reflection, present in regional contexts, which can differ from other traditions. In any case, the delayed development or reception of linguistic theories can also lead to characteristic differences. The linguistic theories of the Enlightenment, for example, were received in Spain later than in other European countries. This led to the synchronic acceptance of theoretical elements belonging to different, successive theories. Turning one's attention beyond Europe, one is drawn to the analogous approaches to linguistic reflection that developed in China at the beginning of the twentieth century.
Unity and diversity can, however, be traced not only on the level of metalinguistic knowledge but also on the level of the object. A challenge for language description oriented towards the Greco-Latin tradition was posed by the indigenous languages with which contact was beginning through the voyages of discovery and, later, with the onset of colonialism. Alongside the exogenous communication of the metalinguistic transmission of these relations within the European languages, there were also approaches towards a perception of the categorial specificity of the American languages. Although in some cases the correct categorizations for the described languages were not recognized, it was at least established that the categories known from Latin grammar did not apply.
In the research of recent decades, the representation of a paradigm of the philosophy of language of the seventeenth and eighteenth centuries that universally subordinates the multiplicity of languages to valid structures of thought and prescribes for linguistic reflection the fixed categories of a general grammar strictly oriented towards rationalist logic has repeatedly been relativized. Connected with the grounding of the unity and immutability of the human species across space and time, the thesis that languages in their manifold nature can exist only in dependence on a universal structure of thought could be classed among the paradigmatic positions of the philosophy of language of that time. Through the recognition of the historical origin of the evolution of man, of all his ways of life and forms of communication, another paradigmatic position gains prominence, one that attributes to language a formative influence on thought.
Through the ideological-philosophical differentiation and national specificity of its theses on language in general and on the historical languages in particular, the secularized view of man and society elaborated at the height of the Enlightenment was associated with the corresponding development and change of theoretical-linguistic positions. With the proclamation of language and thought as the results of a long joint development in the history of humanity, new weight is given to positions on the nature and origin of language.
Unstetige Galerkin-Diskretisierung niedriger Ordnung in einem atmosphärischen Multiskalenmodell
(2014)
The dynamics of the Earth's atmosphere span a range from microphysical turbulence through convective processes and cloud formation up to planetary wave patterns. For weather forecasting and for the study of climate over decades and centuries, these dynamics are the subject of modelling with numerical methods. With the advancing development of computer technology, new dynamical cores for climate models are needed that can resolve the corresponding processes as resolutions become finer. The dynamical core of a model consists of the implementation (discretization) of the fundamental dynamical equations for the evolution of mass, energy and momentum, so that they can be solved numerically on computers.

The present work investigates the suitability of a low-order discontinuous Galerkin method for atmospheric applications. For equations with external forces such as gravity and the Coriolis force, this suitability is not self-evident from theory. Necessary adaptations are described that stabilize the method without employing so-called slope limiters. For the unmodified method it is demonstrated that it is not suited to represent atmospheric equilibria stably. The stabilized model developed here reproduces a series of standard test cases of atmospheric dynamics with the Euler and shallow-water equations over a wide range of spatial and temporal scales.

Solving the thermal wind equation along its characteristic curves, which coincide with the isobars, yields atmospheric equilibrium states whose susceptibility to (barotropic and baroclinic) instabilities, essential for the development of cyclones, can be tuned through a prescribed background flow. In contrast to earlier works, these states are defined directly in the z-system (height in metres) and need not be transferred from pressure coordinates. With these states, both as reference states of which only the deviations are treated numerically and, in particular, as initial states subject to a small perturbation, various simulation studies of barotropic and baroclinic instability are carried out. Noteworthy is the simulation-based study, made possible by the formulation of background flows with tunable baroclinicity, of the degree of baroclinic instability of different wavelengths as a function of static stability and vertical wind shear, corresponding to stability maps from theoretical considerations in the literature.
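For readers unfamiliar with the method family, here is a minimal sketch of the lowest-order discontinuous Galerkin discretization for the simplest hyperbolic problem, 1D linear advection with upwind fluxes. With piecewise-constant (P0) elements, DG reduces to a first-order upwind finite-volume scheme; the thesis treats the far harder Euler and shallow-water systems with source terms, which this sketch does not attempt. All parameter values are illustrative.

```python
import numpy as np

# Lowest-order (P0) discontinuous Galerkin / upwind finite-volume scheme for
# u_t + a u_x = 0 on a periodic domain. Parameters are illustrative only.
a = 1.0                                  # advection speed
n = 200                                  # number of elements
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)        # initial condition: Gaussian pulse

cfl = 0.9
dt = cfl * dx / abs(a)
t, t_end = 0.0, 0.5
while t < t_end:
    dt_step = min(dt, t_end - t)
    flux_in = a * np.roll(u, 1)          # upwind flux entering from the left (a > 0)
    u = u + dt_step / dx * (flux_in - a * u)   # inflow minus outflow per element
    t += dt_step

# After t = 0.5 the pulse sits near x ~ 0.8, smeared by the numerical
# diffusion typical of the first-order scheme.
```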
Inferring the internal interaction patterns of a complex dynamical system is a challenging problem. Traditional methods often rely on examining the correlations among the dynamical units. However, in systems such as transcription networks, one unit's variable is also correlated with the rate of change of another unit's variable. Inspired by this, we introduce the concept of derivative-variable correlation, and use it to design a new method of reconstructing complex systems (networks) from dynamical time series. Using a tunable observable as a parameter, the reconstruction of any system with known interaction functions is formulated via a simple matrix equation. We suggest a procedure aimed at optimizing the reconstruction from the time series of length comparable to the characteristic dynamical time scale. Our method also provides a reliable precision estimate. We illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model error and observation error.
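A minimal sketch of the underlying idea in its simplest setting: for linear dynamics dx/dt = A x, correlating estimated derivatives with the variables and solving one least-squares matrix equation recovers the coupling matrix. This is an illustrative special case; the paper treats general known interaction functions and a tunable observable, which the sketch omits.

```python
import numpy as np

# Reconstruct the coupling matrix of dx/dt = A x from a noisy time series
# by solving the least-squares matrix equation  dX ~= X A^T.
rng = np.random.default_rng(3)
n, steps, dt = 5, 2000, 0.01
A_true = rng.normal(scale=0.2, size=(n, n)) - 1.0 * np.eye(n)  # assumed system

x = rng.standard_normal(n)
X = np.empty((steps, n))
for t in range(steps):
    X[t] = x
    x = x + dt * (A_true @ x) + 0.01 * rng.standard_normal(n)  # Euler + noise

dX = np.gradient(X, dt, axis=0)              # derivative estimate from the series
B, *_ = np.linalg.lstsq(X, dX, rcond=None)   # solves X @ B = dX, so B ~ A^T
print("reconstruction error:", np.linalg.norm(B.T - A_true))
```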
In this work, rigid oligospiroketal (OSK) rods were successfully used as building blocks for complex 2D and 3D systems. To this end, a difunctionalized rigid rod was synthesized and subjected to azide-alkyne click reactions with rods of its own kind and with branched functionalization units. For two OSK rods linked by a click reaction, theoretical calculations allowed statements about the novel bimodality of the conformation. The term hinge rod (Gelenkstab) was introduced, since the molecules can rotate about a hinge and exist in both an extended and a bent form. Building on these findings, it was shown that not only can large polymers of up to four OSK rods be synthesized in a targeted manner, but that cycles of rigid OSK rods can also be produced by deliberately changing the reaction conditions of the click reaction. The newly developed substance class of hinge rods was investigated with regard to controlling the equilibrium between the bent and the extended hinge rod. For this purpose the hinge rod was furnished with pyrenyl residues in the terminal position. Fluorescence measurements showed that the equilibrium can be influenced, e.g., by the temperature or the choice of solvent. For broader applications, a simplified synthesis strategy was found with which arbitrary functionalization could be achieved in a single synthetic step. Photoactive hinge rods could be synthesized and driven to targeted intramolecular dimerization. In addition, amino acids provided a linking element at the end of the hinge rods that permits the stereoselective synthesis of multiple functionalizations. The synthesis of complex hinge rods was demonstrated as a novel field and offers broad research potential for further applications, e.g., in biology (as molecular switches for ion transport) and in materials chemistry (as charge or energy transporters).
Untersuchungen zur pro-inflammatorischen Wirkung von Serum-Amyloid A in glatten Gefäßmuskelzellen
(2014)
The Epoch of Reionization marks, after recombination, the second major change in the ionization state of the universe, from neutral to ionized. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies permeates into the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it substantially changes the ionization and thermal state of the IGM and affects subsequent structure and galaxy formation via various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allows us to investigate the limits of constraining reionization by high-redshift galaxies as LAEs, and examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code to compute the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations which take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the importance of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and dust in the interstellar medium (ISM) on the visibility of LAEs. Comparison of our models results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photons escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not even break down when we compare simulated with observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
Herein, we report the use of upconversion agents to modify graphitic carbon nitride (g-C3N4) by direct thermal condensation of a mixture of ErCl3·6H2O and the supramolecular precursor cyanuric acid-melamine. We show the enhancement of g-C3N4 photoactivity after Er3+ doping by monitoring the photodegradation of Rhodamine B dye under visible light. The contribution of the upconversion agent is demonstrated by measurements using only a red laser. The Er3+ doping alters both the electronic and the chemical properties of g-C3N4. It reduces the emission intensity and lifetime, indicating the formation of new, non-radiative deactivation pathways, probably involving charge-transfer processes.
Inorganic arsenicals are environmental toxins that have been connected with neuropathies and impaired cognitive functions. To investigate whether such substances accumulate in brain astrocytes and affect their viability and glutathione metabolism, we have exposed cultured primary astrocytes to arsenite or arsenate. Both arsenicals compromised the cell viability of astrocytes in a time- and concentration-dependent manner. However, the early onset of cell toxicity in arsenite-treated astrocytes revealed the higher toxic potential of arsenite compared with arsenate. The concentrations of arsenite and arsenate that caused within 24 h half-maximal release of the cytosolic enzyme lactate dehydrogenase were around 0.3 mM and 10 mM, respectively. The cellular arsenic contents of astrocytes increased rapidly upon exposure to arsenite or arsenate and reached after 4 h of incubation almost constant steady state levels. These levels were about 3-times higher in astrocytes that had been exposed to a given concentration of arsenite compared with the respective arsenate condition. Analysis of the intracellular arsenic species revealed that almost exclusively arsenite was present in viable astrocytes that had been exposed to either arsenate or arsenite. The emerging toxicity of arsenite 4 h after exposure was accompanied by a loss in cellular total glutathione and by an increase in the cellular glutathione disulfide content. These data suggest that the high arsenite content of astrocytes that had been exposed to inorganic arsenicals causes an increase in the ratio of glutathione disulfide to glutathione which contributes to the toxic potential of these substances.
Aims: Contrast media-induced nephropathy (CIN) is associated with increased morbidity and mortality. The renal endothelin system has been associated with disease progression of various acute and chronic renal diseases. However, robust data coming from adequately powered prospective clinical studies analyzing the short and long-term impacts of the renal ET system in patients with CIN are missing so far. We thus performed a prospective study addressing this topic.
Main methods: We included 327 patients with diabetes or renal impairment undergoing coronary angiography. Blood and spot urine were collected before and 24 h after contrast media (CM) application. Patients were followed for 90 days for major clinical events like need for dialysis, unplanned rehospitalization or death.
Key findings: The concentration of ET-1 and the urinary ET-1/creatinine ratio decreased in spot urine after CM application (ET-1 concentration: 0.91 ± 1.23 pg/ml versus 0.63 ± 1.03 pg/ml, p < 0.001; ET-1/creatinine ratio: 0.14 ± 0.23 versus 0.09 ± 0.19, p < 0.001). The urinary ET-1 concentrations in patients with CIN decreased significantly more than in patients without CIN (-0.26 ± 1.42 pg/ml vs. -0.79 ± 1.69 pg/ml, p = 0.041), whereas the decrease of the urinary ET-1/creatinine ratio was not significantly different (non-CIN patients: -0.05 ± 0.30; CIN patients: -0.11 ± 0.21, p = 0.223). Neither the urinary ET-1 concentration nor the urinary ET-1/creatinine ratio was associated with clinical events (need for dialysis, rehospitalization or death) during the 90-day follow-up after contrast media exposure. However, the urinary ET-1 concentration and the urinary ET-1/creatinine ratio after CM application were higher in those patients who had a decrease of GFR of at least 25% after 90 days of follow-up.
Significance: In general the ET-1 system in the kidney seems to be down-regulated after contrast media application in patients with moderate CIN risk. Major long-term complications of CIN (need for dialysis, rehospitalization or death) are not associated with the renal ET system. (C) 2014 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
Due to increasing demands and competition for high-quality groundwater resources in many parts of the world, there is an urgent need for efficient methods that shed light on the interplay between complex natural settings and anthropogenic impacts. We therefore introduce a new approach that aims to identify and quantify the predominant processes or factors of influence that drive groundwater and lake water dynamics on a catchment scale. The approach involves a non-linear dimension reduction method called Isometric feature mapping (Isomap). This method is applied to time series of groundwater head and lake water level data from a complex geological setting in Northeastern Germany. Two factors explaining more than 95% of the observed spatial variations are identified: (1) the anthropogenic impact of a waterworks in the study area and (2) natural groundwater recharge with different degrees of dampening at the respective sites of observation. The approach enables a presumption-free assessment of the existing geological conception in the catchment, leading to an extension of that conception. Previously unknown hydraulic connections between two aquifers are identified, as well as connections between surface water bodies and groundwater. (C) 2014 Elsevier B.V. All rights reserved.
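A minimal sketch of the approach: each observation well's time series is treated as one high-dimensional point, and Isomap embeds the set of wells in a low-dimensional space whose coordinates are then interpreted against known drivers. The synthetic series below, mimicking a pumping signal and a dampened recharge signal, are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in for groundwater-head monitoring: 30 wells, each a
# mixture of two assumed drivers plus noise.
rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
pumping = np.sin(2 * np.pi * t)                        # assumed anthropogenic driver
recharge = np.cumsum(rng.standard_normal(500)) * 0.05  # assumed natural driver

wells = []
for _ in range(30):
    a, b = rng.random(2)
    wells.append(a * pumping + b * recharge + 0.05 * rng.standard_normal(500))
X = np.array(wells)                                    # 30 wells x 500 time steps

emb = Isomap(n_neighbors=5, n_components=2).fit_transform(X)
# The two embedding coordinates would then be compared against the known
# drivers, as the study does with the waterworks impact and recharge.
print(emb.shape)
```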
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of the administration of combinations of substances, of the complementary administration of substances to treat side effects, and of the use of stimulants to promote loss of body fat was common.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
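For reference, a short sketch of how the reported effect size is computed: Hedges's g is Cohen's d with a small-sample bias correction. The D-score samples below are synthetic; only the reported g = -0.76 comes from the study.

```python
import numpy as np

def hedges_g(x, y):
    """Bias-corrected standardized mean difference (Hedges's g)."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))                     # pooled standard deviation
    d = (np.mean(x) - np.mean(y)) / sp                # Cohen's d
    return d * (1 - 3 / (4 * (nx + ny) - 9))          # small-sample correction

# Illustrative only: synthetic BIAT D-scores for dopers vs. non-dopers.
rng = np.random.default_rng(5)
dopers = rng.normal(-0.5, 0.3, 26)
clean = rng.normal(-0.2, 0.3, 35)
print(hedges_g(dopers, clean))
```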
Sedimentary proxies used to reconstruct marine productivity suffer from variable preservation and are sensitive to factors other than productivity. Therefore, proxy calibration is warranted. Here we map the spatial patterns of two paleoproductivity proxies, biogenic opal and barium fluxes, from a set of core-top sediments recovered in the Subarctic North Pacific. Comparisons of the proxy data with independent estimates of primary and export production, surface water macronutrient concentrations, and biological pCO2 drawdown indicate that neither proxy shows a significant correlation with primary or export productivity for the entire region. Biogenic opal fluxes, when corrected for preservation using 230Th-normalized accumulation rates, show a good correlation with primary productivity along the volcanic arcs (τ = 0.71, p = 0.0024) and with export productivity throughout the western Subarctic North Pacific (τ = 0.71, p = 0.0107). Moderate and good correlations of biogenic barium flux with export production (τ = 0.57, p = 0.0022) and with surface water silicate concentrations (τ = 0.70, p = 0.0002) are observed for the central and eastern Subarctic North Pacific. For reasons unknown, however, no correlation is found in the western Subarctic North Pacific between biogenic barium flux and the reference data. Nonetheless, we show that barite saturation, uncertainty in the lithogenic barium corrections, and problems with the reference data sets are not responsible for the lack of a significant correlation between biogenic barium flux and the reference data. Further studies evaluating the factors controlling the variability of the biogenic constituents in the sediments are desirable in this region.
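The reported rank correlations are Kendall's τ; a minimal reproduction of the kind of statistic used, on synthetic values rather than the core-top data:

```python
import numpy as np
from scipy.stats import kendalltau

# Kendall's tau between a preservation-corrected proxy flux and an
# independent productivity estimate; both arrays are synthetic placeholders.
rng = np.random.default_rng(6)
opal_flux = rng.random(15)
export_prod = opal_flux + 0.2 * rng.standard_normal(15)
tau, p = kendalltau(opal_flux, export_prod)
print(f"tau = {tau:.2f}, p = {p:.4f}")
```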
The International Project for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies, conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study.

From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88).

From this viewpoint, one purpose of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam. Another research purpose was to carry out a video study in Vietnam to determine the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video studies in investigating mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam, drawing lessons from the video studies

Hopefully, this dissertation will be a useful reference for pre-service teachers at education universities to understand the nature of teaching and develop their teaching careers.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals for tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest that the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.
Two opposing viewpoints have been advanced to account for morphological productivity, one according to which some knowledge is couched in the form of operations over variables, and another in which morphological generalization is primarily determined by similarity. We investigated this controversy by examining the generalization of Portuguese verb stems, which fall into one of three conjugation classes. In Study 1, an elicited production task revealed that the generalization of 2nd and 3rd conjugation stems is influenced by the degree of phonological similarity between novel roots and existing verbs, whereas the 1st conjugation generalizes beyond similarity. In Study 2, we directly contrasted two distinct computational implementations of conjugation class assignment in how well they matched the human data: a similarity-driven model that captures phonological similarities, and a dual-mechanism model that implements an explicit distinction between context-free and similarity-based generalizations. The similarity-driven model consistently underestimated 1st conjugation responses and overestimated proportions of 2nd and 3rd conjugation responses, especially for novel verbs that are highly similar to existing verbs of those classes. In contrast, the expected proportions produced by the dual-mechanism model were statistically indistinguishable from human responses. We conclude that both context-free and context-sensitive processes determine the generalization of conjugations in Portuguese, and that similarity-based algorithms of morphological acquisition are insufficient to exhibit default-like generalization. (C) 2014 Elsevier Inc. All rights reserved.
Varietätenlinguistik
(2014)
We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle V with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, V), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.
Question: Does eutrophication drive vegetation change in pine forests on nutrient-deficient sites and thus lead to the homogenization of understorey species composition?
Location: Forest area (1600 ha) in the Lower Spreewald, Brandenburg, Germany.
Methods: Resurvey of 77 semi-permanent plots after 45 yr, including vascular plants, bryophytes and ground lichens. We applied multidimensional ordination of species composition, dissimilarity indices, mean Ellenberg indicator values and the concept of winner/loser species to identify vegetation change between years. Differential responses along a gradient of nutrient availability were analysed on the basis of initial vegetation type, reflecting topsoil N availability of plots.
Results: Species composition changed strongly and overall shifted towards higher N and slightly lower light availability. Differences in vegetation change were related to initial vegetation type, with the strongest compositional changes in the oligotrophic forest type, but the strongest increase of nitrophilous species in the mesotrophic forest type. Despite an overall increase in species number, species composition was homogenized between study years due to the loss of species (mainly ground lichens) on the most oligotrophic sites.
Conclusions: The response to N enrichment is confounded by canopy closure on the N-richest sites and probably by water limitation on the N-poorest sites. The relative importance of atmospheric N deposition in the eutrophication effect is difficult to disentangle from natural humus accumulation after historical litter raking. However, the profound differences in species composition between study years across all forest types suggest that atmospheric N deposition contributes to the eutrophication, which drives understorey vegetation change and biotic homogenization in Central European Scots pine forests on nutrient-deficient sites.
We assessed tropical montane cloud forest (TMCF) sensitivity to natural disturbance by drought, fire, and dieback with a 7300-year-long paleorecord. We analyzed pollen assemblages, charcoal accumulation rates, and higher plant biomarker compounds (average chain length [ACL] of n-alkanes) in sediments from Wai'anapanapa, a small lake near the upper forest limit and the mean trade wind inversion (TWI) in Hawai`i. The paleorecord of ACL suggests increased drought frequency and a lower TWI elevation from 2555-1323 cal yr B.P. and 606-334 cal yr B.P. Charcoal began to accumulate and a novel fire regime was initiated ca. 880 cal yr B.P., followed by a decreased fire return interval at ca. 550 cal yr B.P. Diebacks occurred at 2931, 2161, 1162, and 306 cal yr B.P., and two of these were independent of drought or fire. Pollen assemblages indicate that on average species composition changed only 2.8% per decade. These dynamics, though slight, were significantly associated with disturbance. The direction of species composition change varied with disturbance type. Drought was associated with significantly more vines and lianas; fire was associated with an increase in the tree fern Sadleria and indicators of open, disturbed landscapes at the expense of epiphytic ferns; whereas stand-scale dieback was associated with an increase in the tree fern Cibotium. Though this cloud forest was dynamic in response to past disturbance, it has recovered, suggesting a resilient TMCF with no evidence of state change in vegetation type (e.g., grassland or shrubland).
Crustal earthquake swarms are an expression of intensive cracking and rock damage over periods of days, weeks or months in a small source region in the crust. They are caused by longer-lasting stress changes in the source region. Often, the localized stressing of the crust is associated with fluid or gas migration, possibly in combination with pre-existing zones of weakness. However, verifying and quantifying localized fluid movement at depth remains difficult, since the affected area is small and geophysical prospecting methods often cannot reach the required resolution.
We apply a simple and robust method to estimate the velocity ratio between compressional (P) and shear (S) waves (the $v_P/v_S$ ratio) in the source region of an earthquake swarm. The $v_P/v_S$ ratio may be unusually small if the swarm is related to gas in a porous or fractured rock. The method uses arrival-time differences between P and S waves observed at surface seismic stations, and the associated double differences between pairs of earthquakes (a schematic implementation is sketched after this abstract). An advantage is that earthquake locations are not required, and the method seems less dependent on unknown velocity variations in the crust outside the source region. It is thus suited for monitoring purposes.
The applications comprise three natural, mid-crustal (8-10 km) earthquake swarms between 1997 and 2008 from the NW-Bohemia swarm region. We resolve a strong temporal decrease of $v_P/v_S$ before and during the main activity of the swarms, and a recovery of $v_P/v_S$ to background levels at the end of the swarms. The anomalies are interpreted in terms of the Biot-Gassmann equations, assuming the presence of oversaturated fluids degassing during the initial phase of the swarm activity.
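The double-difference construction can be made concrete. Assuming a constant ratio $r = v_P/v_S$ along the ray paths (the classical Wadati assumption), the S arrival of event $i$ at station $k$ is $t^S_{ik} = t^0_i + r\,(t^P_{ik} - t^0_i)$; differencing two events at a common station and then demeaning over stations cancels both unknown origin times, leaving a one-parameter regression for $r$. The sketch below illustrates this algebra on synthetic picks; it is our reconstruction, not the authors' code:

```python
# Hypothetical double-difference Wadati sketch: estimate vP/vS for one
# event pair from P and S arrival times at common stations. Synthetic
# data; this illustrates the algebra, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)
r_true = 1.55                      # assumed vP/vS in the source region
n_sta = 12
t0 = np.array([3.2, 7.9])          # unknown origin times of the two events
tp = t0[:, None] + rng.uniform(1.0, 4.0, size=(2, n_sta))  # P arrivals
ts = t0[:, None] + r_true * (tp - t0[:, None])             # S arrivals
ts += rng.normal(0.0, 0.01, size=ts.shape)                 # picking noise

# Station-wise differences between the two events ...
dtp = tp[0] - tp[1]
dts = ts[0] - ts[1]
# ... then demean over stations (the "double difference"), which cancels
# the unknown origin-time difference t0[0] - t0[1].
ddp = dtp - dtp.mean()
dds = dts - dts.mean()

# One-parameter least squares through the origin: dds ~ r * ddp
r_est = (ddp @ dds) / (ddp @ ddp)
print(f"estimated vP/vS = {r_est:.3f} (true {r_true})")
```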
Vertical radar profiling (VRP) is a single-borehole geophysical technique in which the receiver antenna is located within a borehole and the transmitter antenna is placed at one or various offsets from the borehole. Today, VRP surveying is primarily used to derive 1D velocity models by inverting the arrival times of direct waves. Using field data collected at a well-constrained test site in Germany, we evaluated a VRP workflow relying on the analysis of direct-arrival traveltimes and amplitudes as well as on imaging reflection events. To invert our VRP traveltime data, we used a global inversion strategy that yields an ensemble of acceptable velocity models and thus allowed us to appraise uncertainties in the estimated velocities as well as in porosity models derived via petrophysical translations. In addition to traveltime inversion, the analysis of direct-wave amplitudes and reflection events provided further valuable information regarding subsurface properties and architecture. The VRP amplitude preprocessing and inversion procedures were adapted from ray-based crosshole ground-penetrating radar (GPR) attenuation tomography and resulted in an attenuation model, which can be used to estimate variations in electrical resistivity. Our VRP reflection imaging approach relied on corridor stacking, a well-established processing sequence in vertical seismic profiling. The resulting reflection image outlines bounding layers and can be directly compared to surface-based GPR reflection profiling. The results of the combined analysis of VRP traveltimes, amplitudes, and reflections were consistent with independent core and borehole logs as well as with GPR reflection profiles, which enabled us to derive a detailed hydro-stratigraphic model as needed, for example, to understand and model groundwater flow and transport.
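The "petrophysical translation" from velocities to porosity can take several forms; one common choice for water-saturated sediments is the complex refractive index model (CRIM), which we assume here purely for illustration (the abstract does not specify the translation used). A toy sketch from direct-arrival traveltimes to interval velocities to porosity:

```python
# Hypothetical sketch: turn VRP direct-arrival traveltimes into interval
# velocities, then into porosity with a CRIM mixing model (assumed here;
# the paper's petrophysical translation may differ). Synthetic numbers.
import numpy as np

c = 0.3                          # speed of light in m/ns
kappa_w, kappa_m = 81.0, 5.0     # rel. permittivity of water and matrix

depth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # receiver depths in m
t = np.array([16.0, 33.0, 51.0, 68.0, 84.0])  # direct arrivals in ns

# interval velocities between consecutive receiver depths (vertical-ray
# approximation, valid for a small transmitter offset)
v_int = np.diff(depth) / np.diff(t)            # m/ns

kappa_bulk = (c / v_int) ** 2                  # bulk rel. permittivity
# CRIM for fully saturated sediments:
# sqrt(kappa_bulk) = (1 - phi) * sqrt(kappa_m) + phi * sqrt(kappa_w)
phi = (np.sqrt(kappa_bulk) - np.sqrt(kappa_m)) / (np.sqrt(kappa_w) - np.sqrt(kappa_m))
for z1, z2, v, p in zip(depth[:-1], depth[1:], v_int, phi):
    print(f"{z1:.0f}-{z2:.0f} m: v = {v:.3f} m/ns, porosity = {p:.2f}")
```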
The Galactic center is an interesting region for high-energy (0.1-100 GeV) and very-high-energy (E > 100 GeV) gamma-ray observations. Potential sources of GeV/TeV gamma-ray emission have been suggested, e.g., the accretion of matter onto the supermassive black hole, cosmic rays from a nearby supernova remnant (e.g., Sgr A East), particle acceleration in a plerion, or the annihilation of dark matter particles. The Galactic center has been detected by EGRET and by Fermi/LAT in the MeV/GeV energy band. At TeV energies, the Galactic center was detected with moderate significance by the CANGAROO and Whipple 10 m telescopes and with high significance by H.E.S.S., MAGIC, and VERITAS. We present the results from three years of VERITAS observations conducted at large zenith angles, resulting in a detection of the Galactic center at a level of 18 standard deviations at energies above ~2.5 TeV. The energy spectrum is derived and is found to be compatible with the hadronic, leptonic, and hybrid emission models discussed in the literature. Future, more detailed measurements of the high-energy cutoff and better constraints on the high-energy flux variability will help to refine and/or disentangle the individual models.
Dieter Adelmann's working library is held in the Universitätsbibliothek Potsdam and is catalogued in this volume; his papers and the finding aid are held in the Universitätsarchiv Potsdam. Dieter Adelmann was born on 1 February 1936 in Eisenach, Thuringia. He studied philosophy, German studies and sociology at the Freie Universität Berlin and at the Universität Heidelberg, where in 1968 he received his doctorate under Dieter Henrich and Hans-Georg Gadamer with the thesis "Einheit des Bewusstseins als Grundlage der Philosophie Hermann Cohens". From 1968 to 1970 Adelmann was head of the "Collegium Academicum" of the Universität Heidelberg; from 1970 to 1974 he was regional executive director of the SPD in Baden-Württemberg (with responsibility for political planning) and at times also constituency assistant to the SPD Bundestag member Horst Ehmke. Adelmann then worked as a publicist together with the graphic artist and current president of the Akademie der Künste in Berlin, Klaus Staeck, before being employed at the Vorwärts in the parties and programmes section from July 1977 up to and including September 1979. After leaving the Vorwärts, Adelmann worked freelance in Bonn, among other things as a journalist. In 1995 he was a research associate in the edition of Hermann Cohen's works at the Moses-Mendelssohn-Zentrum and at the Chair of Interior Design (Lehrstuhl für Innenraumgestaltung) at the Technische Universität Dresden. After his work in Potsdam ended, Adelmann remained a freelance philosopher and Cohen scholar until his death on 30 September 2008.
Using density functional theory and ab initio molecular dynamics with electronic friction (AIMDEF), we study the adsorption and dissipative vibrational dynamics of hydrogen atoms chemisorbed on free-standing lead films of increasing thickness. Lead films are known for the oscillatory behaviour of certain properties with increasing thickness; e.g., energy and electron spill-out change in a discontinuous manner due to quantum size effects [G. Materzanini, P. Saalfrank, and P. J. D. Lindan, Phys. Rev. B 63, 235405 (2001)]. Here, we demonstrate that oscillatory features also arise for hydrogen chemisorbed on lead films. Besides stationary properties of the adsorbate, we concentrate on the finite lifetimes of H-surface vibrations. As shown by AIMDEF, damping via the coupling of the vibration to electron-hole pairs clearly dominates over the vibration-phonon channel, in particular for high-frequency modes. Vibrational relaxation times are a characteristic function of layer thickness, owing to the oscillating behaviour of the embedding surface electronic density. Implications of AIMDEF for frictional many-atom dynamics and for physisorbed species are also discussed.
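As a rough orientation (our own damped-oscillator picture, not part of the paper's first-principles treatment): a mode of effective mass m subject to a linear electronic friction coefficient eta loses energy as exp(-eta*t/m), so the vibrational energy lifetime is tau = m/eta. The following sketch, with invented numbers, checks this relation numerically:

```python
# Hypothetical sketch relating an electronic friction coefficient to a
# vibrational lifetime in the damped-oscillator picture (our own
# illustration; AIMDEF evaluates the friction from first principles,
# which is not reproduced here). All parameter values are assumed.
import numpy as np

amu = 1.66053906660e-27          # kg
m = 1.008 * amu                  # effective mass of an H-stretch mode
omega = 2 * np.pi * 5.0e13       # assumed mode frequency, rad/s (~1667 cm^-1)
eta = 1.0e12 * m                 # assumed friction coefficient, kg/s

tau = m / eta                    # energy lifetime of the damped mode
print(f"predicted vibrational lifetime: {tau*1e12:.2f} ps")

# quick numerical check: integrate m*x'' = -m*omega**2*x - eta*x'
dt, n = 1e-17, 200_000
x, v = 1e-11, 0.0                # initial displacement 0.1 Angstrom
energy0 = 0.5 * m * omega**2 * x**2
for _ in range(n):               # semi-implicit Euler integration
    a = -omega**2 * x - (eta / m) * v
    v += a * dt
    x += v * dt
energy = 0.5 * m * v**2 + 0.5 * m * omega**2 * x**2
print(f"energy after {n*dt*1e12:.2f} ps: {energy/energy0:.3f} of initial "
      f"(exp(-t/tau) = {np.exp(-n*dt/tau):.3f})")
```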
The project "Medienbildung in der LehrerInnenbildung" (media education in teacher training) aims to promote the sustainable use of digital media in the teacher-training degree programmes of the Universität Potsdam. Using music teacher training (Chair of Music Education and Music Didactics) as an example, a concept for the use of video podcasts during school placement phases was developed in order to support students in planning their lessons. The subject-specific implementation of this e-learning approach and the opportunities and challenges associated with it are presented, underlining the importance of cooperation between subject didactics and media didactics in arriving at a needs-oriented solution that is workable in practice.
Vielheit statt Einheit (2014)