Internalin J (InlJ) belongs to the class of bacterial, cysteine-containing leucine-rich repeat (LRR) proteins. The internalins are mostly invasion-associated proteins of Listeria. The LRR domain of InlJ is built from 15 regularly recurring, highly conserved sequence units (repeats of 21 amino acids). An interesting detail of this internalin is the highly conserved cysteine within the repeats, which gives rise to an unusual arrangement of 12 cysteines in a stack. The abundance of cysteines in InlJ is exceptional for an extracellular protein of L. monocytogenes, making the question of their function all the more pressing. Compared with the ubiquitous occurrence of so-called repeat proteins in nature, studies of their stability and folding are underrepresented. The central property of repeat proteins is their modular architecture, which is characterized by a simple topology and based on short-range interactions. This topology makes repeat proteins ideal model proteins for separating and assigning the interactions relevant to stability. In the present work, the folding and unfolding of InlJ were comprehensively characterized and the relevance of the cysteines examined more closely. Spectroscopic characterization of InlJ showed that its folding state is readily accessible by fluorescence spectroscopy via two tryptophans at the N- and C-termini. The thermodynamic stability was determined by fluorescence-detected, guanidinium chloride-induced equilibrium experiments. To capture the kinetic properties of InlJ, both the folding and the unfolding reactions were investigated spectroscopically. Identifying the productive folding reaction was possible only by applying the reverse double-jump experiment. The data were analyzed according to the two-state model, in which folding follows an "all-or-none" principle.
The validity of this assumption was confirmed by the kinetic characterization. A high free energy of stabilization was found both in the equilibrium experiments and in the kinetically derived data. The high stability of InlJ is accompanied by high cooperativity, and the kinetic data show that this cooperativity stems mainly from the folding reaction. The Tanford value of 0.93 implies that most of the change in solvent-accessible surface area has already occurred before the transition state is formed. Direct structural information about the transition state was obtained from mutational studies. For this purpose, 12 of the 14 cysteines were each replaced by an alanine. Repeats 1 to 11 of InlJ each contain one cysteine, and their arrangement forms a ladder. Their substitutions have a comparably destabilizing effect on InlJ of 4.8 kJ/mol on average. The slowing of folding indicates that the interactions of repeats 5 to 11 are already fully formed in the transition state; InlJ thus folds via a central folding nucleus. In this doctoral work, high stability and strongly cooperative behavior were observed for the extracellular protein InlJ. These findings could make important contributions to the design of artificial repeat proteins, whose use is steadily expanding.
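The two-state analysis described above can be sketched numerically: under the all-or-none assumption, the folded fraction at each denaturant concentration yields an equilibrium constant, and linear extrapolation of ΔG to zero denaturant gives the stability and the m-value. The numbers below are synthetic, not InlJ data.

```python
import numpy as np

def delta_g_unfolding(conc, frac_folded):
    """Two-state (all-or-none) analysis by linear extrapolation:
    dG([D]) = dG_H2O - m*[D]. Baselines are assumed already removed,
    so the folded fraction is given directly."""
    f = np.asarray(frac_folded)
    K = (1 - f) / f                        # equilibrium constant [U]/[N]
    dG = -8.314e-3 * 298.15 * np.log(K)    # kJ/mol at 25 degrees C
    slope, dG_H2O = np.polyfit(conc, dG, 1)
    return dG_H2O, -slope                  # stability in water, m-value

# Synthetic denaturation data around a 2 M GdmCl transition
conc = np.array([1.5, 1.8, 2.0, 2.2, 2.5])
R_T = 8.314e-3 * 298.15
dG_true, m_true = 20.0, 10.0               # kJ/mol and kJ/(mol*M), assumed
K = np.exp(-(dG_true - m_true * conc) / R_T)
frac = 1 / (1 + K)

dG_H2O, m_value = delta_g_unfolding(conc, frac)
```

Because the synthetic data are exactly two-state, the fit recovers the input stability and m-value.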
Recent astronomical data strongly suggest that a significant part of the dark matter content of the Local Group and the Virgo Supercluster is not incorporated into the galaxy halos and instead forms diffuse components of these galaxy groupings. A portion of the particles from these components may penetrate the Milky Way and make an extragalactic contribution to the total dark matter content of our Galaxy. We find that the particles of the diffuse component of the Local Group are apt to contribute about 12% of the total dark matter density near Earth. The particles of the extragalactic dark matter stand out because of their high speed (~600 km s⁻¹), i.e., they are much faster than the galactic dark matter. In addition, their speed distribution is very narrow (~20 km s⁻¹). The particles have an isotropic velocity distribution (perhaps in contrast to the galactic dark matter). The extragalactic dark matter should provide a significant contribution to the direct detection signal. If the detector is sensitive only to fast particles (v > 450 km s⁻¹), this signal may even dominate. The density of other possible types of extragalactic dark matter (for instance, the diffuse component of the Virgo Supercluster) should be relatively small, comparable with the average dark matter density of the universe. However, these particles can generate anomalously high-energy collisions in direct dark matter detectors.
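The claim that fast extragalactic particles could dominate a high-threshold signal can be illustrated with a toy velocity-distribution calculation. The 600 km s⁻¹ mean, 20 km s⁻¹ width, and 12% density share are taken from the abstract; the Maxwellian galactic halo with v0 = 220 km s⁻¹ is a standard assumption added here, not stated in the text.

```python
import numpy as np

def frac_above(v_thresh, speeds):
    """Fraction of particles faster than the detector threshold."""
    return np.mean(speeds >= v_thresh)

rng = np.random.default_rng(0)
n = 200_000
# Galactic halo: isotropic Maxwellian, most probable speed 220 km/s (assumed)
v_gal = np.linalg.norm(rng.normal(0.0, 220 / np.sqrt(2), (n, 3)), axis=1)
# Extragalactic component: narrow distribution near 600 km/s (from the text)
v_ext = rng.normal(600.0, 20.0, n)

f_gal = frac_above(450, v_gal)
f_ext = frac_above(450, v_ext)
# With a ~12% extragalactic density share, its share of the v > 450 km/s signal:
share = 0.12 * f_ext / (0.12 * f_ext + 0.88 * f_gal)
```

Essentially all extragalactic particles pass the 450 km s⁻¹ cut, while only a few percent of the Maxwellian halo does, so the minority component dominates the high-speed signal.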
Climatic variations and human activity, now and increasingly in the future, cause land cover changes and introduce perturbations in the terrestrial carbon reservoirs of vegetation, soil and detritus. Optical remote sensing, and in particular Imaging Spectroscopy, has shown the potential to quantify land surface parameters over large areas, which is accomplished by taking advantage of the characteristic interactions of incident radiation with the physico-chemical properties of a material. The objective of this thesis is to quantify key soil parameters, including soil organic carbon, using field and Imaging Spectroscopy. Organic carbon, iron oxides and clay content are selected for analysis to provide indicators of ecosystem function in relation to land degradation, and additionally to facilitate a quantification of carbon inventories in semiarid soils. The semiarid Albany Thicket Biome in the Eastern Cape Province of South Africa is chosen as study site. It provides a regional example of a semiarid ecosystem that currently undergoes land changes due to unadapted management practices and will furthermore face climate-change-induced land changes in the future. The thesis is divided into three methodological steps. Based on reflectance spectra measured in the field and chemically determined constituents of the upper topsoil, physically based models are developed to quantify soil organic carbon, iron oxides and clay content. Taking account of the benefits and limitations of existing methods, the approach is based on the direct application of known diagnostic spectral features and their combination with multivariate statistical approaches. It benefits from the collinearity of several diagnostic features and a number of their properties to reduce signal disturbances caused by other spectral features. In a following step, the acquired hyperspectral image data are prepared for an analysis of soil constituents.
The data show a large spatial heterogeneity that is caused by the patchiness of the natural vegetation in the study area, which is inherent to most semiarid landscapes. Spectral mixture analysis is performed and used to deconvolve non-homogeneous pixels into their constituent components. For soil-dominated pixels, the subpixel information is used to remove the spectral influence of vegetation and to approximate the pure spectral signature of the soil. This step is integral when working in natural, non-agricultural areas where pure bare-soil pixels are rare. It is identified as the largest benefit of the multi-stage methodology, providing the basis for a successful and unbiased prediction of soil constituents from hyperspectral imagery. With the proposed approach it is possible (1) to significantly increase the spatial extent of derived information on soil constituents to areas with about 40% vegetation coverage and (2) to reduce the influence of materials such as vegetation on the quantification of soil constituents to a minimum. Subsequently, soil parameter quantities are predicted by applying the feature-based soil prediction models to the maps of locally approximated soil signatures. Thematic maps showing the spatial distribution of the three considered soil parameters in October 2009 are produced for the Albany Thicket Biome of South Africa. The maps are evaluated for their potential to detect erosion-affected areas as effects of land changes and to identify degradation hot spots in order to support local restoration efforts. A regional validation, carried out using available ground truth sites, suggests remaining factors disturbing the correlation of spectral characteristics and chemical soil constituents. The approach is developed for semiarid areas in general and not adapted to specific conditions in the study area.
All processing steps of the developed methodology are implemented in software modules, where crucial steps of the workflow are fully automated. The transferability of the methodology is shown for simulated data of the future EnMAP hyperspectral satellite. Soil parameters are successfully predicted from these data despite intense spectral mixing within the lower-spatial-resolution EnMAP pixels. This study shows an innovative approach to using Imaging Spectroscopy for mapping key soil constituents, including soil organic carbon, over large areas in a non-agricultural ecosystem and under consideration of partial vegetation coverage. It can contribute to a better assessment of soil constituents that describe ecosystem processes relevant to detecting and monitoring land changes. The maps further provide an assessment of the current carbon inventory in soils, valuable for carbon balances and carbon mitigation products.
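The spectral mixture analysis step above can be illustrated with a minimal linear-unmixing sketch: a mixed pixel is decomposed into endmember fractions by least squares, and the vegetation contribution is removed to approximate the pure soil signature. The endmember spectra and the linear mixing model here are purely illustrative, not the thesis's actual data.

```python
import numpy as np

# Hypothetical 4-band endmember spectra (reflectance), illustrative only
soil = np.array([0.30, 0.35, 0.40, 0.42])
vegetation = np.array([0.05, 0.08, 0.45, 0.50])
E = np.column_stack([soil, vegetation])   # endmember matrix (bands x endmembers)

# A mixed pixel: 60 % soil, 40 % vegetation under the linear mixing assumption
pixel = 0.6 * soil + 0.4 * vegetation

# Least-squares unmixing for the fractional abundances
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# Approximate the pure soil signature by removing the vegetation part
soil_approx = (pixel - fractions[1] * vegetation) / fractions[0]
```

In practice the abundances would additionally be constrained to be non-negative and sum to one; the unconstrained solve suffices to show the deconvolution idea.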
1. The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. 2. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. 3. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. 4. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. 5. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions.
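A minimal sketch of the kind of colony-dynamics building block the review discusses: hive bees eclose at a fixed rate and are recruited to foraging, with recruitment slowed by social inhibition from existing foragers. The structure loosely follows published hive/forager models; all parameter values are illustrative, not fitted.

```python
# Minimal hive-bee / forager dynamics (illustrative parameters).
def step(hive, foragers, dt=1.0,
         eclosion=500.0, recruit=0.25, mortality=0.15, social=0.75):
    """One Euler step: eclosion adds hive bees, recruitment moves hive
    bees to foraging (reduced by social inhibition from foragers),
    and foragers die at a fixed per-capita rate."""
    total = hive + foragers
    r = recruit - social * foragers / total if total > 0 else 0.0
    r = max(r, 0.0)
    d_hive = eclosion - r * hive
    d_forag = r * hive - mortality * foragers
    return hive + dt * d_hive, foragers + dt * d_forag

hive, foragers = 10_000.0, 2_000.0
for _ in range(200):
    hive, foragers = step(hive, foragers)
```

With these parameters the colony settles into a stable balance of hive bees and foragers; stressors could be explored by, e.g., raising forager mortality and watching the equilibrium shift.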
Beta diversity is a conceptual link between diversity at local and regional scales. A variety of methodologies for quantifying this and related phenomena have been applied; among them, measures of pairwise (dis)similarity of sites are particularly popular. Undersampling, i.e. not recording all taxa present at a site, is a common situation in ecological data. Bias in many metrics related to beta diversity must therefore be expected, but only few studies have explicitly investigated the properties of various measures under undersampling conditions. On the basis of an empirical data set, representing near-complete local inventories of the Lepidoptera of an isolated Pacific island, as well as simulated communities with varying properties, we mimicked different levels of undersampling. We used 14 different approaches to quantify beta diversity, among them dataset-wide multiplicative partitioning (i.e. 'true' beta diversity) and pairwise site × site dissimilarities. We compared their values from incomplete samples to true results from the full data. We used these comparisons to quantify undersampling bias, and we calculated correlations of the dissimilarity measures of undersampled data with those of complete data. Almost all tested metrics showed bias and low correlations under moderate to severe undersampling conditions (as well as deteriorating precision, i.e. large chance effects on results). Measures that used only species incidence were very sensitive to undersampling, while abundance-based metrics with high dependency on the distribution of the most common taxa were particularly robust. Simulated data showed sensitivity of results to the abundance distribution, confirming that data sets of high evenness and/or the application of metrics that are strongly affected by rare species are particularly sensitive to undersampling.
The class of beta measure to be used should depend on the research question being asked as different metrics can lead to quite different conclusions even without undersampling effects. For each class of metric, there is a trade-off between robustness to undersampling and sensitivity to rare species. In consequence, using incidence-based metrics carries a particular risk of false conclusions when undersampled data are involved. Developing bias corrections for such metrics would be desirable.
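The undersampling experiment can be mimicked in a few lines: compute an incidence-based pairwise dissimilarity (Jaccard, as one example of the class of metrics discussed) from complete inventories, then from a small random sample of individuals that misses rare species. Communities and sample sizes are illustrative.

```python
import random

def jaccard_dissimilarity(a, b):
    """1 - |A intersect B| / |A union B| on species incidence sets."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

def undersample(individuals, n, seed=0):
    """Record only the species seen among n randomly drawn individuals."""
    rng = random.Random(seed)
    return set(rng.sample(individuals, n))

# Two sites as lists of individuals (skewed abundances, illustrative):
# common species are shared, rare species are site-specific.
site1 = ["sp1"] * 50 + ["sp2"] * 30 + ["sp3"] * 5 + ["sp4"] * 1
site2 = ["sp1"] * 40 + ["sp2"] * 20 + ["sp5"] * 5 + ["sp6"] * 1

true_d = jaccard_dissimilarity(set(site1), set(site2))
sub_d = jaccard_dissimilarity(undersample(site1, 10), undersample(site2, 10))
```

With complete inventories the Jaccard dissimilarity is 2/3 (two shared of six species); small samples typically miss the rare site-specific species, so the undersampled value is biased, illustrating the incidence-metric sensitivity reported above.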
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often SoS are combinations of software systems and physical systems, so a failure in the software can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and relies essentially on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that can cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e. the adding and removing of components and services, is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Maintaining and increasing walking speed in old age is clinically important because this activity of daily living predicts functional and clinical state. We reviewed evidence for the biomechanical mechanisms of how strength and power training increase gait speed in old adults. A systematic search yielded only four studies that reported changes in selected gait biomechanical variables after an intervention. A secondary analysis of 20 studies revealed an association of r² = 0.21 between the 22% and 12% increases, respectively, in quadriceps strength and gait velocity in 815 individuals of age 72. In 6 studies, there was a correlation of r² = 0.16 between the 19% and 9% gains in plantarflexion strength and gait speed in 240 old volunteers of age 75. In 8 studies, there was zero association between the 35% and 13% gains in leg mechanical power and gait speed in 150 old adults of age 73. To increase the efficacy of intervention studies designed to improve gait speed and other critical mobility functions in old adults, there is a need for a paradigm shift from conventional (clinical) outcome assessments to more sophisticated biomechanical analyses that examine joint kinematics, kinetics, energetics, muscle-tendon function, and musculoskeletal modeling before and after interventions.
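The reported associations are squared correlation coefficients between per-study gains; an r² of this kind can be computed as below. The per-study percentage gains here are hypothetical, not the reviewed data.

```python
import numpy as np

# Hypothetical per-study percentage gains (illustrative only)
strength_gain = np.array([15.0, 20.0, 22.0, 25.0, 30.0, 18.0, 24.0, 21.0])
gait_gain     = np.array([ 8.0, 10.0, 14.0, 12.0, 16.0,  7.0, 11.0, 13.0])

r = np.corrcoef(strength_gain, gait_gain)[0, 1]
r_squared = r ** 2  # proportion of variance in gait gains shared with strength gains
```

An r² of 0.21, as reported for quadriceps strength, means only about a fifth of the variance in gait-speed gains is shared with strength gains, which motivates the call for deeper biomechanical analyses.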
The heat transport mediated by near-field interactions in networks of plasmonic nanostructures is shown to be analogous to a generalized random walk process. The existence of superdiffusive regimes is demonstrated both in linear ordered chains and in three-dimensional random networks by analyzing the asymptotic behavior of the corresponding probability distribution function. We show that the spread of heat in these networks is described by a type of Lévy flight. The presence of such anomalous heat-transport regimes in plasmonic networks opens the way to the design of a new generation of composite materials able to transport heat faster than the normal diffusion process in solids.
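The superdiffusive signature can be demonstrated with a generalized random walk: replacing Gaussian steps by heavy-tailed (Pareto) step lengths raises the scaling exponent of the typical squared displacement above 1. This is a generic Lévy-flight illustration, not the paper's heat-transport model.

```python
import numpy as np

def displacement_exponent(step_sampler, n_walkers=2000, n_steps=400, seed=1):
    """Estimate a in D(t) ~ t^a, where D(t) is the median squared
    displacement of an ensemble of 1-D walkers (the median is robust
    against the extreme jumps of heavy-tailed flights)."""
    rng = np.random.default_rng(seed)
    steps = step_sampler(rng, (n_walkers, n_steps))
    pos = np.cumsum(steps, axis=1)
    disp = np.median(pos**2, axis=0)
    t = np.arange(1, n_steps + 1)
    a, _ = np.polyfit(np.log(t[10:]), np.log(disp[10:]), 1)
    return a

# Normal diffusion: Gaussian steps -> exponent close to 1
a_normal = displacement_exponent(lambda rng, s: rng.normal(size=s))
# Levy-like flight: Pareto-tailed step lengths (tail index 1.5,
# infinite variance) with random sign -> superdiffusive exponent
a_levy = displacement_exponent(
    lambda rng, s: rng.choice([-1, 1], size=s) * rng.pareto(1.5, size=s))
```

For a symmetric walk with tail index 1.5 the typical displacement scales as t^(1/1.5), so the squared-displacement exponent approaches 4/3, clearly above the diffusive value of 1.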
Haberlea rhodopensis is a resurrection species with extreme resistance to drought stress and desiccation, but also with the ability to withstand low temperatures and freezing stress. In order to identify biochemical strategies which contribute to Haberlea's remarkable stress tolerance, the metabolic reconfiguration of H. rhodopensis during low temperature (4 °C) and subsequent return to optimal temperature (21 °C) was investigated and compared with that of the stress-tolerant Thellungiella halophila and the stress-sensitive Arabidopsis thaliana. Metabolic analysis by GC-MS revealed intrinsic differences in the metabolite levels of the three species even at 21 °C. H. rhodopensis had significantly more raffinose, melibiose, trehalose, rhamnose, myo-inositol, sorbitol, galactinol, erythronate, threonate, 2-oxoglutarate, citrate, and glycerol than the other two species. A. thaliana had the highest levels of putrescine and fumarate, while T. halophila had much higher levels of several amino acids, including alanine, asparagine, beta-alanine, histidine, isoleucine, phenylalanine, serine, threonine, and valine. In addition, the three species responded differently to the low temperature treatment and the subsequent recovery, especially with regard to sugar metabolism. Chilling induced accumulation of maltose in H. rhodopensis and raffinose in A. thaliana, but the raffinose levels in low-temperature-exposed Arabidopsis were still much lower than those in unstressed Haberlea. While all species accumulated sucrose during chilling, that accumulation was transient in H. rhodopensis and A. thaliana but sustained in T. halophila after the return to optimal temperature. Thus, Haberlea's metabolome appeared primed for chilling stress, but low temperature acclimation induced additional stress-protective mechanisms.
A diverse array of sugars, organic acids, and polyols constitutes Haberlea's main metabolic defence against chilling, while accumulation of amino acids and amino acid derivatives contributes to the low temperature acclimation in Arabidopsis and Thellungiella. Collectively, these results show inherent differences in the metabolomes at ambient temperature and in the strategies with which the three species respond to low temperature.
The genetic code is degenerate; thus, protein evolution does not uniquely determine the coding sequence. One of the puzzles in evolutionary genetics is therefore to uncover evolutionary driving forces that result in specific codon choice. In many bacteria, the first 5-10 codons of protein-coding genes are often codons that are less frequently used in the rest of the genome, an effect that has been argued to arise from selection for slowed early elongation to reduce ribosome traffic jams. However, genome analysis across many species has demonstrated that the region shows reduced mRNA folding consistent with pressure for efficient translation initiation. This raises the possibility that unusual codon usage is a side effect of selection for reduced mRNA structure. Here we discriminate between these two competing hypotheses, and show that in bacteria selection favours codons that reduce mRNA folding around the translation start, regardless of whether these codons are frequent or rare. Experiments confirm that primarily mRNA structure, and not codon usage, at the beginning of genes determines the translation rate.
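A full analysis would compute mRNA folding energies near the start codon (e.g. with a folding predictor such as RNAfold). As a self-contained stand-in, the sketch below uses GC content of the first 30 nt as a crude proxy for local folding strength (GC pairs stack more stably) and compares two hypothetical synonymous variants of the same N-terminal peptide.

```python
def gc_fraction(seq):
    """Fraction of G and C nucleotides in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def start_region_gc(cds, n=30):
    """GC content of the first n nucleotides of a coding sequence,
    used here as a crude proxy for mRNA folding strength near the
    translation start (real analyses use folding-energy predictors)."""
    return gc_fraction(cds[:n])

# Two hypothetical synonymous variants encoding the same peptide
# (M K R L L E E R K E) with different codon choices:
variant_au_rich = "ATGAAAAGATTATTAGAAGAAAGAAAAGAA"  # AU-rich codons
variant_gc_rich = "ATGAAGCGCCTGCTGGAGGAGCGCAAGGAG"  # GC-rich codons

gc_low = start_region_gc(variant_au_rich)
gc_high = start_region_gc(variant_gc_rich)
```

Under the hypothesis supported by the abstract, the AU-rich variant (weaker structure near the start) would be favored by selection regardless of whether its codons are rare genome-wide.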
We explore the effect of cross-diffusion on pattern formation in the two-variable Oregonator model of the Belousov-Zhabotinsky reaction. For high negative cross-diffusion of the activator (the activator being attracted towards regions of increased inhibitor concentration) we find, depending on the values of the parameters, Turing patterns, standing waves, oscillatory Turing patterns, and quasi-standing waves. For the inhibitor, we find that positive cross-diffusion (the inhibitor being repelled by increasing concentrations of the activator) can induce Turing patterns, jumping waves and spatially modulated bulk oscillations. We qualitatively explain the formation of these patterns. With one model we can explain Turing patterns, standing waves and jumping waves, which previously was done with three different models.
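A minimal 1-D sketch of the two-variable Oregonator with cross-diffusion of the activator toward the inhibitor (negative cross-diffusion coefficient), integrated with explicit Euler steps on a periodic grid. Parameter values are illustrative, not those used in the study.

```python
import numpy as np

def laplacian(f, dx):
    """Second spatial derivative on a periodic 1-D grid."""
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2

def step(u, v, dt=1e-4, dx=0.5, eps=0.1, f=1.0, q=0.01,
         Du=1.0, Dv=1.0, Duv=-2.0):
    """One Euler step of the two-variable Oregonator. Duv < 0 adds
    cross-diffusion that attracts the activator u towards regions of
    high inhibitor concentration v."""
    reaction_u = (u - u**2 - f * v * (u - q) / (u + q)) / eps
    reaction_v = u - v
    u_new = u + dt * (reaction_u + Du * laplacian(u, dx) + Duv * laplacian(v, dx))
    v_new = v + dt * (reaction_v + Dv * laplacian(v, dx))
    return u_new, v_new

rng = np.random.default_rng(2)
n = 64
u = 0.3 + 0.01 * rng.standard_normal(n)   # small noise seeds patterns
v = 0.3 + 0.01 * rng.standard_normal(n)
for _ in range(2000):
    u, v = step(u, v)
```

Long runs with parameter scans over Duv (and a positive cross-diffusion term added to the inhibitor equation) would be needed to reproduce the Turing patterns, standing waves and jumping waves described above; the sketch only fixes the equations' structure.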
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. 
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
Measures of gender identity have almost exclusively relied on positive aspects of masculinity and femininity, although conceptually the self-concept is not limited to positive attributes. A theoretical argument is made for considering negative attributes of gender identity, followed by five studies developing the Positive-Negative Sex-Role Inventory (PN-SRI) as a new measure of gender identity. Study 1 demonstrated that many of the attributes of a German version of the Bem Sex-Role Inventory are no longer considered to differ in desirability for men and women. For the PN-SRI, Study 2 elicited attributes characterizing men and women in today's society, for which ratings of typicality and desirability as well as self-ratings by men and women were obtained in Study 3. Study 4 examined the reliability and factorial structure of the four subscales of positive and negative masculinity and femininity and demonstrated the construct and discriminant validity of the PN-SRI by showing that the negative masculinity and femininity scales were unique predictors of select validation constructs. Study 5 showed that the new instrument explained variance in the validation constructs beyond earlier measures of gender identity. Key message: Even in the construction of negative aspects of gender identity, individuals prefer gender-congruent attributes. Negative masculinity and femininity make a unique contribution to understanding gender-related differences in psychological outcome variables.
The complexity of today's business processes and the volume of data to be managed place high demands on the development and maintenance of business applications. Their size arises, among other things, from the large number of model entities and the associated user interfaces for editing and analyzing the data. This report presents novel concepts, and their implementation, for simplifying the development of such large business applications. First: we propose unifying the database and the runtime environment of a dynamic object-oriented programming language. To this end, we organize the in-memory layout of objects in the manner of a column-oriented in-memory database and, building on this, integrate transactions and a declarative query language seamlessly into the same runtime environment. Transactional and analytical queries can thus be implemented in the same object-oriented high-level language and yet be executed close to the data. Second: we describe programming-language constructs that allow user interfaces and user interactions to be described generically, independently of concrete model entities. To use this abstract description, the domain models are enriched with previously implicit information. New models need only be extended with a small amount of information to reuse existing user interfaces and interactions. Adaptations that apply to a single model can be defined incrementally, independently of the default behavior. Third: with a further programming-language construct, we enable the coherent description of application workflows such as ordering processes. Our programming concept encapsulates user interactions in synchronous function calls, making processes representable as a connected sequence of computations and interactions.
Fourth: we demonstrate a concept that lets end users formulate complex analytical queries more intuitively. It is based on the idea that end users view queries as the configuration of a chart. Accordingly, a user describes a query by describing what the chart should display. Charts described according to this concept contain sufficient information to generate a query from them. In terms of execution time, the generated queries are equivalent to queries formulated in conventional query languages. We implement this query model in a prototype built on the concepts introduced above.
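The chart-as-query concept can be sketched as a translation from a declarative chart description to an aggregate query. The configuration format and the SQL target below are hypothetical stand-ins for the prototype's actual model.

```python
def chart_to_sql(chart):
    """Generate an aggregate SQL query from a chart description.
    The chart dict is a hypothetical, minimal configuration format:
    the x-axis becomes the grouping dimension, the y-axis the
    aggregated measure, and an optional filter a WHERE clause."""
    dim = chart["x_axis"]
    measure, func = chart["y_axis"], chart.get("aggregate", "SUM")
    sql = (f"SELECT {dim}, {func}({measure}) AS value "
           f"FROM {chart['table']} GROUP BY {dim}")
    if "filter" in chart:
        col, op, val = chart["filter"]
        sql = sql.replace(" GROUP BY", f" WHERE {col} {op} {val!r} GROUP BY")
    return sql

query = chart_to_sql({
    "table": "orders",
    "x_axis": "region",
    "y_axis": "revenue",
    "aggregate": "SUM",
    "filter": ("year", "=", 2013),
})
```

Because the generated statement is ordinary SQL, it executes with the same plan a hand-written query would, matching the report's claim of equivalent execution time.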
Compared to the well-studied open water of the "growing" season, under-ice conditions in lakes are characterized by low and rather constant temperature, slow water movements, limited light availability, and reduced exchange with the surrounding landscape. These conditions interact with ice-cover duration to shape microbial processes in temperate lakes and ultimately influence the phenology of community and ecosystem processes. We review the current knowledge on microorganisms in seasonally frozen lakes. Specifically, we highlight how under-ice conditions alter lake physics and the ways in which this can affect the distribution and metabolism of auto- and heterotrophic microorganisms. We identify functional traits that we hypothesize are important for understanding under-ice dynamics and discuss how these traits influence species interactions. As ice-cover duration has already decreased with warming air temperatures, the dynamics of the under-ice microbiome are important for understanding and predicting the dynamics and functioning of seasonally frozen lakes in the near future.
Novel hydrogels based on hydroxyethyl starch modified with polyethylene glycol methacrylate (HES-P(EG)6MA) were developed as a delivery system for the controlled release of proteins. Since the drug release behavior is expected to be related to the pore structure of the hydrogel network, the pore sizes were determined by cryo-SEM, a mild technique for imaging on the nanometer scale. The results showed a decreasing pore size and an increase in pore homogeneity with increasing polymer concentration. Furthermore, the mesh sizes of the hydrogels were calculated from swelling data. Pore and mesh size were significantly different, which indicates that both structures are present in the hydrogel. The resulting structural model was correlated with release data for bulk hydrogel cylinders loaded with FITC-dextran and for hydrogel microspheres loaded with FITC-IgG and FITC-dextran of different molecular size. The initial release depended strongly on the relation between hydrodynamic diameter and pore size, while the long-term release of the incorporated substances was predominantly controlled by degradation of the much finer mesh network.
We report on the fabrication, modeling, and experimental verification of the emission of fiber lenses fabricated on multimode fibers in different media. Concave fiber lenses with a radius of 150 µm were fabricated onto a multimode silica fiber (100 µm core) by grinding and polishing against a ruby sphere template. In our theoretical model we assume that the fiber guides light from a Lambertian light source and that the emission cone is governed solely by the range of permitted emission angles. We investigate concave and convex lenses at 532 nm with different radii and in a variety of surrounding media from air (n0 = 1.00) to sapphire (n0 = 1.77). Noticeable focusing or defocusing effects of a silica fiber lens in ethanol (n0 = 1.36) and dimethyl sulfoxide (DMSO, n0 = 1.48) were only observed when the fiber lens radius was less than the fiber diameter.
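The role of the permitted emission angles can be illustrated with a minimal Snell's-law sketch. The numerical aperture value below is an assumption for illustration, not a parameter from the paper:

```python
import math

# Minimal sketch (assumed values, not the paper's data): the maximum emission
# angle of a multimode fiber in an external medium of index n0 follows from
# Snell's law at a flat end face, sin(theta_out) = NA / n0, where the
# numerical aperture NA = sqrt(n_core^2 - n_clad^2) caps the internal angles.

def emission_half_angle(na, n0):
    """Emission cone half-angle (degrees) in an external medium of index n0."""
    s = min(1.0, na / n0)  # clamp: total internal constraint cannot exceed 90 deg
    return math.degrees(math.asin(s))

na = 0.22  # assumed NA of a step-index multimode silica fiber
for medium, n0 in [("air", 1.00), ("ethanol", 1.36), ("DMSO", 1.48)]:
    angle = emission_half_angle(na, n0)
    # The emission cone narrows as the index of the surrounding medium rises,
    # which is why lensing effects become harder to observe in high-index media.
```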
Selective ultrafast probing of transient hot chemisorbed and precursor states of CO on Ru(0001)
(2013)
We have studied the femtosecond dynamics following optical laser excitation of CO adsorbed on a Ru surface by monitoring changes in the occupied and unoccupied electronic structure using ultrafast soft x-ray absorption and emission. We recently reported [M. Dell'Angela et al. Science 339, 1302 (2013)] a phonon-mediated transition into a weakly adsorbed precursor state occurring on a time scale of >2 ps prior to desorption. Here we focus on processes within the first picosecond after laser excitation and show that the metal-adsorbate coordination is initially increased due to hot-electron-driven vibrational excitations. This process is faster than, but occurs in parallel with, the transition into the precursor state. With resonant x-ray emission spectroscopy, we probe each of these states selectively and determine the respective transient populations depending on optical laser fluence. Ab initio molecular dynamics simulations of CO adsorbed on Ru(0001) were performed at 1500 and 3000 K providing insight into the desorption process.
Soft X-ray spectroscopy is one of the best tools to directly address the electronic structure, the driving force of chemical reactions. It enables selective studies on sample surfaces to single out reaction centers in heterogeneous catalytic reactions. In core-hole clock methods, specific dynamics are related to the femtosecond lifetime of a core hole. Typically, this method is used with photoemission spectroscopy, but advancements in soft X-ray emission techniques render more specific studies possible. With the advent of bright femtosecond pulsed soft X-ray sources, highly selective pump-probe X-ray emission studies become possible with temporal resolutions down to tens of femtoseconds. This finally makes it possible to study dynamics in the electronic structure of adsorbed reaction centers on the whole range of relevant time scales, closing the gap between kinetic soft X-ray studies and atto- to femtosecond core-hole clock techniques.
Resonant inelastic X-ray scattering and X-ray emission spectroscopy can be used to probe the energy and dispersion of the elementary low-energy excitations that govern functionality in matter: vibronic, charge, spin and orbital excitations(1-7). A key drawback of resonant inelastic X-ray scattering has been the need for high photon densities to compensate for fluorescence yields of less than a per cent for soft X-rays(8). Sample damage from the dominant non-radiative decays thus limits the materials to which such techniques can be applied and the spectral resolution that can be obtained. A means of improving the yield is therefore highly desirable. Here we demonstrate stimulated X-ray emission for crystalline silicon at photon densities that are easily achievable with free-electron lasers(9). The stimulated radiative decay of core excited species at the expense of non-radiative processes reduces sample damage and permits narrow-bandwidth detection in the directed beam of stimulated radiation. We deduce how stimulated X-ray emission can be enhanced by several orders of magnitude to provide, with high yield and reduced sample damage, a superior probe for low-energy excitations and their dispersion in matter. This is the first step to bringing nonlinear X-ray physics in the condensed phase from theory(10-16) to application.
Dynamics in materials typically involve different degrees of freedom, like charge, lattice, orbital and spin in a complex interplay. Time-resolved resonant inelastic X-ray scattering (RIXS) as a highly selective tool can provide unique insight and follow the details of dynamical processes while resolving symmetries, chemical and charge states, momenta, spin configurations, etc. In this paper, we review examples where the intrinsic scattering duration time is used to study femtosecond phenomena. Free-electron lasers access timescales starting in the sub-ps range through pump-probe methods and synchrotrons study the time scales longer than tens of ps. In these examples, time-resolved resonant inelastic X-ray scattering is applied to solids as well as molecular systems.
The Acheulean technological tradition, characterized by a large (>10 cm) flake-based component, represents a significant technological advance over the Oldowan. Although stone tool assemblages attributed to the Acheulean have been reported from as early as circa 1.6-1.75 Ma, the characteristics of these earliest occurrences and comparisons with later assemblages have not been reported in detail. Here, we provide a newly established chronometric calibration for the Acheulean assemblages of the Konso Formation, southern Ethiopia, which span the time period ~1.75 to <1.0 Ma. The earliest Konso Acheulean is chronologically indistinguishable from the assemblage recently published as the world's earliest, with an age of ~1.75 Ma, at Kokiselei, west of Lake Turkana, Kenya. This Konso assemblage is characterized by a combination of large picks and crude bifaces/unifaces made predominantly on large flake blanks. An increase in the number of flake scars was observed within the Konso Formation handaxe assemblages through time, but less so with picks. The Konso evidence suggests that both picks and handaxes were essential components of the Acheulean from its initial stages and that the two probably differed in function. The temporal refinement seen especially in the handaxe forms at Konso implies enhanced function through time, perhaps in processing carcasses with long and stable cutting edges. The documentation of the earliest Acheulean at ~1.75 Ma in both northern Kenya and southern Ethiopia suggests that behavioral novelties were being established on a regional scale at that time, paralleling the emergence of Homo erectus-like hominid morphology.
Induction of apoptosis mediated by the inhibition of ceramidases has been shown to enhance the efficacy of conventional chemotherapy in several cancer models. Among the inhibitors of ceramidases reported in the literature, B-13 is considered a lead compound with good in vitro potency towards acid ceramidase. Owing to the poor activity of B-13 on lysosomal acid ceramidase in living cells, LCL-464, a modified derivative of B-13 containing a basic omega-amino group in the fatty acid, was reported to have higher potency towards lysosomal acid ceramidase in living cells. In a search for more potent inhibitors of ceramidases, we have designed a series of compounds with structural modifications of B-13 and LCL-464. In this study, we show that the efficacy of B-13 in vitro as well as in intact cells can be enhanced by suitable modification of functional groups. Furthermore, a detailed SAR investigation of LCL-464 analogues revealed novel promising inhibitors of aCDase and nCDase. In cell culture studies using the breast cancer cell line MDA-MB-231, some of the newly developed compounds elevated endogenous ceramide levels and, in parallel, induced apoptotic cell death. In summary, this study shows that structural modification of the known ceramidase inhibitors B-13 and LCL-464 generates more potent ceramidase inhibitors that are active in intact cells and not only elevate cellular ceramide levels but also enhance cell death.
Perceptual attunement to one's native language results in language-specific processing of speech sounds. This includes stress cues, instantiated by differences in intensity, pitch, and duration. The present study investigates the effects of linguistic experience on the perception of these cues by studying the Iambic-Trochaic Law (ITL), which states that listeners group sounds trochaically (strong-weak) if the sounds vary in loudness or pitch and iambically (weak-strong) if they vary in duration. Participants were native listeners either of French or German; this comparison was chosen because French adults have been shown to be less sensitive than speakers of German and other languages to word-level stress, which is communicated by variation in cues such as intensity, fundamental frequency (F0), or duration. In experiment 1, participants listened to sequences of co-articulated syllables varying in either intensity or duration. The German participants were more consistent in their grouping than the French for both cues. Experiment 2 was identical to experiment 1 except that intensity variation was replaced by pitch variation. German participants again showed more consistency for both cues, and French participants showed especially inconsistent grouping for the pitch-varied sequences. These experiments show that the perception of linguistic rhythm is strongly influenced by linguistic experience.
This dissertation examines advice literature on the domestic garden written by women authors (Louisa Johnson, Jane Loudon, Maria Theresa Earle, Gertrude Jekyll, Elizabeth von Arnim) for a female readership, with a claim to practical gardening activity, in the period from 1839 to 1900. The gender perspective is accordingly at the center of this study. The focus on the bourgeois middle class follows from the authors' perspective and the readership addressed. The treatment of the garden is subjected to an analysis that asks about the female view of the garden and a specifically female self-conception of women interested in, or engaged in, gardening. In their engagement with the garden, these women contribute to the conception of male and female, to the evaluation of gender norms, and to their negotiation. Writing and reading about the garden, and the actions resulting from them, were linked to the construction of female identity. In their liberating conception of the garden, these women's voices on notions of femininity stand apart from other socially ascribed spheres of activity. The role expectations directed at the bourgeois woman are neither affirmatively confirmed nor openly subverted in these works. Rather, they are subtly undermined through the offer of fields of action that met the desire for self-realization and self-determination. In the garden, as a supposedly small, home-bound and restrictive context, the women take on new roles and vary them. Engagement with the garden therefore has a proto-feminist character before the onset of the first women's movement, so that one can speak of a garden feminism as an instrument of female self-awareness.
Ten ice-sheet models are used to study sensitivity of the Greenland and Antarctic ice sheets to prescribed changes of surface mass balance, sub-ice-shelf melting and basal sliding. Results exhibit a large range in projected contributions to sea-level change. In most cases, the ice volume above flotation lost is linearly dependent on the strength of the forcing. Combinations of forcings can be closely approximated by linearly summing the contributions from single forcing experiments, suggesting that nonlinear feedbacks are modest. Our models indicate that Greenland is more sensitive than Antarctica to likely atmospheric changes in temperature and precipitation, while Antarctica is more sensitive to increased ice-shelf basal melting. An experiment approximating the Intergovernmental Panel on Climate Change's RCP8.5 scenario produces additional first-century contributions to sea level of 22.3 and 8.1 cm from Greenland and Antarctica, respectively, with a range among models of 62 and 14 cm, respectively. By 200 years, projections increase to 53.2 and 26.7 cm, respectively, with ranges of 79 and 43 cm. Linear interpolation of the sensitivity results closely approximates these projections, revealing the relative contributions of the individual forcings on the combined volume change and suggesting that total ice-sheet response to complicated forcings over 200 years can be linearized.
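The linearity result reported above can be sketched numerically. All sensitivities and forcing strengths below are invented for illustration; the study's actual values differ per model and ice sheet:

```python
# Sketch of the reported linearization: the response to a combined forcing is
# approximated by summing linearly scaled single-forcing contributions.

def linear_response(sensitivity_per_unit, forcing_strength):
    """Volume-above-flotation loss assumed proportional to forcing strength."""
    return sensitivity_per_unit * forcing_strength

# Hypothetical single-forcing sensitivities (cm of sea level per unit forcing):
sensitivities = {"surface_mass_balance": 3.0, "shelf_melt": 1.5, "sliding": 0.8}
# Hypothetical forcing strengths of a combined scenario (in the same units):
forcings = {"surface_mass_balance": 2.0, "shelf_melt": 1.0, "sliding": 0.5}

# If nonlinear feedbacks are modest, this sum approximates the fully coupled run:
combined = sum(linear_response(sensitivities[k], forcings[k]) for k in forcings)
```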
The Arctic is considered a focal region in the ongoing climate change debate. The currently observed and predicted climate warming is particularly pronounced in the high northern latitudes. Rising temperatures in the Arctic cause progressively deeper and longer permafrost thaw during the arctic summer, creating an ‘active layer’ with high bioavailability of nutrients and labile carbon for microbial consumption. The microbial mineralization of permafrost carbon creates large amounts of greenhouse gases, including carbon dioxide and methane, which can be released to the atmosphere, creating a positive feedback to global warming. However, to date, the microbial communities that drive the overall carbon cycle, and specifically methane production, in the Arctic are poorly constrained. To assess how these microbial communities will respond to the predicted climate changes, such as an increase in atmospheric and soil temperatures causing increased bioavailability of organic carbon, it is necessary to investigate the current status of this environment, but also how these microbial communities reacted to climate changes in the past. This PhD thesis investigated three records from two study sites in the Russian Arctic, including permafrost, lake shore and lake deposits from Siberia and Chukotka. A combined stratigraphic approach of microbial and molecular organic geochemical techniques was used to identify and quantify characteristic microbial gene and lipid biomarkers. Based on these data it was possible to characterize and identify the climate response of microbial communities involved in past carbon cycling during the Middle Pleistocene and the Late Pleistocene to Holocene. It is shown that previous warmer periods were associated with an expansion of bacterial and archaeal communities throughout the Russian Arctic, similar to present-day conditions.
In contrast, past glacial and stadial periods experienced a substantial decrease in the abundance of Bacteria and Archaea. This trend can also be confirmed for the community of methanogenic archaea, which were highly abundant and diverse during warm and particularly wet conditions. For the terrestrial permafrost, a direct effect of temperature on the microbial communities is likely. In the Arctic lake setting, by contrast, it is suggested that the temperature rise in the course of glacial-interglacial climate variations led to an increase in primary production, as can be seen in the corresponding biogenic silica distribution. The availability of this algae-derived carbon is suggested to be a driver of the observed pattern in microbial abundance. This work demonstrates the effect of climate changes on the community composition of methanogenic archaea. Methanosarcina-related species were abundant throughout the Russian Arctic and were able to adapt to changing environmental conditions. In contrast, members of the Methanocellales and Methanomicrobiales were not able to adapt to past climate changes. This PhD thesis provides first evidence that past climatic warming led to an increased abundance of microbial communities in the Arctic, closely linked to the cycling of carbon and methane production. With the predicted climate warming, it may therefore be anticipated that microbial communities will expand extensively. Increasing temperatures in the Arctic will affect the temperature-sensitive parts of the current microbial communities, possibly leading to a suppression of cold-adapted species and the prevalence of methanogenic archaea that tolerate or adapt to increasing temperatures. These changes in the composition of methanogenic archaea will likely increase the methane production potential of high-latitude terrestrial regions, changing the Arctic from a carbon sink to a source.
The non-proteinogenic amino acid GABA (γ-aminobutyric acid) is considered the most important inhibitory neurotransmitter in the central nervous system of vertebrates and invertebrates and mediates its effects, among other routes, via metabotropic GABAB receptors. To date, these receptors have been only rudimentarily studied in insects. For the American cockroach, an established model organism, a modulatory role of GABAB receptors in the formation of primary saliva has been demonstrated pharmacologically. The aim of this work was a comprehensive characterization of the GABAB receptor subtypes 1 and 2 of Periplaneta americana. Using a variety of cloning strategies, and in cooperation with the group of Prof. Dr. T. Miura (Hokkaido, Japan) with regard to a P. americana EST database established there, two receptor cDNAs were cloned. Analysis of the deduced amino acid sequences for GB-specific domains and conserved amino acid residues, together with comparison to known GB sequences of other species, indicates that the isolated sequences represent the GABAB receptor subtypes 1 and 2 (PeaGB1 and PeaGB2). For the functional and pharmacological characterization of the heteromer of PeaGB1 and PeaGB2, expression constructs were produced for transfection into HEK-flp™ cells. The PeaGB1/PeaGB2 heteromer inhibits cAMP production with increasing GABA concentrations. The substances SKF97541 and 3-APPA were identified as agonists; CGP55845 and CGP54626 act as full antagonists. The pharmacological profile determined in vitro, compared with the pharmacology of the isolated gland, confirms that the GABA effect in the salivary gland is indeed mediated by GBs. For the immunohistochemical characterization, a specific polyclonal antibody against extracellular loop 2 of PeaGB1 was generated.
A further antibody directed against PeaGB2, however, proved not to be sufficiently specific. Western blot analyses confirm the presence of both subtypes in the central nervous system of P. americana. In addition, PeaGB1 is expressed in the salivary gland and in the reproductive glands of male cockroaches. Immunohistochemical analyses reveal PeaGB1-like labeling in the GABAergic fibers of the salivary gland; PeaGB1 thus functions here as an autoreceptor. Furthermore, PeaGB1-like labeling was detected in nearly all brain neuropils. The male accessory glands, the mushroom-shaped gland and the phallic gland, are also PeaGB1-immunoreactive.
The polymer-controlled and bioinspired precipitation of inorganic minerals from aqueous solution at near-ambient or physiological conditions avoiding high temperatures or organic solvents is a key research area in materials science. Polymer-controlled mineralization has been studied as a model for biomineralization and for the synthesis of (bioinspired and biocompatible) hybrid materials for a virtually unlimited number of applications. Calcium phosphate mineralization is of particular interest for bone and dental repair. Numerous studies have therefore addressed the mineralization of calcium phosphate using a wide variety of low- and high-molecular-weight additives. In spite of the growing interest and increasing number of experimental and theoretical data, the mechanisms of polymer-controlled calcium phosphate mineralization are not entirely clear to date, although the field has made significant progress in the last years. A set of elegant experiments and calculations has shed light on some details of mineral formation, but it is currently not possible to preprogram a mineralization reaction to yield a desired product for a specific application. The current article therefore summarizes and discusses the influence of (macro)molecular entities such as polymers, peptides, proteins and gels on biomimetic calcium phosphate mineralization from aqueous solution. It focuses on strategies to tune the kinetics, morphologies, final dimensions and crystal phases of calcium phosphate, as well as on mechanistic considerations.
Background: Clock genes govern circadian rhythms and shape the effect of alcohol use on the physiological system. Exposure to severe negative life events is related to both heavy drinking and disturbed circadian rhythmicity. The aim of this study was 1) to extend previous findings suggesting an association of a haplotype-tagging single nucleotide polymorphism of the PER2 gene with drinking patterns, and 2) to examine a possible role for an interaction of this gene with life stress in hazardous drinking.
Methods: Data were collected as part of an epidemiological cohort study on the outcome of early risk factors followed since birth. At age 19 years, 268 young adults (126 males, 142 females) were genotyped for PER2 rs56013859 and were administered a 45-day alcohol timeline follow-back interview and the Alcohol Use Disorders Identification Test (AUDIT). Life stress was assessed as the number of severe negative life events during the past four years reported in a questionnaire and validated by interview.
Results: Individuals with the minor G allele of rs56013859 were found to be less engaged in alcohol use, drinking on only 72% of the days compared to homozygotes for the major A allele. Moreover, among regular drinkers, a gene × environment interaction emerged (p = .020). While no effects of genotype appeared under conditions of low stress, carriers of the G allele exhibited less hazardous drinking than those homozygous for the A allele when exposed to high stress.
Conclusions: These findings may suggest a role of the circadian rhythm gene PER2 in both the drinking patterns of young adults and in moderating the impact of severe life stress on hazardous drinking in experienced alcohol users. However, in light of the likely burden of multiple tests, the nature of the measures used and the nominal evidence of interaction, replication is needed before drawing firm conclusions.
Background: Early alcohol use is one of the strongest predictors of later alcohol use disorders, with early use usually taking place during puberty. Many researchers have suggested drinking during puberty as a potential biological basis of the age at first drink (AFD) effect. However, the influence of the pubertal phase at alcohol use initiation on subsequent drinking in later life has not been examined so far.
Methods: Pubertal stage at first drink (PSFD) was determined in N = 283 young adults (131 males, 152 females) from an epidemiological cohort study. At ages 19, 22, and 23 years, drinking behavior (number of drinking days, amount of alcohol consumed, hazardous drinking) was assessed using interview and questionnaire methods. Additionally, an animal study examined the effects of pubertal or adult ethanol (EtOH) exposure on voluntary EtOH consumption in later life in 20 male Wistar rats.
Results: PSFD predicted drinking behavior in humans in early adulthood, indicating that individuals who had their first drink during puberty displayed elevated drinking levels compared to those with postpubertal drinking onset. These findings were corroborated by the animal study, in which rats that received free access to alcohol during the pubertal period were found to consume more alcohol as adults, compared to control animals that first came into contact with alcohol during adulthood.
Conclusions: The results point to a significant role of the stage of pubertal development at first contact with alcohol for the development of later drinking habits. Possible biological mechanisms and implications for prevention are discussed.
Any understanding of sediment routing from mountain belts to their forelands and offshore sinks remains incomplete without estimates of intermediate storage that decisively buffers sediment yields from erosion rates, attenuates water and sediment fluxes, and protects underlying bedrock from incision. We quantify for the first time the sediment stored in >38,000 mainly postglacial Himalayan valley fills, based on an empirical volume-area scaling of valley-fill outlines automatically extracted from digital topographic data. The estimated total volume of 690 (+452/-242) km^3 is mostly contained in a few large valley fills >1 km^3, while catastrophic mass wasting adds another 177(31) km^3. Sediment storage volumes are highly disparate along the strike of the orogen. Much of the Himalaya's stock of sediment is sequestered in glacially scoured valleys that provide accommodation space for ~44% of the total volume upstream of the rapidly exhuming and incising syntaxes. Conversely, the step-like long-wave topography of the central Himalayas limits glacier extent, and thus any significant glacier-derived storage of sediment away from tectonic basins. We show that exclusive removal of Himalayan valley fills could nourish contemporary sediment flux from the Indus and Brahmaputra basins for >1 kyr, though individual fills may attain residence times of >100 kyr. These millennial lag times in the Himalayan sediment routing system may sufficiently buffer signals of short-term seismic as well as climatic disturbances, thus complicating simple correlation and interpretation of sedimentary archives from the Himalayan orogen, its foreland, and its submarine fan systems.
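An empirical volume-area scaling of the kind used above is typically a power law; a hedged sketch of how it turns mapped outlines into a regional storage estimate follows. The coefficients are placeholders, not the calibrated Himalayan values:

```python
# Illustrative sketch of an empirical volume-area power law, V = c * A**gamma,
# applied to mapped valley-fill outlines. c and gamma are placeholder values;
# in practice they are calibrated against independently surveyed fill volumes.

def fill_volume(area_km2, c=0.03, gamma=1.35):
    """Estimate a valley-fill volume (km^3) from its planform area (km^2)."""
    return c * area_km2 ** gamma

# Summing over many automatically extracted outlines yields the regional total:
areas = [0.5, 2.0, 12.0, 40.0]  # hypothetical fill areas in km^2
total_volume = sum(fill_volume(a) for a in areas)
```

Because gamma > 1, large fills dominate the total, which is consistent with most of the estimated volume residing in a few large valley fills.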
Assessing diversity is among the major tasks in ecology and conservation science. In ecological and conservation studies, epiphytic cryptogams are usually sampled only up to accessible heights in forests. Thus, their diversity, especially that of canopy specialists, is likely underestimated. If the proportion of those species differs among forest types, plot-based diversity assessments are biased and may result in misleading conservation recommendations. We sampled bryophytes and lichens in 30 forest plots of 20 m x 20 m in three German regions, considering all substrates and including epiphytic litter fall. First, the sampling of epiphytic species was restricted to the lower 2 m of trees and shrubs. Then, on one representative tree per plot, we additionally recorded epiphytic species in the crown, using tree climbing techniques. Per tree, on average 54% of lichen and 20% of bryophyte species were overlooked if the crown was not included. After sampling all substrates per plot, including the bark of all shrubs and trees, still 38% of the lichen and 4% of the bryophyte species were overlooked if the tree crown of the sampled tree was not included. The number of overlooked lichen species varied strongly among regions. Furthermore, the number of overlooked bryophyte and lichen species per plot was higher in European beech than in coniferous stands and increased with increasing diameter at breast height of the sampled tree. Thus, our results indicate a bias in comparative studies, which might have led to misleading conservation recommendations from plot-based diversity assessments.
There is a wealth of smaller-scale studies on the effects of forest management on plant diversity. However, studies comparing plant species diversity in forests with different management types and intensity, extending over different regions and forest stages, and including detailed information on site conditions are missing. We studied vascular plants on 1500 20 m x 20 m forest plots in three regions of Germany (Schwabische Alb, Hainich-Dun, Schorfheide-Chorin). In all regions, our study plots comprised different management types (unmanaged, selection cutting, deciduous and coniferous age-class forests, which resulted from clear cutting or shelterwood logging), various stand ages, site conditions, and levels of management-related disturbances. We analyzed how overall richness and richness of different plant functional groups (trees, shrubs, herbs, herbaceous species typically growing in forests and herbaceous light-demanding species) responded to the different management types. On average, plant species richness was 13% higher in age-class than in unmanaged forests, and did not differ between deciduous age-class and selection forests. In age-class forests of the Schwabische Alb and Hainich-Dun, coniferous stands had higher species richness than deciduous stands. Among age-class forests, older stands with large quantities of standing biomass were slightly poorer in shrub and light-demanding herb species than younger stands. Among deciduous forests, the richness of herbaceous forest species was generally lower in unmanaged than in managed forests, and it was even 20% lower in unmanaged than in selection forests in Hainich-Dun. Overall, these findings show that disturbances by management generally increase plant species richness. This suggests that total plant species richness is not suited as an indicator for the conservation status of forests, but rather indicates disturbances.
This study examines the relations between the GDR and the People's Republic of China in the years 1978 to 1990, considering both the domestic and the foreign policy conditions of these relations in the GDR and in China. Particular attention is paid to the Soviet Union: Moscow's relations with Beijing and East Berlin are presented and related to the consequences they entailed for the GDR leadership.
The thesis considers a two-level atom in a photonic crystal together with a pump laser. The photonic crystal provides an environment for the atom that modifies the decay of the excited state, especially if the atomic frequency is close to the band gap. The population inversion is investigated as well as the emission spectrum. The dynamics is analyzed in the context of open quantum systems. Due to the multiple reflections in the photonic crystal, the system has a finite memory that precludes the Markovian approximation. In the Heisenberg picture, the equations of motion for the system variables form an infinite hierarchy of integro-differential equations. To obtain a closed system, approximations such as a weak-coupling approximation are needed. The thesis starts with a simple photonic crystal that is amenable to analytic calculations: a one-dimensional photonic crystal consisting of alternating layers. The Bloch modes inside and the vacuum modes outside a finite crystal are linked by a transformation matrix that is interpreted as a transfer matrix. Formulas for the band structure, the reflection from a semi-infinite crystal, and the local density of states in absorbing crystals are derived; defect modes and negative refraction are discussed. The quantum optics section of the work starts with the discussion of three problems related to the full resonance fluorescence problem: a pure dephasing model, the driven atom, and resonance fluorescence in free space. In the lowest order of the system-environment coupling, the one-time expectation values for the full problem are calculated analytically and the stationary states are discussed for certain cases. For the calculation of the two-time correlation functions and spectra, the additional problem of correlations between the two times appears. In the Markovian case, the quantum regression theorem is valid. In the general case, the fluctuation-dissipation theorem can be used instead.
The two-time correlation functions are calculated by both methods; within the chosen approximations, both deliver the same result. Several plots show the dependence of the spectrum on the parameters, and some examples of squeezing spectra are shown for different approximations. A projection-operator method is used to establish two kinds of Markovian expansion, with and without time convolution. The lowest order is identical to the lowest order of the system-environment coupling, but higher orders give different results.
Assessment of coupled cluster theory and more approximate methods for hydrogen bonded systems
(2013)
To assess the accuracy of post-Hartree-Fock methods like CCSD(T), MP3, MP2.5, MP2, SCS-MP2, SOS-MP2, and DFT-SAPT, we evaluated several effects going beyond valence-correlated CCSD(T). For 16 small hydrogen-bonded systems, CCSD(T) achieves an RMS error of 0.17 kJ/mol in the dissociation energy compared to our best estimate, which is a composite method akin to W4 theory. The error of CCSD(T) is thus much lower than for atomization energies. MP2 is surprisingly accurate for these systems, with an RMS error of 1.3 kJ/mol. MP2.5 yields a clear improvement over MP2 (RMS of 0.5 kJ/mol) but still has an error about 3 times as large as CCSD(T) for the absolute RMS and almost 10 times as large for the relative RMS error. Neither SCS-MP2, SOS-MP2, nor DFT-SAPT yields lower errors than MP2. With a ΔCCSD(T) correction to MP2, the basis-set limit is readily achieved when employing diffuse functions; without these, the convergence is rather slow.
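The RMS errors quoted in this abstract follow the usual root-mean-square definition over signed per-system deviations; a minimal sketch with made-up deviations (not the paper's data):

```python
import math

def rms(deviations):
    """Root-mean-square of a list of signed errors."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Hypothetical per-system deviations from a reference method, in kJ/mol:
print(rms([0.1, -0.2, 0.15, -0.05]))
```

Note that, unlike the mean signed error, the RMS weights large outliers quadratically, which is why it is the standard figure of merit for benchmark sets like this one.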
Brillouin scattering of visible and hard X-ray photons from optically synthesized phonon wavepackets
(2013)
We monitor how destructive interference of undesired phonon frequency components shapes a quasi-monochromatic hypersound wavepacket spectrum during its local real-time preparation by a nanometric transducer and follow the subsequent decay by nonlinear coupling. We prove each frequency component of an optical supercontinuum probe to be sensitive to one particular phonon wavevector in bulk material and cross-check this by ultrafast x-ray diffraction experiments with direct access to the lattice dynamics. Establishing reliable experimental techniques with direct access to the transient spectrum of the excitation is crucial for the interpretation in strongly nonlinear regimes, such as soliton formation.
Background: Recent studies have demonstrated a superior diagnostic accuracy of cardiovascular magnetic resonance (CMR) for the detection of coronary artery disease (CAD). We aimed to determine the comparative cost-effectiveness of CMR versus single-photon emission computed tomography (SPECT).
Methods: Based on Bayes' theorem, a mathematical model was developed to compare the cost-effectiveness and utility of CMR with SPECT in patients with suspected CAD. Invasive coronary angiography served as the standard of reference. Effectiveness was defined as the accurate detection of CAD, and utility as the number of quality-adjusted life-years (QALYs) gained. Model input parameters were derived from the literature, and the cost analysis was conducted from a German health care payer's perspective. Extensive sensitivity analyses were performed.
Results: Reimbursement fees represented only a minor fraction of the total costs incurred by a diagnostic strategy. Increases in the prevalence of CAD were generally associated with improved cost-effectiveness and decreased costs per utility unit (ΔQALY). By comparison, CMR was consistently more cost-effective than SPECT and showed lower costs per QALY gained. Given a CAD prevalence of 0.50, CMR was associated with total costs of €6,120 for one patient correctly diagnosed as having CAD and with €2,246 per ΔQALY gained, versus €7,065 and €2,931 for SPECT, respectively. Above a threshold CAD prevalence of 0.60, proceeding directly to invasive angiography was the most cost-effective approach.
Conclusions: In patients with low to intermediate CAD probabilities, CMR is more cost-effective than SPECT. Moreover, lower costs per utility unit indicate a superior clinical utility of CMR.
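The Bayesian core of such a diagnostic model is the post-test probability of disease given prevalence, sensitivity, and specificity; a minimal sketch with hypothetical test characteristics (the study's actual model inputs are not reproduced here):

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_pos = prevalence * sensitivity          # P(test+ and disease)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(test+ and no disease)
    return true_pos / (true_pos + false_pos)

# Hypothetical CMR-like test at the prevalence considered in the abstract (0.50):
print(round(post_test_probability(0.50, 0.89, 0.80), 3))
```

The strong dependence of this quantity on prevalence is what drives the abstract's observation that cost-effectiveness improves with CAD prevalence, and why invasive angiography dominates above a prevalence threshold.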
OCP-Place, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-Place is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited when two homorganic consonants conjoin at word boundaries, a problem that could only be solved with lexical feedback.
Here, we experimentally challenge the functional account by showing that OCP-Place can be used as a speech segmentation cue during pre-lexical processing without lexical feedback, and that the use relates to distributions in the input.
In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language. This indicates a use of OCP-Labial as a segmentation cue, implying a full perception of both labials. Experiment 2 shows that segmentation performance cannot solely be explained by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-Place depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-Coronal for segmentation.
The Atlantic subpolar gyre (SPG) is one of the main drivers of decadal climate variability in the North Atlantic. Here we analyze its dynamics in pre-industrial control simulations of 19 different comprehensive coupled climate models. The analysis is based on a recently proposed description of the SPG dynamics that found the circulation to be potentially bistable due to a positive feedback mechanism involving salt transport and enhanced deep convection in the SPG center. We employ a statistical method to identify multiple equilibria in time series that are subject to strong noise and analyze composite fields to assess whether the bistability results from the hypothesized feedback mechanism. Because noise dominates the time series in most models, multiple circulation modes can unambiguously be detected in only six models. Four of these six models confirm that the intensification of the circulation is caused by the positive feedback mechanism.
Modulation of direct electron transfer of cytochrome c by use of a molecularly imprinted thin film
(2013)
We describe the preparation of a molecularly imprinted polymer film (MIP) on top of a self-assembled monolayer (SAM) of mercaptoundecanoic acid (MUA) on gold, where the template cytochrome c (cyt c) participates in direct electron transfer (DET) with the underlying electrode. To enable DET, a non-conductive polymer film is electrodeposited from an aqueous solution of scopoletin and cyt c on to the surface of a gold electrode previously modified with MUA. The electroactive surface concentration of cyt c was 0.5 pmol cm(-2). In the absence of the MUA layer, no cyt c DET was observed and the pseudo-peroxidatic activity of the scopoletin-entrapped protein, assessed via oxidation of Ampliflu red in the presence of hydrogen peroxide, was only 30 % of that for the MIP on MUA. This result indicates that electrostatic adsorption of cyt c by the MUA-SAM substantially increases the surface concentration of cyt c during the electrodeposition step, and is a prerequisite for the productive orientation required for DET. After template removal by treatment with sulfuric acid, rebinding of cyt c to the MUA-MIP-modified electrode occurred with an affinity constant of 100,000 mol(-1) L, a value three times higher than that determined by use of fluorescence titration for the interaction between scopoletin and cyt c in solution. The DET of cyt c in the presence of myoglobin, lysozyme, and bovine serum albumin (BSA) reveals that the MIP layer suppresses the effect of competing proteins.
The results of numerical modeling with the global Upper Atmosphere Model of the Earth (UAM), used to reproduce the peak F2-layer electron density (NmF2) and the total electron content (TEC) during the recovery period after the magnetic storm of April 15-20, 2002, are discussed. According to the simulations, the time it takes to reach a stationary regime of the NmF2 and TEC diurnal variations is 24 hours, much shorter than the plasmasphere refilling time. The results are compared with the predictions of the IRI-2007 empirical model and with GPS TEC data, and are found to be in good quantitative agreement for the latitudinal variations of NmF2 and TEC under daytime conditions in the southern hemisphere. The worst agreement occurs in the region of the main ionospheric trough.
Stress levels experienced by school-aged elite athletes are pronounced, but data on their mental health status are widely lacking. In our study, we examined self-reported psychological symptoms and chronic mood. Data from a representative sample of 866 elite student-athletes (aged 12-15 years), enrolled in high-performance sport programming in German Elite Schools of Sport, were compared with data from 80 student-athletes from the same schools who had just been deselected from elite sport promotion, and from 432 age- and sex-matched non-sport students from regular schools (without such programming). Anxiety symptoms were least prevalent in female elite student-athletes. In male elite student-athletes, only symptoms of posttraumatic stress were less prevalent than in the other groups. Somatoform symptoms were generally more frequent in athletes, a trend that was significantly pronounced in deselected athletes. Deselected athletes showed an increased risk for psychological symptoms compared with both other groups. Regarding chronic mood, deselected athletes again showed less positive scores. While there was a trend toward high-performance sport being associated with better psychological health, at least in girls, preventative programs should take into account that deselection from elite sport programming may be associated with specific risks for mental disorders.
The Apostolic See and totalitarian ideologies: the 1937 March encyclicals in their inner context
(2013)
Local adaptation to different pollinators is considered one of the possible initial stages of ecological speciation, as reproductive isolation is a by-product of the divergence in pollination systems. However, pollinator-mediated divergent selection will not necessarily result in complete reproductive isolation, because incipient speciation is often overcome by gene flow. We investigated the potential of a pollinator shift in the sexually deceptive orchids Ophrys sphegodes and Ophrys exaltata and compared the levels of floral isolation vs. genetic distance among populations with contrasting predominant pollinators. We analysed floral hydrocarbons as a proxy for floral divergence between populations. The attraction of pollinators and their fidelity were tested using pollinator choice experiments. Interpopulation gene flow and population differentiation levels were estimated using AFLP markers. The Tyrrhenian O. sphegodes population preferentially attracted the pollinator bee Andrena bimaculata, whereas the Adriatic O. sphegodes population exclusively attracted A. nigroaenea. Significant differences in scent component proportions were identified in O. sphegodes populations that attracted different preferred pollinators. High interpopulation gene flow was detected, but populations were genetically structured at the species level. The high interpopulation gene flow, independent of preferred pollinators, suggests that local adaptation to different pollinators has not (yet) generated detectable genome-wide separation. Alternatively, despite extensive gene flow, a few genes underlying floral isolation may remain differentiated as a consequence of divergent selection. Different pollination ecotypes in O. sphegodes might represent a local selective response imposed by temporal variation in a geographical mosaic of pollinators as a consequence of the frequent disturbance regimes typical of Ophrys habitats.
Laser-based ion mobility (IM) spectrometry was used for the detection of neuroleptics and PAH. A gas chromatograph was connected to the IM spectrometer in order to investigate compounds with low vapour pressure. The substances were ionized by resonant two-photon ionization at the wavelengths lambda = 213 and 266 nm and pulse energies between 50 and 300 mu J. Ion mobilities, linear ranges, limits of detection and response factors are reported. Limits of detection for the substances are in the range of 1-50 fmol. Additionally, the mechanism of laser ionization at atmospheric pressure was investigated. First, the primary product ions were determined by a laser-based time-of-flight mass spectrometer with effusive sample introduction. Then, a combination of a laser-based IM spectrometer and an ion trap mass spectrometer was developed and characterized to elucidate secondary ion-molecule reactions that can occur at atmospheric pressure. Some substances, namely naphthalene, anthracene, promazine and thioridazine, could be detected as primary ions (radical cations), while other substances, in particular acridine, phenothiazine and chlorprothixene, are detected as secondary ions (protonated molecules). The results are interpreted on the basis of quantum chemical calculations, and an ionization mechanism is proposed.
Photon Density Wave (PDW) spectroscopy is presented as a fascinating technology for the independent determination of the scattering (μs') and absorption (μa) properties of highly turbid liquid dispersions. The theory is reviewed, introducing new expressions for the PDW coefficients k(I) and k(Phi). Furthermore, two models for dependent scattering, namely the hard-sphere model in the Percus-Yevick approximation (HSPYA) and the Yukawa model in the mean spherical approximation (YMSA), are experimentally examined. On the basis of the HSPYA, particle sizing is feasible in dispersions of high ionic strength. It is furthermore shown that in dialyzed dispersions, or in technical copolymers with high particle charge, only the YMSA allows for correct dilution-free particle sizing.
From the contents: - The implementation of the European Convention on Human Rights in the German legal order - A new mind-set for the European border protection agency? On the internalization of human-rights requirements by Frontex - Human dignity and deprivation of liberty: the work of the National Agency for the Prevention of Torture
Functional evaluation of candidate ice structuring proteins using cell-free expression systems
(2013)
Ice structuring proteins (ISPs) protect organisms from damage or death by freezing. They depress the non-equilibrium freezing point of water and prevent recrystallization, probably by binding to the surface of ice crystals. Many ISPs have been described and it is likely that many more exist in nature that have not yet been identified. ISPs come in many forms and thus cannot be reliably identified by their structure or consensus ice-binding motifs. Recombinant protein expression is the gold standard for proving the activity of a candidate ISP. Among existing expression systems, cell-free protein expression is the simplest and gives the fastest access to the protein of interest, but selection of the appropriate cell-free expression system is crucial for functionality. Here we describe cell-free expression methods for three ISPs that differ widely in structure and glycosylation status from three organisms: a fish (Macrozoarces americanus), an insect (Dendroides canadensis) and an alga (Chlamydomonas sp. CCMP681). We use both prokaryotic and eukaryotic expression systems for the production of ISPs. An ice recrystallization inhibition assay is used to test functionality. The techniques described here should improve the success of cell-free expression of ISPs in future applications.
Charming country GDR: interpretations and self-interpretations of literary West-East migration
(2013)
Ecological regime shifts and carbon cycling in aquatic systems have both been subject to increasing attention in recent years, yet the direct connection between these topics has remained poorly understood. A four-fold increase in sedimentation rates was observed within the past 50 years in a shallow eutrophic lake with no surface in- or outflows. This change coincided with an ecological regime shift involving the complete loss of submerged macrophytes, leading to a more turbid, phytoplankton-dominated state. To determine whether the increase in carbon (C) burial resulted from a comprehensive transformation of C cycling pathways in parallel to this regime shift, we compared the annual C balances (mass balance and ecosystem budget) of this turbid lake to a similar nearby lake with submerged macrophytes, a higher transparency, and similar nutrient concentrations. C balances indicated that roughly 80% of the C input was permanently buried in the turbid lake sediments, compared to 40% in the clearer macrophyte-dominated lake. This was due to a higher measured C burial efficiency in the turbid lake, which could be explained by lower benthic C mineralization rates. These lower mineralization rates were associated with a decrease in benthic oxygen availability coinciding with the loss of submerged macrophytes. In contrast to previous assumptions that a regime shift to phytoplankton dominance decreases lake heterotrophy by boosting whole-lake primary production, our results suggest that an equivalent net metabolic shift may also result from lower C mineralization rates in a shallow, turbid lake. The widespread occurrence of such shifts may thus fundamentally alter the role of shallow lakes in the global C cycle, away from channeling terrestrial C to the atmosphere and towards burying an increasing amount of C.
Regime shifts are commonly associated with the loss of submerged macrophytes in shallow lakes; yet, the effects of this on whole-lake primary productivity remain poorly understood. This study compares the annual gross primary production (GPP) of two shallow, eutrophic lakes with different plant community structures but similar nutrient concentrations. Daily GPP rates were substantially higher in the lake containing submerged macrophytes (586 ± 23 g C m(-2) year(-1)) than in the lake featuring only phytoplankton and periphyton (408 ± 23 g C m(-2) year(-1); P < 0.0001). Comparing lake-centre diel oxygen curves to compartmental estimates of GPP confirmed that single-site oxygen curves may provide unreliable estimates of whole-lake GPP. The discrepancy between approaches was greatest in the macrophyte-dominated lake during the summer, with a high proportion of GPP occurring in the littoral zone. Our empirical results were used to construct a simple conceptual model relating GPP to nutrient availability for these alternative ecological regimes. This model predicted that lakes featuring submerged macrophytes may commonly support higher rates of GPP than phytoplankton-dominated lakes, but only within a moderate range of nutrient availability (total phosphorus ranging from 30 to 100 µg L(-1)) and with mean lake depths shallower than 3 or 4 m. We conclude that shallow lakes with a submerged macrophyte-epiphyton complex may frequently support a higher annual primary production than comparable lakes that contain only phytoplankton and periphyton. We thus suggest that a regime shift involving the loss of submerged macrophytes may decrease the primary productivity of many lakes, with potential consequences for the entire food webs of these ecosystems.
Bacteriophage HK620 recognizes and cleaves the O-antigen polysaccharide of Escherichia coli serogroup O18A1 with its tailspike protein (TSP). HK620TSP binds hexasaccharide fragments with low affinity, but single amino acid exchanges generated a set of high-affinity mutants with submicromolar dissociation constants. Isothermal titration calorimetry showed that only small amounts of heat were released upon complex formation via a large number of direct and solvent-mediated hydrogen bonds between carbohydrate and protein. At room temperature, association was both enthalpy- and entropy-driven emphasizing major solvent rearrangements upon complex formation. Crystal structure analysis showed identical protein and sugar conformers in the TSP complexes regardless of their hexasaccharide affinity. Only in one case, a TSP mutant bound a different hexasaccharide conformer. The extended sugar binding site could be dissected in two regions: first, a hydrophobic pocket at the reducing end with minor affinity contributions. Access to this site could be blocked by a single aspartate to asparagine exchange without major loss in hexasaccharide affinity. Second, a region where the specific exchange of glutamate for glutamine created a site for an additional water molecule. Side-chain rearrangements upon sugar binding led to desolvation and additional hydrogen bonding which define this region of the binding site as the high-affinity scaffold.
This thesis addresses a classic yet still central and topical question of evaluation research: the use and effectiveness of evaluation procedures. Against the background of the strong increase in institutionalized policy-evaluation procedures since the late 1990s, especially in Europe, and the simultaneously growing criticism of these procedures in research and practice, the thesis investigates this effectiveness using the research policy of the European Union as a case study. Building on a review of the state of research on evaluation use and an introduction to the chosen policy field and its specific evaluation practice, the central evaluation recommendations are systematically compared with the development of the policy field over the past 15 years. The thesis finds a (surprisingly) high degree of correspondence between the evaluation recommendations and policy development in the case examined. On the basis of this case study, and drawing on further empirical contributions in the literature, the claim that institutionalized evaluation has no effect on policy-making must therefore be clearly rejected. A further discussion of the case-study results moreover suggests that several specific factors and conditions appear to influence the effectiveness of the evaluation procedures positively, namely: the character and form of the evaluation recommendations, the specific institutional environment of the evaluation, and the specific 'political climate'. On the other hand, the result also implies that, particularly with regard to the problem of acceptance, increased efforts to improve the perception of evaluation effectiveness on the part of all those involved seem advisable. The thesis concludes with a number of suggestions and ideas for improving this perception.
The Arctic tundra, covering approx. 5.5 % of the Earth’s land surface, is one of the last ecosystems remaining closest to its untouched condition. Remote sensing is able to provide information at regular time intervals and large spatial scales on the structure and function of Arctic ecosystems. But almost all natural surfaces reveal individual anisotropic reflectance behaviors, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites as basis for the extraction of vegetation parameters, and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, the understanding of BRDF effects in Arctic tundra is essential for the retrieval of high quality, consistent and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system which nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS). 
The outcome of the field spectro-radiometrical measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) show that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results especially reveal separation possibilities between the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all different tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weakly or nonexistent visible green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for the understanding of the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed an even closely isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the highest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions, and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm). 
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can significantly differ from the observed BRDF effects in the original reflectance data. But the results further show that the NDVI can minimize view angle effects relative to the contrary spectro-directional effects in the red and NIR bands. For the researched tundra plant communities, the overall difference of the off-nadir NDVI values compared to the nadir value increases with increasing sensor viewing angles, but on average never exceeds 10 %. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly lead to an altering of the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small, but nonetheless significant differences in the surface reflectance, it is important to include spectro-directional reflectance characteristics in the algorithm development for remote sensing products.
There is converging evidence suggesting a particular susceptibility to the addictive properties of nicotine among adolescents. The aim of the current study was to prospectively ascertain the relationship between age at first cigarette and initial smoking experiences, and to examine the combined effects of these characteristics of adolescent smoking behavior on adult smoking. It was hypothesized that the association between earlier age at first cigarette and later development of nicotine dependence may, at least in part, be attributable to differences in experiencing pleasurable early smoking sensations. Data were drawn from the participants of the Mannheim Study of Children at Risk, an ongoing epidemiological cohort study from birth to adulthood. Structured interviews at age 15, 19 and 22 years were conducted to assess the age at first cigarette, early smoking experiences and current smoking behavior in 213 young adults. In addition, the participants completed the Fagerstrom Test for Nicotine Dependence. Adolescents who smoked their first cigarette at an earlier age reported more pleasurable sensations from the cigarette, and they were more likely to be regular smokers at age 22. The age at first cigarette also predicted the number of cigarettes smoked and dependence at age 22. Thus, both the age of first cigarette and the pleasure experienced from the cigarette independently predicted aspects of smoking at age 22.
Recent studies have emphasized an important role for neurotrophins, such as brain-derived neurotrophic factor (BDNF), in regulating the plasticity of neural circuits involved in the pathophysiology of stress-related diseases. The aim of the present study was to examine the interplay of the BDNF Val(66)Met and the serotonin transporter promoter (5-HTTLPR) polymorphisms in moderating the impact of early-life adversity on BDNF plasma concentration and depressive symptoms. Participants were taken from an epidemiological cohort study following the long-term outcome of early risk factors from birth into young adulthood. In 259 individuals (119 males, 140 females), genotyped for the BDNF Val(66)Met and the 5-HTTLPR polymorphisms, plasma BDNF was assessed at the age of 19 years. In addition, participants completed the Beck Depression Inventory (BDI). Early adversity was determined according to a family adversity index assessed at 3 months of age. Results indicated that individuals homozygous for both the BDNF Val and the 5-HTTLPR L allele showed significantly reduced BDNF levels following exposure to high adversity. In contrast, BDNF levels appeared to be unaffected by early psychosocial adversity in carriers of the BDNF Met or the 5-HTTLPR S allele. While the former group appeared to be most susceptible to depressive symptoms, the impact of early adversity was less pronounced in the latter group. This is the first preliminary evidence indicating that early-life adverse experiences may have lasting sequelae for plasma BDNF levels in humans, highlighting that the susceptibility to this effect is moderated by BDNF Val(66)Met and 5-HTTLPR genotype.
Fragmentation and loss of habitat are major threats to animal communities and are therefore important to conservation. Due to the complexity of the interplay of spatial effects and community processes, our mechanistic understanding of how communities respond to such landscape changes is still poor. Modelling studies have mostly focused on elucidating the principles of community response to fragmentation and habitat loss at relatively large spatial and temporal scales relevant to metacommunity dynamics. Yet it has been shown that small-scale processes, such as foraging behaviour, space use by individuals and local resource competition, are also important factors. However, most studies that consider these smaller scales are designed for single species and are characterized by high model complexity. Hence, they are not easily applicable to ecological communities of interacting individuals. To fill this gap, we apply an allometric model of individual home range formation to investigate the effects of habitat loss and fragmentation on mammal and bird communities, and, in this context, to investigate the role of interspecific competition and individual space use. Results show a similar response of both taxa to habitat loss. Community composition is shifted towards a higher frequency of relatively small animals. The exponent and the 95%-quantile of the individual size distribution (ISD, described as a power law distribution) of the emerging communities show threshold behaviour with decreasing habitat area. Fragmentation per se has a similar and strong effect on mammals, but not on birds. The ISDs of bird communities were insensitive to fragmentation at the small scales considered here. These patterns can be explained by competitive release taking place in interacting animal communities, with the exception of the birds' buffering response to fragmentation, presumably achieved by adjusting the size of their home ranges.
These results reflect the consequences of the higher mobility of birds compared with mammals of the same size, and underline the importance of considering competitive interactions, particularly for mammal communities, in the response to landscape fragmentation. Our allometric approach enables scaling up from individual physiology and foraging behaviour to terrestrial communities, and disentangling the role of individual space use and interspecific competition in controlling the response of mammal and bird communities to landscape changes.
The current study examines the neural correlates of 8-to-12-year-old children and adults producing inflected word forms, specifically regular vs. irregular past-tense forms in English, using a silent production paradigm. ERPs were time-locked to a visual cue for silent production of either a regular or irregular past-tense form or a 3rd person singular present tense form of a given verb (e.g., walked/sang vs. walks/sings). Subsequently, another visual stimulus cued participants for an overt vocalization of their response. ERP results for the adult group revealed a negativity 300-450 ms after the silent-production cue for regular compared to irregular past-tense forms. There was no difference in the present form condition. Children's brain potentials revealed developmental changes, with the older children demonstrating more adult-like ERP responses than the younger ones. We interpret the observed ERP responses as reflecting combinatorial processing involved in regular (but not irregular) past-tense formation.
In order to explore the behavioral mechanisms underlying aggregation of foragers on local resource patches, it is necessary to manipulate the location, quality and quantity of food patches. This requires careful control over the conditions in the foraging arena, which may be a challenging task in the case of aquatic resource-consumer systems, like that of freshwater zooplankton feeding on suspended algal cells. We present an experimental tool designed to aid behavioral ecologists in exploring the consequences of resource characteristics for zooplankton aggregation behavior and movement decisions under conditions where the boundaries and characteristics (quantity and quality) of food patches can be standardized. The aggregation behavior of Daphnia magna and D. galeata x hyalina was tested in relation to i) the presence or absence of food or ii) food quality, where algae of high or low nutrient (phosphorus) content were offered in distinct patches. Individuals of both Daphnia species chose tubes containing food patches and D. galeata x hyalina also showed a preference towards food patches of high nutrient content. We discuss how the described equipment complements other behavioral approaches providing a useful tool to understand animal foraging decisions in environments with heterogeneous resource distributions.
This thesis deals with the synthesis and characterization of thermoresponsive polymers and their immobilization on solid surfaces as nanoscale thin films. Thermoresponsive polymers of the lower critical solution temperature (LCST) type were used. They are readily soluble in the solvent at lower temperatures and become insoluble upon heating above a certain critical temperature; i.e., they exhibit a phase transition at a specific temperature. As base materials, various thermoresponsive and biocompatible polymers based on di(ethylene glycol) methyl ether methacrylate (MEO2MA) and oligo(ethylene glycol) methyl ether methacrylate (OEGMA475, Mn = 475 g/mol) were synthesized via free-radical copolymerization. The thermoresponsive phase transition of the copolymers was observed in aqueous solution and in swollen cross-linked thin films. In addition, it was investigated to what extent selective protein binding to suitably functionalized copolymers influences the phase transition temperature. The thermoresponsive copolymers were immobilized on solid surfaces via photo-crosslinkable groups. The required light-sensitive crosslinker units were incorporated into the copolymer by means of the polymerizable benzophenone derivative 2-(4-benzoylphenoxy)ethyl methacrylate (BPEM). Thin copolymer films of about 100 nm thickness were spin-coated onto silicon wafers and subsequently cross-linked and immobilized on the surface by UV irradiation. The films are more stable the higher the crosslinker content and the higher the molar mass of the copolymers. In a washing step after crosslinking, for example, more uncross-linked copolymer is washed out of a film of moderate molar mass and low crosslinker content than out of a film of a higher-molar-mass copolymer with high crosslinker content.
The swellability of the polymer films was investigated by ellipsometry. It is larger the lower the crosslinker content of the copolymers. Films of thermoresponsive OEG copolymers show a volume phase transition of the LCST type. The thermoresponsive collapse of the films is completely reversible, and the collapse temperature can be tuned via the composition of the copolymers. To compare these properties with the well-characterized and currently probably most widely studied thermoresponsive polymer, poly(N-isopropylacrylamide) (PNIPAM), photo-crosslinked PNIPAM films were additionally prepared and likewise measured by ellipsometry. Compared with PNIPAM, the phase transition of the films of copolymers with oligo(ethylene glycol) side chains (OEG copolymers) proceeds over a broader temperature range. Light of wavelengths > 300 nm selectively excited the photosensitive benzophenone groups. When shorter wavelengths were used, the copolymer films cross-linked even in the absence of the light-sensitive benzophenone groups. This effect could be exploited for the controlled immobilization and crosslinking of the OEG copolymers. As a further immobilization method, attachment via amide bonds was investigated. For this purpose, OEG copolymers containing the carboxyl-bearing monomer 2-succinyloxyethyl methacrylate (MES) were spin-coated onto silicon wafers silanized with 3-aminopropyldimethylethoxysilane (APDMSi) and cross-linked with the oligomeric α,ω-diamine Jeffamine® ED-900. The crosslinking reaction proceeded without further additives simply by heating the samples. The resulting hydrogel films were stable and exhibited pH-responsive in addition to thermoresponsive behavior.
To investigate whether the phase transition temperature can be influenced by protein binding, a polymerizable biotin derivative, 2-biotinylaminoethyl methacrylate (BAEMA), was incorporated into the thermoresponsive copolymer. The influence of the biotin-binding protein avidin on the thermoresponsive behavior of the copolymer in solution was examined. Specific binding of avidin to the biotinylated copolymer shifted the transition temperature markedly to higher temperatures. Control experiments showed that this behavior is due to selective protein binding. Thermoresponsive OEG copolymers with photo-crosslinkable groups from BPEM and biotin groups from BAEMA were spin-coated onto gold and silicon surfaces and cross-linked by UV irradiation. The specific binding of avidin to the copolymer film was investigated by surface plasmon resonance and ellipsometry. The binding capacity of the films was greater the lower the crosslinker content, i.e., the larger the mesh size of the network. Avidin binding increased the swellability of the films. In highly swollen systems, however, multivalent binding of the tetravalent avidin caused additional crosslinking of the polymer network. This effect counteracts the increased swellability due to avidin binding and causes the polymer networks to shrink.
Solid surfaces are modified using photo-crosslinkable copolymers based on oligo(ethylene glycol) methacrylate (OEGMA) bearing 2-(4-benzoylphenoxy) ethyl methacrylate (BPEM) as a photosensitive crosslinking unit. Thin films of about 100 nm are formed by spin-coating these a priori highly biocompatible copolymers onto silicon substrates. Subsequent UV-irradiation assures immobilization and crosslinking of the hydrogel films. Their stability is controlled by the number of crosslinker units per chain and the molar mass of the copolymers. The swelling of the hydrogel layers, as investigated by ellipsometry, can be tuned by the crosslinker content in the copolymer. If films are built from the ternary copolymers of OEGMA, BPEM and 2-(2-methoxyethoxy) ethyl methacrylate (MEO(2)MA), the hydrogel films exhibit a swelling/deswelling transition of the lower critical solution temperature (LCST) type. The observed thermally induced hydrogel collapse is fully reversible and the onset temperature of the transition can be tuned at will by the copolymer composition. Different from analogously prepared thermo-responsive hydrogel films of photocrosslinked poly(N-isopropylacrylamide), the swelling-deswelling transition occurs more gradually, but shows no hysteresis.
Background: In behavioural tests of sentence comprehension in aphasia, correct and incorrect responses are often randomly distributed. Such a pattern of chance performance is a typical trait of Broca's aphasia, but can be found in other aphasic syndromes as well. Many researchers have argued that chance behaviour is the result of a guessing strategy, which is adopted in the face of a syntactic breakdown in sentence processing. Aims: Capitalising on new evidence from recent studies investigating online sentence comprehension in aphasia using the visual world paradigm, the aim of this paper is to review the concept of chance performance as a reflection of a syntactic impairment in sentence processing and to re-examine the conventional interpretation of chance performance as a guessing behaviour. Main Contribution: Based on a review of recent evidence from visual world paradigm studies, we argue that the assumption of chance performance equalling guessing is not necessarily compatible with actual real-time parsing procedures in people with aphasia. We propose a reinterpretation of the concept of chance performance by assuming that there are two distinct processing mechanisms underlying sentence comprehension in aphasia. Correct responses are always the result of normal-like parsing mechanisms, even in those cases where the overall performance pattern is at chance. Incorrect responses, on the other hand, are the result of intermittent deficiencies of the parser. Hence the random guessing behaviour that persons with aphasia often display does not necessarily reflect a syntactic breakdown in sentence comprehension and a random selection between alternatives. Instead, it should be regarded as the result of temporarily deficient parsing procedures within otherwise normal-like comprehension routines.
Conclusion: Our conclusion is that the consideration of behavioural offline data alone may not be sufficient to interpret a performance in language tests and subsequently draw theoretical conclusions about language impairments. Rather it is important to call on additional data from online studies that look at language processing in real time in order to gain a comprehensive picture about syntactic comprehension abilities of people with aphasia and possible underlying deficits.
Two optically obscured Wolf-Rayet (WR) stars have been recently discovered by means of their infrared (IR) circumstellar shells, which show signatures of interaction with each other. Following the systematics of the WR star catalogues, these stars obtain the names WR 120bb and WR 120bc. In this paper, we present and analyse new near-IR, J-, H- and K-band spectra using the Potsdam Wolf-Rayet model atmosphere code. For that purpose, the atomic data base of the code has been extended in order to include all significant lines in the near-IR bands.
The spectra of both stars are classified as WN9h. As their spectra are very similar the parameters that we obtained by the spectral analyses hardly differ. Despite their late spectral subtype, we found relatively high stellar temperatures of 63 kK. The wind composition is dominated by helium, while hydrogen is depleted to 25 per cent by mass.
Because of their location in the Scutum-Centaurus Arm, WR 120bb and WR 120bc appear highly reddened, A(Ks) approximate to 2 mag. We adopt a common distance of 5.8 kpc to both stars, which complies with the typical absolute K-band magnitude for the WN9h subtype of -6.5 mag, is consistent with their observed extinction based on comparison with other massive stars in the region, and allows for the possibility that their shells are interacting with each other. This leads to luminosities of log(L/L-circle dot) = 5.66 and 5.54 for WR 120bb and WR 120bc, with large uncertainties due to the adopted distance.
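The adopted distance, extinction and absolute magnitude quoted above are linked by the standard distance-modulus relation, which can be checked with a few lines of arithmetic. This is an illustrative sketch only; the function name is ours, and the input values (M_K = -6.5 mag, d = 5.8 kpc, A_Ks ≈ 2 mag) are taken from the abstract above.

```python
import math

def apparent_k_magnitude(abs_mag, distance_pc, extinction):
    """Apparent magnitude from the distance modulus: m = M + 5*log10(d / 10 pc) + A."""
    return abs_mag + 5 * math.log10(distance_pc / 10.0) + extinction

# Values quoted in the abstract: M_K = -6.5 mag, d = 5.8 kpc, A_Ks ~ 2 mag.
m_k = apparent_k_magnitude(-6.5, 5800.0, 2.0)
print(round(m_k, 2))  # apparent K-band magnitude implied by these assumptions
```

The result (m_K ≈ 9.3 mag) is the apparent K-band brightness such a star would show at the adopted distance and reddening, which is the kind of consistency check underlying the distance adoption described above.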
The values of the luminosities of WR 120bb and WR 120bc imply that the immediate precursors of both stars were red supergiants (RSG). This implies in turn that the circumstellar shells associated with WR 120bb and WR 120bc were formed by interaction between the WR wind and the dense material shed during the preceding RSG phase.
Sightseeing in the poorest quarters of southern hemisphere cities has been observed in Cape Town, Rio de Janeiro, Mumbai and many other cities. The increasing global interest in touring poor urban environments is accompanied by a strong morally charged debate; so far, this debate has not been critically addressed. This article does not ask whether slum tourism is good or bad, but instead takes a second-order perspective: it investigates under what conditions the social praxis of slum tourism is considered good or bad, by processing information on esteem or dis-esteem among tourists and tour providers. Special attention is given to the relation between morality and place, and the thesis posited is that the moral charging of slum tourism depends on the presence of specific preconceived notions of slums and poverty. This is clarified by means of references to two empirical case studies carried out in (1) Cape Town in 2007 and 2008 and (2) Mumbai in 2009.
When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features. This experiment examined the hypothesis that neutral contextual features of a virtual environment become associated with aggressive meaning and acquire the function of primes for aggressive cognitions. Seventy-six participants were assigned to one of two violent video game conditions that varied in context (ship vs. city environment) or a control condition. Afterwards, they completed a Lexical Decision Task to measure the accessibility of aggressive cognitions in which they were primed either with ship-related or city-related words. As predicted, participants who had played the violent game in the ship environment had shorter reaction times for aggressive words following the ship primes than the city primes, whereas participants in the city condition responded faster to the aggressive words following the city primes compared to the ship primes. No parallel effect was observed for the non-aggressive targets. The findings indicate that the associations between violent and neutral cognitions learned during violent game play facilitate the accessibility of aggressive cognitions.
Sphingosine-1-phosphate (S1P) is a cellular signalling lipid generated by sphingosine kinase-1 (SPHK1). The aim of the study was to investigate whether the activated coagulation factor-X (FXa) regulates SPHK1 transcription and the formation of S1P and subsequent mitogenesis and migration of human vascular smooth muscle cells (SMC).
FXa induced a time- (36 h) and concentration-dependent (330 nmol/L) increase of SPHK1 mRNA and protein expression in human aortic SMC, resulting in an increased synthesis of S1P. FXa-stimulated transcription of SPHK1 was mediated by the protease-activated receptor-1 (PAR-1) and PAR-2. In human carotid artery plaques, expression of SPHK1 was observed at SMC-rich sites and was co-localized with intraplaque FX/FXa content. FXa-induced SPHK1 transcription was attenuated by inhibitors of Rho kinase (Y27632) and by protein kinase C (PKC) isoforms (GF109203X). In addition, FXa rapidly induced the activation of the small GTPase Rho A. Inhibition of signalling pathways which regulate SPHK1 expression, inhibition of its activity or siRNA-mediated SPHK1 knockdown attenuated the mitogenic and chemotactic response of human SMC to FXa.
These data suggest that FXa induces SPHK1 expression and increases S1P formation independent of thrombin and that this involves the activation of Rho A and PKC signalling. In addition to its key function in coagulation, this direct effect of FXa on human SMC may increase cell proliferation and migration at sites of vessel injury and thereby contribute to the progression of vascular lesions.
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles to allow easy access and reuse for data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings, some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For the topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approach are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion.
Also, we illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from a large number of sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. This concept alignment method must also execute very fast in order to serve as a preprocessing step for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input through a comparison of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw relation conclusions, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of true partnership. At the core of the conflict is a struggle for resource rents between energy producing, energy consuming and transit countries. Supposedly secondary aspects, however, are also of great importance. They comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies by cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In so far, it opts for a different globalisation approach, refusing the role of mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue its own goals, which might differ from one's own interests. The question is: how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be the preferable solution.
While sea level rise is one of the most likely consequences of climate change, the costs it will cause remain highly uncertain. Based on a block-maxima approach, we provide a stochastic framework to estimate the increase of expected damages with sea level rise as well as with meteorological changes and demonstrate the application to two case studies. In addition, the uncertainty of the damage estimations due to the stochastic nature of extreme events is studied. Starting with the probability distribution of extreme flood levels, we calculate the distribution of implied damages in a specific region employing stage-damage functions. Universal relations of the expected damages and their standard deviation, which demonstrate the importance of the shape of the damage function, are provided. We also calculate how flood protection reduces the damages, leading to a more complex picture in which the extreme value behavior plays a fundamental role. Citation: Boettle, M., D. Rybski, and J. P. Kropp (2013), How changing sea level extremes and protection measures alter coastal flood damages, Water Resour. Res., 49, 1199-1210, doi: 10.1002/wrcr.20108.
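The framework summarized above (a probability distribution of extreme flood levels combined with a stage-damage function and a protection level) can be illustrated numerically. The following sketch is not the authors' implementation: the choice of a Gumbel distribution for annual flood maxima, the power-law damage function, and all parameter values are illustrative assumptions of ours.

```python
import math

def gumbel_pdf(x, mu, beta):
    """Density of a Gumbel (extreme-value type I) distribution -- an assumed
    stand-in for the distribution of annual maximum flood levels."""
    z = (x - mu) / beta
    return math.exp(-(z + math.exp(-z))) / beta

def expected_damage(mu, beta, omega, c=1.0, gamma=1.5, upper=20.0, n=20000):
    """E[D] = integral of d(x) * f(x) over flood levels x above the protection
    height omega, with an assumed stage-damage function d(x) = c * x**gamma,
    evaluated by the trapezoidal rule."""
    h = (upper - omega) / n
    total = 0.0
    for i in range(n + 1):
        x = omega + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * c * x**gamma * gumbel_pdf(x, mu, beta)
    return total * h

# Sea level rise can be mimicked by shifting the location parameter mu;
# expected damages then grow, while raising the protection height omega
# reduces them -- the qualitative behavior described in the abstract.
today = expected_damage(mu=2.0, beta=0.5, omega=3.0)
risen = expected_damage(mu=2.5, beta=0.5, omega=3.0)
print(today < risen)
```

The two closing calls show the qualitative effects discussed in the abstract: shifting the extreme-value distribution upward (sea level rise) increases expected damages, and the protection height enters as the lower integration bound.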
This study follows up on a previous downscaling intercomparison for present climate. Using a larger set of eight methods the authors downscale atmospheric fields representing present (1981-2000) and future (2046-65) conditions, as simulated by six global climate models following three emission scenarios. Local extremes were studied at 20 locations in British Columbia as measured by the same set of 27 indices, ClimDEX, as in the precursor study. Present and future simulations give 2 x 3 x 6 x 8 x 20 x 27 = 155 520 index climatologies whose analysis in terms of mean change and variation is the purpose of this study. The mean change generally reinforces what is to be expected in a warmer climate: that extreme cold events become less frequent and extreme warm events become more frequent, and that there are signs of more frequent precipitation extremes. There is considerable variation, however, about this tendency, caused by the influence of scenario, climate model, downscaling method, and location. This is analyzed using standard statistical techniques such as analysis of variance and multidimensional scaling, along with an assessment of the influence of each modeling component on the overall variation of the simulated change. It is found that downscaling generally has the strongest influence, followed by climate model; location and scenario have only a minor influence. The influence of downscaling could be traced back in part to various issues related to the methods, such as the quality of simulated variability or the dependence on predictors. Using only methods validated in the precursor study considerably reduced the influence of downscaling, underpinning the general need for method verification.
Chillen gestern
(2013)
Fluid flow in low-permeable carbonate rocks depends on the density of fractures, their interconnectivity and on the formation of fault damage zones. The present-day stress field influences the aperture, and hence the transmissivity, of fractures, whereas paleostress fields are responsible for the formation of faults and fractures. In low-permeable reservoir rocks, fault zones belong to the major targets. Before drilling, an estimate of the reservoir productivity of wells drilled into the damage zone of faults is therefore required. Due to limitations in available data, a characterization of such reservoirs usually relies on the use of numerical techniques. The requirements of these mathematical models encompass a full integration of the actual fault geometry, comprising the dimension of the fault damage zone and of the fault core, and the individual population with properties of fault zones in the hanging and foot wall and the host rock. The paper presents both the technical approach to develop such a model and the property definition of heterogeneous fault zones and host rock with respect to the current stress field. The case study describes a deep geothermal reservoir in the western central Molasse Basin in southern Bavaria, Germany. Results from numerical simulations indicate that well productivity can be enhanced along compressional fault zones if lateral interconnectivity of fractures is provided by crossing synthetic and antithetic fractures. The model allows a deeper understanding of production tests and reservoir properties of faulted rocks.
This paper focuses on estimating the magnitude of any potential weight discrimination by examining whether obese job applicants in Germany get treated or behave differently from non-obese applicants. Based on two waves of rich survey data from the IZA Evaluation dataset, which includes measures that control for education, demographic characteristics, labor market history, psychological factors and health, we estimate differences in job search behavior and labor market outcomes between obese/overweight and normal weight individuals. Unlike other observational studies which are generally based on obese and non-obese individuals who might already be at different points in the job ladder (e.g., household surveys), in our data, individuals are newly unemployed and all start from the same point. The only subgroup we find in our data experiencing any possible form of negative labor market outcomes is obese women. Despite making more job applications and engaging more in job training programs, we find some indications that they experienced worse (or at best similar) employment outcomes than normal weight women. Obese women who found a job also had significantly lower wages than normal weight women.
Benefit duration, unemployment duration and job match quality: a regression-discontinuity approach
(2013)
We use a sharp discontinuity in the maximum duration of benefit entitlement to identify the effect of extended benefit duration on unemployment duration and post-unemployment outcomes (employment stability and re-employment wages). We address dynamic selection, which may arise even under an initially random assignment to treatment, estimating a bivariate discrete-time hazard model jointly with a wage equation and correlated unobservables. Owing to the non-stationarity of job search behavior, we find heterogeneous effects of extended benefit duration on the re-employment hazard and on job match quality. Our results suggest that the unemployed who find a job close to and after benefit exhaustion experience less stable employment patterns and receive lower re-employment wages compared to their counterparts who receive extended benefits and exit unemployment in the same period. These results are found to be significant for men but not for women.
Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, software development is a collaborative process, so the development environment needs to offer collaboration facilities. This report describes extensions to the web-based development environment Lively Webwerkstatt that enable its use in a mobile environment. The extensions comprise collaboration mechanisms and user interface adaptations, as well as event processing and performance measurement on mobile devices.
A new sedimentary sequence from Lago di Venere on Pantelleria Island, located in the Strait of Sicily between Tunisia and Sicily, was recovered. The lake lies in the coastal infra-Mediterranean vegetation belt at 2 m a.s.l. Pollen, charcoal and sedimentological analyses are used to explore linkages among vegetation, fire and climate at a decadal scale over the past 1200 years. A dry period from ad 800 to 1000 that corresponds to the 'Medieval Warm Period' (MWP) is inferred from sedimentological analysis. The high carbonate content recorded in this period suggests a dry phase, when the evaporation/precipitation ratio was high. During this period the island was dominated by thermophilous and drought-tolerant taxa such as Quercus ilex, Olea, Pistacia and Juniperus. A marked shift in the sediment properties is recorded at ad 1000, when carbonate content became very low, suggesting wetter conditions until ad 1850-1900. Broadly, this period coincides with the 'Little Ice Age' (LIA), which was characterized by wetter and colder conditions in Europe. During this time rather mesic conifers (i.e. Pinus pinaster), shrubs and herbs (e.g. Erica arborea and Selaginella denticulata) expanded, whereas more drought-adapted species (e.g. Q. ilex) declined. Charcoal data suggest enhanced fire activity during the LIA, probably as a consequence of anthropogenic burning and/or more flammable fuel (e.g. resinous Pinus biomass). The last century was characterized by a shift to high carbonate content, indicating a change towards drier conditions, and re-expansion of Q. ilex and Olea. The post-LIA warming is in agreement with historical documents and meteorological time series. Vegetation dynamics were co-determined by agricultural activities on the island. Anthropogenic indicators (e.g. Cerealia-type, Sporormiella) reveal the importance of crops and grazing on the island.
Our pollen data suggest that extensive logging caused the local extinction of deciduous Quercus pubescens around ad 1750.
Flood loss modeling is an important component of flood risk assessments. Traditionally, stage-damage functions are used to estimate direct monetary damage to buildings. Although such functions are known to be governed by large uncertainties, they are commonly applied, even in different geographical regions, without further validation, mainly due to the lack of real damage data. Until now, little research has been done to investigate the applicability and transferability of such damage models to other regions. In this study, the last severe flood event in the Austrian Lech Valley in 2005 was simulated to test the performance of various damage functions from different geographical regions in Central Europe for the residential sector. In addition to common stage-damage curves, new functions were derived from empirical flood loss data collected in the aftermath of recent flood events in neighboring Germany. Furthermore, a multi-parameter flood loss model for the residential sector was adapted to the study area and also evaluated with official damage data. The analysis reveals that flood loss functions derived from related, more similar regions perform considerably better than those from more heterogeneous data sets of different regions and flood events. While the former estimate the observed damage well, the latter clearly overestimate the reported loss. To illustrate the effect of model choice on the resulting uncertainty of damage estimates, the current flood risk for residential areas was calculated. For extreme events such as the 300 yr flood, for example, losses to residential buildings vary between the highest and lowest estimates by a factor of 18, compared with a factor of 2.3 for properly validated models. Even though the risk analysis was only performed for residential areas, our results clearly show that transferring loss models to other geographical regions without validation can be critical.
Therefore, we conclude that, as long as no model validation is possible, loss models should at least be selected or derived from related regions with similar flood and building characteristics. To further increase the general reliability of flood loss assessment in the future, more, and more comprehensive, loss data are needed for model development and validation.
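The stage-damage approach discussed above can be sketched in a few lines: damage is estimated as a depth-dependent fraction of the building value, interpolated between supporting points of a damage curve. The depth/ratio pairs below are invented for illustration and are not the functions evaluated in the study:

```python
# Illustrative stage-damage function sketch (assumed curve values,
# not the study's actual damage functions).
import numpy as np

# Supporting points: inundation depth (m) -> damage ratio (0..1)
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
RATIOS = np.array([0.0, 0.08, 0.15, 0.30, 0.45, 0.70])

def stage_damage(depth_m: float, building_value: float) -> float:
    """Estimate direct monetary damage to a building by linearly
    interpolating the damage ratio along the stage-damage curve."""
    ratio = np.interp(depth_m, DEPTHS, RATIOS)
    return float(ratio * building_value)

# Example: a building worth 200,000 at 1.5 m inundation depth
damage = stage_damage(1.5, 200_000.0)
```

Transferring such a curve to another region implicitly assumes similar building types and flood characteristics, which is exactly the assumption the study shows can fail badly without validation.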
In the recent past, the Alpine Lech valley (Austria) experienced three damaging flood events within six years despite the various structural flood protection measures in place. For improved flood risk management, the analysis of flood damage potentials is a crucial component. Since the expansion of built-up areas and their associated values is seen as one of the main drivers of rising flood losses, the goal of this study is to analyze the spatial development of the assets at risk, particularly of residential areas, due to land use changes over a historic period (since 1971) and up to possible shifts in the future (until 2030). The analysis revealed that the alpine study area has undergone remarkable land use changes such as urbanization and the decline of agriculturally used grassland areas. Although the major agglomeration of residential areas inside the flood plains took place before 1971, a steady growth of values at risk can still be observed. This trend continues into the future, but depends very much on the assumed land use scenario and the underlying land use policy. Between 1971 and 2006, the annual growth rate of the damage potential of residential areas amounted to 1.1 % ('constant values,' i.e., asset values at constant prices of reference year 2006) or 3.0 % ('adjusted values,' i.e., asset values adjusted by GDP increase at constant prices of reference year 2006) for three flood scenarios. For the projected time span between 2006 and 2030, a further annual increase of 1.0 % ('constant values') or even 4.2 % ('adjusted values') is possible when the most extreme urbanization scenario, 'Overall Growth,' is considered. Although socio-economic development is regarded as the main driver of increasing flood losses, our analysis shows that settlement development does not preferentially take place within flood-prone areas.
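Annual growth rates of damage potential, as quoted above, follow from compound growth between a start and an end valuation. A minimal sketch (the valuations below are invented placeholders, not the study's figures):

```python
# Compound annual growth rate (CAGR) of an asset valuation.
def annual_growth_rate(value_start: float, value_end: float, years: int) -> float:
    """Return the constant annual rate that takes value_start to
    value_end over the given number of years."""
    return (value_end / value_start) ** (1.0 / years) - 1.0

# Illustrative check: 1.1 % per year over the 35 years from 1971 to 2006
# compounds to roughly a 47 % total increase.
factor = (1.0 + 0.011) ** 35
```

This is why seemingly small annual rates (1.1 % vs. 3.0 %) translate into very different cumulative growth of the values at risk over a multi-decade horizon.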
Flood risk is expected to increase in many regions of the world in the coming decades, with rising flood losses as a consequence. First and foremost, this can be attributed to the expansion of settlement and industrial areas into flood plains and the resulting accumulation of assets. For a future-oriented and more robust flood risk management, it is therefore important not only to estimate the potential impacts of climate change on the flood hazard, but also to analyze the spatio-temporal dynamics of flood exposure due to land use changes. In this study, carried out in the Alpine Lech Valley in Tyrol (Austria), various land use scenarios until 2030 were developed by means of a spatially explicit land use model, national spatial planning scenarios and current spatial policies. Combining the simulated land use patterns with different inundation scenarios enabled us to derive statements about possible future changes in flood-exposed built-up areas. The results indicate that the potential assets at risk depend very much on the selected socioeconomic scenario. The important conditions affecting the potential assets at risk that differ between the scenarios are the demand for new built-up areas as well as the types of conversions allowed to provide the necessary areas at certain locations. The range of potential changes in flood-exposed residential areas varies from no further change in the most moderate scenario, 'Overall Risk,' to a 119 % increase in the most extreme scenario, 'Overall Growth' (under current spatial policy), and a 159 % increase when disregarding current building restrictions.
A total of 271 pollen records were selected from a large collection of both raw and digitized pollen spectra from eastern continental Asia (70-135 degrees E and 18-55 degrees N). Following pollen percentage recalculations, taxonomic homogenization, and age-depth model revision, the pollen spectra were interpolated at a 500-year resolution, and a taxonomically harmonized and temporally standardized fossil pollen dataset was established with 226 pollen taxa, covering the last 22 cal ka. Of the 271 pollen records, 85% were published since 1990, with reliable chronologies and high temporal resolutions; of these, 50% have raw data with complete pollen assemblages, ensuring the quality of this dataset. The pollen records available for each 500-year time slice are well distributed over all main vegetation types and climatic zones of the study area, making their pollen spectra suitable for paleovegetation and paleoclimate research. Such a dataset can be used as an example for the development of similar datasets for other regions of the world.
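The temporal standardization step described above, resampling irregularly dated pollen records onto a common 500-year grid, can be sketched as follows. This is an assumed workflow for illustration (simple linear interpolation on invented sample ages), not the authors' processing code:

```python
# Minimal sketch: resample an irregularly dated pollen-percentage record
# onto a regular 500-year age grid (assumed workflow, illustrative data).
import numpy as np

def interpolate_record(ages, percentages, step=500):
    """Linearly interpolate a pollen record onto a regular age grid
    (ages in cal yr BP, percentages in %)."""
    ages = np.asarray(ages, dtype=float)
    percentages = np.asarray(percentages, dtype=float)
    grid = np.arange(0.0, ages.max() + 1, step)
    return grid, np.interp(grid, ages, percentages)

# Example: a hypothetical Quercus pollen record with irregular sample ages
grid, vals = interpolate_record([120, 640, 1180, 1890], [12.0, 18.5, 22.0, 15.0])
```

Interpolating every record onto the same time slices is what makes the 271 records directly comparable across sites for each 500-year window.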