We report results from TeV gamma-ray observations of the microquasar Cygnus X-3. The observations were made with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) over a time period from 2007 June 11 to 2011 November 28. VERITAS is most sensitive to gamma rays at energies between 85 GeV and 30 TeV. The effective exposure time amounts to a total of about 44 hr, with the observations covering six distinct radio/X-ray states of the object. No significant TeV gamma-ray emission was detected in any of the states, nor with all observations combined. The lack of a positive signal, especially in the states where GeV gamma rays were detected, places constraints on TeV gamma-ray production in Cygnus X-3. We discuss the implications of the results.
The Magellanic Stream (MS) is a massive and extended tail of multi-phase gas stripped out of the Magellanic Clouds and interacting with the Galactic halo. In this first paper of an ongoing program to study the Stream in absorption, we present a chemical abundance analysis based on HST/COS and VLT/UVES spectra of four active galactic nuclei (RBS 144, NGC 7714, PHL 2525, and HE 0056-3622) lying behind the MS. Two of these sightlines yield good MS metallicity measurements: toward RBS 144 we measure a low MS metallicity of [S/H] = [S II/H I] = -1.13 +/- 0.16, while toward NGC 7714 we measure [O/H] = [O I/H I] = -1.24 +/- 0.20. Taken together with the published MS metallicity toward NGC 7469, these measurements indicate a uniform abundance of ~0.1 solar along the main body of the Stream. This provides strong support to a scenario in which most of the Stream was tidally stripped from the SMC ~1.5-2.5 Gyr ago (a time at which the SMC had a metallicity of ~0.1 solar), as predicted by several N-body simulations. However, in Paper II of this series, we report a much higher metallicity (S/H = 0.5 solar) in the inner Stream toward Fairall 9, a direction sampling a filament of the MS that Nidever et al. claim can be traced kinematically to the Large Magellanic Cloud, not the Small Magellanic Cloud. This shows that the bifurcation of the Stream is evident in its metal enrichment, as well as in its spatial extent and kinematics. Finally, we measure a similarly low metallicity of [O/H] = [O I/H I] = -1.03 +/- 0.18 in the v_LSR = 150 km s^-1 cloud toward HE 0056-3622, which belongs to a population of anomalous velocity clouds near the south Galactic pole. This suggests these clouds are associated with the Stream or with more distant structures (possibly the Sculptor Group, which lies in this direction at the same velocity), rather than tracing foreground Galactic material.
The formation of unmagnetized electrostatic shock-like structures with a high Mach number is examined with one- and two-dimensional particle-in-cell (PIC) simulations. The structures are generated through the collision of two identical plasma clouds, which consist of equally hot electrons and ions with a mass ratio of 250. The Mach number of the collision speed with respect to the initial ion acoustic speed of the plasma is set to 4.6. This high Mach number delays the formation of such structures by tens of inverse ion plasma frequencies. A pair of stable shock-like structures is observed after this time in the 1D simulation, which gradually evolves into electrostatic shocks. The ion acoustic instability, which can develop in the 2D simulation but not in the 1D one, competes with the nonlinear process that gives rise to these structures. The oblique ion acoustic waves fragment the structures' electric field. The transition layer, across which the bulk of the ions change their speed, widens, and the speed change is reduced. Double layer-shock hybrid structures develop.
We report the discovery of an unidentified, extended source of very-high-energy gamma-ray emission, VER J2019+407, within the radio shell of the supernova remnant SNR G78.2+2.1, using 21.4 hr of data taken by the VERITAS gamma-ray observatory in 2009. These data confirm the preliminary indications of gamma-ray emission previously seen in a two-year (2007-2009) blind survey of the Cygnus region by VERITAS. VER J2019+407, which is detected at a post-trials significance of 7.5 standard deviations in the 2009 data, is localized to the northwestern rim of the remnant in a region of enhanced radio and X-ray emission. It has an intrinsic extent of 0.23 deg +/- 0.03 deg (stat) +0.04/-0.02 deg (sys), and its spectrum is well characterized by a differential power law (dN/dE = N_0 x (E/TeV)^-Gamma) with a photon index of Gamma = 2.37 +/- 0.14 (stat) +/- 0.20 (sys) and a flux normalization of N_0 = (1.5 +/- 0.2 (stat) +/- 0.4 (sys)) x 10^-12 photon TeV^-1 cm^-2 s^-1. This yields an integral flux of (5.2 +/- 0.8 (stat) +/- 1.4 (sys)) x 10^-12 photon cm^-2 s^-1 above 320 GeV, corresponding to 3.7% of the Crab Nebula flux. We consider the relationship of the TeV gamma-ray emission with the GeV gamma-ray emission seen from SNR G78.2+2.1 as well as that seen from a nearby cocoon of freshly accelerated cosmic rays. Multiple scenarios are considered as possible origins for the TeV gamma-ray emission, including hadronic particle acceleration at the SNR shock.
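As a consistency check, the integral flux above 320 GeV follows analytically from the fitted power law; a short sketch using only the values quoted above:

```python
# Integral of dN/dE = N0 * (E / 1 TeV)**(-Gamma) from E_min to infinity:
# F(>E_min) = N0 / (Gamma - 1) * E_min**(1 - Gamma), with E_min in TeV
# (valid for Gamma > 1).

N0 = 1.5e-12    # photon TeV^-1 cm^-2 s^-1, flux normalization from the fit
Gamma = 2.37    # photon index from the fit
E_min = 0.32    # TeV (the 320 GeV threshold)

F = N0 / (Gamma - 1) * E_min ** (1 - Gamma)
print(f"integral flux above 320 GeV: {F:.1e} photon cm^-2 s^-1")
```

The result, about 5.2 x 10^-12 photon cm^-2 s^-1, reproduces the integral flux quoted in the abstract within rounding.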
We report on the detection of a very rapid TeV gamma-ray flare from BL Lacertae on 2011 June 28 with the Very Energetic Radiation Imaging Telescope Array System (VERITAS). The flaring activity was observed during a 34.6 minute exposure, when the integral flux above 200 GeV reached (3.4 +/- 0.6) x 10^-6 photons m^-2 s^-1, roughly 125% of the Crab Nebula flux measured by VERITAS. The light curve indicates that the observations missed the rising phase of the flare but covered a significant portion of the decaying phase. The exponential decay time was determined to be 13 +/- 4 minutes, making it one of the most rapid gamma-ray flares seen from a TeV blazar. The gamma-ray spectrum of BL Lacertae during the flare was soft, with a photon index of 3.6 +/- 0.4, which is in agreement with the measurement made previously by MAGIC in a lower flaring state. Contemporaneous radio observations of the source with the Very Long Baseline Array revealed the emergence of a new, superluminal component from the core around the time of the TeV gamma-ray flare, accompanied by changes in the optical polarization angle. Changes in flux also appear to have occurred at optical, UV, and GeV gamma-ray wavelengths at the time of the flare, although they are difficult to quantify precisely due to sparse coverage. A strong flare was seen at radio wavelengths roughly four months later, which might be related to the gamma-ray flaring activities. We discuss the implications of these multiwavelength results.
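To illustrate what a 13-minute exponential decay time implies over the 34.6-minute exposure, a minimal sketch (values taken from the text):

```python
import math

tau = 13.0       # min, fitted exponential decay time of the flare
exposure = 34.6  # min, length of the VERITAS exposure

halving_time = tau * math.log(2)            # time for the flux to halve
frac_remaining = math.exp(-exposure / tau)  # flux fraction left after the exposure

print(f"flux halves every {halving_time:.1f} min; "
      f"{frac_remaining:.0%} remains after {exposure} min")
```

In other words, the flux halves roughly every 9 minutes, and by the end of the exposure only about 7% of the initial flare flux would remain.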
We present a multi-wavelength study of the Magellanic Stream (MS), a massive gaseous structure in the Local Group that is believed to represent material stripped from the Magellanic Clouds. We use ultraviolet, optical, and radio data obtained with HST/COS, VLT/UVES, FUSE, GASS, and ATCA to study metal abundances and physical conditions in the Stream toward the quasar Fairall 9. Line absorption in the MS from a large number of metal ions and from molecular hydrogen is detected in up to seven absorption components, indicating the presence of multi-phase gas. From the analysis of unsaturated S II absorption, in combination with a detailed photoionization model, we obtain a surprisingly high alpha abundance in the Stream toward Fairall 9 of [S/H] = -0.30 +/- 0.04 (0.50 solar). This value is five times higher than what is found along other MS sightlines based on similar COS/UVES data sets. In contrast, the measured nitrogen abundance is found to be substantially lower ([N/H] = -1.15 +/- 0.06), implying a very low [N/alpha] ratio of -0.85 dex. The substantial differences in the chemical composition of the MS toward Fairall 9 compared to other sightlines point toward a complex enrichment history of the Stream. We favor a scenario in which the gas toward Fairall 9 was locally enriched with alpha elements by massive stars and then was separated from the Magellanic Clouds before the delayed nitrogen enrichment from intermediate-mass stars could set in. Our results support (but do not require) the idea that there is a metal-enriched filament in the Stream toward Fairall 9 that originates in the LMC.
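The bracketed abundances are base-10 logarithms of the ratio to solar, so the linear values quoted in the abstract can be checked directly:

```python
# [X/H] = log10( (X/H) / (X/H)_solar ); values from the text.
S_H = -0.30   # [S/H] toward Fairall 9
N_H = -1.15   # [N/H] toward Fairall 9

S_linear = 10 ** S_H    # linear sulfur abundance in solar units
N_alpha = N_H - S_H     # [N/alpha] ratio in dex, using S as the alpha tracer

print(f"S abundance: {S_linear:.2f} solar; [N/alpha] = {N_alpha:.2f} dex")
```

Both numbers reproduce the 0.50 solar and -0.85 dex values quoted above.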
Magnetic field generation in a jet-sheath plasma via the kinetic Kelvin-Helmholtz instability
(2013)
We have investigated the generation of magnetic fields associated with velocity shear between an unmagnetized relativistic jet and an unmagnetized sheath plasma. We have examined the strong magnetic fields generated by kinetic shear (Kelvin-Helmholtz) instabilities. Compared to the previous counter-streaming studies performed by Alves et al. (2012), the structure of the kinetic Kelvin-Helmholtz instability (KKHI) of our jet-sheath configuration is slightly different, even for the global evolution of the strong transverse magnetic field. In our simulations the major components of the growing modes are the electric field E_z, perpendicular to the flow boundary, and the magnetic field B_y, transverse to the flow direction. After the B_y component is excited, an induced electric field E_x, parallel to the flow direction, becomes significant. However, the other field components remain small. We find that the structure and growth rate of the KKHI with mass ratios m_i/m_e = 1836 and m_i/m_e = 20 are similar. In our simulations the nonlinear stage is not as clear as in the counter-streaming cases. The growth rate for a mildly relativistic jet case (gamma_j = 1.5) is larger than for a relativistic jet case (gamma_j = 15).
Nonrelativistic electrostatic unmagnetized shocks are frequently observed in laboratory plasmas, and they are likely to exist in astrophysical plasmas. Their maximum speed, expressed in units of the ion acoustic speed far upstream of the shock, depends only on the electron-to-ion temperature ratio if binary collisions are absent. The formation and evolution of such shocks are examined here for a wide range of shock speeds with particle-in-cell simulations. The initial temperatures of the electrons and the 400 times heavier ions are equal. Shocks form on electron time scales at Mach numbers between 1.7 and 2.2. Shocks with Mach numbers up to 2.5 form after tens of inverse ion plasma frequencies. The density of the shock-reflected ion beam increases, and the number of ions crossing the shock thus decreases, with an increasing Mach number, causing a slower expansion of the downstream region in its rest frame. The interval occupied by this ion beam is at a positive potential relative to the far upstream. This potential pre-heats the electrons ahead of the shock even in the absence of beam instabilities and decouples the electron temperature in the foreshock ahead of the shock from that in the far upstream plasma. The effective Mach number of the shock is reduced by this electron heating. This effect can potentially stabilize nonrelativistic electrostatic shocks moving as fast as supernova remnant shocks.
Aims. We analyze the emission in the 0.3-30 GeV energy range of gamma-ray bursts detected with the Fermi Gamma-ray Space Telescope. We concentrate on bursts that were previously only detected with the Gamma-Ray Burst Monitor in the keV energy range. These bursts will then be compared to the bursts that were individually detected with the Large Area Telescope at higher energies.
Methods. To estimate the emission of faint GRBs, we used nonstandard analysis methods and summed over many GRBs to find an average signal that is significantly above the background level. We used a subsample of 99 GRBs listed in the Burst Catalog from the first two years of observation.
Results. Although most are not individually detectable, the bursts not detected by the Large Area Telescope on average emit a significant flux in the energy range from 0.3 GeV to 30 GeV, but their cumulative energy fluence is only 8% of that of all GRBs. Likewise, their GeV-to-MeV flux ratio is lower and their GeV-band spectra are softer. We confirm that the GeV-band emission lasts much longer than the emission found in the keV energy range. The average all-sky energy flux from GRBs in the GeV band is 6.4 x 10^-4 erg cm^-2 yr^-1, or only ~4% of the energy flux of cosmic rays above the ankle at 10^18.6 eV.
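The ~4% comparison can be inverted to recover the implied cosmic-ray energy flux above the ankle; a back-of-the-envelope sketch using only the two numbers quoted above:

```python
grb_flux = 6.4e-4  # erg cm^-2 yr^-1, average all-sky GRB GeV-band energy flux
ratio = 0.04       # GRB flux as a fraction of the cosmic-ray flux above the ankle

cr_flux = grb_flux / ratio  # implied cosmic-ray energy flux above 10^18.6 eV
print(f"implied CR energy flux above the ankle: {cr_flux:.1e} erg cm^-2 yr^-1")
```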
We investigate the temporal and spectral correlations between flux and anisotropy fluctuations of TeV-band cosmic rays in light of recent data taken with IceCube. We find that for a conventional distribution of cosmic-ray sources, the dipole anisotropy is higher than observed, even if source discreteness is taken into account. Moreover, even for a shallow distribution of galactic cosmic-ray sources and a reacceleration model, fluctuations arising from source discreteness provide a probability only of the order of 10% that the cosmic-ray anisotropy limits of the recent IceCube analysis are met. This probability estimate is nearly independent of the exact choice of source rate, but generous for a large halo size. The location of the intensity maximum far from the Galactic Center is naturally reproduced.
The hypothesis is considered that the present, local Galactic cosmic-ray spectrum is, due to source intermittency, softer than average over time and over the Galaxy. Measurements of muogenic nuclides underground could provide an independent measurement of the time-averaged spectrum. Source intermittency could also account for the surprising low anisotropy reported by the IceCube Collaboration. Predictions for Galactic emission of ultrahigh-energy (UHE) quanta, such as UHE gamma rays and neutrinos, might be higher or lower than previously estimated.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed by using the integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head has the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best-known bounds, we succeeded either in improving the bounds or in producing the same bounds for many combinations of problem instances and formulations.
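The hard/soft split described above can also be illustrated outside ASP. The following Python sketch is purely illustrative (it is not the paper's encoding; the room-capacity soft constraint is borrowed from the ITC-2007 formulation, where each student above a room's capacity costs one penalty point): hard constraints must hold outright, while soft-constraint violations accrue penalty costs.

```python
from collections import Counter

def hard_no_room_clash(assignment):
    """Hard constraint: no two lectures may share the same (timeslot, room)."""
    used = Counter(assignment.values())
    return all(count <= 1 for count in used.values())

def soft_room_capacity(assignment, students, capacity):
    """Soft constraint: one penalty point per student above room capacity."""
    return sum(max(0, students[lec] - capacity[room])
               for lec, (slot, room) in assignment.items())

# A toy timetable: lecture -> (timeslot, room).
timetable = {"L1": (1, "R1"), "L2": (2, "R1"), "L3": (1, "R2")}
ok = hard_no_room_clash(timetable)
penalty = soft_room_capacity(timetable,
                             {"L1": 40, "L2": 25, "L3": 30},
                             {"R1": 30, "R2": 35})
print(ok, penalty)  # True 10
```

A solver then searches for assignments where all hard constraints hold and the weighted sum of soft penalties is minimized.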
Considerable effort has been devoted to the development of simulation algorithms for facies modeling, whereas a discussion of how to combine those techniques has been lacking. The integration of multiple geologic data into a three-dimensional model, which requires the combination of simulation techniques, remains a challenge for reservoir modeling. This article presents a thought process that guides the acquisition and modeling of geologic data at various scales. Our work is based on outcrop data collected from a Jurassic carbonate ramp located in the High Atlas mountain range of Morocco. The study window is 1 km (0.6 mi) wide and 100 m (328.1 ft) thick. We describe and model the spatial and hierarchical arrangement of carbonate bodies spanning, from largest to smallest: (1) stacking patterns of high-frequency depositional sequences, (2) facies associations, and (3) lithofacies. Five sequence boundaries were modeled using differential global positioning system mapping and light detection and ranging data. The surface-based model shows a low-angle profile with modest paleotopographic relief at the inner-to-middle ramp transition. Facies associations were populated using truncated Gaussian simulation to preserve ordered trends between the inner, middle, and outer ramps. At the lithofacies scale, field observations and statistical analysis show a mosaic-like distribution that was simulated using a fully stochastic approach with sequential indicator simulation.
This study observes that the use of a single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The selection and implementation of different techniques customized for each level of the stratigraphic hierarchy will provide the computing flexibility essential for modeling carbonate settings. This study demonstrates that a scale-dependent modeling approach should be common procedure when building subsurface and outcrop models.
The Late Permian Zechstein Group in northeastern Germany is characterized by shelf and slope carbonates that rimmed a basin extending from eastern England through the Netherlands and Germany to Poland. Conventional reservoirs are found in grainstones rimming islands created by pre-existing paleohighs and in platform-rimming shoals that compose steep margins in the north and ramp deposits in the southern part. The slope and basin deposits are characterized by debris flows and organic-rich mudstones. Lagoonal and basinal evaporites formed the seal for these carbonate and underlying sandstone reservoirs. The objective of this investigation is to evaluate potential unconventional reservoirs in organic-rich, fine-grained and/or tight mudrocks in slope and basin as well as platform carbonates occurring in this stratigraphic interval. Therefore, a comprehensive study was conducted that included sedimentology, sequence stratigraphy, petrography, and geochemistry. Sequence stratigraphic correlations from shelf to basin are crucial in establishing a framework that allows correlation of potentially productive facies in fine-grained, organic-rich basinal siliceous and calcareous mudstones or interfingering tight carbonates and siltstones, ranging from the lagoon to the slope to the basin, which might be candidates for forming an unconventional reservoir. Most organic-rich shales worldwide are associated with eustatic transgressions. The basal Zechstein cycles, Z1 and Z2, contain organic-rich siliceous and calcareous mudstones and carbonates that form major transgressive deposits in the basin. Maturities range from over-mature (gas) in the basin to oil generation on the slope, with variable TOC contents. This sequence stratigraphic and sedimentologic evaluation of the transgressive facies in the Z1 and Z2 assesses the potential for shale-gas/oil and hybrid unconventional plays.
Potential unconventional reservoirs might be explored in laminated organic-rich mudstones within the oil window along the northern and southern slopes of the basin. Although the Zechstein Z1 and Z2 cycles might have too little thickness and too deep a burial depth for shale gas to be economic at this point, unconventional reservoir opportunities that include hybrid and shale-oil potential are possible in the study area.
The occurrence of neritic microbial carbonates is often related to ecological refuges, where grazers and other competitors are reduced by environmental conditions, or to post-extinction events (e.g. in the Late Devonian and Early Triassic). Here, we present evidence for Middle Jurassic (Bajocian) microbial mounds formed in the normal marine, shallow neritic setting of an inner ramp system from the High Atlas of Morocco. The microbial mounds are embedded in cross-bedded oolitic facies. Individual mounds show low-relief domal geometries (up to 3 m high and 4.5 m across), but occasionally a second generation of mounds exhibits tabular geometries (<1 m high). The domes are circular in plan view and have intact tops, lacking evidence of truncation or of current influence on preferred mound growth direction or distribution patterns. The mound facies consists almost entirely of non-laminated, micritic thrombolites with branching morphologies and fine-grained, clotted and peloidal fabrics. Normal marine biota are present but infrequent. Several lines of evidence document that microbial mound growth alternated with time intervals of active ooid shoal deposition. This notion is of general significance when compared with modern Bahamian microbialites that co-exist with active sub-aquatic dunes. Furthermore, the lack of detailed studies of Middle Jurassic, normal marine, shallow neritic microbial mounds adds a strong motivation for the present study. Specifically, the Bajocian mounds formed on a firmground substratum during transgressive phases under condensed sedimentation. Furthermore, a transient increase in nutrient supply in the prevailing mesotrophic setting, as suggested by the heterotroph-dominated biota, may have controlled the microbial mound stages.
Context. Theories on the origin of magnetic fields in massive stars remain poorly developed, because the properties of their magnetic fields as a function of stellar parameters could not yet be investigated. Additional observations are of utmost importance to constrain the conditions that are conducive to magnetic fields and to determine first trends in their occurrence rate and field strength distribution.
Aims. To investigate whether magnetic fields in massive stars are ubiquitous or appear only in stars with a specific spectral classification, certain ages, or in a special environment, we acquired 67 new spectropolarimetric observations for 30 massive stars. Among the observed sample, roughly one third of the stars are probable members of clusters at different ages, whereas the remaining stars are field stars not known to belong to any cluster or association.
Methods. Spectropolarimetric observations were obtained during four different nights using the low-resolution spectropolarimetric mode of the FOcal Reducer low dispersion Spectrograph (FORS 2) mounted on the 8-m Antu telescope of the VLT. Furthermore, we present a number of follow-up observations carried out between 2008 and 2011 with the high-resolution spectropolarimeters SOFIN, mounted at the Nordic Optical Telescope (NOT), and HARPS, mounted at the ESO 3.6-m telescope. To assess membership in open clusters and associations, we used astrometric catalogues with the highest-quality kinematic and photometric data currently available.
Results. The presence of a magnetic field is confirmed in nine stars previously observed with FORS 1/2: HD36879, HD47839, CPD-28 2561, CPD-47 2963, HD93843, HD148937, HD149757, HD328856, and HD164794. New magnetic field detections at a significance level of at least 3 sigma were achieved in five stars: HD92206c, HD93521, HD93632, CPD-46 8221, and HD157857. Among the stars with a detected magnetic field, five stars belong to open clusters with high membership probability. According to previous kinematic studies, five magnetic O-type stars in our sample are candidate runaway stars.
The study of outcrop modeling lies at the interface between two fields of expertise, sedimentology and computing geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. In recent years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution within a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. The evaluation of three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), was performed for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using a single algorithm. The operating mode of each algorithm involves capabilities as well as drawbacks that cannot match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analyses performed on the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, emphasize similar results.
At the basin scale (~1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. In contrast, at the bedding scale (~0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase of spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. These scale-dependent types of carbonate heterogeneity are linked with the evaluation of the algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of a single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the computing flexibility essential to model carbonate systems. Closer feedback between advances in the fields of sedimentology and computing geoscience should be promoted during future outcrop simulations to enhance 3-D geological models.
Introducing the CTA concept
(2013)
The Cherenkov Telescope Array (CTA) is a new observatory for very high-energy (VHE) gamma rays. CTA has ambitious science goals, for which it is necessary to achieve full-sky coverage, to improve the sensitivity by about an order of magnitude, and to span about four decades of energy, from a few tens of GeV to above 100 TeV, with enhanced angular and energy resolutions over existing VHE gamma-ray observatories. An international collaboration has formed with more than 1000 members from 27 countries in Europe, Asia, Africa, and North and South America. In 2010 the CTA Consortium completed a Design Study and started a three-year Preparatory Phase leading to production readiness of CTA in 2014. In this paper we introduce the science goals and the concept of CTA, and provide an overview of the project.
The relevance of biological Si cycling for dissolved silica (DSi) export from terrestrial biogeosystems is still under debate. Even in systems with a high content of weatherable minerals, like Cambisols on volcanic tuff, biogenic Si (BSi) might contribute >50% to DSi (Gerard et al., 2008). However, the number of biogeosystem studies is too limited for generalized conclusions. To cover one end of the spectrum of controlling factors on DSi, i.e., the weatherable mineral content, we studied a forested site with absolute quartz dominance (>95%). Here we hypothesise minimal effects of chemical weathering of silicates on DSi. During a four-year observation period (05/2007-04/2011), we quantified (i) internal and external Si fluxes of a temperate-humid biogeosystem (beech, 120 yr) with BIOME-BGC (version ZALF), (ii) the related Si budgets, and (iii) the Si pools in soil and beech, chemically as well as by SEM-EDX. For the first time, two compartments of biogenic Si in soils were analysed, i.e., the phytogenic and the zoogenic Si pool (testate amoebae). We quantified an average Si plant uptake of 35 kg Si ha^-1 yr^-1 - most of which is recycled to the soil by litterfall - and calculated an annual biosilicification from idiosomic testate amoebae of 17 kg Si ha^-1. The comparatively high DSi concentrations (6 mg L^-1) and DSi exports (12 kg Si ha^-1 yr^-1) could not be explained by chemical weathering of feldspars or by quartz dissolution. Instead, dissolution of a relictic, phytogenic Si pool seems to be the main source of the DSi observed. We identified canopy closure accompanied by a disappearance of grasses, as well as the selective extraction of pine trees 30 yr ago, as the most probable controls for the phenomena observed. From our results we conclude that the biogeosystem is in a transient state in terms of Si cycling.
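As a plausibility check on the fluxes quoted above: if the annual DSi export equals the DSi concentration times the annual drainage (a simplifying assumption of this sketch, not a relation stated in the study), the implied drainage is:

```python
dsi_conc = 6.0     # mg Si L^-1, observed DSi concentration (= 6 g Si m^-3)
dsi_export = 12.0  # kg Si ha^-1 yr^-1, observed DSi export

export_g_m2 = dsi_export * 1e3 / 1e4         # kg/ha -> g/m^2: 1.2 g Si m^-2 yr^-1
drainage_mm = export_g_m2 / dsi_conc * 1000  # water column, m -> mm

print(f"implied drainage: {drainage_mm:.0f} mm yr^-1")
```

The resulting ~200 mm yr^-1 is a plausible drainage for a temperate-humid beech site, so the two observed numbers are mutually consistent under this assumption.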
Vertical flow systems filled with porous media have been shown to efficiently remove volatile organic compounds (VOCs) from contaminated groundwater. To apply this semi-natural remediation strategy, it is however necessary to distinguish between removal due to biodegradation and removal due to volatile losses to the atmosphere. Especially for (potentially) toxic VOCs, the latter needs to be minimized to limit atmospheric emissions. In this study, numerical simulation was used to quantitatively investigate the removal of volatile organic compounds in two pilot-scale water treatment systems: an unplanted vertical flow filter and a planted one, which could also be called a vertical flow constructed wetland, both used for the treatment of contaminated groundwater. These systems were intermittently loaded with contaminated water containing benzene and MTBE as the main VOCs. The highly dynamic but permanently unsaturated conditions in the porous medium facilitated aerobic biodegradation but could lead to volatile emissions of the contaminants. Experimental data from porous material analyses, flow rate measurements, solute and gas tracer tests, as well as contaminant concentration measurements at the boundaries of the systems were used to constrain a numerical reactive transport modeling approach. The numerical simulations considered unsaturated water flow, transport of species in the aqueous and the gas phase, as well as aerobic degradation processes, which made it possible to quantify the rates of biodegradation and volatile emissions and to calculate their contribution to total contaminant removal. A range of degradation rates was determined using experimental results of both systems under two operation modes and validated by field data obtained at different operation modes applied to the filters. For both filters, simulations and experimental data point to high biodegradation rates, provided the flow filters have had time to build up their removal capacity.
In this case, volatile emissions are negligible and total removal can be attributed to biodegradation only. The simulation study thus supports the use of both of these vertical flow systems for the treatment of groundwater contaminated with VOCs, and the use of reactive transport modeling for the assessment of VOC removal and operation modes in these high-performance treatment systems.
Context. Recent studies of O-type stars have demonstrated that discrepant mass-loss rates are obtained when different diagnostic methods are employed. Fitting the unsaturated UV resonance lines (e.g., P v) gives drastically lower values than obtained from the Ha emission. Wind inhomogeneity (so-called "clumping") may be the main cause of this discrepancy. Aims. In a previous paper, we presented 3D Monte-Carlo calculations for the formation of scattering lines in a clumped stellar wind. In the present paper we select five O-type supergiants (from O4 to O7) and test whether the reported discrepancies can be resolved this way. Methods. In the first step, the analyses started with simulating the observed spectra with Potsdam Wolf-Rayet (PoWR) non-LTE model atmospheres. The mass-loss rates are adjusted to best fit the observed Ha emission lines. For the unsaturated UV resonance lines (i.e., P v) we then applied our 3D Monte-Carlo code, which can account for wind clumps of any optical depth ("macroclumping"), a non-void interclump medium, and a velocity dispersion inside the clumps. The ionization stratifications and underlying photospheric spectra were adopted from the PoWR models. The properties of the wind clumps were constrained by fitting the observed resonance line profiles. Results. Our results show that with the mass-loss rates that fit Ha (and other Balmer and He II lines), the UV resonance lines (especially the unsaturated doublet of P v) can also be reproduced when macroclumping is taken into account. There is no need to artificially reduce the mass-loss rates, or to assume a subsolar phosphorus abundance or an extremely high clumping factor, as claimed by other authors. These consistent mass-loss rates are lower by a factor of 1.3 to 2.6 compared to the mass-loss rate recipe of Vink et al. Conclusions. Macroclumping resolves the previously reported discrepancy between the Ha and P v mass-loss diagnostics.
Context. The Be/X-ray binary SXP 1062 is of particular interest owing to the long spin period of the neutron star, its large spin-down rate, and its association with a supernova remnant, which constrains its age. This makes the source an important probe for accretion physics.
Aims. To investigate the long-term evolution of the spin period and associated spectral variations, we performed an XMM-Newton target-of-opportunity observation of SXP 1062 during X-ray outburst.
Methods. Spectral and timing analyses of the XMM-Newton data were compared with previous studies; in addition, complementary Swift/XRT monitoring and optical spectroscopy with the SALT telescope were obtained.
Results. The spin period was measured to be P-s = (1071.01 +/- 0.16) s on 2012 Oct. 14. The X-ray spectrum is similar to that of previous observations. No convincing cyclotron absorption features, which could be indicative of a high magnetic field strength, are found. The high-resolution RGS spectra indicate the presence of emission lines, which may not be completely accounted for by the SNR emission. The comparison of multi-epoch optical spectra suggests an increasing size or density of the decretion disc around the Be star.
Conclusions. SXP 1062 showed a net spin-down at an average rate of (2.27 +/- 0.44) s yr(-1) over a baseline of 915 days.
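The quoted average spin-down rate and the 915-day baseline can be tied together with simple arithmetic. A minimal sketch (variable names are illustrative; the input numbers are those quoted above):

```python
# Sketch: total period change implied by the quoted average spin-down rate.
# Input values are taken from the abstract; names are illustrative.

baseline_days = 915                     # observing baseline quoted above
baseline_yr = baseline_days / 365.25    # ~2.5 yr
spin_down_rate = 2.27                   # s / yr, quoted average spin-down rate

# Period change accumulated over the baseline:
delta_p = spin_down_rate * baseline_yr  # ~5.7 s
print(f"implied period change: {delta_p:.1f} s over {baseline_yr:.2f} yr")
```

This is only a consistency check on the quoted numbers, not a re-derivation of the timing analysis.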
An increasing number of OB stars have been shown to possess magnetic fields. Although the sample remains small, it is surprising that the magnetic and X-ray properties of these stars appear to be far less correlated than expected. This contradicts model predictions, which generally indicate that the X-rays from magnetic stars should be harder and more luminous than those from their non-magnetic counterparts. Instead, the X-ray properties of magnetic OB stars are quite diverse.
tau Sco is one example where the expectations are better met. This bright main-sequence, early B star has been studied extensively in a variety of wavebands. It has a surface magnetic field of around 500 G, and Zeeman Doppler tomography has revealed an unusual field configuration. Furthermore, tau Sco displays an unusually hard X-ray spectrum, much harder than similar, non-magnetic OB stars. In addition, the profiles of its UV P Cygni wind lines have long been known to possess a peculiar morphology.
Recently, two stars, HD 66665 and HD 63425, whose spectral types and UV wind line profiles are similar to those of tau Sco, have also been determined to be magnetic. In the hope of establishing a magnetic field - X-ray connection for at least a subset of the magnetic stars, we obtained XMM-Newton European Photon Imaging Camera spectra of these two objects. Our results for HD 66665 are somewhat inconclusive. No especially strong hard component is detected; however, the number of source counts is insufficient to rule out hard emission. Longer exposure is needed to assess the nature of the X-rays from this star. On the other hand, we do find that HD 63425 has a substantial hard X-ray component, thereby bolstering its close similarity to tau Sco.
We obtained four pointings of over 100 ks each of the well-studied Wolf-Rayet star WR 6 with the XMM-Newton satellite. With a first paper emphasizing the results of spectral analysis, this follow-up highlights the X-ray variability clearly detected in all four pointings. However, phased light curves fail to confirm obvious cyclic behavior at the well-established 3.766-day period widely found at longer wavelengths. The data are of such quality that we were able to conduct a search for event clustering in the arrival times of X-ray photons; however, we fail to detect any such clustering. One possibility is that the X-rays are generated in a stationary shock structure. In this context we favor a corotating interaction region (CIR) and present a phenomenological model for X-rays from a CIR structure. We show that a CIR has the potential to account simultaneously for the X-ray variability and the constraints provided by the spectral analysis. Ultimately, testing the viability of the CIR model will require both intermittent long-term X-ray monitoring of WR 6 and better physical models of CIR X-ray production at large radii in stellar winds.
Tea aroma is one of the most important factors affecting the character and quality of tea. Recent advances in methods and instruments for separating and identifying volatile compounds have led to intensive investigations of volatile compounds in tea. These studies have resulted in a number of insightful and useful discoveries. Here we summarize the recent investigations into tea volatile compounds: the volatile compounds in tea products; the metabolic pathways of volatile formation in tea plants and the glycosidically-bound volatile compounds in tea; and the techniques used for studying such compounds. Finally, we discuss practical applications for the improvement of aroma and flavor quality in teas.
The multiple high-pressure (HP), low-temperature (LT) metamorphic units of Western and Central Anatolia offer a great opportunity to investigate the subduction- and continental accretion-related evolution of the eastern limb of the long-lived Aegean subduction system. Recent reports of the HP-LT index mineral Fe-Mg-carpholite in three metasedimentary units of the Gondwana-derived Anatolide-Tauride continental block (namely the Afyon Zone, the Oren Unit and the southern Menderes Massif) suggest a more complicated scenario than the single-continental accretion model generally put forward in previous studies. This study presents the first isotopic dates (white mica Ar-40-Ar-39 geochronology) for carpholite-bearing rocks from these three HP-LT metasedimentary units, combined where possible with P-T estimates (chlorite thermometry, phengite barometry, multi-equilibrium thermobarometry). It is shown that, in the Afyon Zone, carpholite-bearing assemblages were retrogressed through greenschist-facies conditions at c. 67-62 Ma. Early retrograde stages in the Oren Unit are dated to 63-59 Ma. In the Kurudere-Nebiler Unit (HP Mesozoic cover of the southern Menderes Massif), HP retrograde stages are dated to c. 45 Ma, and post-collisional cooling to c. 26 Ma. These new results support the interpretation that the Oren Unit represents the westernmost continuation of the Afyon Zone, whereas the Kurudere-Nebiler Unit correlates with the Cycladic Blueschist Unit of the Aegean Domain. In Western Anatolia, three successive HP-LT metamorphic belts thus formed: the northernmost Tavsanli Zone (c. 88-82 Ma), the Oren-Afyon Zone (between 70 and 65 Ma), and the Kurudere-Nebiler Unit (c. 52-45 Ma). The southward younging trend of the HP-LT metamorphism from the upper and internal to the deeper and more external structural units, as in the Aegean Domain, points to the persistence of subduction in Western Anatolia between 93-90 and c. 35 Ma.
After the accretion of the Menderes-Tauride terrane in Eocene times, subduction stopped, leading to continental collision and associated Barrovian-type metamorphism. Because the Aegean subduction, by contrast, remained active due to slab roll-back and trench migration, the eastern limb (below Southwestern Anatolia) of the Hellenic slab was dramatically curved and consequently torn. It is therefore suggested that whether subduction can continue after the accretion of buoyant (e.g. continental) terranes depends strongly on palaeogeography.
The Acheulean technological tradition, characterized by a large (>10 cm) flake-based component, represents a significant technological advance over the Oldowan. Although stone tool assemblages attributed to the Acheulean have been reported from as early as circa 1.6-1.75 Ma, the characteristics of these earliest occurrences and comparisons with later assemblages have not been reported in detail. Here, we provide a newly established chronometric calibration for the Acheulean assemblages of the Konso Formation, southern Ethiopia, which span the time period ~1.75 to <1.0 Ma. The earliest Konso Acheulean is chronologically indistinguishable from the assemblage recently published as the world's earliest, with an age of ~1.75 Ma, at Kokiselei, west of Lake Turkana, Kenya. This Konso assemblage is characterized by a combination of large picks and crude bifaces/unifaces made predominantly on large flake blanks. An increase in the number of flake scars was observed within the Konso Formation handaxe assemblages through time, but less so with picks. The Konso evidence suggests that both picks and handaxes were essential components of the Acheulean from its initial stages and that the two probably differed in function. The temporal refinement seen, especially in the handaxe forms at Konso, implies enhanced function through time, perhaps in processing carcasses with long and stable cutting edges. The documentation of the earliest Acheulean at ~1.75 Ma in both northern Kenya and southern Ethiopia suggests that behavioral novelties were being established on a regional scale at that time, paralleling the emergence of Homo erectus-like hominid morphology.
SoriZ93 zircon was separated from the residual mineral fraction left after the preparation of the SORI93 biotite standard from the Sori Granodiorite in the Ashio Mountains, Northeast Japan, and analyzed for its U-Pb age using a sensitive high resolution ion microprobe (SHRIMP). The zircon grains of SoriZ93 are prismatic with pyramidal ends or broken prismatic fragments. Most zircons are 100-250 μm long and 50-150 μm wide. The zircons are clear crystals and colorless to pale yellow, although some grains are brown with optically low transparency. Cathodoluminescence (CL) imaging of the SoriZ93 zircons showed fine oscillatory zoning, a typical characteristic of zircons in granitic rocks. No xenocrystic cores were present in the zircons. Although some mineral inclusions were present in the zircons, it is possible to select a typical analytical area with the dimension of 30 μm necessary for the microbeam technique. The analytical results for the colorless zircons provided a weighted mean Pb-207-corrected Pb-206/U-238 age of 93.9 +/- 0.6 Ma (95% confidence, MSWD = 0.97). This Pb-206/U-238 age is 1.3 m.y. older than the K-Ar age of the SORI93 biotite, indicating that the granodiorite cooled to the closure temperature of the K-Ar biotite system within a short time interval. Although some grains of the SoriZ93 zircons show high U concentrations, the selection of colorless zircons provided a precise age that can be used as a calibration reference for zircons of the Late Cretaceous.
Newly determined Late Cretaceous Ar-40/Ar-39 ages on megacrystic kaersutite from four lamprophyre dikes, and a U-Pb zircon age on a trachyte, from central and north Westland (New Zealand) are presented. These ages suggest that the intrusion of mafic dikes (88-86 and 69 Ma) was not necessarily restricted to the previously established narrow age range of 80-92 Ma. The younger lamprophyre and trachyte dikes (c. 68-70 Ma) imply that tensional stresses in the Western Province were either renewed at this time, or that extension and related magmatism continued during opening of the Tasman Sea. Extension-related magmatism in the region not only preceded Tasman seafloor spreading initiation (starting at c. 83 Ma, lasting to c. 53 Ma), but may have sporadically continued for up to 15 m.y. after continental break-up.
Multi-proxy dating of Holocene maar lakes and Pleistocene dry maar sediments in the Eifel, Germany
(2013)
During the last twelve years, the ELSA Project (Eifel Laminated Sediment Archive) at Mainz University has drilled a total of about 52 cores from 27 maar lakes and filled-in maar basins in the Eifel, Germany. Dating has been completed for the Holocene cores using six different methods (Pb-210 and Cs-137 activities, palynostratigraphy, event markers, varve counting, C-14). In general, the different methods consistently complement one another within error margins. Event correlation was used to relate typical lithological changes to historically known events such as the two major Holocene flood events at 1342 AD and ca. 800 BC. Dating of the MIS2-MIS3 core sections is based on greyscale tuning, radiocarbon and OSL dating, magnetostratigraphy and tephrochronology. The lithological changes in the sediment cores demonstrate a sequence of events similar to the North Atlantic rapid climate variability of the Last Glacial Cycle. The warmest of the MIS3 interstadials was GI14, when a forest with abundant spruce covered the Eifel area from 55 to 48 ka BP, i.e. during a time when other climate archives in Europe also suggest very warm conditions. The forest of this "Early Stage 3 warm phase" subsequently developed into a steppe with scattered birch and pine, and finally into a glacial desert at around 25 ka BP. Evidence for the Mono Lake and Laschamp geomagnetic excursions is found in two long cores. Several large eruptions during the Middle and Late Pleistocene (Ulmener Maar - 11,000 varve years BP, Laacher See - 12,900 varve years BP, Mosenberg volcanoes/Meerfelder Maar - 41-45 cal ka BP, Dumpel Maar - 116 ka BP, Glees Maar - 151 ka BP) produced distinct ash layers crucial for inter-core and inter-site correlations. The oldest investigated maar of the Eifel is Ar-40/Ar-39 dated to older than 520 ka BP.
The Tuz Golu Basin is the largest sedimentary depression located at the center of the Central Anatolian Plateau, an extensive, low-relief region with elevations of ca. 1 km located between the Pontide and Tauride mountains. Presently, the basin morphology and sedimentation processes are mainly controlled by the extensional Tuz Golu Fault Zone in the east and the transtensional Inonu-Eskisehir Fault System in the west. The purpose of this study is to contribute to the understanding of the Plio-Quaternary deformation history and to refine the timing of the latest extensional phase of the Tuz Golu Basin. Field observations, kinematic analyses, interpretations of seismic reflection lines, and Ar-40/Ar-39 dating of a key ignimbrite layer suggest that a regional phase of NNW-SSE to NE-SW contraction ended by 6.81 +/- 0.24 Ma and was followed by N-S to NE-SW extension during the Pliocene-Quaternary periods. Based on sedimentological and chronostratigraphic markers, the average vertical displacement rates over the past 5 or 3 Ma with respect to the central part of Tuz Golu Lake are 0.03 to 0.05 mm/year for the fault system at the western flank of the basin and 0.08 to 0.13 mm/year at the eastern flank. Paleo-shorelines of the Tuz Golu Lake, vestiges of higher lake levels related to Quaternary climate change, are important strain markers and were formed during Last Glacial Maximum conditions as indicated by a radiocarbon age of 21.8 +/- 0.4 ka BP obtained from a stromatolitic crust. Geomorphic observations and deformed lacustrine shorelines suggest that the main strand of the Tuz Golu Fault Zone straddling the foothills of the Sereflikochisar-Aksaray range has not been active during the Holocene. Instead, deformation appears to have migrated towards the interior of the basin along an offshore fault that runs immediately west of Sereflikochisar Peninsula. 
This basinward migration of deformation is probably associated with various processes acting at the lithospheric scale, such as plateau uplift and/or microplate extrusion.
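The flank displacement rates quoted above are averages of total vertical offset over elapsed time. A minimal sketch of that unit conversion (the offset values in the example are illustrative placeholders, not measurements from the study):

```python
# Converting a total vertical offset accumulated over geological time into an
# average displacement rate in mm/yr. The example offsets are illustrative,
# not measured values from the study.

def rate_mm_per_yr(offset_m: float, duration_ma: float) -> float:
    """Average vertical displacement rate in mm/yr for a given offset and duration."""
    return (offset_m * 1000.0) / (duration_ma * 1.0e6)

# e.g. 150 m of offset accumulated over 5 Ma corresponds to 0.03 mm/yr,
# the lower bound quoted for the fault system on the basin's western flank.
print(rate_mm_per_yr(150.0, 5.0))   # -> 0.03
```

The same conversion, run in reverse, shows how much total offset each quoted rate implies over the 5 or 3 Ma reference intervals.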
Annual Report 2012
(2013)
Contents: 1. General overview 2. Organizational structure of the MRZ 2.1 Members of the MRZ 2.2 Scientific Advisory Board of the MRZ 2.3 Friends' Association 3. Activities during the reporting period 3.1 Research projects and scientific events 3.2 Doctorates 3.3 Courses 3.4 Publications – new releases 2012 3.5 Scientific talks, lectures, expert discussions, etc. 4. Appendix
The northward motion of the Pamir indenter with respect to Eurasia has resulted in coeval thrusting, strike-slip faulting, and normal faulting. The eastern Pamir is currently deformed by east-west oriented extension, accompanied by uplift and exhumation of the Kongur Shan (7719 m) and Muztagh Ata (7546 m) gneiss domes. Both domes are an integral part of the footwall of the Kongur Shan extensional fault system (KES), a 250 km long, north-south oriented graben. Why active normal faulting within the Pamir is primarily localized along the KES and not distributed more widely throughout the orogen has remained unclear. In addition, relatively little is known about how deformation has evolved throughout the Cenozoic, despite refined estimates of present-day crustal deformation rates and microseismicity, which indicate where crustal deformation is presently being accommodated. To better constrain the spatiotemporal evolution of faulting along the KES, we present 39 new apatite fission track, zircon U-Th-Sm/He, and Ar-40/Ar-39 cooling ages from a series of footwall transects along the KES graben shoulder. Combining these data with present-day topographic relief, 1-D thermokinematic and exhumation modeling documents successive stages of deformation and gneiss dome exhumation rather than synchronous ones. While the exhumation of the Kongur Shan commenced during the late Miocene, extensional processes in the Muztagh Ata massif began earlier and have slowed down since the late Miocene. We present a new model of synorogenic extension suggesting that thermal and density effects associated with a lithospheric tear fault along the eastern margin of the subducting Alai slab localize extensional upper-plate deformation along the KES and decouple crustal motion between the central/western Pamir and the eastern Pamir/Tarim basin.
The northern part of the Pamir orogen is the preeminent example of an active intracontinental subduction zone in the early stages of continent-continent collision. Such zones are the least understood type of plate boundaries because modern examples are few and of limited access, and ancient analogs have been extensively overprinted by subsequent tectonic and erosion processes. In the Pamir, it has been assumed that most of the plate convergence was accommodated by overthrusting along the plate-bounding Main Pamir Thrust (MPT), which forms the principal northern mountain and deformation front of the Pamir. However, the synopsis of our new and previously published thermochronologic data from this region shows that the hanging wall of the MPT experienced relatively minor amounts of late Cenozoic exhumation. The Pamir orogen as a whole is an integral part of the overriding plate in a subduction system, while the remnant basin to the north constitutes the downgoing plate, with the bulk of the convergence accommodated by underthrusting. Herein, we demonstrate that the observed deformation of the upper and lower plates within the Pamir-Alai convergence zone resembles highly arcuate oceanic subduction systems characterized by slab rollback, subduction erosion, subduction accretion, and marginal slab-tear faults. We suggest that the curvature of the North Pamir is genetically linked to the short width and rollback of the south-dipping Alai slab; northward motion (indentation) of the Pamir is accommodated by crustal processes related to this rollback. The onset of south-dipping subduction is tentatively linked to intense Pamir contraction following break-off of the north-dipping Indian slab beneath the Karakoram.
The Sierra de Aconquija and Cumbres Calchaquies in the thick-skinned northern Sierras Pampeanas, NW Argentina, present an ideal setting to investigate the tectonically and erosionally controlled exhumation and uplift history of mountain ranges using thermochronological methods. Although these ranges are located along strike of one another, their spatiotemporal evolution varies significantly. Integrating modeled cooling histories constrained by K-Ar ages of muscovite and biotite, apatite fission track data, and (U-Th)/He measurements of zircon and apatite reveals the structural evolution of these ranges beginning in the late stage of the Paleozoic Famatinian Orogeny. Following localized rift-related exhumation in the central part of the study area and slow erosion elsewhere, growth of the modern topography commenced in the Cenozoic during Andean deformation. The main activity occurred during the late Miocene, with varying magnitudes of rock uplift, surface uplift, and exhumation in the two mountain ranges. The Cumbres Calchaquies is characterized by a total of 5-7 km of vertical rock uplift, around 3 km of crestal surface uplift, and a maximum exhumation of 2-4 km since that time. The Sierra de Aconquija experienced 10-13 km of vertical rock uplift, ~4-5 km of peak surface uplift, and 6-8 km of exhumation since around 9 Ma. Much of this exhumation occurred along a previously poorly recognized fault. Miocene reactivation of Cretaceous rift structures may explain along-strike variations within these ranges. Dating of sedimentary samples from adjacent basins supports the evolutionary model developed for the mountain ranges.
Basement-cored ranges formed by reverse faulting within intracontinental mountain belts are often composed of poly-deformed lithologies. Geological data capable of constraining the timing, magnitude, and distribution of the most recent deformational phase are usually missing in such ranges. In this paper, we present new low temperature thermochronological and geological data from a transect through the basement-cored Terskey Range, located in the Kyrgyz Tien Shan. Using these data, we are able to investigate the range's late Cenozoic deformation for the first time. Displacements on reactivated faults are constrained and deformation of thermochronologically derived structural markers is assessed. These structural markers postdate the earlier deformational phases, providing the only record of Cenozoic deformation and of the reactivation of structures within the Terskey Range. Overall, these structural markers have a southern inclination, interpreted to reflect the decreasing inclination of the reverse fault bounding the Terskey Range. Our thermochronological data are also used to investigate spatial and temporal variations in the exhumation of the Terskey Range, identifying a three-stage Cenozoic exhumation history: (1) virtually no exhumation in the Paleogene, (2) an increase to slightly higher exhumation rates at ~26-20 Ma, and (3) a significant increase in exhumation starting at ~10 Ma.
Intra-continental mountain belts typically form as a result of tectonic forces associated with distant plate collisions. In general, each mountain belt has a distinctive morphology and orogenic evolution that is highly dependent on the unique distribution and geometries of inherited structures and other crustal weaknesses. In this thesis, I have investigated the complex and irregular Cenozoic orogenic evolution of the Central Kyrgyz Tien Shan in Central Asia, which is presently one of the most active intra-continental mountain belts in the world. This work involved combining a broad array of datasets, including thermochronologic, magnetostratigraphic, sediment provenance and stable isotope data, to identify and date various changes in tectonic deformation, climate and surface processes. Many of these changes are linked and can ultimately be related to regional-scale processes that altered the orogenic evolution of the Central Kyrgyz Tien Shan. The Central Kyrgyz Tien Shan contains a sub-parallel series of structures that were reactivated in the late Cenozoic in response to the tectonic forces associated with the distant India-Eurasia collision. Over time, slip on the various reactivated structures created the succession of mountain ranges and intermontane basins which characterises the modern morphology of the region. In this thesis, new quantitative constraints on the exhumation histories of several mountain ranges have been obtained by using low temperature thermochronological data from 95 samples (zircon (U-Th)/He, apatite fission track and apatite (U-Th)/He). Time-temperature histories derived by modelling the thermochronologic data of individual samples identify at least two stages of Cenozoic cooling in most of the region's mountain ranges: (1) initially low cooling rates (<1°C/Myr) during a tectonically quiescent period and (2) increased cooling in the late Cenozoic, which occurred diachronously and with variable magnitude in different ranges.
This second cooling stage is interpreted to represent increased erosion caused by active deformation and, in many of the sampled mountain ranges, provides the first available constraints on the timing of late Cenozoic deformation. New constraints on the timing of deformation have also been derived from the sedimentary record of intermontane basins. In the intermontane Issyk Kul basin, new magnetostratigraphic data from two sedimentary sections suggest that deposition of the first Cenozoic syn-tectonic sediments commenced at ~26 Ma. Zircon U-Pb provenance data, paleocurrent and conglomerate clast analyses reveal that these sediments were sourced from the Terskey Range to the south of the basin, suggesting that the onset of late Cenozoic deformation occurred >26 Ma in that particular range. Elsewhere, growth strata relationships are used to identify syn-tectonic deposition and constrain the timing of nearby deformation. Collectively, these new constraints obtained from thermochronologic and sedimentary data have allowed me to infer the spatiotemporal distribution of deformation in a transect through the Central Kyrgyz Tien Shan, and to determine the order in which the mountain ranges started deforming. These data suggest that deformation began in a few widely spaced mountain ranges in the late Oligocene and early Miocene. Typically, these earlier mountain ranges are bounded on at least one side by a reactivated structure, which probably corresponds to the frictionally weakest and most suitably orientated inherited structure for accommodating the roughly north-south directed horizontal crustal shortening of the late Cenozoic. Moreover, tectonically induced rock uplift in the Terskey Range, following the reactivation of the bounding structure before 26 Ma, likely caused significant surface uplift across the range, which in turn led to enhanced orographic precipitation.
These wetter conditions have been inferred from stable isotope data collected in the two magnetostratigraphically dated sections in the Issyk Kul basin. Subsequently, in the late Miocene (~12‒5 Ma), more mountain ranges and inherited structures appear to have started actively deforming. Importantly, the onset of deformation at these locations in the late Miocene coincides with an increase in exhumation of ranges that had started deforming earlier, in the late Oligocene‒early Miocene. Based on this observation, I suggest that there must have been an overall increase in the rate of horizontal crustal shortening across the Central Kyrgyz Tien Shan, which likely relates to regional tectonic changes that affected much of Central Asia. Many of the mountain ranges that started deforming in the late Miocene were associated with out-of-sequence tectonic reactivation and initiation, which led to the partitioning of larger intermontane basins. Moreover, within most of the intermontane basins in the Central Kyrgyz Tien Shan, this inferred late Miocene increase in horizontal crustal shortening occurs at roughly the same time as an increase in sedimentation rates and a significant change in sediment composition. Therefore, I suggest that the overall magnitude of deformational processes increased in the late Miocene, promoting more flexural subsidence in the intermontane basins of the Central Kyrgyz Tien Shan.
In the recent past, the Alpine Lech valley (Austria) experienced three damaging flood events within six years, despite the various structural flood protection measures in place. For improved flood risk management, the analysis of flood damage potentials is a crucial component. Since the expansion of built-up areas and their associated values is seen as one of the main drivers of rising flood losses, the goal of this study is to analyze the spatial development of the assets at risk, particularly of residential areas, due to land use changes over a historic period (since 1971) and up to possible shifts in the future (until 2030). The analysis revealed that the alpine study area has faced remarkable land use changes, such as urbanization and the decline of agriculturally used grassland areas. Although the major agglomeration of residential areas inside the flood plains took place before 1971, a steady growth of the values at risk can still be observed. This trend is projected to continue in the future, but depends very much on the assumed land use scenario and the underlying land use policy. Between 1971 and 2006, the annual growth rate of the damage potential of residential areas amounted to 1.1 % ('constant values,' i.e., asset values at constant prices of the reference year 2006) or 3.0 % ('adjusted values,' i.e., asset values adjusted by GDP increase at constant prices of the reference year 2006) for three flood scenarios. For the projected time span between 2006 and 2030, a further annual increase of 1.0 % ('constant values') or even 4.2 % ('adjusted values') is possible when the most extreme urbanization scenario 'Overall Growth' is considered. Although socio-economic development is regarded as the main driver of increasing flood losses, our analysis shows that settlement development does not preferentially take place within flood-prone areas.
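As an illustrative check (not the study's actual computation), the quoted annual growth rates of the damage potential can be translated into total growth over the 1971-2006 period via standard compound-growth arithmetic:

```python
# Illustrative sketch: relating the quoted annual growth rates of the
# damage potential to the total multiplicative growth over a period.
# Only the rates (1.1 % and 3.0 %) and the years come from the text.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two asset valuations."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def total_growth(annual_rate, years):
    """Total multiplicative growth implied by a constant annual rate."""
    return (1.0 + annual_rate) ** years

years = 2006 - 1971  # 35 years

# A 1.1 % annual rate implies growth by a factor of about 1.47:
print(round(total_growth(0.011, years), 2))  # -> 1.47

# A 3.0 % annual rate implies a factor of about 2.81:
print(round(total_growth(0.030, years), 2))  # -> 2.81
```

The comparison makes clear why the 'adjusted values' series, despite only a 1.9-point higher annual rate, nearly doubles the cumulative growth of the 'constant values' series over 35 years.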
Studies explaining the choice of model structure for population viability analysis (PVA) are rare and no such study exists for butterfly species, a focal group for conservation. Here, we describe in detail the development of a model to predict population viability of a glacial relict butterfly species, Boloria eunomia, under climate change. We compared four alternative formulations of an individual-based model, differing in the environmental factors acting on the survival of immature life stages: temperature (only temperature impact), weather (temperature, precipitation, and sunshine), temperature and parasitism, and weather and parasitism. Following pattern-oriented modeling, four observed patterns were used to contrast these models: one qualitative (response of population size to habitat parameters) and three quantitative ones describing population dynamics during eight years (mean and variability of population size, and magnitude of the temporal autocorrelation in yearly population growth rates). The four model formulations were not equally able to depict population dynamics under current environmental conditions; the model including only temperature was selected as the most parsimonious model sufficiently well reproducing the empirical patterns. We used all four model formulations to test a range of climate change scenarios that were characterized by changes in both mean and variability of the weather variables. All models predicted adverse effects of climate change and resulted in the same ranking of mean climate change scenarios. However, models differed in their absolute values of population viability measures, underlining the need to explicitly choose the most appropriate model formulation and avoid arbitrary usage of environmental drivers in a model. 
We conclude that further applications of pattern-oriented modeling to butterfly and other species are likely to help in identifying the key factors impacting the viability of certain taxa, which, ultimately, will aid and speed up informed management decisions for endangered species under climate change.
Flood loss modeling is an important component of flood risk assessments. Traditionally, stage-damage functions are used for the estimation of direct monetary damage to buildings. Although it is known that such functions are governed by large uncertainties, they are commonly applied, even in different geographical regions, without further validation, mainly due to the lack of real damage data. Until now, little research has been done to investigate the applicability and transferability of such damage models to other regions. In this study, the last severe flood event in the Austrian Lech Valley in 2005 was simulated to test the performance of various damage functions from different geographical regions in Central Europe for the residential sector. In addition to common stage-damage curves, new functions were derived from empirical flood loss data collected in the aftermath of recent flood events in neighboring Germany. Furthermore, a multi-parameter flood loss model for the residential sector was adapted to the study area and also evaluated with official damage data. The analysis reveals that flood loss functions derived from related and more similar regions perform considerably better than those from more heterogeneous data sets of different regions and flood events. While the former estimate the observed damage well, the latter clearly overestimate the reported loss. To illustrate the effect of model choice on the resulting uncertainty of damage estimates, the current flood risk for residential areas was calculated. In the case of extreme events like the 300 yr flood, for example, the highest and lowest estimates of losses to residential buildings differ by a factor of 18, in contrast to a factor of 2.3 for properly validated models. Even though the risk analysis was only performed for residential areas, our results clearly show that an uncritical transfer of models to other geographical regions can be problematic.
Therefore, we conclude that loss models should at least be selected or derived from related regions with similar flood and building characteristics, as long as no model validation is possible. To further increase the general reliability of flood loss assessment in the future, more and more comprehensive loss data are needed for model development and validation.
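The stage-damage concept discussed above can be sketched in a few lines: a function maps inundation depth to a relative damage fraction, which is multiplied by the asset value. The root-type curve and its coefficient below are hypothetical placeholders, not any of the functions evaluated in the study:

```python
# Minimal sketch of applying a stage-damage function in flood loss
# estimation. The square-root form and the coefficient a=0.27 are
# hypothetical illustration values, not the study's fitted functions.
import math

def damage_fraction(water_depth_m, a=0.27):
    """Relative damage to a residential building as a function of
    inundation depth (hypothetical root-type curve, capped at 1)."""
    if water_depth_m <= 0:
        return 0.0
    return min(1.0, a * math.sqrt(water_depth_m))

def building_loss(water_depth_m, asset_value_eur):
    """Direct monetary loss = damage fraction x asset value."""
    return damage_fraction(water_depth_m) * asset_value_eur

# One metre of water in a building worth EUR 250,000:
print(round(building_loss(1.0, 250_000)))  # -> 67500
```

The transferability problem described in the abstract amounts to the fact that the functional form and coefficients of `damage_fraction` are calibrated to one region's building stock and may not hold elsewhere.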
Flood risk is expected to increase in many regions of the world in the coming decades, with rising flood losses as a consequence. First and foremost, this can be attributed to the expansion of settlement and industrial areas into flood plains and the resulting accumulation of assets. For a future-oriented and more robust flood risk management, it is therefore important not only to estimate the potential impacts of climate change on the flood hazard, but also to analyze the spatio-temporal dynamics of flood exposure due to land use changes. In this study, carried out in the Alpine Lech Valley in Tyrol (Austria), various land use scenarios until 2030 were developed by means of a spatially explicit land use model, national spatial planning scenarios, and current spatial policies. The combination of the simulated land use patterns with different inundation scenarios enabled us to derive statements about possible future changes in flood-exposed built-up areas. The results indicate that the potential assets at risk depend very much on the selected socio-economic scenario. The important conditions affecting the potential assets at risk that differ between the scenarios are the demand for new built-up areas and the types of conversions allowed to provide the necessary areas at certain locations. The range of potential changes in flood-exposed residential areas varies from no further change in the most moderate scenario 'Overall Risk' to a 119 % increase in the most extreme scenario 'Overall Growth' (under current spatial policy) and a 159 % increase when disregarding current building restrictions.
Due to limited public budgets and the need to economize, the analysis of the costs of hazard mitigation and emergency management becomes increasingly important for public natural hazard and risk management. In recent years, there has been a growing body of literature on the estimation of losses, which has helped to determine the benefits of measures in terms of prevented losses. By contrast, the costs of mitigation are hardly addressed. This paper thus aims to shed some light on the expenses for mitigation and emergency services. For this, we analysed the annual costs of mitigation efforts in four regions/countries of the Alpine Arc: Bavaria (Germany), Tyrol (Austria), South Tyrol (Italy), and Switzerland. On the basis of PPP values (purchasing power parities), annual expenses on public safety ranged from EUR 44 per capita in the Free State of Bavaria to EUR 216 in the Autonomous Province of South Tyrol. To analyse the (variable) costs of emergency services in case of an event, we used detailed data from the 2005 floods in the Federal State of Tyrol (Austria) as well as aggregated data from the 2002 floods in Germany. The analysis revealed that multi-hazards, i.e., the occurrence and intermixture of different natural hazard processes, contribute to increasing emergency costs. Based on these findings, research gaps and recommendations for costing Alpine natural hazards are discussed.
We have used polarized confocal Raman microspectroscopy and scanning near-field optical microscopy with a resolution of 60 nm to characterize photoinscribed grating structures of azobenzene-doped polymer films on a glass support. Polarized Raman microscopy allowed us to determine the reorientation of the chromophores as a function of the grating phase and of the penetration depth of the inscribing laser in three dimensions. We found periodic patterns which are not restricted to the surface alone, but also appear well below the surface in the bulk of the material. Near-field optical microscopy with nanoscale resolution revealed a lateral two-dimensional optical contrast that is not observable by atomic force and Raman microscopy.
In this paper, we report on in-situ atomic force microscopy (AFM) studies of topographical changes in azobenzene-containing photosensitive polymer films that are irradiated with light interference patterns. We have developed an experimental setup consisting of an AFM combined with two-beam interferometry that permits us to switch between different polarization states of the two interfering beams while scanning the illuminated area of the polymer film, acquiring the corresponding changes in topography in-situ. In this way, we are able to analyze how the change in topography is related to the variation of the electric field vector within the interference pattern. For the first time, a rigorous assignment can be achieved with a rather simple experimental approach. By performing in-situ measurements we found that for a certain polarization combination of the two interfering beams (namely, for the SP polarization pattern) the topography forms a surface relief grating with only half the period of the interference pattern. Exploiting this phenomenon, we are able to fabricate surface relief structures with characteristic features measuring only 140 nm using far-field optics with a wavelength of 491 nm. We believe that this relatively simple method could be extremely valuable for producing, for instance, structural features below the diffraction limit at high throughput, and this could significantly contribute to the search for new fabrication strategies in the electronics and photonics industries.
In this paper we report on the opto-mechanical scission of polymer chains within photosensitive diblock-copolymer brushes grafted to flat solid substrates. We employ surface-initiated polymerization of methyl methacrylate (MMA) and t-butyl methacrylate (tBMA) to grow diblock-copolymer brushes of poly(methyl methacrylate-b-t-butyl methacrylate) following the atom transfer radical polymerization (ATRP) scheme. After the synthesis, deprotection of the PtBMA block yields poly(methacrylic acid) (PMAA). To render the PMMA-b-PMAA copolymers photosensitive, cationic azobenzene-containing surfactants are attached to the negatively charged outer PMAA block. During irradiation with an ultraviolet (UV) interference pattern, the extent of photoisomerization of the azobenzene groups varies spatially and results in a topography change of the brush, i.e., the formation of surface relief gratings (SRG). The SRG formation is accompanied by local rupturing of the polymer chains in areas from which the polymer material recedes. This opto-mechanically induced scission of the polymer chains takes place at the interface of the two blocks and depends strongly on the UV irradiation intensity. Our results indicate that this process may be explained by classical continuum fracture mechanics, which might be important for tailoring the phenomenon for post-structuring of polymer brushes.
We report on the conductivity behavior of a very thin gold layer deposited on a photosensitive polymer film. Under irradiation with a light interference pattern, the azobenzene-containing photosensitive polymer film undergoes a deformation in which the topography follows the intensity distribution, resulting in the formation of a surface relief grating. This process is accompanied by a change in the shape of the polymer surface from flat to sinusoidal, together with a corresponding increase in surface area. The gold layer placed above deforms along with the polymer and ruptures at a strain of 4%. The rupturing is spatially well defined, occurring at the topographic maxima and minima and resulting in periodic cracks across the whole irradiated area. We have shown that this periodic micro-rupturing of a thin metal film has no significant impact on the electrical conductivity of the films. We suggest a model to explain this phenomenon and support it by additional experiments in which the conductivity is measured while a single nanoscopic scratch is formed with an AFM tip. Our results indicate that in flexible electronic materials consisting of a polymer support and an integrated metal circuit, nano- and microcracks do not significantly alter the conductivity unless the metal is disrupted completely. (C) 2013 AIP Publishing LLC.
We report on a change in the properties of monomolecular films of polyelectrolyte molecules, induced by illuminating the silicon substrate on which they adsorb. It was found that under illumination the thickness of the adsorbed layer decreases by at least 27% and at the same time the roughness is significantly reduced in comparison to a layer adsorbed without irradiation. Furthermore, the homogeneity of the film topography and the surface potential is shown to be improved by illumination. The effect is explained by a change in surface charge density under irradiation of n- and p-type silicon wafers. The altered charge density in turn induces conformational changes of the adsorbing polyelectrolyte molecules. Their photocontrolled adsorption opens new possibilities for selective manipulation of adsorbed films. This possibility is of potential importance for many applications such as the production of well-defined coatings in biosensors or microelectronics.
The interfaces between thin films of metal and polymer materials play a significant role in modern flexible microelectronics, viz., metal contacts on polymer substrates, printed electronics, and prosthetic devices. The major emphasis in metal-polymer interface research is on studying how externally applied stress in the polymer substrate leads to deformation and cracks in the metal film, and vice versa. Usually, the deformation process involves strains varying over large lateral dimensions because of excessive stress at local imperfections. Here we show that these seemingly random phenomena at macroscopic scales can be rendered rather controllable at submicrometer length scales. Recently, we have created a metal-polymer interface system with strains varying over periods of several hundred nanometers. This was achieved by exploiting the formation of a surface relief grating (SRG) within an azobenzene-containing photosensitive polymer film upon irradiation with a light interference pattern. Up to a thickness of 60 nm, the adsorbed metal film adapts neatly to the forming relief, until it ultimately ruptures into an array of stripes by forming highly regular and uniform cracks along the maxima and minima of the polymer topography. This surprising phenomenon has far-reaching implications. For the first time, a direct probe is available to estimate the forces emerging in SRG formation in glassy polymers. Furthermore, crack formation in thin metal films can be studied literally in slow motion, which could lead to substantial improvements in the design process of flexible electronics. Finally, cracks are produced uniformly and at high density, contrary to common expectation. This could offer new strategies for precise, mechanically based nanofabrication procedures.
We discuss the controlled subdiffraction modulations of photosensitive polymer films that are induced by surface plasmon interference in striking contrast to well-known conventional microscopic gratings. The near-field light intensity patterns were generated at the nanoslits fabricated in a silver layer with the photosensitive polymer film placed above. We observed that the topographical modulations can be excited only when the polarization is perpendicular to the nanoslits. Moreover, we have shown that light with certain wavelengths resulted in a characteristic topographical pattern with the periodicity three times smaller than the wavelength of incoming light. A combination of experimental observations with simulations showed that the unique subdiffraction topographical patterns are caused by constructive interference between two counter-propagating surface plasmon waves generated at neighboring nanoslits in the metal layer beneath the photosensitive polymer film. The light intensity distribution was simulated to demonstrate strong dependency upon the slit array periodicity as well as wavelength and polarization of incoming light.
When photosensitive azobenzene-containing polymer films are irradiated with light interference patterns, topographic variations develop in the film that follow the local distribution of the electric field vector. The exact correspondence between, e.g., the vector orientation and the presence of local topographic minima or maxima is in general difficult to determine. Here, we report on a systematic procedure for how this can be accomplished. For this, we devise a new setup combining an atomic force microscope and two-beam interferometry. With this setup, it is possible to track the topography change in-situ, while at the same time changing the polarization and phase of the impinging interference pattern. This is the first time that an absolute correspondence between the local distribution of electric field vectors and the local topography of the relief grating could be established exhaustively. Our setup does not require complex mathematical post-processing, and its simplicity renders it interesting for characterizing photosensitive polymer films in general.
In this paper, we report on the properties of nano-slits created in metal thin films using atomic force microscope (AFM) nanolithography (AFM-NL). We demonstrate that instead of expensive diamond AFM tips, it is also possible to use low cost silicon nitride tips. It is shown that depending on the direction of scratching, nano-slits of different widths and depths can be fabricated at constant load force. We elucidate the reasons for this behavior and identify an optimal direction and load force for scratching a gold layer.
The effect of illumination on the thickness and roughness of monolayers of polycationic molecules of polyethyleneimine deposited from solution onto a silicon substrate was discovered and investigated. The super-bandgap illumination of the substrate during polyethyleneimine adsorption causes a decrease in both the roughness and integral thickness of the organic layer on n- and p-Si substrates.
The standard charging process for polymer ferroelectrets, e.g., polypropylene foams or layered film systems, involves the application of high DC fields either to metal electrodes or via a corona discharge. In this often-used process, the DC field triggers the internal breakdown and limits the final charge densities inside the ferroelectret cavities and, thus, the final polarization. Here, an AC + DC charging procedure is proposed and demonstrated in which a high-voltage high-frequency (HV-HF) wave train is applied together with a DC poling voltage. Thus, the internal dielectric-barrier discharges in the ferroelectret cavities are induced by the HV-HF wave train, while the final charge and polarization level is controlled separately through the applied DC voltage. In the new process, the frequency and the amplitude of the HV-HF wave train must be kept within critical boundaries that are closely related to the characteristics of the respective ferroelectrets. The charging method has been tested and investigated on a fluoropolymer-film system with a single well-defined cylindrical cavity. It is found that the internal electrical polarization of the cavity can be easily controlled and increases linearly with the applied DC voltage up to the breakdown voltage of the cavity. In the standard charging method, however, the DC voltage would have to be chosen above the respective breakdown voltage. With the new method, control of the HV-HF wave-train duration prevents a plasma-induced deterioration of the polymer surfaces inside the cavities. It is observed that the frequency of the HV-HF wave train during ferroelectret charging and the temperature applied during poling of ferroelectrics serve an analogous purpose. The analogy and the similarities between the proposed ferroelectret charging method and the poling of ferroelectric materials or dipole electrets at elevated temperatures with subsequent cooling under field are discussed.
Ecological regime shifts and carbon cycling in aquatic systems have both been subject to increasing attention in recent years, yet the direct connection between these topics has remained poorly understood. A four-fold increase in sedimentation rates was observed within the past 50 years in a shallow eutrophic lake with no surface in- or outflows. This change coincided with an ecological regime shift involving the complete loss of submerged macrophytes, leading to a more turbid, phytoplankton-dominated state. To determine whether the increase in carbon (C) burial resulted from a comprehensive transformation of C cycling pathways in parallel to this regime shift, we compared the annual C balances (mass balance and ecosystem budget) of this turbid lake to a similar nearby lake with submerged macrophytes, a higher transparency, and similar nutrient concentrations. C balances indicated that roughly 80% of the C input was permanently buried in the turbid lake sediments, compared to 40% in the clearer macrophyte-dominated lake. This was due to a higher measured C burial efficiency in the turbid lake, which could be explained by lower benthic C mineralization rates. These lower mineralization rates were associated with a decrease in benthic oxygen availability coinciding with the loss of submerged macrophytes. In contrast to previous assumptions that a regime shift to phytoplankton dominance decreases lake heterotrophy by boosting whole-lake primary production, our results suggest that an equivalent net metabolic shift may also result from lower C mineralization rates in a shallow, turbid lake. The widespread occurrence of such shifts may thus fundamentally alter the role of shallow lakes in the global C cycle, away from channeling terrestrial C to the atmosphere and towards burying an increasing amount of C.
The coastal stretch of north-eastern Mediterranean Morocco holds vitally important ecological, social, and economic functions. The implementation of large-scale luxury tourism resorts is intended to push socio-economic development and facilitate the shift from a mainly agrarian to a service economy. Sufficient water availability and intact beaches are among the key requirements for the successful realization of regional development plans. The water situation is already critical; additional water-intensive sectors could overstrain the capacity of the water resources. Furthermore, coastal erosion caused by sea-level rise is projected. Regional climate change is observable and must be included in regional water management. Long-term climate trends are assessed for the larger region (Moulouya basin) and for the near-coastal zone at Saidia. The additional water demand is assessed for the large-dimensioned Saidia resort, including the monthly, seasonal, and annual per capita water needs of tourists, taking irrigated golf courses and garden areas into account. A shift of climate patterns is observed: a lengthening of the dry summer season as well as a significant decline in annual precipitation. Thus, current water scarcity is mainly human-induced; however, climate change will aggravate the situation. As a consequence, severe environmental damage due to water scarcity is likely and could impinge on the quality of local tourism. The re-adjustment of current management routines is therefore essential. Possible adjustments are discussed, and the analysis concludes with management recommendations for innovative regional water management of tourism facilities.
Increases in the consumption of animal products and the associated environmental consequences have been a matter of scientific debate for decades. Consequences of such increases include rises in greenhouse gas emissions, growth of consumptive water use, and perturbation of global nutrient cycles. These consequences vary spatially depending on livestock types, their densities, and their production systems. In this letter, we investigate the spatial distribution of embodied crop calories in animal products. On a global scale, about 40% of the global crop calories are used as livestock feed (we refer to this ratio as the crop balance for livestock) and about 4 kcal of crop products are used to generate 1 kcal of animal products (embodied crop calories of around 4). However, these values vary greatly around the world. In some regions, more than 100% of the crops produced would be required to feed the local livestock, so that national or international trade is needed to meet the deficit in livestock feed. Embodied crop calories vary between less than 1 for 20% of the livestock-raising areas worldwide and greater than 10 for another 20% of the regions. Low values of embodied crop calories are related to production systems for ruminants based on fodder and forage, while large values are usually associated with production systems for non-ruminants fed on crop products. Additionally, we project the future feed demand considering three scenarios: (a) population growth, (b) population growth and changes in human dietary patterns, and (c) changes in population, dietary patterns, and feed conversion efficiency. When considering dietary changes, we project the global feed demand to almost double (by a factor of 1.8-2.3) by 2050 compared to 2000, which would mean that, in the future, almost as many or even more crops would be needed to feed livestock than to nourish people directly.
Feed demand is expected to increase disproportionately in Africa, South-Eastern Asia, and Southern Asia, putting additional stress on these regions.
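The two indicators defined in the abstract can be sketched directly. The example numbers below are illustrative placeholders chosen to reproduce the global ratios quoted in the text (about 40 % and about 4), not actual FAO statistics:

```python
# Back-of-the-envelope sketch of the two quantities defined in the text:
# the "crop balance for livestock" (share of crop calories used as feed)
# and the "embodied crop calories" of animal products. All numbers are
# illustrative, chosen only to match the global ratios quoted.

def crop_balance_for_livestock(feed_kcal, total_crop_kcal):
    """Fraction of crop calories that goes to livestock feed."""
    return feed_kcal / total_crop_kcal

def embodied_crop_calories(feed_kcal, animal_product_kcal):
    """Crop kcal needed per kcal of animal product produced."""
    return feed_kcal / animal_product_kcal

total_crop_kcal = 10_000   # illustrative region-level crop production
feed_kcal = 4_000          # crop calories fed to livestock
animal_kcal = 1_000        # animal-product calories produced

print(crop_balance_for_livestock(feed_kcal, total_crop_kcal))  # -> 0.4
print(embodied_crop_calories(feed_kcal, animal_kcal))          # -> 4.0
```

A crop balance above 1.0 corresponds to the case mentioned in the text where a region's livestock requires more crop calories than the region produces, forcing feed imports.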
The electricity system is particularly susceptible to climate change due to the close interconnectedness between electricity production, consumption, and climate. This study provides a country-based relative analysis of the susceptibility of 21 European countries' electricity systems to climate change. Taking into account 14 quantitative influencing factors, the susceptibility of each country is examined both for the current and the projected system, with the result being a relative ranked index. Luxembourg and Greece are relatively the most susceptible, due in part to their inability to meet their own electricity consumption demand with inland production and to the fact that the majority of their production comes from more susceptible sources, primarily combustible fuels. Greece experiences relatively warm mean temperatures, which are expected to increase in the future, leading to greater summer electricity consumption and hence increasing susceptibility. Norway was found to be the least susceptible in relative terms, due to its consistent production surplus, which comes primarily from hydro (a less susceptible source), and a likely decrease of winter electricity consumption as temperatures rise due to climate change. The findings of this study enable countries to identify the main factors that increase their electricity system susceptibility and to proceed with the adaptation measures that are most effective in decreasing susceptibility.
We perform a systematic study of all cities in Europe to assess the Urban Heat Island (UHI) intensity by means of remotely sensed land surface temperature data. Defining cities as spatial clusters of urban land cover, we investigate the relationships of the UHI intensity, with the cluster size and the temperature of the surroundings. Our results show that in Europe, the UHI intensity in summer has a strong correlation with the cluster size, which can be well fitted by an empirical sigmoid model. Furthermore, we find a novel seasonality of the UHI intensity for individual clusters in the form of hysteresis-like curves. We characterize the shape and identify apparent regional patterns.
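The empirical sigmoid relation between UHI intensity and cluster size mentioned above can be sketched as a logistic curve in the logarithm of cluster size. The functional form and all parameter values below are hypothetical placeholders for illustration, not the published fit:

```python
# Sketch of an empirical sigmoid relation between UHI intensity and
# cluster size, of the kind fitted in the study. The logistic-in-log10
# form and the parameter values are hypothetical, not the published fit.
import math

def uhi_intensity(cluster_size_km2, delta_t_max=3.0, s0=2.0, k=1.5):
    """UHI intensity (K) as a logistic function of log10 cluster size.

    delta_t_max : saturation intensity for very large clusters
    s0          : log10 of the cluster size at the curve midpoint
    k           : steepness of the transition
    """
    x = math.log10(cluster_size_km2)
    return delta_t_max / (1.0 + math.exp(-k * (x - s0)))

# Intensity rises monotonically and saturates with cluster size:
for size_km2 in (1, 10, 100, 1_000, 10_000):
    print(size_km2, round(uhi_intensity(size_km2), 2))
```

The key qualitative feature, matching the correlation reported for European summer data, is that intensity grows with cluster size but flattens toward a maximum for the largest urban clusters.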
In the last decade, growing interest has emerged in quantifying the spatial and temporal variations in mountain building. Until recently, insufficient data have been available to attempt such a task at the scale of large orogens such as the Himalaya. The Himalaya accommodates ongoing convergence between India and Eurasia and is a focal point for studying orogen evolution and hypothesized interactions between tectonics and climate. Here we integrate 1126 published bedrock mineral cooling ages with a transient 1D Monte-Carlo thermal-kinematic erosion model to quantify the denudation histories along ~2700 km of the Himalaya. The model's free parameter is a temporally variable denudation rate from 50 Ma to present. Thermophysical material properties and boundary conditions were tuned to individual study areas. Monte-Carlo simulations were conducted to identify the range of denudation histories that can reproduce the observed cooling ages. Results indicate large temporal and spatial variations in denudation, and these are resolvable across different tectonic units of the Himalaya. More specifically, across >1000 km of the southern Greater Himalaya, denudation rates were highest (~1.5-3 mm/yr) between ~10 and 2 Ma and lower (0.5-2.6 mm/yr) over the last 2 Myr. These differences are best determined in the NW Himalaya. In contrast, across the ~2500 km length of the northern Greater Himalaya, denudation rates vary over length scales of ~300-1700 km. Slower denudation (<1 mm/yr) occurred between 10 and 4 Ma, followed by a large increase (1.2-2.6 mm/yr) in the last ~4 Ma. We find that only the southern Greater Himalayan Sequence clearly supports a continuous co-evolution of tectonics, climate, and denudation. Results from the higher-elevation northern Greater Himalaya suggest either tectonically driven variations in denudation due to a ramp-flat geometry in the main decollement, or recent glacially enhanced denudation, or both.
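The Monte-Carlo idea behind the modelling can be sketched in grossly simplified form: draw candidate denudation rates, predict a bedrock cooling age, and keep only the candidates that reproduce the observed age within its uncertainty. Unlike the study's transient 1D thermal-kinematic model, this sketch assumes a constant rate and a fixed closure depth, so every number in it is illustrative only:

```python
# Grossly simplified rejection-sampling sketch of Monte-Carlo inversion
# of a cooling age for denudation rate. Assumes steady denudation and a
# fixed closure-isotherm depth; all values are illustrative placeholders.
import random

CLOSURE_DEPTH_KM = 4.0  # assumed depth of the closure isotherm

def predicted_age_ma(denudation_rate_mm_yr):
    """Cooling age (Ma) for steady denudation: depth / rate.

    Note 1 mm/yr = 1 km/Myr, so km / (mm/yr) gives Myr directly."""
    return CLOSURE_DEPTH_KM / denudation_rate_mm_yr

def accepted_rates(observed_age_ma, age_sigma_ma, n_draws=10_000, seed=42):
    """Rejection-sample denudation rates consistent with one age."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        rate = rng.uniform(0.1, 5.0)  # flat prior: 0.1-5 mm/yr
        if abs(predicted_age_ma(rate) - observed_age_ma) <= age_sigma_ma:
            kept.append(rate)
    return kept

# An observed age of 2 +/- 0.2 Ma implies rates near 4/2 = 2 mm/yr:
rates = accepted_rates(observed_age_ma=2.0, age_sigma_ma=0.2)
print(round(min(rates), 2), round(max(rates), 2))
```

In the actual study the forward model is far richer (transient geotherms, tuned thermophysical properties, time-variable rates), but the accept/reject logic over randomly drawn denudation histories is the same in spirit.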
Direct hysteresis measurements on ferroelectret films by means of a modified Sawyer-Tower circuit (2013)
Ferro- and piezo-electrets are non-polar polymer foams or film systems with internally charged cavities. Since their invention more than two decades ago, ferroelectrets have become a welcome addition to the range of piezo-, pyro-, and ferro-electric materials available for device applications. A polarization-versus-electric-field hysteresis is an essential feature of a ferroelectric material and may also be used for determining some of its main properties. Here, a modified Sawyer-Tower circuit and a combination of unipolar and bipolar voltage waveforms are employed to record hysteresis curves on cellular-foam polypropylene ferroelectret films and on tubular-channel fluoroethylenepropylene copolymer ferroelectret film systems. Internal dielectric barrier discharges (DBDs) are required for depositing the internal charges in ferroelectrets. The true amount of charge transferred during the internal DBDs is obtained from voltage measurements on a standard capacitor connected in series with the sample, with a much larger capacitance than that of the sample. Another standard capacitor with a much smaller capacitance (which is, however, still considerably larger than the sample capacitance) is also connected in series as a high-voltage divider protecting the electrometer against destructive breakdown. It is shown how the DBDs inside the polymer cavities lead to phenomenological hysteresis curves that cannot be distinguished from the hysteresis loops found on other ferroic materials. The physical mechanisms behind the hysteresis behavior are described and discussed.
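The charge readout in a Sawyer-Tower measurement reduces to simple arithmetic: the charge transferred through the sample equals the charge on the large series reference capacitor, Q = C_ref * V_ref, and dividing by the electrode area gives the polarization. All component values below are illustrative, not those of the instrument described in the text:

```python
# Minimal sketch of the charge readout in a Sawyer-Tower measurement.
# Q = C_ref * V_ref on the series reference capacitor; P = Q / area.
# Component values are illustrative placeholders only.

def transferred_charge(c_ref_farad, v_ref_volt):
    """Charge on the series reference capacitor (coulombs)."""
    return c_ref_farad * v_ref_volt

def polarization(charge_coulomb, electrode_area_m2):
    """Projected polarization of the sample (C/m^2)."""
    return charge_coulomb / electrode_area_m2

C_REF = 100e-9  # 100 nF reference capacitor (>> sample capacitance)
V_REF = 0.5     # measured voltage across it (V)
AREA = 1e-4     # 1 cm^2 electrode area

q = transferred_charge(C_REF, V_REF)  # 5e-8 C
print(polarization(q, AREA))          # polarization in C/m^2
```

The requirement C_ref >> C_sample, stressed in the abstract, ensures that almost the entire applied voltage drops across the sample while V_ref stays small, so the reference capacitor acts as a charge meter without distorting the measurement.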
Changing food consumption patterns and associated greenhouse gas (GHG) emissions have been a matter of scientific debate for decades. The agricultural sector is one of the major GHG emitters and thus holds a large potential for climate change mitigation through optimal management and dietary changes. We assess this potential, project emissions, and investigate dietary patterns and their changes globally on a per-country basis between 1961 and 2007. Sixteen representative and spatially differentiated patterns with a per capita calorie intake ranging from 1,870 to >3,400 kcal/day were derived. Detailed analyses show that low-calorie diets are decreasing worldwide while diet composition is changing in parallel: a discernible shift towards more balanced diets can be observed in developing countries, along with steps towards the meat-rich diets typical of developed countries. Low-calorie diets, which are mainly observed in developing countries, show an emission burden similar to that of moderate- and high-calorie diets. This can be explained by a less efficient calorie production per unit of GHG emissions in developing countries. Very high calorie diets are common in the developed world and exhibit high total per capita emissions of 3.7-6.1 kg CO2eq./day due to high carbon intensity and high intake of animal products. In the case of unbridled demographic growth and changing dietary patterns, projected emissions from agriculture will approach 20 Gt CO2eq./yr by 2050.
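A back-of-the-envelope check (round numbers assumed, not the study's model) confirms that the quoted per-capita range is consistent in order of magnitude with the ~20 Gt CO2eq./yr projection for 2050:

```python
# Order-of-magnitude check of the 2050 projection (assumed round numbers):
# per-capita dietary emissions times projected population, in Gt/yr.
per_capita_kg_day = 5.5    # assumed value inside the 3.7-6.1 kg range
population_2050 = 9.5e9    # assumed population projection
annual_gt = per_capita_kg_day * 365 * population_2050 / 1e12
print(f"~{annual_gt:.0f} Gt CO2eq./yr")
```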
Tetrafluoroethylene-hexafluoropropylene copolymer (FEP) films were treated with titanium-tetrachloride vapor in a molecular-layer deposition process. As a result of the surface treatment, significant improvements of the thermal and temporal charge stability were observed. Charge-decay measurements revealed enhancements of the half-value temperatures and the relaxation times of positively charged FEP electrets by at least 120 °C and two orders of magnitude, respectively. Beyond previous publications on fluoropolymer electrets with surface modification, we here report enhanced charge stabilities of the FEP films charged in negative as well as in positive corona discharges. Even though the improvement for negatively charged FEP films is moderate (half-value temperature about 20 °C higher), our experiments show that the asymmetry in positive and negative charge stability that is typical for FEP electrets can be overcome by means of chemical surface treatments. The results are discussed in the context of the formation of modified surface layers with enhanced charge-trapping properties.
While sea level rise is one of the most likely consequences of climate change, the costs it will provoke remain highly uncertain. Based on a block-maxima approach, we provide a stochastic framework to estimate the increase of expected damages with sea level rise as well as with meteorological changes, and demonstrate the application in two case studies. In addition, the uncertainty of the damage estimates due to the stochastic nature of extreme events is studied. Starting with the probability distribution of extreme flood levels, we calculate the distribution of implied damages in a specific region employing stage-damage functions. Universal relations of the expected damages and their standard deviation, which demonstrate the importance of the shape of the damage function, are provided. We also calculate how flood protection reduces the damages, leading to a more complex picture in which the extreme value behavior plays a fundamental role. Citation: Boettle, M., D. Rybski, and J. P. Kropp (2013), How changing sea level extremes and protection measures alter coastal flood damages, Water Resour. Res., 49, 1199-1210, doi: 10.1002/wrcr.20108.
Temporal evolution of the re-breakdown voltage in small gaps from nanoseconds to milliseconds
(2013)
A detailed understanding of electric breakdown in dielectrics is of scientific and technological interest. In gaseous dielectrics, a so-called re-breakdown is sometimes observed after extinction of the previous discharge. Although the time dependence of the re-breakdown voltage is essentially known, its behavior immediately after the previous discharge is not precisely understood. We present an electronic circuit for accurate measurements of the time-dependent re-breakdown voltage in small gaps, from tens of nanoseconds to several milliseconds after the previous spark. Results from such experiments are compared with earlier findings, and relevant physical mechanisms such as heating of the gas, decay of the plasma, and ionization of excited atoms and molecules are discussed. It is confirmed that the thermal model is not valid at times below several microseconds.
Urban agglomerations exhibit complex emergent features, of which Zipf’s law, i.e., a power-law size distribution, and fractality may be regarded as the most prominent. We propose a simplistic model for the generation of city-like structures which is based solely on the assumption that growth is more likely to take place close to inhabited space. The model involves one parameter, an exponent determining how strongly the attraction decays with distance. In addition, the model is run iteratively so that existing clusters can grow (together) and new ones can emerge. The model is capable of reproducing the size distribution and the fractality of the boundary of the largest cluster. Although the power-law distribution depends on both the imposed exponent and the iteration, the fractality seems to be independent of the former and depends only on the latter. Analyzing land-cover data, we estimate the parameter value γ ≈ 2.5 for Paris and its surroundings. DOI: 10.1103/PhysRevE.87.042114
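The growth rule described above admits a very compact sketch. The grid size, number of steps, and seed cell below are illustrative choices, not values from the paper: an empty cell becomes occupied with probability proportional to the sum of inverse-power attractions from all already-inhabited cells.

```python
import random

# Toy implementation of a distance-decaying growth rule (grid size and
# step count are illustrative): an empty cell is occupied with probability
# proportional to sum_j d_ij**(-gamma) over the inhabited cells j.
GAMMA = 2.5    # the exponent the paper estimates for Paris
SIZE = 21

occupied = {(SIZE // 2, SIZE // 2)}    # single seed settlement in the centre

def attraction(cell):
    """Attraction of an empty cell: inverse-power sum over occupied cells."""
    return sum(((cell[0] - x) ** 2 + (cell[1] - y) ** 2) ** (-GAMMA / 2)
               for (x, y) in occupied)

random.seed(0)
for _ in range(60):    # one new cell per growth step
    candidates = [(i, j) for i in range(SIZE) for j in range(SIZE)
                  if (i, j) not in occupied]
    weights = [attraction(c) for c in candidates]
    occupied.add(random.choices(candidates, weights=weights)[0])

print(f"{len(occupied)} occupied cells after 60 growth steps")
```

Smaller γ spreads growth out and favours new detached clusters; larger γ concentrates growth at the edge of existing ones, which is the single-parameter trade-off the abstract describes.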
Shape-memory polymers (SMPs) are stimuli-sensitive materials capable of performing complex movements on demand, which makes them interesting candidates for various applications, for example, in biomedicine or aerospace. This trend article highlights current approaches in the chemistry of SMPs, such as tailored segment chemistry to integrate additional functions and novel synthetic routes toward permanent and temporary netpoints. Multiphase polymer networks and multimaterial systems illustrate that SMPs can be constructed as a modular system of different building blocks and netpoints. Future developments are aiming at multifunctional and multistimuli-sensitive SMPs.
We report the discovery of extended X-ray emission within the young star cluster NGC 602a in the Wing of the Small Magellanic Cloud (SMC) based on observations obtained with the Chandra X-Ray Observatory. X-ray emission is detected from the cluster core area with the highest stellar density and from a dusty ridge surrounding the H II region. We use a census of massive stars in the cluster to demonstrate that a cluster wind or wind-blown bubble is unlikely to provide a significant contribution to the X-ray emission detected from the central area of the cluster. We therefore suggest that X-ray emission at the cluster core originates from an ensemble of low- and solar-mass pre-main-sequence (PMS) stars, each of which would be too weak in X-rays to be detected individually. We attribute the X-ray emission from the dusty ridge to the embedded tight cluster of the newborn stars known in this area from infrared studies. Assuming that the levels of X-ray activity in young stars in the low-metallicity environment of NGC 602a are comparable to their Galactic counterparts, then the detected spatial distribution, spectral properties, and level of X-ray emission are largely consistent with those expected from low- and solar-mass PMS stars and young stellar objects (YSOs). This is the first discovery of X-ray emission attributable to PMS stars and YSOs in the SMC, which suggests that the accretion and dynamo processes in young, low-mass objects in the SMC resemble those in the Galaxy.
A detailed X-ray investigation of zeta Puppis - II. The variability on short and long timescales
(2013)
Stellar winds are a crucial component of massive stars, but their exact properties remain uncertain. To shed some light on this subject, we have analyzed an exceptional set of X-ray observations of zeta Puppis, one of the closest and brightest massive stars. The sensitive light curves that were derived reveal two major results. On the one hand, a slow modulation of the X-ray flux (with a relative amplitude of up to 15% over 16 hr in the 0.3-4.0 keV band) is detected. Its characteristic timescale cannot be determined with precision, but ranges from one to several days. It could be related to corotating interaction regions, known to exist in zeta Puppis from UV observations. Hour-long changes, linked to flares or to the pulsation activity, are not observed in the last decade covered by the XMM observations; the 17 hr tentative period, previously reported in a ROSAT analysis, is not confirmed either and is thus transient, at best. On the other hand, short-term changes are surprisingly small (<1% relative amplitude for the total energy band). In fact, they are compatible solely with the presence of Poisson noise in the data. This surprisingly low level of short-term variability, in view of the embedded wind-shock origin, requires a very high fragmentation of the stellar wind, for both absorbing and emitting features (>10^5 parcels, compared with a two-dimensional wind model). This is the first time that constraints have been placed on the number of clumps in an O-type star wind, and this from X-ray observations.
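The statement that the short-term changes are "compatible solely with the presence of Poisson noise" corresponds to a standard test: for a constant source, binned photon counts are Poisson-distributed, so their variance should match their mean (Fano factor ≈ 1). The sketch below applies that test to synthetic data with an assumed count rate; it is an illustration of the statistical idea, not the paper's analysis pipeline.

```python
import math
import random
import statistics

# Fano-factor test on synthetic binned counts (assumed count rate):
# for pure Poisson noise, variance / mean of the counts is ~1.

def poisson(lam, chunk=30.0):
    """Knuth's multiplicative Poisson sampler, split into chunks so
    exp(-lam) does not underflow for large rates."""
    k = 0
    while lam > 0:
        step = min(lam, chunk)
        limit, p = math.exp(-step), 1.0
        n = -1
        while p > limit:
            p *= random.random()
            n += 1
        k += n
        lam -= step
    return k

random.seed(7)
counts = [poisson(400.0) for _ in range(2000)]    # assumed 400 counts/bin
fano = statistics.pvariance(counts) / statistics.fmean(counts)
print(f"Fano factor {fano:.2f}")    # ~1 for pure Poisson noise
```

Intrinsic source variability would inflate the variance above the Poisson expectation, pushing the Fano factor above 1; the absence of such excess is what constrains the wind clumping.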
The UN Human Rights Council
(2013)
UN profiles of small and medium-sized states: the examples of Switzerland, Austria, and Liechtenstein
(2013)
I. Small and medium-sized states in the United Nations
II. The small-state debate in the United Nations
III. The UN profiles of Austria, Switzerland, and Liechtenstein
IV. Accession to the United Nations
V. The current foreign-policy significance of the United Nations
VI. An assessment of the present position
VII. Current focal points of UN involvement
VIII. Summary
Germany and the United Nations
(2013)
Adapting sectors to changed climatic conditions requires an understanding of regional vulnerabilities. Vulnerability is defined as a function of sensitivity and exposure, which together represent the potential impacts of climate change, and of the adaptive capacity of systems. Vulnerability studies that quantify these components have become an important tool in climate science. From a scientific perspective, however, there is disagreement on how this definition should be implemented in studies. This conflict gives rise to many challenges, above all concerning the quantification and aggregation of the individual components and their appropriate levels of complexity. This dissertation therefore aims to advance the applicability of the vulnerability concept by translating it into a systematic structure. This structure covers all components and proposes, for each climate impact (e.g., flash floods), a description of the vulnerable system (e.g., settlements) that is directly linked to a specific direction of a relevant climatic stimulus (e.g., stronger impacts with an increase in days of heavy rainfall). Regarding the challenging procedure of aggregation, two alternative methods that allow a cross-sectoral overview are presented and their advantages and disadvantages discussed. The developed structure of a vulnerability study is then applied, using an indicator-based and deductive approach, to the municipalities of North Rhine-Westphalia in Germany as an example; a transfer to other regions nevertheless remains possible. The quantification for the municipalities draws on information from the literature.
Since suitable indicators were lacking for many sectors, new indicators are developed and applied in this work, for example for the forestry and health sectors. However, missing empirical data on relevant thresholds constitute a gap, for instance regarding the magnitude of climatic change that triggers a significant impact. As a consequence, the study can only make relative statements about the degree of vulnerability of each municipality compared with the rest of the federal state. To fill this gap, the present and future windthrow hazard of forests is calculated for the forestry sector as an example. For this purpose, forest characteristics are linked to empirical damage data from a past storm event, and the resulting sensitivity value is then combined with the wind conditions. Cross-sectoral vulnerability studies require considerable resources, which often hampers their applicability. In a next step, the potential for reducing this complexity is therefore examined using two sectoral examples. Numerous meteorological indices of widely differing complexity are available for predicting the occurrence of forest fires. With respect to the number of monthly forest fires, relative humidity shows better predictive power than more complex indices for most German federal states, even though it itself serves as an input variable to those more complex indices. The forest fire hazard in German regions can thus be expressed with sufficient accuracy by this single meteorological factor, which increases the resource efficiency of studies. Methodological complexity is examined in a similar way for the application of the eco-hydrological model SWIM to the Brandenburg region.
The interannual soil water values simulated by this model are only inadequately reproduced by a simpler statistical model built on the same input data. Over a time horizon of decades, however, the statistical approach reproduces soil water satisfactorily and shows a dominance of the soil property field capacity. This suggests that complexity, in terms of the number of input variables, can be reduced for long-term calculations, although the conclusions are limited by the lack of observed soil water values for validation. The present studies of vulnerability and its components have shown that application is still scientifically challenging. Following the vulnerability definition used here, numerous problems arise during implementation in regional studies. This dissertation has made progress on the gaps identified in previous studies by developing a systematic structure for describing and aggregating vulnerability components. Several approaches were discussed for this purpose, each with advantages and disadvantages that future studies should likewise weigh carefully before application. Furthermore, it became apparent that some approaches have the potential to be simplified, although further investigation is needed. Overall, the dissertation has strengthened the application of vulnerability studies as a tool to support adaptation measures.
HPI Future SOC Lab
(2013)
The “HPI Future SOC Lab” is a cooperation of the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners. The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and In-Memory technologies. This technical report presents results of research projects executed in 2012. Selected projects presented their results on June 18th and November 26th, 2012 at the Future SOC Lab Day events.
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving the content and sharing it with others who cannot be in the same location. They also lack digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate the enablers of and obstacles to the adoption of digital whiteboard systems. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from earlier research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I will present three studies on the use of Tele-Board with different user groups and foci. I will use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases it is not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer.
However, I found that these perceived benefits are very different for each user and usage context. If a tool provides possibilities to use in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries with a constantly growing user base. Its use, advantages, and disadvantages will be described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I will present a detailed analysis of digital whiteboard use in different contexts with design implications for future systems.
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference due to consolidation in the cloud environment complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource-provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scale thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution which does not require provider involvement. To evaluate our approaches and the designed algorithms at a large scale, we developed a simulator called ScaleSim.
In the simulator, we implemented scalability components mirroring those of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment. The workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the resource-provisioning overhead impact with only a 9% increase in cost.
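The combination of scale thresholds and proactive forecasting described above can be sketched as a simple decision rule. This is not the thesis's actual algorithm: the thresholds and provisioning delay are assumed values, and a naive linear-trend forecast stands in for the proposed time-series forecasting algorithm.

```python
# Sketch of threshold-based horizontal scaling with a proactive forecast
# (all parameter values assumed; the forecast is a placeholder for the
# thesis's time-series algorithm).
SCALE_OUT_UTIL = 0.75    # assumed upper utilisation threshold
SCALE_IN_UTIL = 0.30     # assumed lower utilisation threshold
PROVISION_DELAY = 3      # assumed VM start-up time, in monitoring ticks

def forecast(history, horizon):
    """Naive linear-trend forecast over `horizon` ticks."""
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]
    return history[-1] + trend * horizon

def scaling_decision(util_history, vms):
    """Scale on the *predicted* utilisation so a new VM is ready in time."""
    predicted = forecast(util_history, PROVISION_DELAY)
    if predicted > SCALE_OUT_UTIL:
        return vms + 1        # start a VM now, before the threshold is hit
    if predicted < SCALE_IN_UTIL and vms > 1:
        return vms - 1
    return vms

print(scaling_decision([0.50, 0.60, 0.70], vms=2))
```

Acting on the forecast rather than the current measurement is what hides the provisioning delay: by the time the predicted load arrives, the extra VM is already running.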
For the present study, "Qualitative study of the acceptance of the new German identity card and development of proposals for improving the usability of the AusweisApp software", an innovation team used the Design Thinking method to work on the question "How can we make the AusweisApp intuitive and understandable for users?" First, the acceptance of the new identity card was tested. Citizens were surveyed about their knowledge and expectations regarding the new identity card, as well as about their general use of it, their use of the online identification function, and the usability of the AusweisApp. Users were also observed while using the current AusweisApp and interviewed afterwards, which allowed deep insight into their needs. The results of the qualitative study were used to develop improvement proposals for the AusweisApp that meet citizens' needs. The proposals for optimizing the AusweisApp were implemented as prototypes and tested with potential users. The tests showed that the newly developed features make access to the online identification function considerably easier for citizens. Overall, the degree of acceptance of the new identity card was found to diverge strongly, with respondents' attitudes ranging from scepticism to approval: the new identity card is a topic that polarizes citizens. The user tests revealed numerous opportunities for improving the existing service design, both around the new identity card itself and in connection with the software used. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate and verify its proposals. The elaborated proposals relate to the AusweisApp.
The new functions essentially comprise: direct access to the service providers; extensive help features (tooltips, FAQ, wizard, video); a history function; and a sample service that makes the online identification function tangible. In particular, the new version of the AusweisApp should offer users fields of application for their new identity card and real added value. Developing further functions of the AusweisApp can help the new identity card reach its full potential.
HPI Future SOC Lab
(2013)
Together with industrial partners, the Hasso-Plattner-Institut (HPI) is currently establishing an “HPI Future SOC Lab”, which will provide a complete infrastructure for research on on-demand systems. The lab makes the latest multi-/many-core hardware available for practical implementation, testing, and further development. The necessary components for such a highly ambitious project are provided by renowned companies: Fujitsu and Hewlett-Packard provide their latest 4- and 8-way servers with 1-2 TB RAM, SAP will make available its latest Business ByDesign (ByD) system in its most complete version, EMC² provides high-performance storage systems, and VMware offers virtualization solutions. The lab will operate on the basis of real data from large enterprises. The HPI Future SOC Lab, which will be open for use by interested researchers from other universities as well, will provide an opportunity to study real-life complex systems and follow new ideas all the way to their practical implementation and testing. This technical report presents results of research projects executed in 2011. Selected projects presented their results on June 15th and October 26th, 2011 at the Future SOC Lab Day events.
INTRICATE/SEC 2012 Workshop held in Conjunction with The 11th Information Security South Africa Conference (ISSA 2012).
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the information constituting a context in video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation.
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence values, and the metadata pertaining to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic exploration of videos.
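The confidence-ordered disambiguation described above can be sketched in a few lines. The field names, candidate entities, and scores below are invented for illustration: items are processed from high to low confidence, and entities disambiguated earlier serve as context for the items that follow.

```python
# Sketch of confidence-ordered entity disambiguation (all data invented
# for illustration): high-confidence items are resolved first and then
# act as context for the more ambiguous ones.
def disambiguate(item, context, candidates):
    """Pick the candidate entity sharing the most links with the context."""
    return max(candidates[item["text"]],
               key=lambda e: len(set(e["links"]) & context))

metadata = [
    {"text": "Berlin", "confidence": 0.9},   # e.g. from curated title metadata
    {"text": "Mitte",  "confidence": 0.4},   # e.g. from noisy user tags
]
candidates = {   # hypothetical knowledge-base entries
    "Berlin": [{"id": "Berlin_Germany", "links": {"Germany", "Mitte_Berlin"}}],
    "Mitte":  [{"id": "Mitte_Berlin",  "links": {"Berlin_Germany"}},
               {"id": "Mitte_Hanover", "links": {"Hanover"}}],
}

context = set()
for item in sorted(metadata, key=lambda m: m["confidence"], reverse=True):
    entity = disambiguate(item, context, candidates)
    context.add(entity["id"])

print(sorted(context))
```

Because the reliable "Berlin" item is resolved first, the ambiguous "Mitte" is pulled towards the Berlin district rather than the Hanover one, which is the effect the ordering hypothesis predicts.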
The new interactive online educational platform openHPI (https://openHPI.de) from the Hasso Plattner Institute (HPI) offers freely accessible courses at no charge for all who are interested in subjects in the fields of information technology and computer science. Since 2011, "Massive Open Online Courses", MOOCs for short, have been offered, first at Stanford University and later at other elite U.S. universities. Following suit, openHPI provides instructional videos on the Internet and further reading material, combined with learning-supportive self-tests, homework, and a social discussion forum. Education is further stimulated by the support of a virtual learning community. In contrast to "traditional" lecture platforms, such as the tele-TASK portal (http://www.tele-task.de), where multimedia-recorded lectures are available on demand, openHPI offers didactically prepared online courses. The courses have a fixed start date and offer a balanced schedule of six consecutive weeks of multimedia and, wherever possible, interactive learning material. Each week, one chapter of the course subject is treated. In addition, a series of learning videos, texts, self-tests, and homework exercises are provided to course participants at the beginning of the week. The course offering is combined with a social discussion platform where participants have the opportunity to enter into an exchange with course instructors and fellow participants. Here, for example, they can get answers to questions and discuss the topics in depth. The participants naturally decide themselves about the type and range of their learning activities. They can make personal contributions to the course, for example in blog posts or tweets, which they can refer to in the forum. In turn, other participants have the chance to comment on, discuss, or expand on what has been said.
In this way, the learners, the teachers, and the learning content on offer are linked together in a virtual community - a social learning network.
On 29 and 30 November 2012, the 5th German IPv6 Summit 2012 took place at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH in Potsdam; this technical report serves as its documentation. As with the previous national IPv6 summits, the German IPv6 Council's goal for the 5th summit, held under the motto "IPv6 - the growth driver for the German economy", was to provide insights into current developments around the deployment of IPv6. Among other things, the advantages of the new Internet standard IPv6 were presented, and talks were given on the use of IPv6 in the mass market as well as its deployment in companies and in public administration. Further topics of the summit concerned the actions and conditions in companies and private households that are necessary for the switch to IPv6, and the experience already gathered in the process. In addition to talks by the Federal Commissioner for Data Protection, Peter Schaar, and the Managing Director of Technology of Telekom Deutschland GmbH, Bruno Jacobfeuerborn, further contributions by high-ranking representatives from politics, science, and industry were presented; they are compiled in this technical report.
A water soluble fluorescent polymer as a dual colour sensor for temperature and a specific protein
(2013)
We present two thermoresponsive water-soluble copolymers prepared via free radical statistical copolymerization of N-isopropylacrylamide (NIPAm) and of oligo(ethylene glycol) methacrylates (OEGMAs), respectively, with a solvatochromic 7-(diethylamino)-3-carboxy-coumarin (DEAC)-functionalized monomer. In aqueous solution, the NIPAm-based copolymer exhibits characteristic changes in its fluorescence profile in response to a change in solution temperature as well as to the presence of a specific protein, namely an anti-DEAC antibody. This polymer emits only weakly at low temperatures, but exhibits a marked fluorescence enhancement accompanied by a change in its emission colour when heated above its cloud point. Such drastic changes in the fluorescence and absorbance spectra are also observed upon injection of the anti-DEAC antibody, attributed to the specific binding of the antibody to DEAC moieties. Importantly, protein binding occurs exclusively when the polymer is in the well-hydrated state below the cloud point, enabling temperature control over the molecular recognition event. On the other hand, heating of the polymer-antibody complexes releases a fraction of the bound antibody. In the presence of the DEAC-functionalized monomer in this mixture, the released antibody competitively binds to the monomer, and the antibody-free chains of the polymer undergo a more effective collapse and inter-aggregation. In contrast, the emission properties of the analogous OEGMA-based copolymer are rather insensitive to the thermally induced phase transition or to antibody binding. These opposite behaviours underline the need for a carefully tailored molecular design of responsive polymers aimed at specific applications, such as biosensing.
Based on a numerical model of the Northeast German Basin (NEGB), we investigate the sensitivity of the calculated thermal field, as resulting from heat conduction, forced and free convection, to consecutive horizontal and vertical mesh refinements. Our results suggest that the computed temperatures are more sensitive to consecutive horizontal mesh refinements than to changes in the vertical resolution. In addition, the degree of mesh sensitivity depends strongly on the type of process being investigated, that is, whether heat conduction, forced convection, or free thermal convection is the active heat driver. In this regard, heat conduction proves to be relatively robust to imposed changes in the spatial discretization. A systematic mesh sensitivity is observed in areas where forced convection effectively overprints the background conductive thermal field. In contrast, free thermal convection must be regarded as the most mesh-sensitive heat transport process, as demonstrated by non-systematic changes in the temperature field with respect to imposed changes in the model resolution.
The spatial and temporal variability of a low-centred polygon on the eastern floodplain area of the lower Anabar River (72.070 degrees N, 113.921 degrees E; northern Yakutia, Siberia) has been investigated using a multi-method approach. The present-day vegetation in each square metre was analysed, revealing a community of Larix, shrubby Betula, and Salix on the polygon rim, a dominance of Carex and Andromeda polifolia in the rim-to-pond transition zone, and a predominantly monospecific Scorpidium scorpioides coverage within the pond. The total organic carbon (TOC) content, TOC/TN (total nitrogen) ratio, grain size, vascular plant macrofossils, moss remains, diatoms, and pollen were analysed for two vertical sections and a sediment core from a transect across the polygon. Radiocarbon dating indicates that the formation of the polygon started at least 1500 yr ago; the general positions of the pond and rim have not changed since that time. Two types of pond vegetation were identified, indicating two contrasting development stages of the polygon. The first was a well-established moss association, dominated by submerged or floating Scorpidium scorpioides and/or Drepanocladus spp. and overgrown by epiphytic diatoms such as Tabellaria flocculosa and Eunotia taxa. This stage coincides temporally with a period in which the polygon was only drained by lateral subsurface water flow, as indicated by mixed grain sizes. A different moss association occurred during times of repeated river flooding (indicated by homogeneous medium-grained sand that probably accumulated during the annual spring snowmelt), characterized by an abundance of Meesia triquetra and a dominance of benthic diatoms (e. g. Navicula vulpina), indicative of a relatively high pH and a high tolerance of disturbance. 
A comparison of the local polygon vegetation (inferred from moss and macrofossil spectra) with the regional vegetation (inferred from pollen spectra) indicated that the moss association with Scorpidium scorpioides became established during relatively favourable climatic conditions, while the association dominated by Meesia triquetra occurred during periods of harsh climatic conditions. Our study revealed a strong riverine influence (in addition to climatic influences) on polygon development and the type of peat accumulated.
Fluid flow in low-permeable carbonate rocks depends on the density of fractures, their interconnectivity, and the formation of fault damage zones. The present-day stress field influences the aperture and hence the transmissivity of fractures, whereas paleostress fields are responsible for the formation of faults and fractures. In low-permeable reservoir rocks, fault zones are among the major exploration targets. Before drilling, an estimate of the reservoir productivity of wells drilled into the damage zone of faults is therefore required. Owing to limitations in the available data, a characterization of such reservoirs usually relies on numerical techniques. These mathematical models must fully integrate the actual fault geometry, comprising the dimensions of the fault damage zone and of the fault core, and the individual assignment of properties to the fault zones in the hanging wall and footwall and to the host rock. The paper presents both the technical approach to developing such a model and the property definition of heterogeneous fault zones and host rock with respect to the current stress field. The case study describes a deep geothermal reservoir in the western central Molasse Basin in southern Bavaria, Germany. Results from numerical simulations indicate that well productivity can be enhanced along compressional fault zones if lateral interconnectivity of the fractures is provided by crossing synthetic and antithetic fractures. The model allows a deeper understanding of production tests and of the reservoir properties of faulted rocks.
The impact of inclined faults on the hydrothermal field is assessed by adding simplified structural settings to synthetic models. This study is innovative in that its numerical simulations integrate the real 3-D nature of flow influenced by a fault in a porous medium, thereby providing a useful tool for complex geothermal modelling. The 3-D simulations of the coupled fluid flow and heat transport processes are based on the finite element method. In the model, one geological layer is dissected by a dipping fault. Sensitivity analyses are conducted to quantify the effects of the fault's transmissivity on the fluid flow and the thermal field. Different fault models are compared with a fault-free model to evaluate the effect of varying fault transmissivity. The results show that faults have a significant impact on the hydrothermal field. Varying either the fault zone width or the fault permeability results in relevant differences in the pressure, velocity, and temperature fields. A linear relationship between fault zone width and fluid velocity is found, indicating that velocities increase with decreasing widths. Highly transmissive faults act as preferential pathways for advective heat transport, whereas almost no fluid may be transported through poorly transmissive faults.
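The inverse relation between fault zone width and fluid velocity reported above can be illustrated with Darcy's law: if the fault transmissivity (permeability times width) is held fixed, the mean flow velocity through the zone scales as 1/width. A minimal sketch, with hypothetical parameter values not taken from the study:

```python
# Sketch: for a fault zone of fixed transmissivity T = k * w,
# the Darcy velocity v = -(k / mu) * dP/dx scales inversely with width w.
MU = 1.0e-3      # fluid viscosity [Pa s] (water, assumed)
DPDX = -1.0e3    # pressure gradient along the fault [Pa/m] (assumed)
T = 1.0e-12      # fixed fault transmissivity k * w [m^3] (assumed)

def darcy_velocity(width_m):
    """Mean Darcy velocity [m/s] in a fault zone of given width,
    with permeability k = T / width so transmissivity stays constant."""
    k = T / width_m
    return -(k / MU) * DPDX

for w in (1.0, 10.0, 100.0):
    print(f"width = {w:6.1f} m  ->  v = {darcy_velocity(w):.2e} m/s")
```

The printed velocities drop by a factor of ten for each tenfold increase in width, which is the linear width-velocity relationship the abstract describes.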
Background: Clock genes govern circadian rhythms and shape the effect of alcohol use on the physiological system. Exposure to severe negative life events is related to both heavy drinking and disturbed circadian rhythmicity. The aim of this study was 1) to extend previous findings suggesting an association of a haplotype tagging single nucleotide polymorphism of PER2 gene with drinking patterns, and 2) to examine a possible role for an interaction of this gene with life stress in hazardous drinking.
Methods: Data were collected as part of an epidemiological cohort study on the outcome of early risk factors followed since birth. At age 19 years, 268 young adults (126 males, 142 females) were genotyped for PER2 rs56013859 and were administered a 45-day alcohol timeline follow-back interview and the Alcohol Use Disorders Identification Test (AUDIT). Life stress was assessed as the number of severe negative life events during the past four years reported in a questionnaire and validated by interview.
Results: Individuals with the minor G allele of rs56013859 were found to be less engaged in alcohol use, drinking on only 72% of the days compared to homozygotes for the major A allele. Moreover, among regular drinkers, a gene x environment interaction emerged (p = .020). While no effects of genotype appeared under conditions of low stress, carriers of the G allele exhibited less hazardous drinking than those homozygous for the A allele when exposed to high stress.
Conclusions: These findings may suggest a role of the circadian rhythm gene PER2 in both the drinking patterns of young adults and in moderating the impact of severe life stress on hazardous drinking in experienced alcohol users. However, in light of the likely burden of multiple tests, the nature of the measures used and the nominal evidence of interaction, replication is needed before drawing firm conclusions.
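The gene x environment interaction reported above has a simple structure: with a binary genotype and a binary stress measure, the interaction is the difference-in-differences of the four cell means. A sketch with simulated data (not the cohort's) illustrating the pattern the study describes, i.e. a genotype effect that appears only under high stress:

```python
import random

# Simulated data: the G allele is assumed protective only under high
# stress (illustrative values, not the study's estimates).
random.seed(1)

def simulate_subject():
    g = random.randint(0, 1)   # 1 = carries the minor G allele
    s = random.randint(0, 1)   # 1 = high life stress
    drinking = 5.0 - 2.0 * g * s + random.gauss(0.0, 1.0)
    return g, s, drinking

subjects = [simulate_subject() for _ in range(268)]

def cell_mean(gv, sv):
    vals = [d for g, s, d in subjects if g == gv and s == sv]
    return sum(vals) / len(vals)

# Interaction = genotype effect under high stress minus genotype
# effect under low stress (difference-in-differences).
interaction = (cell_mean(1, 1) - cell_mean(0, 1)) - (cell_mean(1, 0) - cell_mean(0, 0))
print(round(interaction, 2))  # near -2 for this simulation
```

A value near the simulated -2 recovers the built-in interaction; in the study itself the corresponding term was tested within a regression model.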
Intrusion Detection Systems are widely deployed in computer networks. As modern attacks become more sophisticated and the number of sensors and network nodes grows, the problem of false positives and alert analysis becomes more difficult to solve. Alert correlation has been proposed to analyse alerts and to decrease false positives. Knowledge about the target system or environment is usually necessary for efficient alert correlation. To represent the environment information as well as potential exploits, an Attack Graph (AG) built from the existing vulnerabilities is used. For a network, it is useful to generate an AG and thereby organize the known vulnerabilities in a structured way. In this article, a correlation algorithm based on AGs is designed that is capable of detecting multiple attack scenarios for forensic analysis. It can be parameterized to adjust its robustness and accuracy. A formal model of the algorithm is presented, and an implementation is tested to analyse the effect of the different parameters on a real set of alerts from a local network. To improve the speed of the algorithm, a multi-core version is proposed, and an HMM-supported version can be used to further improve the quality of the results. The parallel implementation is tested on a multi-core correlation platform, using CPUs and GPUs.
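The core idea of attack-graph-based correlation can be sketched in a few lines: alerts are mapped to attack-graph nodes, and an alert extends an existing scenario when the graph contains an edge from the scenario's last node to the alert's node. This is an illustrative simplification, not the paper's algorithm, and the graph below is hypothetical:

```python
# Hypothetical attack graph: node -> set of successor nodes.
ATTACK_GRAPH = {
    "scan":        {"exploit_web"},
    "exploit_web": {"priv_esc"},
    "priv_esc":    {"exfiltrate"},
    "exfiltrate":  set(),
}

def correlate(alerts):
    """Group a time-ordered list of (alert_id, ag_node) pairs into
    attack scenarios by following attack-graph edges."""
    scenarios = []
    for alert_id, node in alerts:
        for scenario in scenarios:
            last_node = scenario[-1][1]
            if node in ATTACK_GRAPH.get(last_node, set()):
                scenario.append((alert_id, node))
                break
        else:  # no scenario could be extended -> start a new one
            scenarios.append([(alert_id, node)])
    return scenarios

alerts = [(1, "scan"), (2, "exploit_web"), (3, "scan"), (4, "priv_esc")]
for s in correlate(alerts):
    print([a for a, _ in s])
```

Here alerts 1, 2, and 4 chain into one scenario along graph edges, while alert 3 starts a second scenario; parameters controlling robustness and accuracy (e.g. tolerating skipped graph nodes) would refine this matching step.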
With this paper, we assess the present-day conductive thermal field of the Glueckstadt Graben in NW Germany, which is characterized by large salt walls and diapirs structuring the graben fill. We use a finite element method to calculate the 3D steady-state conductive thermal field based on a lithosphere-scale 3D structural model that resolves the first-order structural characteristics of the graben and its underlying lithosphere. Model predictions are validated against measured temperatures in six deep wells. Our investigations show that the interaction of the thickness distributions and thermal rock properties of the different geological layers is of major importance for the distribution of temperatures in the deep subsurface of the Glueckstadt Graben. However, the local temperatures may result from the superposed effects of different controlling factors. Especially the upper sedimentary part of the model exhibits large lateral temperature variations, which correlate spatially with the shape of the thermally highly conductive Permian salt layer. Variations in the thickness and geometry of the salt cause two major effects, which provoke considerable lateral temperature variations at a given depth. (1) The "chimney effect" causes more efficient heat transport within salt diapirs. As a consequence, positive thermal anomalies develop in the upper part of and above salt structures, where the latter are covered by far less conductive sediments. In contrast, negative thermal anomalies are noticeable underneath salt structures. (2) The "thermal blanketing effect" is caused by sediments of low thermal conductivity, which provoke the local storage of heat where these insulating sediments are present. The latter effect leads to both local and regional thermal anomalies. Locally, this translates into higher temperatures where salt margin synclines are filled with thick insulating clastic sediments.
For the regional anomalies, the cumulative insulating effect of the entire sediment fill results in a long-wavelength variation of temperatures in response to heat refraction caused by the contrast between insulating sediments and the highly conductive crystalline crust. Finally, the longest wavelength of temperature variations is caused by the depth position of the isothermal lithosphere-asthenosphere boundary, which defines the regional variations of the overall geothermal gradient. We find that a conductive thermal model predicts the observed temperatures reasonably well for five of the six available wells, whereas the steady-state conductive approach appears not to be valid for the sixth well.
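The blanketing and chimney effects described above follow directly from 1-D steady-state conduction: with a constant basal heat flow q through a layered column, T(z) = T0 + q * sum(h_i / lambda_i), so a column of insulating sediments is warmer at depth than one containing conductive salt. A sketch with illustrative values, not the study's model parameters:

```python
# 1-D steady-state conduction through a layered column with constant
# heat flow Q: T(base) = T0 + Q * sum(thickness_i / conductivity_i).
Q = 0.065   # basal heat flow [W/m^2] (assumed)
T0 = 8.0    # surface temperature [degC] (assumed)

def temperature_at_base(layers):
    """layers: list of (thickness [m], thermal conductivity [W/(m K)])."""
    return T0 + Q * sum(h / lam for h, lam in layers)

# Two 3-km columns: one with conductive salt at depth ("chimney"),
# one with insulating clastics filling a salt margin syncline.
salt_column = [(1000, 2.0), (2000, 5.5)]      # clastics over salt
clastic_column = [(1000, 2.0), (2000, 1.8)]   # clastics only

print(temperature_at_base(salt_column))     # cooler base: salt drains heat
print(temperature_at_base(clastic_column))  # warmer base: blanketing
```

The insulating column ends up markedly hotter at the same depth, reproducing the sign of the anomalies described for salt structures and salt margin synclines.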
The deep thermal field in sedimentary basins can be affected by convection, conduction, or both, as a result of the structural inventory, the physical properties of the geological layers, and the physical processes taking place therein. For geothermal energy extraction, the controlling factors of the deep thermal field need to be understood in order to delineate favorable drill sites and exploitation compartments. We use geologically based 3-D finite element simulations to identify the geologic controls on the thermal field of the geothermal research site Groß Schönebeck, located in the eastern part of the North German Basin. Its target reservoir consists of Permian Rotliegend clastics that compose the lower part of a succession of Late Carboniferous to Cenozoic sediments, subdivided into several aquifers and aquicludes. The sedimentary succession includes a layer of mobilized Upper Permian Zechstein salt, which plays a special role for the thermal field due to its high thermal conductivity. Furthermore, the salt is impermeable and, due to its rheology, decouples the fault systems in the suprasalt units from the subsalt layers. Conductive and coupled fluid and heat transport simulations are carried out to assess the relative impact of the different heat transfer mechanisms on the temperature distribution. The measured temperatures in seven wells are used for model validation and show a better fit with models considering fluid and heat transport than with a purely conductive model. Our results suggest that advective and convective heat transport are important heat transfer processes in the suprasalt sediments. In contrast, thermal conduction mainly controls the subsalt layers. With a third simulation, we investigate the influence of a major permeable fault and of three impermeable faults dissecting the subsalt target reservoir and compare the results to the coupled model without faults.
The permeable fault may have a local, strong impact on the thermal, pressure and velocity fields whereas the impermeable faults only cause deviations of the pressure field.