The growing body of research on large-scale mass wasting events has so far only scarcely investigated the sedimentology of chaotic deposits from non-volcanic terrestrial landslides, such that an overarching and systematic terminological framework remains elusive. Yet recent work has emphasized the need to better understand the internal structure and composition of rockslide deposits as a means of characterising the mechanics of the final stages of runout and emplacement. We offer a comprehensive overview of the occurrence of rock fragmentation and frictional melt both at different geographic locations and in different sections within large (>10^6 m^3) rockslide masses. We argue that exposures of pervasively fragmented and interlocked jigsaw-cracked rock masses; basal melange containing rip-up clasts and phantom blocks; micro-breccia; and thin bands of basal frictionite are indispensable clues for identifying deposits from giant rockslides that may otherwise remain morphologically inconspicuous. These sedimentary assemblages are diagnostic tools for distinguishing large rockslide debris from macro- and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias, and thus help avoid palaeoclimatic and tectonic misinterpretations, let alone misestimates of the hazard from giant rockslides. Moreover, experimental results from Mössbauer spectroscopy of frictionite samples support visual interpretations of thin sections, and demonstrate that short-lived (<10 s) friction-induced partial melting at temperatures >1500 °C in the absence of water occurred at the base of several giant moving rockslides. This finding supports previous theories of dry excess runout accompanied by comminution of rock masses down to the µm scale, and indicates that catastrophic motion of large fragmenting rock masses does not require water as a lubricant.
The solar outer atmosphere is an extremely dynamic environment characterized by the continuous interplay between the plasma and the magnetic field that generates and permeates it. Such interactions play a fundamental role in hugely diverse astrophysical systems, but occur at scales that cannot be studied outside the solar system. Understanding this complex system requires concerted, simultaneous solar observations from the visible to the vacuum ultraviolet (VUV) and soft X-rays, at high spatial resolution (between 0.1'' and 0.3''), at high temporal resolution (on the order of 10 s, i.e., the time scale of chromospheric dynamics), with a wide temperature coverage (0.01 MK to 20 MK, from the chromosphere to the flaring corona), and the capability of measuring magnetic fields through spectropolarimetry at visible and near-infrared wavelengths. Simultaneous spectroscopic measurements sampling the entire temperature range are particularly important. These requirements are fulfilled by the Japanese Solar-C mission (Plan B), composed of a spacecraft in a geosynchronous orbit with a payload providing a significant improvement of imaging and spectropolarimetric capabilities in the UV, visible, and near-infrared with respect to what is available today and foreseen in the near future. The Large European Module for solar Ultraviolet Research (LEMUR), described in this paper, is a large VUV telescope feeding a scientific payload of high-resolution imaging spectrographs and cameras. LEMUR consists of two major components: a VUV solar telescope with a 30 cm diameter mirror and a focal length of 3.6 m, and a focal-plane package composed of VUV spectrometers covering six carefully chosen wavelength ranges between 170 and 1270 Å. The LEMUR slit covers 280'' on the Sun with 0.14'' per pixel sampling. In addition, LEMUR is capable of measuring mass flow velocities (line shifts) down to 2 km s^-1 or better.
LEMUR has been proposed to ESA as the European contribution to the Solar C mission.
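The quoted angular performance can be sanity-checked against the telescope aperture. A minimal sketch (my own consistency check, not from the paper) applies the standard Rayleigh criterion, theta = 1.22 lambda / D, to the 30 cm mirror at the two ends of the 170–1270 Å band:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcsec."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# LEMUR values from the abstract: 30 cm mirror, 170-1270 Angstrom coverage
# (1 Angstrom = 1e-10 m)
for wl_angstrom in (170, 1270):
    theta = diffraction_limit_arcsec(wl_angstrom * 1e-10, 0.30)
    print(f"{wl_angstrom} A: diffraction limit ~{theta:.3f} arcsec")
```

At the long-wavelength end this gives roughly 0.11'', which is consistent with the stated 0.14'' per pixel sampling; at shorter wavelengths the optics, not diffraction, would set the effective resolution.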
Intransitive competition is widespread in plant communities and maintains their species richness
(2015)
Intransitive competition networks, those in which there is no single best competitor, may ensure species coexistence. However, their frequency and importance in maintaining diversity in real-world ecosystems remain unclear. We used two large data sets from drylands and agricultural grasslands to assess: (1) the generality of intransitive competition, (2) intransitivity-richness relationships and (3) effects of two major drivers of biodiversity loss (aridity and land-use intensification) on intransitivity and species richness. Intransitive competition occurred in >65% of sites and was associated with higher species richness. Intransitivity increased with aridity, partly buffering its negative effects on diversity, but was decreased by intensive land use, enhancing its negative effects on diversity. These contrasting responses likely arise because intransitivity is promoted by temporal heterogeneity, which is enhanced by aridity but may decline with land-use intensity. We show that intransitivity is widespread in nature and increases diversity, but it can be lost with environmental homogenisation.
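The defining property of an intransitive network, namely that pairwise dominance contains a cycle so no single species beats all others, can be illustrated with a toy check. This is a minimal sketch of the concept only; the study itself infers intransitivity from abundance data with dedicated metrics, and the function name and example species here are hypothetical:

```python
from itertools import permutations

def is_intransitive(beats: set) -> bool:
    """Return True if the pairwise 'beats' relation contains a
    rock-paper-scissors style cycle among any three species.
    `beats` is a set of (winner, loser) pairs."""
    species = {s for pair in beats for s in pair}
    return any(
        (a, b) in beats and (b, c) in beats and (c, a) in beats
        for a, b, c in permutations(species, 3)
    )

# Cyclic dominance: A beats B, B beats C, C beats A -> intransitive
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
# Strict hierarchy: A beats everyone -> transitive
hierarchy = {("A", "B"), ("B", "C"), ("A", "C")}

print(is_intransitive(cycle))      # True
print(is_intransitive(hierarchy))  # False
```

In the cyclic case removing any one species lets one of the remaining pair exclude the other, which is why such loops can stabilise coexistence.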
This study follows up on a previous downscaling intercomparison for present climate. Using a larger set of eight methods the authors downscale atmospheric fields representing present (1981-2000) and future (2046-65) conditions, as simulated by six global climate models following three emission scenarios. Local extremes were studied at 20 locations in British Columbia as measured by the same set of 27 indices, ClimDEX, as in the precursor study. Present and future simulations give 2 x 3 x 6 x 8 x 20 x 27 = 155 520 index climatologies whose analysis in terms of mean change and variation is the purpose of this study. The mean change generally reinforces what is to be expected in a warmer climate: that extreme cold events become less frequent and extreme warm events become more frequent, and that there are signs of more frequent precipitation extremes. There is considerable variation, however, about this tendency, caused by the influence of scenario, climate model, downscaling method, and location. This is analyzed using standard statistical techniques such as analysis of variance and multidimensional scaling, along with an assessment of the influence of each modeling component on the overall variation of the simulated change. It is found that downscaling generally has the strongest influence, followed by climate model; location and scenario have only a minor influence. The influence of downscaling could be traced back in part to various issues related to the methods, such as the quality of simulated variability or the dependence on predictors. Using only methods validated in the precursor study considerably reduced the influence of downscaling, underpinning the general need for method verification.
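The size of the simulation ensemble follows directly from the factorial design stated above; a one-line check (using the factor labels as given in the abstract) confirms the count:

```python
from math import prod

# Factorial design from the abstract:
# 2 periods x 3 emission scenarios x 6 climate models
# x 8 downscaling methods x 20 locations x 27 ClimDEX indices
factors = {"periods": 2, "scenarios": 3, "models": 6,
           "methods": 8, "locations": 20, "indices": 27}

n_climatologies = prod(factors.values())
print(n_climatologies)  # 155520
```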
We present two case studies that demonstrate how a common evaluation methodology can be used to assess the reliability of regional climate model simulations from different fields of research. In Case I, we focused on the agricultural yield loss risk for maize in Northeastern Brazil during a drought linked to an El Niño event. In Case II, the present-day regional climatic conditions in Europe for a 10-year period are simulated. To comprehensively evaluate the model results for both kinds of investigations, we developed a general methodology. On its basis, we elaborated and implemented modules to assess the quality of model results using both advanced visualization techniques and statistical algorithms. Besides univariate approaches for individual near-surface parameters, we used multivariate statistics to investigate multiple near-surface parameters of interest together. For the latter case, we defined generalized quality measures to quantify the model's accuracy. Furthermore, we elaborated a diagnosis tool applicable to atmospheric variables to assess the model's accuracy in representing the physical processes above the surface under various aspects. By means of this evaluation approach, it could be demonstrated in Case Study I that the accuracy of the applied regional climate model resides at the same level as that found for another regional model and a global model. Excessive precipitation during the rainy season in coastal regions could be identified as a major contribution leading to this result. In Case Study II, we also found the accuracy of the investigated mean characteristics for near-surface temperature and precipitation to be comparable to another regional model. In this case, an artificial modulation of the initial and boundary data during preprocessing could be identified as the major source of error in the simulation.
Altogether, the achieved results for the presented investigations indicate the potential of our methodology to be applied as a common test bed to different fields of research in regional climate modeling
We have undertaken a thorough dynamical investigation of five extrasolar planetary systems using extensive numerical experiments. The systems Gl 777 A, HD 72659, Gl 614, 47 Uma and HD 4208 were examined concerning the question of whether they could host terrestrial-like planets in their habitable zones (HZ). First we investigated the mean motion resonances between fictitious terrestrial planets and the existing gas giants in these five extrasolar systems. Then a fine grid of initial conditions for a potential terrestrial planet within the HZ was chosen for each system, from which the stability of orbits was then assessed by direct integrations over a time interval of 1 million years. For each of the five systems the 2-dimensional grid of initial conditions contained 80 eccentricity points for the Jovian planet and up to 160 semimajor axis points for the fictitious planet. The computations were carried out using a Lie-series integration method with an adaptive step size control. This integration method achieves machine precision accuracy in a highly efficient and robust way, requiring no special adjustments when the orbits have large eccentricities. The stability of orbits was examined with a determination of the Rényi entropy, estimated from recurrence plots, and with a more straightforward method based on the maximum eccentricity achieved by the planet over the 1 million year integration. Additionally, the eccentricity is an indication of the habitability of a terrestrial planet in the HZ; any value of e > 0.2 produces a significant temperature difference on a planet's surface between apoapse and periapse. The results for possible stable orbits for terrestrial planets in habitable zones for the five systems are: for Gl 777 A nearly the entire HZ is stable, for 47 Uma, HD 72659 and HD 4208 terrestrial planets can survive for a sufficiently long time, while for Gl 614 our results exclude terrestrial planets moving in stable orbits within the HZ.
Studies such as this one are of primary interest to future space missions dedicated to finding habitable terrestrial planets in other stellar systems. Assessing the likelihood of other habitable planets, and more generally the possibility of other life, is the central question of astrobiology today. Our investigation indicates that, from the dynamical point of view, habitable terrestrial planets seem to be compatible with many of the currently discovered extrasolar systems
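The maximum-eccentricity screen described above reduces to a simple classification rule: track the eccentricity of the fictitious planet over the integration, then flag the orbit as unstable if e approaches unity and as dynamically habitable only if the maximum stays below the e = 0.2 threshold quoted in the abstract. A minimal sketch (function name, thresholds other than 0.2, and the toy time series are my own illustration, not the authors' code):

```python
def classify_orbit(eccentricities, e_habitable=0.2, e_unstable=1.0):
    """Classify a fictitious terrestrial planet's orbit from the
    eccentricity time series of a long-term integration.
    e > 0.2 implies large apoapse/periapse temperature contrasts
    (per the abstract); e >= 1 means the orbit is unbound."""
    e_max = max(eccentricities)
    if e_max >= e_unstable:
        return "unstable"
    if e_max > e_habitable:
        return "stable but marginal for habitability"
    return "stable and habitable"

print(classify_orbit([0.02, 0.05, 0.11]))  # stable and habitable
print(classify_orbit([0.10, 0.35, 0.28]))  # stable but marginal for habitability
print(classify_orbit([0.40, 0.90, 1.20]))  # unstable
```

In the actual study this screen is applied at every node of the 80 x 160 grid of Jovian eccentricities and fictitious-planet semimajor axes, and cross-checked against the recurrence-plot Rényi entropy.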
We present deep VERITAS observations of the blazar PKS 1424+240, along with contemporaneous Fermi Large Area Telescope, Swift X-ray Telescope, and Swift UV Optical Telescope data between 2009 February 19 and 2013 June 8. This blazar resides at a redshift of z >= 0.6035, displaying a significantly attenuated gamma-ray flux above 100 GeV due to photon absorption via pair-production with the extragalactic background light. We present more than 100 hr of VERITAS observations over three years, a multiwavelength light curve, and the contemporaneous spectral energy distributions. The source shows a higher flux of (2.1 +/- 0.3) x 10^-7 photons m^-2 s^-1 above 120 GeV in 2009 and 2011 as compared to the flux measured in 2013, corresponding to (1.02 +/- 0.08) x 10^-7 photons m^-2 s^-1 above 120 GeV. The measured differential very high energy (VHE; E >= 100 GeV) spectral indices are Gamma = 3.8 +/- 0.3, 4.3 +/- 0.6 and 4.5 +/- 0.2 in 2009, 2011, and 2013, respectively. No significant spectral change across the observation epochs is detected. We find no evidence for variability at gamma-ray opacities of greater than tau = 2, where it is postulated that any variability would be small and occur on timescales longer than a year if hadronic cosmic-ray interactions with extragalactic photon fields provide a secondary VHE photon flux. The data cannot rule out such variability due to low statistics.
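The opacity quoted above translates directly into a surviving flux fraction: for absorption on the extragalactic background light, the observed flux is the intrinsic flux scaled by exp(-tau). A one-line sketch (my own illustration of the standard relation, not a calculation from the paper) shows how strongly tau = 2 suppresses the signal:

```python
import math

def attenuation(tau: float) -> float:
    """Fraction of gamma-ray flux surviving EBL pair-production
    absorption at optical depth tau: F_obs / F_intrinsic = exp(-tau)."""
    return math.exp(-tau)

print(f"tau = 2: {attenuation(2.0):.3f} of the intrinsic flux survives")
```

At tau = 2 only about 13.5% of the emitted photons reach the observer, which is why variability searches in this regime are statistics-limited.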
Introducing the CTA concept
(2013)
The Cherenkov Telescope Array (CTA) is a new observatory for very high-energy (VHE) gamma rays. CTA has ambitious science goals, which require achieving full-sky coverage, improving the sensitivity by about an order of magnitude over existing VHE gamma-ray observatories, and spanning about four decades in energy, from a few tens of GeV to above 100 TeV, with enhanced angular and energy resolution. An international collaboration has formed with more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America. In 2010 the CTA Consortium completed a Design Study and started a three-year Preparatory Phase leading to production readiness of CTA in 2014. In this paper we introduce the science goals and the concept of CTA, and provide an overview of the project.