Here we present the results of a detailed analysis of nebular abundances of commonly observed ions in the collisional ring galaxy Cartwheel, using the Very Large Telescope (VLT) Multi-Unit Spectroscopic Explorer (MUSE) data set. The analysis includes 221 H II regions in the star-forming ring, in addition to 40 relatively fainter H alpha-emitting regions in the spokes, disc, and the inner ring. The ionic abundances of He, N, O, and Fe are obtained using the direct method (DM) for 9, 20, 20, and 17 ring H II regions, respectively, where the temperature-sensitive S++ line is detected. For the rest of the regions, including all the nebulae between the inner and the outer ring, we obtained O abundances using the strong-line method (SLM). The ring regions have a median 12 + log(O/H) = 8.19 ± 0.15, log(N/O) = -1.57 ± 0.09, and log(Fe/O) = -2.24 ± 0.09 using the DM. Within the range of O abundances seen in the Cartwheel, the N/O and Fe/O values decrease proportionately with increasing O, suggesting local enrichment of O without corresponding enrichment of primary N and Fe. The O abundances of the disc H II regions obtained using the SLM show a well-defined radial gradient. The mean O abundance of the ring H II regions is lower by ~0.1 dex compared to the extrapolation of the radial gradient. The observed trends suggest the preservation of the pre-collisional abundance gradient, displacement of most of the processed elements to the ring, as predicted by the recent simulation by Renaud et al., and post-collisional infall of metal-poor gas in the ring.
Consensify
(2020)
A standard practice in palaeogenome analysis is the conversion of mapped short read data into pseudohaploid sequences, frequently by selecting a single high-quality nucleotide at random from the stack of mapped reads. This controls for biases due to differential sequencing coverage, but it does not control for differential rates and types of sequencing error, which are frequently large and variable in datasets obtained from ancient samples. These errors have the potential to distort phylogenetic and population clustering analyses, and to mislead tests of admixture using D statistics. We introduce Consensify, a method for generating pseudohaploid sequences that controls for biases resulting from differential sequencing coverage while greatly reducing error rates. The error correction is derived directly from the data itself, without the requirement for additional genomic resources or simplifying assumptions such as contemporaneous sampling. For phylogenetic and population clustering analysis, we find that Consensify is less affected by artefacts than methods based on single read sampling. For D statistics, Consensify is more resistant to false positives and appears to be less affected by biases resulting from different laboratory protocols than other frequently used methods. Although Consensify is developed with palaeogenomic data in mind, it is applicable to any low to medium coverage short read dataset. We predict that Consensify will be a useful tool for future studies of palaeogenomes.
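The contrast between standard single-read sampling and a consensus call with a majority rule can be sketched as follows. This is a simplified illustration of the idea only, not the published Consensify implementation; the subsample size of three reads and the majority threshold are assumptions made for the sketch.

```python
import random

def single_read_call(bases, rng):
    """Pseudohaploid call by sampling one mapped read at random
    (the standard approach described in the abstract)."""
    return rng.choice(bases)

def consensify_call(bases, rng, n=3):
    """Simplified consensus-style call: subsample up to n reads and
    require a strict majority to agree, otherwise return 'N' (no call).
    The subsample keeps the coverage-bias control of random sampling
    while suppressing isolated sequencing errors."""
    if len(bases) < 2:
        return "N"  # too few reads for any consensus
    sample = rng.sample(bases, min(n, len(bases)))
    for allele in set(sample):
        if sample.count(allele) * 2 > len(sample):
            return allele
    return "N"

rng = random.Random(42)
pileup = ["A", "A", "A", "G"]  # one likely sequencing error ('G')
print(single_read_call(pileup, rng))  # may return the erroneous 'G'
print(consensify_call(pileup, rng))   # majority vote suppresses the error
```

With three of four reads supporting 'A', any subsample of three reads contains a majority of 'A', so the consensus call is robust to the single error while single-read sampling returns 'G' a quarter of the time.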
Cloud model inversions of strong chromospheric absorption lines using principal component analysis
(2020)
High-resolution spectroscopy of strong chromospheric absorption lines nowadays delivers several million spectra per observing day when fast scanning devices are used to cover large regions on the solar surface. Therefore, fast and robust inversion schemes are needed to explore the large data volume. Cloud model (CM) inversions of the chromospheric H alpha line are commonly employed to investigate various solar features including filaments, prominences, surges, jets, mottles, and (macro-) spicules. The choice of the CM was governed by its intuitive description of complex chromospheric structures as clouds suspended above the solar surface by magnetic fields. This study is based on observations of active region NOAA 11126 in H alpha, which were obtained on November 18-23, 2010 with the echelle spectrograph of the Vacuum Tower Telescope at the Observatorio del Teide, Spain. Principal component analysis reduces the dimensionality of the spectra and conditions noise-stripped spectra for CM inversions. Modeled H alpha intensity and contrast profiles as well as CM parameters are collected in a database, which facilitates efficient processing of the observed spectra. Physical maps are computed representing the line-core and continuum intensity, absolute contrast, equivalent width, and Doppler velocities, among others. Noise-free spectra expedite the analysis of bisectors. The data processing is evaluated in the context of "big data," in particular with respect to automatic classification of spectra.
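The role principal component analysis plays here, projecting spectra onto a few leading components and reconstructing noise-stripped profiles, can be sketched with a toy data set. The Gaussian profile shapes, noise level, and component count below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Synthetic "spectra": 500 noisy absorption profiles on a 100-point
# wavelength grid (a stand-in for observed H alpha profiles).
rng = np.random.default_rng(0)
wave = np.linspace(-1.0, 1.0, 100)
depths = rng.uniform(0.3, 0.8, size=500)
spectra = 1.0 - depths[:, None] * np.exp(-(wave / 0.3) ** 2)
noisy = spectra + rng.normal(0.0, 0.05, size=spectra.shape)

# PCA via SVD of the mean-subtracted data matrix.
mean = noisy.mean(axis=0)
u, s, vt = np.linalg.svd(noisy - mean, full_matrices=False)

# Keep only the leading components; project and reconstruct,
# which strips the noise spread across the discarded components.
k = 5
denoised = mean + (u[:, :k] * s[:k]) @ vt[:k]

rms_before = np.sqrt(np.mean((noisy - spectra) ** 2))
rms_after = np.sqrt(np.mean((denoised - spectra) ** 2))
print(rms_before, rms_after)  # reconstruction error drops after truncation
```

Because the signal lives in a low-dimensional subspace while the noise is spread evenly over all components, truncating the expansion removes most of the noise variance, which is what conditions the spectra for the subsequent CM inversions.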
Aims. We study the evolution of an arch filament system (AFS) and of its individual arch filaments to learn about the processes occurring in them. Methods. We observed the AFS at the GREGOR solar telescope on Tenerife at high cadence with the very fast spectroscopic mode of the GREGOR Infrared Spectrograph (GRIS) in the He I 10 830 angstrom spectral range. The He I triplet profiles were fitted with analytic functions to infer line-of-sight (LOS) velocities and follow plasma motions within the AFS. Results. We tracked the temporal evolution of an individual arch filament over its entire lifetime, as seen in the He I 10 830 angstrom triplet. The arch filament expanded in height and extended in length from 13″ to 21″. The lifetime of this arch filament was about 30 min. About 11 min after the arch filament first appeared in He I, the loop top started to rise with an average Doppler velocity of 6 km s^-1. Only two minutes later, plasma drained down towards the footpoints with supersonic velocities, reaching a peak of up to 40 km s^-1 in the chromosphere. The temporal evolution of the He I 10 830 angstrom profiles near the leading pore showed almost ubiquitous dual red components of the He I triplet, indicating strong downflows, along with material nearly at rest within the same resolution element during the whole observing time.
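The conversion of a fitted line-core position into a LOS Doppler velocity can be sketched as follows, assuming a single-Gaussian analytic profile. The study itself fits the full He I triplet; the rest wavelength, profile parameters, and synthetic shift below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KM_S = 299792.458   # speed of light [km/s]
LAMBDA0 = 10830.3     # assumed He I triplet rest wavelength [angstrom]

def absorption(wl, depth, center, width, cont):
    """Single-Gaussian absorption profile (an assumed analytic form)."""
    return cont - depth * np.exp(-((wl - center) / width) ** 2)

# Synthetic profile red-shifted by 0.2 angstrom (~5.5 km/s downflow).
wl = np.linspace(LAMBDA0 - 2, LAMBDA0 + 2, 80)
obs = absorption(wl, 0.4, LAMBDA0 + 0.2, 0.5, 1.0)
obs += np.random.default_rng(1).normal(0, 0.01, wl.size)

# Fit the analytic profile and convert the line-core shift to velocity.
popt, _ = curve_fit(absorption, wl, obs, p0=(0.3, LAMBDA0, 0.4, 1.0))
v_los = C_KM_S * (popt[1] - LAMBDA0) / LAMBDA0
print(round(v_los, 1))  # positive = redshift = downflow toward the footpoints
```

The sign convention here (positive velocity for redshift, i.e., plasma draining down along the line of sight) matches the downflows described in the abstract.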
In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data have increased significantly in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and by the necessity for short exposure times that "freeze" the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the "early science" phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR's post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also challenging example for "big data" in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.
Present-day domestic horses are immensely diverse in their maternally inherited mitochondrial DNA, yet they show very little variation on their paternally inherited Y chromosome. Although it has recently been shown that Y chromosomal diversity in domestic horses was higher at least until the Iron Age, when and why this diversity disappeared remain controversial questions. We genotyped 16 recently discovered Y chromosomal single-nucleotide polymorphisms in 96 ancient Eurasian stallions spanning the early domestication stages (Copper and Bronze Age) to the Middle Ages. Using this Y chromosomal time series, which covers nearly the entire history of horse domestication, we reveal how Y chromosomal diversity changed over time. Our results also show that the lack of multiple stallion lineages in the extant domestic population is caused by neither a founder effect nor random demographic effects but instead is the result of artificial selection-initially during the Iron Age by nomadic people from the Eurasian steppes and later during the Roman period. Moreover, the modern domestic haplotype probably derived from another, already advantageous, haplotype, most likely after the beginning of the domestication. In line with recent findings indicating that the Przewalski and domestic horse lineages remained connected by gene flow after they diverged about 45,000 years ago, we present evidence for Y chromosomal introgression of Przewalski horses into the gene pool of European domestic horses at least until medieval times.
Broad-band imaging, and even imaging with a moderate bandpass (about 1 nm), provides a photon-rich environment in which frame selection (lucky imaging) becomes a helpful tool for image restoration. This allows us to perform a cost-benefit analysis of how to design observing sequences for imaging with high spatial resolution in combination with real-time correction provided by an adaptive optics (AO) system. This study presents high-cadence (160 Hz) G-band and blue continuum image sequences obtained with the High-resolution Fast Imager (HiFI) at the 1.5-meter GREGOR solar telescope, where the speckle-masking technique is used to restore images with nearly diffraction-limited resolution. The HiFI employs two synchronized large-format and high-cadence sCMOS detectors. The median filter gradient similarity (MFGS) image-quality metric is applied, among others, to AO-corrected image sequences of a pore and a small sunspot observed on 2017 June 4 and 5. A small region of interest, which was selected for fast-imaging performance, covered these contrast-rich features and their neighborhood, which were part of Active Region NOAA 12661. Modifications of the MFGS algorithm uncover the field- and structure-dependency of this image-quality metric. However, MFGS still remains a good choice for determining image quality without a priori knowledge, which is an important characteristic when classifying the huge number of high-resolution images contained in data archives. In addition, this investigation demonstrates that a fast cadence and millisecond exposure times are still insufficient to reach the coherence time of daytime seeing. Nonetheless, the analysis shows that data acquisition rates exceeding 50 Hz are required to capture a substantial fraction of the best seeing moments, significantly boosting the performance of post-facto image restoration.
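A simplified rendition of a median filter gradient similarity score, comparing the summed gradient magnitude of an image before and after median filtering, might look like the following. This is a toy sketch of the underlying idea only, not the (modified) MFGS algorithm evaluated in the study; the filter size and test scene are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def total_gradient(img):
    """Summed gradient magnitude of an image."""
    gy, gx = np.gradient(img.astype(float))
    return np.sum(np.hypot(gx, gy))

def mfgs_like(img, size=3):
    """Toy median-filter gradient similarity score in (0, 1].
    If median filtering barely changes the gradients (structured,
    low-noise image), the score approaches 1; if the gradients are
    dominated by noise that the filter removes, the score drops."""
    g = total_gradient(img)
    gm = total_gradient(median_filter(img, size=size))
    return 2.0 * gm * g / (gm ** 2 + g ** 2)

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
scene = np.sin(xx / 5.0) * np.cos(yy / 7.0)       # structured "granulation"
noisy = scene + rng.normal(0, 1.0, scene.shape)   # noise-degraded frame

print(mfgs_like(scene), mfgs_like(noisy))  # the degraded frame scores lower
```

A no-reference score of this kind is attractive for archive classification precisely because it needs no a priori knowledge of the true scene, only the frame itself.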
Aims. Combining high-resolution spectropolarimetric and imaging data is key to understanding the decay process of sunspots as it allows us to scrutinize the velocity and magnetic fields of sunspots and their surroundings. Methods. Active region NOAA 12597 was observed on 2016 September 24 with the 1.5-meter GREGOR solar telescope using high-spatial-resolution imaging as well as imaging spectroscopy and near-infrared (NIR) spectropolarimetry. Horizontal proper motions were estimated with local correlation tracking, whereas line-of-sight (LOS) velocities were computed with spectral line fitting methods. The magnetic field properties were inferred with the "Stokes Inversions based on Response functions" (SIR) code for the Si I and Ca I NIR lines. Results. At the time of the GREGOR observations, the leading sunspot had two light bridges indicating the onset of its decay. One of the light bridges disappeared, and an elongated, dark umbral core at its edge appeared in a decaying penumbral sector facing the newly emerging flux. The flow and magnetic field properties of this penumbral sector exhibited weak Evershed flow, moat flow, and horizontal magnetic field. The penumbral gap adjacent to the elongated umbral core and the penumbra in that penumbral sector displayed LOS velocities similar to granulation. The separating polarities of a new flux system interacted with the leading and central part of the already established active region. As a consequence, the leading spot rotated 55 degrees clockwise over 12 h. Conclusions. In the high-resolution observations of a decaying sunspot, the penumbral filaments facing the flux emergence site contained a darkened area resembling an umbral core filled with umbral dots. This umbral core had velocity and magnetic field properties similar to the sunspot umbra. This implies that the horizontal magnetic fields in the decaying penumbra became vertical as observed in flare-induced rapid penumbral decay, but on a very different time-scale.
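The core operation of local correlation tracking, estimating the displacement that maximizes the cross-correlation between subimages from successive frames, can be sketched in a cyclic toy case. This FFT-based, integer-pixel version is a minimal illustration under stated assumptions, not the LCT code used for the analysis.

```python
import numpy as np

def cc_shift(a, b):
    """Displacement of subimage b relative to a, found as the peak of
    their circular cross-correlation (computed via the FFT)."""
    a = a - a.mean()
    b = b - b.mean()
    cc = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, cc.shape))

rng = np.random.default_rng(2)
frame = rng.normal(size=(64, 64))                     # stand-in granulation patch
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))  # known proper motion
print(cc_shift(frame, shifted))  # (3, -2)
```

Applied to many overlapping subimages over time, such displacement estimates build up the horizontal flow maps from which quantities like the moat flow around the sunspot are derived; real LCT implementations additionally apodize the subimages and interpolate the correlation peak to sub-pixel accuracy.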
GrassPlot is a collaborative vegetation-plot database organised by the Eurasian Dry Grassland Group (EDGG) and listed in the Global Index of Vegetation-Plot Databases (GIVD ID EU-00-003). GrassPlot collects plot records (relevés) from grasslands and other open habitats of the Palaearctic biogeographic realm. It focuses on precisely delimited plots of eight standard grain sizes (0.0001, 0.001, ..., 1,000 m²) and on nested-plot series with at least four different grain sizes. The usage of GrassPlot is regulated through Bylaws that intend to balance the interests of data contributors and data users. The current version (v. 1.00) contains data for approximately 170,000 plots of different sizes and 2,800 nested-plot series. The key components are richness data and metadata. However, most included datasets also encompass compositional data. About 14,000 plots have near-complete records of terricolous bryophytes and lichens in addition to vascular plants. At present, GrassPlot contains data from 36 countries throughout the Palaearctic, spread across elevational gradients and major grassland types. GrassPlot with its multi-scale and multi-taxon focus complements the larger international vegetation-plot databases, such as the European Vegetation Archive (EVA) and the global database "sPlot". Its main aim is to facilitate studies on the scale- and taxon-dependency of biodiversity patterns and drivers along macroecological gradients. GrassPlot is a dynamic database and will expand through new data collection coordinated by the elected Governing Board. We invite researchers with suitable data to join GrassPlot. Researchers with project ideas addressable with GrassPlot data are welcome to submit proposals to the Governing Board.
The prevalence of contaminant microbial DNA in ancient bone samples represents the principal limiting factor for palaeogenomic studies, as it may comprise more than 99% of DNA molecules obtained. Efforts to exclude or reduce this contaminant fraction have been numerous but variable in their success. Here, we present a simple but highly effective method to increase the relative proportion of endogenous molecules obtained from ancient bones. Using computed tomography (CT) scanning, we identify the densest region of a bone as optimal for sampling. This approach accurately identifies the densest internal regions of petrous bones, which are known to be a source of high-purity ancient DNA. For ancient long bones, CT scans reveal a high-density outermost layer, which has been routinely removed and discarded prior to DNA extraction. For almost all long bones investigated, we find that targeted sampling of this outermost layer provides an increase in endogenous DNA content over that obtained from softer, trabecular bone. This targeted sampling can produce as much as a 50-fold increase in the proportion of endogenous DNA, providing a directly proportional reduction in sequencing costs for shotgun sequencing experiments. The observed increases in endogenous DNA proportion are not associated with any reduction in absolute endogenous molecule recovery. Although sampling the outermost layer can result in higher levels of human contamination, some bones were found to have more contamination associated with the internal bone structures. Our method is highly consistent, reproducible and applicable across a wide range of bone types, ages and species. We predict that this discovery will greatly extend the potential to study ancient populations and species in the genomics era.