The unidentified very-high-energy (VHE; E > 0.1 TeV) gamma-ray source HESS J1826-130 was discovered with the High Energy Stereoscopic System (HESS) in the Galactic plane. The analysis of 215 h of HESS data has revealed a steady gamma-ray flux from HESS J1826-130, which appears extended with a half-width of 0.21° ± 0.02°(stat) ± 0.05°(sys). The source spectrum is best fit either with a power-law function with spectral index Γ = 1.78 ± 0.10(stat) ± 0.20(sys) and an exponential cut-off at 15.2 (+5.5/−3.2) TeV, or with a broken power law with Γ₁ = 1.96 ± 0.06(stat) ± 0.20(sys) and Γ₂ = 3.59 ± 0.69(stat) ± 0.20(sys) for energies below and above E_br = 11.2 ± 2.7 TeV, respectively. The VHE flux from HESS J1826-130 is contaminated by the extended emission of the bright, nearby pulsar wind nebula HESS J1825-137, particularly at the low end of the energy spectrum. Leptonic scenarios for the origin of the VHE emission of HESS J1826-130, related to PSR J1826-1256, are confronted with our spectral and morphological analysis. In a hadronic framework, taking into account the properties of the dense gas regions surrounding HESS J1826-130, the source spectrum would imply an astrophysical object capable of accelerating the parent particle population up to ≳ 200 TeV. Our results are also discussed in a multiwavelength context, accounting for the presence of nearby supernova remnants and molecular clouds as well as counterparts detected in radio, X-rays, and at TeV energies.
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
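One of the replication criteria above — whether the original effect size falls inside the 95% confidence interval of the replication effect size — is easy to make concrete. The following is a minimal sketch, using hypothetical effect sizes and standard errors (not data from the study) and a normal approximation for the interval:

```python
def ci95(effect, se):
    """95% confidence interval under a normal approximation (z = 1.96)."""
    z = 1.96
    return (effect - z * se, effect + z * se)

def original_in_replication_ci(orig_effect, rep_effect, rep_se):
    """True if the original effect size lies inside the replication's 95% CI."""
    lo, hi = ci95(rep_effect, rep_se)
    return lo <= orig_effect <= hi

# Hypothetical study pairs: (original effect, replication effect, replication SE)
pairs = [
    (0.50, 0.25, 0.10),  # replication CI is (0.054, 0.446): original falls outside
    (0.30, 0.28, 0.12),  # replication CI is (0.045, 0.515): original falls inside
    (0.40, 0.18, 0.15),  # replication CI is (-0.114, 0.474): original falls inside
]

covered = sum(original_in_replication_ci(o, r, se) for o, r, se in pairs)
print(f"{covered}/{len(pairs)} original effects inside replication 95% CI")
```

Aggregating this indicator over all study pairs yields the coverage rate reported in the abstract (47% in the actual project).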
This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments, and it reveals several patterns. First, we find a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues such as missing packages or broken file paths, we uncover coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results across 5,511 re-analyses and find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect-size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.