The vertical distribution of chlorophyll in stratified lakes and reservoirs frequently exhibits a peak deep in the water column, referred to as the deep chlorophyll maximum (DCM). DCMs are ecologically important hot spots of primary production and nutrient cycling, and their location can determine vertical habitat gradients for primary consumers. Consequently, the drivers of DCM structure regulate many characteristics of aquatic food webs and biogeochemistry. Previous studies have identified light and thermal stratification as important drivers of summer DCM depth, but their relative importance across a broad range of lakes is not well resolved. We analyzed profiles of chlorophyll fluorescence, temperature, and light during summer stratification from 100 lakes in the Global Lake Ecological Observatory Network (GLEON) and quantified two characteristics of DCM structure: depth and thickness. While DCMs do form in oligotrophic lakes, we found that they can also form in eutrophic to dystrophic lakes. Using a random forest algorithm, we assessed the relative importance of variables associated with light attenuation vs. thermal stratification for predicting DCM structure in lakes that spanned broad gradients of morphometry and transparency. Our analyses revealed that light attenuation was a more important predictor of DCM depth than thermal stratification and that DCMs deepen with increasing lake clarity. DCM thickness was best predicted by lake size, with larger lakes having thicker DCMs. Additionally, our analysis demonstrates that the relative importance of light and thermal stratification for DCM structure is not uniform across a diversity of lake types.
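The kind of variable-importance comparison described above can be sketched with scikit-learn's `RandomForestRegressor`. This is a minimal illustration on synthetic data, not the study's actual pipeline: the predictor names (light attenuation coefficient, thermocline depth, surface area) and the simulated relationship between them and DCM depth are assumptions for demonstration only.

```python
# Hypothetical sketch of a random-forest variable-importance analysis,
# loosely analogous to the one described above. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 100  # number of lakes

# Assumed predictors: light attenuation coefficient kd (1/m),
# thermocline depth (m), and lake surface area (km^2).
X = np.column_stack([
    rng.uniform(0.1, 2.0, n),    # kd: light attenuation
    rng.uniform(2.0, 15.0, n),   # thermocline depth
    rng.lognormal(1.0, 1.0, n),  # surface area
])

# Synthetic response: DCM depth dominated by light attenuation
# (clearer water, i.e. smaller kd, gives a deeper DCM).
y = 4.6 / X[:, 0] + 0.1 * X[:, 1] + rng.normal(0.0, 0.5, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["light attenuation", "thermocline depth", "surface area"]
for name, imp in zip(names, rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Because the synthetic response is driven mainly by `kd`, the fitted forest assigns light attenuation the largest importance score, mirroring the pattern the abstract reports for DCM depth.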
This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues like missing packages or broken file paths, we uncover coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results with 5,511 re-analyses. We find a robustness reproducibility of about 70%. Robustness reproducibility rates are higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.
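A robustness reproducibility rate like the one reported above can be computed by classifying each re-analysis against its original result. The sketch below is a hypothetical illustration, not the study's actual criterion: the field names, the p < 0.05 threshold, and the same-sign-and-significant rule are assumptions for demonstration.

```python
# Minimal sketch (hypothetical data and criterion) of aggregating
# robustness re-analyses: a re-analysis counts as reproducible here if
# its estimate is statistically significant in the same direction as
# the original published result.
def robustness_rate(reanalyses):
    """reanalyses: list of dicts with 'estimate', 'p', 'original_sign'."""
    passed = sum(
        1 for r in reanalyses
        if r["p"] < 0.05 and (r["estimate"] > 0) == (r["original_sign"] > 0)
    )
    return passed / len(reanalyses)

sample = [
    {"estimate": 0.42, "p": 0.01, "original_sign": 1},   # significant, same sign
    {"estimate": 0.10, "p": 0.20, "original_sign": 1},   # not significant
    {"estimate": -0.05, "p": 0.03, "original_sign": 1},  # significant, wrong sign
]
print(robustness_rate(sample))
```

Applied to thousands of re-analyses per the abstract, this kind of aggregation yields a single rate (about 70% in the study); the example deliberately keeps the pass/fail rule simple.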