Understanding animal movement is essential to elucidate how animals interact, survive, and thrive in a changing world. Recent technological advances in data collection and management have transformed our understanding of animal "movement ecology" (the integrated study of organismal movement), creating a big-data discipline that benefits from rapid, cost-effective generation of large amounts of data on movements of animals in the wild. These high-throughput wildlife tracking systems now allow more thorough investigation of variation among individuals and species across space and time, the nature of biological interactions, and behavioral responses to the environment. Movement ecology is rapidly expanding scientific frontiers through large interdisciplinary and collaborative frameworks, providing improved opportunities for conservation and insights into the movements of wild animals, and their causes and consequences.
Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distributions provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach are illustrated in a simulation study and by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R package PHSMM, which is available on CRAN.
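The core idea of the abstract above — estimating a dwell-time distribution nonparametrically by maximising a penalised likelihood — can be sketched in a few lines. The following is a minimal illustration, not the PHSMM implementation: it fits a probability mass function on {1, …, K} to observed dwell times, where a roughness penalty on second-order differences of the log-probabilities (with hypothetical smoothing parameter `lam`) replaces any parametric assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp


def fit_dwell_pmf(dwell_times, K=20, lam=5.0):
    """Penalised ML estimate of a dwell-time pmf on {1, ..., K}.

    The pmf is parametrised via a softmax so probabilities stay positive
    and sum to one; the penalty shrinks the log-pmf toward a smooth shape.
    All names and defaults here are illustrative choices, not PHSMM's API.
    """
    # counts[k-1] = number of observed dwell times equal to k
    counts = np.bincount(dwell_times, minlength=K + 1)[1:K + 1]

    def neg_pen_loglik(eta):
        logp = eta - logsumexp(eta)                    # log-softmax
        loglik = np.sum(counts * logp)                 # multinomial log-likelihood
        penalty = lam * np.sum(np.diff(logp, n=2) ** 2)  # roughness penalty
        return -loglik + penalty

    res = minimize(neg_pen_loglik, np.zeros(K), method="BFGS")
    return np.exp(res.x - logsumexp(res.x))


# Usage: recover a shifted-Poisson-like dwell-time distribution from data
rng = np.random.default_rng(1)
dwells = 1 + rng.poisson(4, size=500)   # shifted Poisson, mode near 5
pmf = fit_dwell_pmf(dwells, K=15)
```

Larger values of `lam` pull the estimate toward a log-linear (geometric-like) shape, while `lam = 0` reproduces the raw empirical frequencies; in practice the smoothing parameter would be chosen in a data-driven way, as the paper proposes.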
We investigated the systems response of metabolism and growth after an increase in irradiance in the nonsaturating range in the algal model Chlamydomonas reinhardtii. In a three-step process, photosynthesis and the levels of metabolites increased immediately, growth increased after 10 to 15 min, and transcript and protein abundance responded by 40 and 120 to 240 min, respectively. In the first phase, starch and metabolites provided a transient buffer for carbon until growth increased. This uncouples photosynthesis from growth in a fluctuating light environment. In the first and second phases, rising metabolite levels and increased polysome loading drove an increase in fluxes. Most Calvin-Benson cycle (CBC) enzymes were substrate-limited in vivo, and strikingly, many were present at higher concentrations than their substrates, explaining how rising metabolite levels stimulate CBC flux. Rubisco, fructose-1,6-bisphosphatase, and sedoheptulose-1,7-bisphosphatase were close to substrate saturation in vivo, and flux was increased by posttranslational activation. In the third phase, changes in abundance of particular proteins, including increases in plastidial ATP synthase and some CBC enzymes, relieved potential bottlenecks and readjusted protein allocation between different processes. Despite reasonable overall agreement between changes in transcript and protein abundance (R² = 0.24), many proteins, including those in photosynthesis, changed independently of transcript abundance.
River ecosystems receive and process vast quantities of terrestrial organic carbon, the fate of which depends strongly on microbial activity. Variation in and controls of processing rates, however, are poorly characterized at the global scale. In response, we used a peer-sourced research network and a highly standardized carbon processing assay to conduct a global-scale field experiment in more than 1,000 river and riparian sites. We found that Earth's biomes have distinct carbon processing signatures. Slow processing is evident across latitudes, whereas rapid rates are restricted to lower latitudes. Both the mean rate and variability decline with latitude, suggesting temperature constraints toward the poles and greater roles for other environmental drivers (e.g., nutrient loading) toward the equator. These results and data set the stage for unprecedented "next-generation biomonitoring" by establishing baselines to help quantify environmental impacts on the functioning of ecosystems at a global scale.
This study pushes our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues like missing packages or broken file paths, we uncover coding errors for about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results to 5,511 re-analyses. We find a robustness reproducibility of about 70%. Robustness reproducibility rates are higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.