Keywords: Biodiversity experiments; Biodiversity theory; Conservation management; Decision-making; Ecosystem functions and services; Forecasting; Functional traits; Global change; Interdisciplinarity; Monitoring programmes
Improving our understanding of biodiversity and ecosystem functioning, and our capacity to inform ecosystem management, requires an integrated framework for functional biodiversity research (FBR). However, adequate integration between empirical approaches (monitoring and experimental) and modelling has rarely been achieved in FBR. We offer an appraisal of the issues involved and chart a course towards enhanced integration. A major element of this path is a joint orientation towards the continuous refinement of a theoretical framework for FBR that links theory testing and generalization with applied research oriented towards the conservation of biodiversity and ecosystem functioning. We further emphasize existing decision-making frameworks as suitable instruments for practically merging these different aims of FBR and bringing them into application. This integrated framework requires joint research planning, and should improve communication and stimulate collaboration between modellers and empiricists, thereby overcoming existing reservations and prejudices. Implementing this integrative research agenda for FBR requires adapting most national and international funding schemes to accommodate such joint teams and their more complex structures and data needs.
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals, using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results, compared with 36% of replications; 47% of original effect sizes fell within the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and, assuming no bias in the original results, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of the original evidence than by characteristics of the original and replication teams.