Ecosystem services inherently involve people, whose values help define the benefits of nature's services. It is thus important for researchers to involve stakeholders in ecosystem services research. However, a simple and practicable framework to guide such engagement, and in particular to help researchers anticipate and consider key issues and challenges, has not been well explored. Here, we use experience from the 12 case studies in the European Operational Potential of Ecosystem Research Applications (OPERAs) project to propose a stakeholder engagement framework comprising three key elements: creating space, aligning motivations, and building trust. We argue that involving stakeholders in research demands thoughtful reflection from the researchers about what kind of space they want to create, including whether and how they want to bring different interests together, how much space they want to allow for critical discussion, and whether particular stakeholders could serve as conduits between others. In addition, understanding their own motivations, including values, knowledge, goals, and desired benefits, will help researchers decide when and how to involve stakeholders, identify areas of common ground and potential disagreement, frame the project appropriately, set expectations, and ensure that each party can see the benefits of engaging with the other. Finally, building relationships with stakeholders can be difficult, but considering the roles of existing relationships, time, approach, reputation, and belonging can help build mutual trust. Although the three key elements and the paths between them can play out differently depending on the particular research project, we suggest that a research design that considers how to create the space in which researchers and stakeholders will meet, how to align motivations between researchers and stakeholders, and how to build mutual trust will help foster productive researcher–stakeholder relationships.
Sociocultural valuation (SCV) of ecosystem services (ES) discloses the principles, importance, or preferences expressed by people towards nature. Although ES research has increasingly addressed sociocultural values in recent years, little effort has been made to systematically review the components of sociocultural valuation applications for different decision contexts (i.e. awareness raising, accounting, priority setting, litigation and instrument design). In this analysis, we investigate the characteristics of 48 different sociocultural valuation applications—characterised by unique combinations of decision context, methods, data collection formats and participants—across ten European case studies. Our findings show that raising awareness of the sociocultural value of ES, by capturing people's perspectives and establishing the status quo, was the most frequent decision context in the case studies, followed by priority setting and instrument development. Accounting and litigation issues were not addressed in any of the applications. We reveal that applications for particular decision contexts are methodologically similar, and that decision contexts determine the choice of methods, data collection formats and participants involved. We therefore conclude that understanding the decision context is a critical first step in designing and carrying out fit-for-purpose sociocultural valuation of ES in operational ecosystem management.
This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues such as missing packages or broken file paths, we find coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results across 5,511 re-analyses and find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.