Tree species diversity can positively affect the multifunctionality of forests. This is why conifer monocultures of Scots pine and Norway spruce, widely promoted in Central Europe since the 18th and 19th centuries, are currently being converted into mixed stands with naturally dominant European beech. Biodiversity is expected to benefit from these mixtures compared to pure conifer stands due to increased abiotic and biotic resource heterogeneity. Evidence for this assumption is, however, largely lacking. Here, we investigated the diversity of vascular plants, bryophytes and lichens at the plot (alpha diversity) and at the landscape (gamma diversity) level in pure and mixed stands of European beech and conifer species (Scots pine, Norway spruce, Douglas fir) in four regions in Germany. We aimed to identify compositions of pure and mixed stands in a hypothetical forest landscape that can optimize gamma diversity of vascular plants, bryophytes and lichens within regions. Results show that gamma diversity of the investigated groups is highest when a landscape comprises different pure stands rather than tree species mixtures at the stand scale. Species mainly associated with conifers rely on light regimes that are only provided in pure conifer forests, whereas mixtures of beech and conifers are more similar to beech stands. Combining pure beech and pure conifer stands at the landscape scale can increase landscape-level biodiversity and conserve the species assemblages of both stand types, whereas landscapes composed solely of stand-scale tree species mixtures could reduce the combined biodiversity of the investigated groups by 7 to 20%.
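The alpha/gamma distinction above can be sketched in a few lines: alpha diversity is the species richness within a single plot, while gamma diversity pools species over all stands in a landscape. This is a minimal illustration with entirely hypothetical plot data, not the study's actual species lists.

```python
# Hypothetical species sets per stand type (illustrative only).
plots = {
    "pure_beech": {"Oxalis acetosella", "Anemone nemorosa", "Galium odoratum"},
    "pure_pine": {"Vaccinium myrtillus", "Pleurozium schreberi", "Cladonia sp."},
    "beech_pine_mix": {"Oxalis acetosella", "Vaccinium myrtillus"},
}

def alpha(plot):
    """Plot-level species richness (alpha diversity)."""
    return len(plots[plot])

def gamma(landscape):
    """Landscape-level richness: size of the species pool over all stands."""
    pooled = set().union(*(plots[p] for p in landscape))
    return len(pooled)

# A landscape combining contrasting pure stands pools more species (6)
# than one composed only of stand-scale mixtures (2).
print(gamma(["pure_beech", "pure_pine"]))           # 6
print(gamma(["beech_pine_mix", "beech_pine_mix"]))  # 2
```

Because gamma diversity is a union, complementary pure stands contribute disjoint species pools, which is why the paper finds mixing at the landscape scale rather than the stand scale maximizes it.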
Objective: To examine prospectively whether early parental child-rearing behavior is a predictor of cardiometabolic outcome in young adulthood when other potential risk factors are controlled. Metabolic factors associated with increased risk for cardiovascular disease have been found to vary, depending on lifestyle as well as genetic predisposition. Moreover, there is evidence suggesting that environmental conditions, such as stress in pre- and postnatal life, may have a sustained impact on an individual's metabolic risk profile. Methods: Participants were drawn from a prospective, epidemiological, cohort study followed up from birth into young adulthood. Parent interviews and behavioral observations at the age of 3 months were conducted to assess child-rearing practices and mother-infant interaction in the home setting and in the laboratory. In 279 participants, anthropometric characteristics, low-density lipoprotein and high-density lipoprotein cholesterol, apolipoproteins, and triglycerides were recorded at age 19 years. In addition, structured interviews were administered to the young adults to assess indicators of current lifestyle and education. Results: Adverse early-life interaction experiences were significantly associated with lower levels of high-density lipoprotein cholesterol and apolipoprotein A1 in young adulthood. Current lifestyle variables and level of education did not account for this effect, although habitual smoking and alcohol consumption also contributed significantly to cardiometabolic outcomes. Conclusions: These findings suggest that early parental child-rearing behavior may predict health outcome in later life through its impact on metabolic parameters in adulthood.
Polyglycolide (PGA) is a biodegradable polymer with multiple applications in the medical sector. Here the synthesis of high molecular weight polyglycolide by ring-opening polymerization of diglycolide is reported. For the first time, stabilizer-free supercritical carbon dioxide (scCO2) was used as a reaction medium. scCO2 allowed for a reduction in reaction temperature compared to conventional processes. Together with the lower monomer concentration, and the consequently reduced heat generation compared to bulk reactions, this strongly reduces the thermal decomposition of the product that otherwise already occurs during polymerization. The reaction temperatures and pressures were varied between 120 and 150 °C and 145 and 1400 bar. Tin(II) ethyl hexanoate and 1-dodecanol were used as catalyst and initiator, respectively. The highest number-average molecular weight of 31 200 g mol−1 was obtained in 5 hours from polymerization at 120 °C and 530 bar. In all cases the products were obtained as a dry white powder. Remarkably, independent of molecular weight, the melting temperatures were always at (219 ± 2) °C.
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Machine learning (ML) algorithms are being increasingly used in Earth and environmental modeling studies owing to the ever-increasing availability of diverse data sets and computational resources, as well as advances in ML algorithms. Despite advances in their predictive accuracy, the usefulness of ML algorithms for inference remains elusive. In this study, we employ two popular ML algorithms, artificial neural networks and random forests, to analyze a large data set of flood events across Germany, with the goals of analyzing their predictive accuracy and their ability to provide insights into hydrologic system functioning. The results of the ML algorithms are contrasted against a parametric approach based on multiple linear regression. For the analysis, we employ a model-agnostic framework named Permuted Feature Importance to derive the influence of the models' predictors. This allows us to compare the results of different algorithms for the first time in the context of hydrology. Our main findings are that (1) the ML models achieve higher prediction accuracy than linear regression, (2) the results reflect basic hydrological principles, but (3) further inference is hindered by the heterogeneity of results across algorithms. Thus, we conclude that the problem of equifinality known from classical hydrological modeling also exists for ML and severely hampers its potential for inference. To account for the observed problems, we propose that ML-based inference should rely on multiple algorithms and multiple interpretation methods, with the latter embedded in a cross-validation routine.
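The permuted-feature-importance idea used above can be sketched generically: permute one predictor at a time in held-out data and measure the drop in model skill. This is a minimal model-agnostic sketch with synthetic data and scikit-learn, not the study's flood data set or exact implementation; feature roles (a strong predictor, a weak one, pure noise) are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic predictors: column 0 strong, column 1 weak, column 2 noise.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)  # R^2 on held-out data

importances = []
for j in range(X.shape[1]):
    X_perm = X_te.copy()
    # Shuffling one column breaks its link to the target; the resulting
    # drop in score is that feature's permutation importance.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y_te))

print(importances)  # largest drop for the strong predictor (column 0)
```

Because the procedure only needs predictions, the same loop applies unchanged to a neural network or a linear regression, which is what makes cross-algorithm comparisons of predictor influence possible.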