Issues
The last Soviet anti-alcohol campaign of 1985 resulted in considerably reduced alcohol consumption and saved thousands of lives. But once the campaign's policies were abandoned and the Soviet alcohol monopoly was broken up, a steep rise in mortality was observed in many of the newly formed successor countries, although some kept their monopolies. Almost 30 years after the campaign's end, the region faces diverse challenges in relation to alcohol.
Approach
The present narrative review sheds light on recent drinking trends and alcohol policy developments in the 15 Former Soviet Union (FSU) countries, highlighting the most important setbacks, achievements and best practices. Vignettes of alcohol control policies in Belarus, Estonia, Kazakhstan, Lithuania and Uzbekistan are presented to illustrate the recent developments.
Key Findings
Over the past decade, drinking levels have declined in almost all FSU countries, paralleled by the introduction of various alcohol-control measures. The three 'best buys' put forward by the World Health Organization to reduce the alcohol-attributable burden (taxation and other measures to increase price, and restrictions on alcohol availability and marketing) are relatively well implemented across the countries.
Implications
In recent years, evidence-based alcohol policies have been actively implemented in response to the enormous alcohol-attributable burden in many of these countries, although variance across and within jurisdictions remains large.
Conclusion
Strong declines in alcohol consumption were observed in the 15 FSU countries, which have introduced various alcohol control measures in recent years; this has contributed to a reduction of alcohol consumption in the World Health Organization European region overall.
Background
Despite numerous studies and meta-analyses the prognostic effect of cardiac rehabilitation is still under debate. This update of the Cardiac Rehabilitation Outcome Study (CROS II) provides a contemporary and practice focused approach including only cardiac rehabilitation interventions based on published standards and core components to evaluate cardiac rehabilitation delivery and effectiveness in improving patient prognosis.
Design
A systematic review and meta-analysis.
Methods
Randomised controlled trials and retrospective and prospective controlled cohort studies evaluating patients after acute coronary syndrome, coronary artery bypass grafting or mixed populations with coronary artery disease published until September 2018 were included.
Results
Based on the CROS inclusion criteria, six additional studies including 8671 patients were identified out of 7096 abstracts (two randomised controlled trials, two retrospective controlled cohort studies, two prospective controlled cohort studies). In total, 31 studies including 228,337 patients were available for this meta-analysis (three randomised controlled trials, nine prospective controlled cohort studies, 19 retrospective controlled cohort studies; 50,653 patients after acute coronary syndrome, 14,583 after coronary artery bypass grafting, 163,101 from mixed coronary artery disease populations; follow-up periods ranging from 9 months to 14 years). Heterogeneity in design, cardiac rehabilitation delivery, biometrical assessment and potential confounders was considerable. Controlled cohort studies showed significantly reduced total mortality (primary endpoint) after cardiac rehabilitation participation in patients after acute coronary syndrome (prospective controlled cohort studies: hazard ratio (HR) 0.37, 95% confidence interval (CI) 0.20-0.69; retrospective controlled cohort studies: HR 0.64, 95% CI 0.53-0.76; prospective controlled cohort studies: odds ratio 0.20, 95% CI 0.08-0.48), but the single randomised controlled trial fulfilling the CROS inclusion criteria showed neutral results. Cardiac rehabilitation participation was also associated with reduced total mortality in patients after coronary artery bypass grafting (retrospective controlled cohort studies: HR 0.62, 95% CI 0.54-0.70; the single randomised controlled trial recorded no fatal events) and in mixed coronary artery disease populations (retrospective controlled cohort studies: HR 0.52, 95% CI 0.36-0.77; two out of 10 controlled cohort studies with neutral results).
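The pooling behind such hazard ratio summaries can be sketched with a fixed-effect inverse-variance model: recover the standard error of each log HR from its 95% CI, weight each study by inverse variance, and back-transform. This is a minimal illustration only, not a reproduction of the CROS II analysis (which must additionally handle heterogeneity and study-design differences); the two input estimates reused below are subgroup figures from the abstract, combined here purely to demonstrate the arithmetic.

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Standard error of the log hazard ratio, recovered from a 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def pooled_hr(estimates):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    `estimates` is a list of (HR, ci_lower, ci_upper) tuples.
    Returns the pooled HR and its 95% CI.
    """
    log_hrs, weights = [], []
    for hr, lo, hi in estimates:
        se = se_from_ci(lo, hi)
        log_hrs.append(math.log(hr))
        weights.append(1.0 / se ** 2)
    pooled_log = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Two subgroup estimates from the abstract, combined only as an arithmetic demo
pooled, lcl, ucl = pooled_hr([(0.64, 0.53, 0.76), (0.52, 0.36, 0.77)])
```

The narrower CI dominates the weighting, so the pooled estimate sits closer to HR 0.64 than to HR 0.52.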
Conclusion
CROS II confirms that cardiac rehabilitation participation after acute coronary syndrome and after coronary artery bypass grafting reduces total mortality in routine clinical practice under current evidence-based coronary artery disease treatment. The data of CROS II, however, underscore the urgent need to define internationally accepted minimal standards for cardiac rehabilitation delivery as well as for its scientific evaluation.
Background:
Many felid species are of high conservation concern, and with increasing human disturbance the situation is worsening. Small isolated populations are at risk of genetic impoverishment, decreasing within-species biodiversity. Movement is known to be a key behavioural trait that shapes both demographic and genetic dynamics and affects population survival. However, we have limited knowledge of how different manifestations of movement behaviour translate to population processes. In this study, we aimed to 1) understand the potential effects of movement behaviour on the genetic diversity of small felid populations in heterogeneous landscapes, and 2) present a simulation tool that can help inform conservation practitioners following, or considering, population management actions targeting the risk of genetic impoverishment.
Methods:
We developed a spatially explicit individual-based population model including neutral genetic markers for felids and applied this to the example of Eurasian lynx. Using a neutral landscape approach, we simulated reintroductions into a three-patch system, comprising two breeding patches separated by a larger patch of differing landscape heterogeneity, and tested for the effects of various behavioural movement syndromes and founder population sizes. We explored a range of movement syndromes by simulating populations with various movement model parametrisations that range from 'shy' to 'bold' movement behaviour.
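The 'shy' to 'bold' parametrisation can be illustrated with a deliberately simplified dispersal sketch. This is a toy model, not the authors' spatially explicit individual-based model: the patch distance, step rule and per-step mortality cost below are invented for illustration, with a single boldness parameter controlling the tendency to keep moving away from the natal patch through the matrix.

```python
import random

def disperse(boldness, matrix_cost, target=20, max_steps=200, rng=None):
    """Toy dispersal trial: does one individual cross the matrix between
    two breeding patches? Bold individuals keep pushing forward; shy
    individuals tend to turn back. Illustrative only."""
    rng = rng or random.Random()
    position = 0
    for _ in range(max_steps):
        if rng.random() < matrix_cost:
            return False            # perished in the matrix
        position += 1 if rng.random() < boldness else -1
        if position <= 0:
            return False            # turned back to the natal patch
        if position >= target:
            return True             # reached the second breeding patch
    return False

def crossing_rate(boldness, trials=500, matrix_cost=0.02, seed=1):
    """Fraction of simulated dispersers that reach the far patch."""
    rng = random.Random(seed)
    return sum(disperse(boldness, matrix_cost, rng=rng)
               for _ in range(trials)) / trials
```

Under these invented parameters, bold dispersers cross far more often than shy ones, capturing only the connectivity side of the trade-off; the genetic cost of a few early crossers dominating reproduction is exactly what the full demogenetic simulation is needed for.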
Results:
We find that movement syndromes can lead to a higher loss of genetic diversity and an increase in between-population genetic structure for both 'bold' and 'shy' movement behaviours, depending on landscape conditions, with larger decreases in genetic diversity and larger increases in genetic differentiation associated with bold movement syndromes, where the first colonisers quickly reproduce and subsequently dominate the gene pool. In addition, we underline the fact that a larger founder population can offset the genetic losses associated with subpopulation isolation and gene pool dominance.
Conclusions:
We identified a movement syndrome trade-off for population genetic variation, whereby bold-explorers could be saviours, by connecting populations and promoting panmixia, or sinks, by increasing genetic losses via a 'founder takes all' effect, whereas shy-stayers maintain a more gradual genetic drift due to their more cautious behaviour. Simulations should incorporate movement behaviour to provide better projections of long-term population viability and within-species biodiversity, which includes genetic diversity. Simulations incorporating demographics and genetics have great potential for informing conservation management actions, such as population reintroductions or reinforcements. Here, we present such a simulation tool for solitary felids.
Background:
COVID-19 has infected millions of people worldwide and is responsible for several hundred thousand fatalities. The COVID-19 pandemic has necessitated thoughtful resource allocation and early identification of high-risk patients. However, effective methods to meet these needs are lacking.
Objective:
The aims of this study were to analyze the electronic health records (EHRs) of patients who tested positive for COVID-19 and were admitted to hospitals in the Mount Sinai Health System in New York City; to develop machine learning models for making predictions about the hospital course of the patients over clinically meaningful time horizons based on patient characteristics at admission; and to assess the performance of these models at multiple hospitals and time points.
Methods:
We used Extreme Gradient Boosting (XGBoost) and baseline comparator models to predict in-hospital mortality and critical events at time windows of 3, 5, 7, and 10 days from admission. Our study population included harmonized EHR data from five hospitals in New York City for 4098 COVID-19-positive patients admitted from March 15 to May 22, 2020. The models were first trained on patients from a single hospital (n=1514) before or on May 1, externally validated on patients from four other hospitals (n=2201) before or on May 1, and prospectively validated on all patients after May 1 (n=383). Finally, we established model interpretability to identify and rank variables that drive model predictions.
Results:
Upon cross-validation, the XGBoost classifier outperformed baseline models, with an area under the receiver operating characteristic curve (AUC-ROC) for mortality of 0.89 at 3 days, 0.85 at 5 and 7 days, and 0.84 at 10 days. XGBoost also performed well for critical event prediction, with an AUC-ROC of 0.80 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. In external validation, XGBoost achieved an AUC-ROC of 0.88 at 3 days, 0.86 at 5 days, 0.86 at 7 days, and 0.84 at 10 days for mortality prediction. Similarly, the unimputed XGBoost model achieved an AUC-ROC of 0.78 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. Trends in performance on prospective validation sets were similar. At 7 days, acute kidney injury on admission, elevated LDH, tachypnea, and hyperglycemia were the strongest drivers of critical event prediction, while higher age, anion gap, and C-reactive protein were the strongest drivers of mortality prediction.
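The AUC-ROC values reported above have a direct probabilistic reading: the probability that a randomly chosen patient who experienced the outcome was assigned a higher risk score than a randomly chosen patient who did not. A from-scratch sketch of that definition in pure Python (the risk scores and outcomes below are invented for illustration; the study itself used standard ML tooling):

```python
def auc_roc(scores, labels):
    """AUC-ROC via its probabilistic definition: the fraction of
    (positive, negative) pairs in which the positive case received the
    higher score; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcome classes")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted mortality risks and observed outcomes
risks    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1,   1,   0,   1,   0,   0]
print(round(auc_roc(risks, outcomes), 3))  # 0.889 (= 8/9)
```

By this reading, the reported mortality AUC-ROC of 0.89 at 3 days means the model ranks a patient who died above a survivor in roughly 9 out of 10 random pairings.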
Conclusions:
We trained machine learning models to predict mortality and critical events for patients with COVID-19 at different time horizons, and validated them externally and prospectively. These models identified at-risk patients and uncovered underlying relationships that predicted outcomes.