This study investigates the spatial and temporal distributions of 14 key arboreal taxa and their driving forces during the last 22,000 calendar years before AD 1950 (kyr BP) using a taxonomically harmonized and temporally standardized fossil pollen dataset with a 500-year resolution from the eastern part of continental Asia. Logistic regression was used to estimate pollen abundance thresholds for vegetation occurrence (presence or dominance), based on modern pollen data and present ranges of 14 taxa in China. Our investigation reveals marked changes in spatial and temporal distributions of the major arboreal taxa. The thermophilous (Castanea, Castanopsis, Cyclobalanopsis, Fagus, Pterocarya) and eurythermal (Juglans, Quercus, Tilia, Ulmus) broadleaved tree taxa were restricted to the current tropical or subtropical areas of China during the Last Glacial Maximum (LGM) and spread northward since c. 14.5 kyr BP. Betula and conifer taxa (Abies, Picea, Pinus), in contrast, retained a wider distribution during the LGM and showed no distinct expansion direction during the Late Glacial. Since the late mid-Holocene, the abundance but not the spatial extent of most trees decreased. The changes in spatial and temporal distributions of the 14 taxa reflect climate changes, in particular monsoonal moisture, and, in the late Holocene, human impact. The post-LGM expansion patterns in eastern continental China seem to differ from those reported for Europe and North America, for example, the westward spread of eurythermal broadleaved taxa.
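The threshold-estimation step described in this abstract can be sketched as follows. This is a minimal illustration with synthetic calibration data (not the paper's dataset), assuming scikit-learn: a logistic regression of modern presence/absence on pollen abundance, with the abundance at which the predicted probability crosses 0.5 taken as the occurrence threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical modern calibration data (not the paper's dataset): pollen
# percentages at sites where a taxon is absent (0) or present (1) today.
rng = np.random.default_rng(0)
absent = rng.uniform(0, 3, 50)     # low abundances where the taxon is absent
present = rng.uniform(2, 15, 50)   # higher abundances where it is present
x = np.concatenate([absent, present]).reshape(-1, 1)
y = np.concatenate([np.zeros(50), np.ones(50)])

model = LogisticRegression().fit(x, y)
# P(presence) = 0.5 where the log-odds cross zero, i.e. at -intercept/slope;
# this abundance can serve as the presence threshold for the fossil record.
threshold = -model.intercept_[0] / model.coef_[0][0]
```

In the study, a threshold fitted this way per taxon would then be applied to the fossil pollen percentages to map presence through time.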
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation of the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Two of a kind? (2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutes of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutes of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately - a difficult task, because knowledge about leaking is sparse. The present study, therefore, examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment after the information session and six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several aspects (e.g. reported more perceived courses of action or improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
Although politicization is a perennial research topic in public administration for investigating relationships between ministers and civil servants, the concept still lacks clarification. This article contributes to this literature by systematically identifying different conceptualizations of politicization and by suggesting a typology of three politicization mechanisms that strengthen the political responsiveness of the ministerial bureaucracy: formal, functional and administrative politicization. The typology is empirically validated through a comparative case analysis of politicization mechanisms in Germany, Belgium, the UK and Denmark. The empirical analysis further refines the general idea of Western democracies becoming ‘simply’ more politicized by illustrating how some politicization mechanisms do not continue to increase, but stabilize – at least for the time being.
Background Transcatheter aortic-valve implantation (TAVI) is an established alternative therapy in patients with severe aortic stenosis and a high surgical risk. Despite a rapid growth in its use, very few data exist about the efficacy of cardiac rehabilitation (CR) in these patients. We assessed the hypothesis that patients after TAVI benefit from CR, compared to patients after surgical aortic-valve replacement (sAVR).
Methods From September 2009 to August 2011, 442 consecutive patients after TAVI (n=76) or sAVR (n=366) were referred to a 3-week CR. Data on patient characteristics as well as changes in functional status (6-min walk test (6-MWT), bicycle exercise test) and emotional status (Hospital Anxiety and Depression Scale) were retrospectively evaluated and compared between groups after propensity score adjustment.
Results Patients after TAVI were significantly older (p<0.001), more often female (p<0.001), and more often had coronary artery disease (p=0.027), renal failure (p=0.012) and a pacemaker (p=0.032). During CR, distance in the 6-MWT (both groups p<0.001) and exercise capacity (sAVR p<0.001, TAVI p<0.05) increased significantly in both groups. Only patients after sAVR demonstrated a significant reduction in anxiety and depression (p<0.001). After propensity score adjustment, changes were not significantly different between sAVR and TAVI, with the exception of the 6-MWT (p=0.004).
Conclusions Patients after TAVI benefit from cardiac rehabilitation despite their older age and comorbidities. CR is a helpful tool to maintain independence in daily life activities and participation in socio-cultural life.
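The propensity score adjustment mentioned in this abstract can be sketched briefly. This is an illustrative example with simulated covariates (not the study's data), assuming scikit-learn: the probability of group membership (TAVI vs. sAVR) is modeled from baseline covariates, and the resulting scores can be used as inverse-probability weights to balance the groups before comparing outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariates (age, sex, coronary artery disease) and treatment
# indicator (1 = TAVI, 0 = sAVR); simulated, not the study data.
rng = np.random.default_rng(1)
n = 200
age = rng.normal(70, 8, n)
female = rng.integers(0, 2, n)
cad = rng.integers(0, 2, n)
X = np.column_stack([age, female, cad])
# Older patients are more likely to receive TAVI in this simulation.
treated = (rng.random(n) < 1 / (1 + np.exp(-(age - 70) / 8))).astype(int)

# Propensity score: estimated probability of receiving TAVI given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Inverse-probability weights balance the two groups on the covariates.
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
```

Outcome changes (e.g. 6-MWT distance) would then be compared between groups using these weights, or with the score entered as a covariate.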
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as estimated GFR <60 ml/min/1.73 m².
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD (59 versus 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors than non-CKD patients, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
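The Cockcroft-Gault estimate used in this study can be written out directly. The sketch below uses a hypothetical patient, not study data; note that the bare Cockcroft-Gault value is a creatinine clearance in ml/min, and the study's cut-off (<60 ml/min/1.73 m²) implies an additional normalization to body surface area that is omitted here.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Creatinine clearance (ml/min) by the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction for women

def has_ckd(estimated_gfr):
    # Study definition of CKD: estimated GFR < 60
    return estimated_gfr < 60

# Hypothetical patient: 72-year-old woman, 70 kg, serum creatinine 1.4 mg/dl
gfr = cockcroft_gault(72, 70, 1.4, female=True)  # ≈ 40.1 ml/min → CKD
```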
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes are not always willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges' g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. The urine samples commonly showed evidence of combinations of substances being administered, of complementary substances taken to treat side effects, and of stimulants used to promote loss of body fat.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
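The effect size reported in this abstract, Hedges' g, is Cohen's d with a small-sample bias correction. A minimal sketch, using made-up group statistics rather than the study's data:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd           # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' bias correction
    return d * correction

# Hypothetical illustration (not the study's values): group 1 scores lower
# than group 2, yielding a negative g, as with the dopers' D-scores above.
g = hedges_g(10.0, 2.0, 20, 12.0, 2.0, 20)
```

A negative g here, as in the study, indicates that the first group's mean (e.g. dopers' D-scores) lies below the second group's.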