Rats are a reservoir of human- and livestock-associated methicillin-resistant Staphylococcus aureus (MRSA). However, the composition of the natural S. aureus population in wild and laboratory rats is largely unknown. Here, 144 nasal S. aureus isolates from free-living wild rats, captive wild rats and laboratory rats were genotyped and profiled for antibiotic resistances and human-specific virulence genes. The nasal S. aureus carriage rate was higher among wild rats (23.4%) than laboratory rats (12.3%). Free-living wild rats were primarily colonized with isolates of clonal complex (CC) 49 and CC130 and maintained these strains even in husbandry. Moreover, upon livestock contact, CC398 isolates were acquired. In contrast, laboratory rats were colonized with many different S. aureus lineages—many of which are commonly found in humans. Five captive wild rats were colonized with CC398-MRSA. Moreover, a single CC30-MRSA and two CC130-MRSA were detected in free-living or captive wild rats. Rat-derived S. aureus isolates rarely harbored the phage-carried immune evasion gene cluster or superantigen genes, suggesting long-term adaptation to their host. Taken together, our study revealed a natural S. aureus population in wild rats, as well as a colonization pressure on wild and laboratory rats by exposure to livestock- and human-associated S. aureus, respectively.
Background
Multi-component cardiac rehabilitation (CR) is performed to achieve an improved prognosis, superior health-related quality of life (HRQL) and occupational resumption through the management of cardiovascular risk factors, as well as improvement of physical performance and patients’ subjective health. Out of a multitude of variables gathered at CR admission and discharge, we aimed to identify predictors of returning to work (RTW) and HRQL 6 months after CR.
Design
Prospective observational multi-centre study, enrolment in CR between 05/2017 and 05/2018.
Method
Besides general data (e.g. age, sex, diagnoses), parameters of risk factor management (e.g. smoking, hypertension), physical performance (e.g. maximum exercise capacity, endurance training load, 6-min walking distance) and patient-reported outcome measures (e.g. depression, anxiety, HRQL, subjective well-being, somatic and mental health, pain, lifestyle change motivation, general self-efficacy, pension desire and self-assessment of the occupational prognosis using several questionnaires) were documented at CR admission and discharge. These variables (at both measurement times and as changes during CR) were analysed using multiple linear regression models regarding their predictive value for RTW status and HRQL (SF-12) six months after CR.
Results
Out of 1262 patients (54±7 years, 77% men), 864 patients (69%) returned to work. Predictors of failed RTW were primarily the desire to receive pension (OR = 0.33, 95% CI: 0.22–0.50) and negative self-assessed occupational prognosis (OR = 0.34, 95% CI: 0.24–0.48) at CR discharge, acute coronary syndrome (OR = 0.64, 95% CI: 0.47–0.88) and comorbid heart failure (OR = 0.51, 95% CI: 0.30–0.87). High educational level, stress at work and physical and mental HRQL were associated with successful RTW. HRQL was determined predominantly by patient-reported outcome measures (e.g. pension desire, self-assessed health prognosis, anxiety, physical/mental HRQL/health, stress, well-being and self-efficacy) rather than by clinical parameters or physical performance.
Conclusion
Patient-reported outcome measures predominantly influenced return to work and HRQL in patients with heart disease. Therefore, the multi-component CR approach focussing on psychosocial support is crucial for subjective health prognosis and occupational resumption.
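The abstract reports adjusted odds ratios (e.g. OR = 0.33 for pension desire) for failed return to work. As a purely illustrative sketch, not taken from the study's analysis, an odds ratio can be applied to a baseline probability to show the size of such an effect; the 69% overall RTW rate is used here only as a convenient reference point:

```python
def adjust_probability(baseline_p, odds_ratio):
    """Apply an odds ratio to a baseline probability and return the new probability."""
    odds = baseline_p / (1 - baseline_p)   # convert probability to odds
    new_odds = odds * odds_ratio           # an OR acts multiplicatively on the odds
    return new_odds / (1 + new_odds)       # convert back to a probability

# Overall return-to-work rate reported in the abstract: 69%
baseline = 0.69
# Illustrative: OR = 0.33 associated with pension desire at CR discharge
print(round(adjust_probability(baseline, 0.33), 2))  # roughly 0.42
```

Note that adjusted ORs from a multivariable model are not strictly interchangeable with marginal probabilities like this; the sketch only conveys the direction and rough magnitude of the association.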
Syntactic priming is known to facilitate comprehension of the target sentence if the syntactic structure of the target sentence aligns with the structure of the prime (Branigan et al., 2005; Tooley and Traxler, 2010). This processing facilitation is understood to be constrained by factors such as lexical overlap between the prime and the target and the frequency of the prime structure. Syntactic priming in SOV languages is understood to be influenced by similar constraints (Arai, 2012). Sentence comprehension in SOV languages is known to be incremental and predictive. Such a top-down parsing process involves establishing various syntactic relations based on the linguistic cues of a sentence, and the role of preverbal case-markers in achieving this is known to be critical. Given the evidence of syntactic priming during comprehension in these languages, this aspect of the comprehension process and its effect on syntactic priming becomes important. In this work, we show that syntactic priming during comprehension is affected by the probability of using the prime structure while parsing the target sentence. If the prime structure has a low probability given the sentential cues (e.g., nominal case-markers) in the target sentence, then the chances of persisting with the prime structure in the target are reduced. Our work demonstrates the role of the structural complexity of the target in syntactic priming during comprehension and highlights that syntactic priming is modulated by an overarching preference of the parser to avoid rare structures.
The objective of the study is to develop a better understanding of the capillary circulation in contracting muscles. Ten subjects were measured during a submaximal fatiguing isometric muscle action using the O2C spectrophotometer. In all measurements, the capillary-venous oxygen saturation of hemoglobin (SvO2) decreases immediately after the start of loading and levels off into a steady state. However, two different patterns (type I and type II) emerged. They differ in the extent of deoxygenation (–10.37 ± 2.59 percentage points (pp) vs. –33.86 ± 17.35 pp, P = .008) and in the behavior of the relative hemoglobin amount (rHb). Type I reveals a positive rank correlation of SvO2 and rHb (ρ = 0.735, P < .001), whereas a negative rank correlation (ρ = –0.522, P < .001) occurred in type II, since rHb decreases until a reversal point, then increases to on average 13% above the baseline value and levels off into a steady state. The results reveal that a homeostasis of oxygen delivery and consumption during isometric muscle actions is possible. A rough distinction between two types of regulation is suggested.
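The rank correlations above are Spearman's ρ. As a minimal illustrative sketch (not the study's analysis code), ρ can be computed as the Pearson correlation of the ranks:

```python
def rank(values):
    """Assign 1-based ranks to values, averaging ranks over ties."""
    sorted_vals = sorted(values)
    ranks = []
    for v in values:
        first = sorted_vals.index(v) + 1        # first position of v in sorted order
        count = sorted_vals.count(v)            # number of tied copies of v
        ranks.append(first + (count - 1) / 2)   # average rank across the tie group
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, ρ captures any monotone association (such as SvO2 and rHb rising and falling together in type I), not only linear ones.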
Purpose: Psychosocial variables are known risk factors for the development and chronification of low back pain (LBP). Psychosocial stress is one of these risk factors. Therefore, this study aims to identify the most important types of stress predicting LBP. Self-efficacy was included as a potential protective factor related to both stress and pain.
Participants and Methods: This prospective observational study assessed n = 1071 subjects with low back pain over 2 years. Psychosocial stress was evaluated in a broad manner using instruments assessing perceived stress, stress experiences in work and social contexts, vital exhaustion and life-event stress. Further, self-efficacy and pain (characteristic pain intensity and disability) were assessed. Using least absolute shrinkage and selection operator (LASSO) regression, important predictors of characteristic pain intensity and pain-related disability at 1-year and 2-year follow-up were analyzed.
Results: The final sample for the statistical analysis consisted of 588 subjects (age: 39.2 (± 13.4) years; baseline pain intensity: 27.8 (± 18.4); disability: 14.3 (± 17.9)). In the 1-year follow-up, the stress types “tendency to worry”, “social isolation” and “work discontent”, as well as vital exhaustion and negative life events, were identified as risk factors for both pain intensity and pain-related disability. Within the 2-year follow-up, LASSO models identified the stress types “tendency to worry”, “social isolation”, “social conflicts”, and “perceived long-term stress” as potential risk factors for both pain intensity and disability. Furthermore, “self-efficacy” (“internality”, “self-concept”) and “social externality” play a role in reducing pain-related disability.
Conclusion: Stress experiences in social and work-related contexts were identified as important risk factors for LBP 1 or 2 years in the future, even in subjects with low initial pain levels. Self-efficacy turned out to be a protective factor for pain development, especially in the long-term follow-up. The results suggest differentiating between stress types when addressing psychosocial factors in research, prevention and therapy approaches.
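The LASSO used in the study performs variable selection by shrinking uninformative coefficients exactly to zero, which is how a long list of candidate stress measures is reduced to the few predictors named above. A minimal sketch of the underlying cyclic coordinate-descent update on toy data (illustrative only; the study presumably used a standard statistical package):

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator: closed-form solution of the one-dimensional lasso step."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent (no intercept, toy-scale data).

    Minimizes (1/2n) * ||y - X b||^2 + lam * ||b||_1
    where X is a list of rows and y a list of targets.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: y minus the fit from all features except j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n  # feature-j correlation
            z = sum(X[i][j] ** 2 for i in range(n)) / n      # feature-j scale
            beta[j] = soft_threshold(rho, lam) / z           # shrink, possibly to 0
    return beta

# Toy example: y depends only on the first feature, so the lasso
# should zero out the second coefficient entirely.
X = [[1, 0.1], [2, -0.2], [3, 0.1], [4, -0.1], [5, 0.0]]
y = [2, 4, 6, 8, 10]
print(lasso_cd(X, y, lam=0.1))  # second coefficient driven to 0.0
```

The penalty `lam` controls sparsity: larger values zero out more coefficients, which is why a lasso fit directly yields a short list of retained risk factors.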
The increasing application of intersectionality to the psychological study of identity development raises questions regarding how we as researchers construct and operationalize social identity categories, as well as how we best capture and address systems of oppression and privilege within our work. In the continental European context, the use of the intersectionality paradigm raises additional issues, since “race” was officially removed from the vernacular following the atrocities of WWII, yet racialized oppression continues to occur at every level of society. Within psychological research, participants are often divided into those with and without “migration background,” which can reiterate inequitable norms of national belonging while washing over salient lived experiences in relation to generation status, citizenship, religion, gender, and the intersection between these and other social locations. Although discrimination is increasingly examined in identity development research, rarely are the history and impact of colonialism and related socio-historical elements acknowledged. In the current paper, we aim to address these issues by reviewing previous research and discussing theoretical and practical possibilities for the future. In doing so, we delve into the problems of trading in one static social identity category (e.g., “race”) for another (e.g., “migration background/migrant”) without examining the power structures inherent in the creation of these top-down categories, or the lived experiences of those navigating what it means to be marked as a racialized Other. Focusing primarily on contextualized ethno-cultural identity development, we discuss relevant examples from the continental European context, highlighting research gaps, points for improvement, and best practices.