In public perception, abnormal animal behavior is widely assumed to be a potential earthquake precursor, in strong contrast to the viewpoint in the natural sciences. Proponents of earthquake prediction via animals claim that animals sense and react abnormally to small changes in environmental and physico-chemical parameters related to the earthquake preparation process. In seismology, however, observational evidence for changes of physical parameters before earthquakes is very weak. In this study, we review 180 publications regarding abnormal animal behavior before earthquakes and analyze and discuss them with respect to (1) magnitude-distance relations, (2) foreshock activity, and (3) the quality and length of the published observations. More than 700 records of claimed animal precursors related to 160 earthquakes are reviewed, covering unusual behavior of more than 130 species. The precursor times range from months to seconds prior to the earthquakes, and the distances from a few to hundreds of kilometers. However, only 14 time series were published; all other records are single observations. The time series are often short (the longest is 1 yr), or only small excerpts of the full data set are shown. The probability density of foreshocks and the occurrence of animal precursors are strikingly similar, suggesting that at least some of the reported animal precursors are in fact related to foreshocks. Another major difficulty for a systematic and statistical analysis is the high diversity of the data, which are often only anecdotal and retrospective. The study clearly demonstrates strong weaknesses, or even deficits, in many of the published reports on possible abnormal animal behavior. To improve research on precursors, we suggest a scheme of yes/no questions to be assessed to ensure the quality of such claims.
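The comparison the review draws between the distribution of foreshock lead times and the distribution of reported animal-precursor lead times can be sketched numerically. The snippet below is a purely illustrative assumption on my part (the lead-time values are invented, and the review itself does not publish this computation): it measures the distance between two empirical distributions with a hand-rolled two-sample Kolmogorov-Smirnov statistic.

```python
# Hypothetical illustration: compare the empirical distributions of
# foreshock lead times and reported animal-precursor lead times,
# as the review does qualitatively. All numbers are invented.

def empirical_cdf(sample, x):
    """Fraction of observations in `sample` that are <= x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical CDFs."""
    grid = sorted(set(a) | set(b))
    return max(abs(empirical_cdf(a, x) - empirical_cdf(b, x)) for x in grid)

# Invented lead times in days before the mainshock.
foreshock_lead_days = [0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0]
precursor_lead_days = [0.2, 0.4, 0.6, 1.5, 3.0, 6.0, 12.0]

d = ks_statistic(foreshock_lead_days, precursor_lead_days)
print(f"KS distance between distributions: {d:.2f}")
```

A small KS distance between the two distributions would be consistent with the review's suggestion that precursor reports track foreshock activity; real data would of course require a proper significance test.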
Teaching culturally diverse classrooms starts from embracing beliefs that recognise the strengths of cultural diversity. Research is needed to understand how teacher training contributes to shaping pre-service teachers’ beliefs about cultural diversity. Accordingly, the purpose of this review is to 1) describe the main components and contextual characteristics of teacher training programmes targeting cultural diversity beliefs, 2) report the training effects, and 3) detail the methodological strengths and weaknesses of these studies. A total of 36 studies published between 2005 and 2015 that used a longitudinal assessment of cultural diversity beliefs were reviewed. The collective results of these studies indicate large variance amongst programmes, with experiential learning shifting cultural diversity beliefs positively. However, existing studies have significant limitations in study design and training evaluation that weaken their conclusions regarding internal and external validity and point towards new directions for future research.
The purpose of this conceptual article is to advance theory and research on one critical aspect of the context of ethnic–racial identity (ERI) development: ethnic–racial settings, or the objective and subjective nature of group representation within an individual's context. We present a new conceptual framework that consists of four dimensions: (1) perspective (that settings can be understood in both objective and subjective terms); (2) differentiation (how groups are defined in a setting); (3) heterogeneity (the range of groups in a setting); and (4) proximity (the distance between the individual and the setting). Clarifying this complexity is crucial for advancing a more coherent understanding of how ethnic–racial settings are related to ERI development.
To safeguard the sustainable use of ecosystems and their services, early detection of potentially damaging changes in functional capabilities is needed. To support proper ecosystem management, the analysis of an ecosystem’s vulnerability provides information on its weaknesses as well as on its capacity to recover after suffering an impact. However, the application of the vulnerability concept to ecosystems is still an emerging topic. After providing background on the vulnerability concept, we summarize existing ecosystem vulnerability research on the basis of a systematic literature review, with a special focus on ecosystem type, disciplinary background, and the more detailed definition of the ecosystem vulnerability components. Using the Web of Science™ Core Collection, we surveyed the literature from 1991 onwards but used the 5 years from 2011 to 2015 for an in-depth analysis, including 129 articles. We found that ecosystem vulnerability analysis has been applied most notably in conservation biology, climate change research, and ecological risk assessments, pointing to a limited spread across the environmental sciences. It occurred primarily within marine and freshwater ecosystems. To avoid confusion, we recommend using the unambiguous term ecosystem vulnerability rather than ecological, environmental, population, or community vulnerability. Further, common ground has been identified on which to define the ecosystem vulnerability components exposure, sensitivity, and adaptive capacity. We propose a framework for ecosystem assessments that coherently connects the concepts of vulnerability, resilience, and adaptability as different ecosystem responses. A short outlook on the possible operationalization of the concept via ecosystem vulnerability indices and a concluding section complete the review.
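The three components the review names — exposure, sensitivity, and adaptive capacity — are often operationalized as a single vulnerability index. The sketch below uses one common additive formulation (potential impact minus coping capacity); both the formula and all scores are my illustrative assumptions, not taken from the review.

```python
# Hypothetical sketch of an ecosystem vulnerability index combining the three
# components named in the review (exposure, sensitivity, adaptive capacity).
# The additive formulation and all scores below are illustrative assumptions.

def vulnerability_index(exposure, sensitivity, adaptive_capacity):
    """All inputs normalized to [0, 1]; higher result = more vulnerable.
    Potential impact (mean of exposure and sensitivity) minus the
    capacity to cope, floored at zero."""
    potential_impact = (exposure + sensitivity) / 2
    return max(0.0, potential_impact - adaptive_capacity)

# Invented example: an ecosystem under strong exposure, moderate
# sensitivity, and low adaptive capacity.
v = vulnerability_index(exposure=0.8, sensitivity=0.6, adaptive_capacity=0.2)
print(f"vulnerability index: {v:.2f}")
```

Other formulations (multiplicative, weighted) appear in the literature; which one is appropriate depends on how the components were measured and normalized.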
Aim: To scrutinize to what extent modern ideas about nutrition effects on growth are supported by historic observations in European populations. Method: We reviewed 19th and early 20th century paediatric journals in the Staatsbibliothek zu Berlin, the third largest European library with an almost complete collection of the German medical literature. During a three-day visit, we inspected 15 bookshelf meters of literature not available in electronic format. Results: Late 19th and early 20th century breastfed European infants and children, independent of social strata, grew far below World Health Organisation (WHO) standards and 15-30% of adequately-fed children would be classified as stunted by the WHO standards. Historic sources indicate that growth in height is largely independent of the extent and nature of the diet. Height catch-up after starvation was greater than catch-up reported in modern nutrition intervention studies, and allowed for unimpaired adult height. Conclusion: Historical studies are indispensable to understand why stunting does not equate with undernutrition and why modern diet interventions frequently fail to prevent stunting. Appropriateness and effect size of modern nutrition interventions on growth need revision.
Nowadays, the role of trace elements (TE) is of growing interest because dyshomeostasis of selenium (Se), manganese (Mn), zinc (Zn), and copper (Cu) is thought to be a risk factor for several diseases. Research therefore focuses on identifying new biomarkers for TE status that allow a more reliable description of an individual's TE and health status. This review reveals a lack of well-defined, sensitive, and selective biomarkers and summarizes the technical limitations of measuring them. Thus, the capacity to assess the relationship between dietary TE intake, homeostasis, and health is restricted, which would otherwise provide the basis to define adequate intake levels of single TE in both healthy and diseased humans. Beyond that, our knowledge is even more limited with respect to the real-life situation of combined TE intake and putative interactions between single TE.
Flowers represent a key innovation during plant evolution. Driven by reproductive optimization, evolution of flower morphology has been central in boosting species diversification. In most cases, this has happened through specialized interactions with animal pollinators and subsequent reduction of gene flow between specialized morphs. While radiation has led to an enormous variability in flower forms and sizes, recurrent evolutionary patterns can be observed. Here, we discuss the targets of selection involved in major trends of pollinator-driven flower evolution. We review recent findings on their adaptive values, developmental grounds and genetic bases, in an attempt to better understand the repeated nature of pollinator-driven flower evolution. This analysis highlights how structural innovation can provide flexibility in phenotypic evolution, adaptation and speciation.
The article is a review of Patrick Gray's latest monograph, Shakespeare and the Fall of the Roman Republic: Selfhood, Stoicism and Civil War. Gray analyzes Shakespeare's and his characters' representation of the 'self' in Julius Caesar and Antony and Cleopatra, with Coriolanus used for comparative purposes. The book has prompted a lively discussion of its content in the academic community.
Singlet oxygen can be released in the dark in nearly quantitative yield from endoperoxides of naphthalenes, anthracenes and pyridones as an alternative to its generation by photosensitization. Recently, new donor systems have been designed which operate at very low temperatures but which are prepared from their parent forms at acceptable rates. Enhancement of the reactivity of donors is conveniently achieved by the design of the substitution pattern or through the use of plasmonic heating of nanoparticle-bound donors. The most important aim of these donor molecules is to transfer singlet oxygen in a controlled and directed manner to a target. Low temperatures and the linking between donors and acceptors reduce the random walk of oxygen and may force an attack at the desired position. By using chiral donor systems, new stereocenters might be introduced into prochiral acceptors.
Airless bodies are directly exposed to ambient plasma and meteoroid fluxes, making them characteristically different from bodies whose dense atmospheres protect their surfaces from such fluxes. Direct exposure to plasma and meteoroids has important consequences for the formation and evolution of planetary surfaces, including altering chemical makeup and optical properties, generating neutral gas and/or dust exospheres, and leading to the generation of circumplanetary and interplanetary dust grain populations. In the past two decades, there have been many advancements in our understanding of airless bodies and their interaction with various dust populations. In this paper, we describe relevant dust phenomena on the surface and in the vicinity of airless bodies over a broad range of scale sizes, with a focus on recent developments in this field.
Psychosocial risk factors for chronic back pain in the general population and in competitive sports
(2018)
Lumbar back pain and the high risk of chronic complaints is not only an important health concern in the general population but also in high performance athletes. In contrast to non-athletes, there is a lack of research into psychosocial risk factors in athletes. Moreover, the development of psychosocial screening questionnaires that would be qualified to detect athletes with a high risk of chronicity is in the early stages. The purpose of this review is to give an overview of research into psychosocial risk factors in both populations and to evaluate the performance of screening instruments in non-athletes. The databases MEDLINE, PubMed, and PsycINFO were searched from March to June 2016 using the keywords "psychosocial screening", "low back pain", "sciatica" and "prognosis", "athletes". We included prospective studies conducted in patients with low back pain with and without radiation to the legs, aged ≥ 18 years and a follow-up of at least 3 months. We identified 16 eligible studies, all of them conducted in samples of non-athletes. Among the most frequently published screening questionnaires, the Örebro Musculoskeletal Pain Screening Questionnaire (ÖMPSQ) demonstrated a sufficient early prediction of return to work and the STarT Back Screening Tool (SBT) revealed acceptable performance predicting pain-related impairment. The prediction of future pain was sufficient with the Risk Analysis of Back Pain Chronification (RISC-BP) and the Heidelberg Short Questionnaire (HKF). Psychosocial risk factors of chronic back pain, such as chronic stress, depressive mood, and maladaptive pain processing are becoming increasingly more recognized in competitive sports. Screening instruments that have been shown to be predictive in the general population are currently being tested for suitability in the German MiSpEx research consortium.
Within the wealth of molecules constituting marine dissolved organic matter, carbohydrates make up the largest coherent and quantifiable fraction. Their main sources are from primary producers, which release large amounts of photosynthetic products – mainly polysaccharides – directly into the surrounding water via passive and active exudation. The organic carbon and other nutrients derived from these photosynthates enrich the ‘phycosphere’ and attract heterotrophic bacteria. The rapid uptake and remineralization of dissolved free monosaccharides by heterotrophic bacteria account for the barely detectable levels of these compounds. By contrast, dissolved combined polysaccharides can reach high concentrations, especially during phytoplankton blooms. Polysaccharides are too large to be taken up directly by heterotrophic bacteria, instead requiring hydrolytic cleavage to smaller oligo- or monomers by bacteria with a suitable set of exoenzymes. The release of diverse polysaccharides by various phytoplankton taxa is generally interpreted as the deposition of excess organic material. However, these molecules likely also fulfil distinct, yet not fully understood functions, as inferred from their active modulation in terms of quality and quantity when phytoplankton becomes nutrient limited or is exposed to heterotrophic bacteria. This minireview summarizes current knowledge regarding the exudation and composition of phytoplankton-derived exopolysaccharides and acquisition of these compounds by heterotrophic bacteria.
Aldehyde oxidases are molybdenum and flavin dependent enzymes characterized by a very wide substrate specificity and performing diverse reactions that include oxidations (e.g., of aldehydes and azaheterocycles), hydrolysis of amide bonds, and reductions (e.g., of nitro groups, S-oxides and N-oxides). Oxidation reactions and amide hydrolysis occur at the molybdenum site, while the reductions are proposed to occur at the flavin site. AOX activity affects the metabolism of different drugs and xenobiotics, some of which are designed to resist other liver metabolizing enzymes (e.g., cytochrome P450 monooxygenase isoenzymes), raising its importance in drug development. This work consists of a comprehensive overview of aldehyde oxidases, concerning the genetic evolution of AOX, its diversity among the human population, the available crystal structures, the known catalytic reactions and the consequences in pre-clinical pharmacokinetic and pharmacodynamic studies. The different animal models generally used for pre-clinical trials are analyzed, and comparisons between human AOX (hAOX1), its mouse homologs, and the related xanthine oxidoreductase (XOR) are considered extensively. The data reviewed also include a systematic analysis of representative classes of molecules that are hAOX1 substrates as well as of typical and well-characterized hAOX1 inhibitors. The considerations made on the basis of a structural and functional analysis are correlated with reported kinetic and metabolic data for typical classes of drugs, in a search for structural determinants that may dictate substrate and/or inhibitor specificities.
Introduction:
We aim to highlight the utility of Leventhal's Common Sense Model in the analysis of the psycho-behavioral implications of familial cancer, presenting the scientific literature that has used the model as its theoretical framework.
Material and methods:
A systematic search was performed in six databases (EBSCO, ScienceDirect, PubMed Central, ProQuest, Scopus, and Web of Science) with empirical studies published between 2006 and 2015 in English with regard to the Common Sense Model of Self-Regulation (CSMR) and familial/hereditary cancer. The key words used were: illness representations, common sense model, self regulatory model, familial/hereditary/genetic cancer, genetic cancer counseling. The selection of studies followed the PRISMA-P guidelines (Moher et al., 2009; Shamseer et al., 2015), which suggest a three-stage procedure.
Results:
When their health is threatened, individuals create their own cognitive and emotional representation of the disease; this representation is influenced by the presence of a family history of cancer and determines whether or not they adopt salutogenic behavior. Disease representations, particularly the cognitive ones, can be predictors of responses to health threats that determine different health behaviors. Age, family history of cancer, and worrying about the disease are factors associated with undergoing screening. No consensus has been reached as to which factors act as predictors of compliance with cancer screening programs.
Conclusions:
This model can generate interventions that are conceptually clear as well as useful in regulating individuals' behavior, both by reducing the risk of developing the disease and by managing health and/or disease as favorably as possible.
Studies over the past several years have demonstrated the important role of sphingolipids in cystic fibrosis (CF), chronic obstructive pulmonary disease and acute lung injury. Ceramide is increased in airway epithelial cells and alveolar macrophages of CF mice and humans, while sphingosine is dramatically decreased. This increase in ceramide results in chronic inflammation, increased death of epithelial cells, release of DNA into the bronchial lumen and thereby an impairment of mucociliary clearance; while the lack of sphingosine in airway epithelial cells causes high infection susceptibility in CF mice and possibly patients. The increase in ceramide mediates an ectopic expression of beta 1-integrins in the luminal membrane of CF epithelial cells, which results, via an unknown mechanism, in a down-regulation of acid ceramidase. It is predominantly this down-regulation of acid ceramidase that results in the imbalance of ceramide and sphingosine in CF cells. Correction of ceramide and sphingosine levels can be achieved by inhalation of functional acid sphingomyelinase inhibitors, recombinant acid ceramidase or by normalization of beta 1-integrin expression and subsequent re-expression of endogenous acid ceramidase. These treatments correct pulmonary inflammation and prevent or treat, respectively, acute and chronic pulmonary infections in CF mice with Staphylococcus aureus and mucoid or non-mucoid Pseudomonas aeruginosa. Inhalation of sphingosine corrects sphingosine levels only and seems to mainly act against the infection. Many antidepressants are functional inhibitors of the acid sphingomyelinase and were designed for systemic treatment of major depression. These drugs could be repurposed to treat CF by inhalation.
Exercise prescription in patients with different combinations of cardiovascular disease risk factors
(2018)
Whereas exercise training is key in the management of patients with cardiovascular disease (CVD) risk (obesity, diabetes, dyslipidaemia, hypertension), clinicians experience difficulties in how to optimally prescribe exercise in patients with different CVD risk factors. Therefore, a consensus statement for state-of-the-art exercise prescription in patients with combinations of CVD risk factors as integrated into a digital training and decision support system (the EXercise Prescription in Everyday practice & Rehabilitative Training (EXPERT) tool) needed to be established. EXPERT working group members systematically reviewed the literature for meta-analyses, systematic reviews and/or clinical studies addressing exercise prescriptions in specific CVD risk factors and formulated exercise recommendations (exercise training intensity, frequency, volume and type, session and programme duration) and exercise safety precautions, for obesity, arterial hypertension, type 1 and 2 diabetes, and dyslipidaemia. The impact of physical fitness, CVD risk altering medications and adverse events during exercise testing was further taken into account to fine-tune this exercise prescription. An algorithm, supported by the interactive EXPERT tool, was developed by Hasselt University based on these data. Specific exercise recommendations were formulated with the aim to decrease adipose tissue mass, improve glycaemic control and blood lipid profile, and lower blood pressure. The impact of medications to improve CVD risk, adverse events during exercise testing and physical fitness was also taken into account. Simulations were made of how the EXPERT tool provides exercise prescriptions according to the variables provided. In this paper, state-of-the-art exercise prescription to patients with combinations of CVD risk factors is formulated, and it is shown how the EXPERT tool may assist clinicians. This contributes to an appropriately tailored exercise regimen for every CVD risk patient.
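The rule-based mapping from risk-factor combinations to exercise prescriptions that the EXPERT tool performs can be sketched in heavily simplified form. Everything below — the rules, intensities, frequencies, and thresholds — is a hypothetical illustration of the general idea of such a decision-support algorithm; the actual EXPERT tool is far more detailed and clinically validated.

```python
# Hypothetical, much-simplified sketch of rule-based exercise prescription
# in the spirit of the EXPERT tool. All rules and numbers are invented.

def prescribe(risk_factors):
    """Return a crude exercise prescription for a set of CVD risk factors."""
    rx = {"type": "aerobic endurance", "freq_per_week": 3,
          "intensity": "moderate", "session_min": 30}
    if "obesity" in risk_factors:
        rx["freq_per_week"] = 5           # emphasize energy expenditure
        rx["session_min"] = 45
    if "hypertension" in risk_factors:
        rx["intensity"] = "moderate"      # avoid high-intensity pressor loads
        rx["type"] += " + dynamic resistance"
    if "type2_diabetes" in risk_factors:
        rx["type"] += " + resistance training"  # supports glycaemic control
    return rx

print(prescribe({"obesity", "hypertension"}))
```

The real algorithm additionally weighs physical fitness, medications and adverse exercise-test events against each other, which a flat rule list like this cannot capture.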
Numerical knowledge, including number concepts and arithmetic procedures, seems to be a clear-cut case for abstract symbol manipulation. Yet, evidence from perceptual and motor behaviour reveals that natural number knowledge and simple arithmetic also remain closely associated with modal experiences. Following a review of behavioural, animal and neuroscience studies of number processing, we propose a revised understanding of psychological number concepts as grounded in physical constraints, embodied in experience and situated through task-specific intentions. The idea that number concepts occupy a range of positions on the continuum between abstract and modal conceptual knowledge also accounts for systematic heuristics and biases in mental arithmetic, thus inviting psychological approaches to the study of the mathematical mind.
Combining training of muscle strength and cardiorespiratory fitness within a training cycle could increase athletic performance more than single-mode training. However, the physiological effects produced by each training modality could also interfere with each other, improving athletic performance less than single-mode training. Because anthropometric, physiological, and biomechanical differences between young and adult athletes can affect the responses to exercise training, young athletes might respond differently to concurrent training (CT) compared with adults. Thus, the aim of the present systematic review with meta-analysis was to determine the effects of concurrent strength and endurance training on selected physical fitness components and athletic performance in youth. A systematic literature search of PubMed and Web of Science identified 886 records. The studies included in the analyses examined children (girls age 6-11 years, boys age 6-13 years) or adolescents (girls age 12-18 years, boys age 14-18 years), compared CT with single-mode endurance (ET) or strength training (ST), and reported at least one strength/power- (e.g., jump height), endurance- (e.g., peak VO2, exercise economy), or performance-related (e.g., time trial) outcome. We calculated weighted standardized mean differences (SMDs). CT compared to ET produced small effects in favor of CT on athletic performance (n = 11 studies, SMD = 0.41, p = 0.04) and trivial effects on cardiorespiratory endurance (n = 4 studies, SMD = 0.04, p = 0.86) and exercise economy (n = 5 studies, SMD = 0.16, p = 0.49) in young athletes. A sub-analysis of chronological age revealed a trend toward larger effects of CT vs. ET on athletic performance in adolescents (SMD = 0.52) compared with children (SMD = 0.17). CT compared with ST had small effects in favor of CT on muscle power (n = 4 studies, SMD = 0.23, p = 0.04).
In conclusion, CT is more effective than single-mode ET or ST in improving selected measures of physical fitness and athletic performance in youth. Specifically, CT compared with ET improved athletic performance in children and particularly adolescents. Finally, CT was more effective than ST in improving muscle power in youth.
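The "weighted standardized mean differences" behind such pooled estimates can be sketched with inverse-variance weighting of per-study SMDs. The snippet is an illustration only: the study SMDs and sample sizes are invented, a simple fixed-effect model is used for brevity, and the review's actual weighting scheme may differ in detail.

```python
# Illustrative inverse-variance pooling of standardized mean differences.
# All study-level numbers are invented; a fixed-effect model is assumed.
import math

def pooled_smd(smds, ns_treat, ns_ctrl):
    """Fixed-effect inverse-variance pooling of SMDs.
    Approximate variance of an SMD: 1/nt + 1/nc + d^2 / (2*(nt+nc))."""
    weights, weighted = [], []
    for d, nt, nc in zip(smds, ns_treat, ns_ctrl):
        var = 1/nt + 1/nc + d*d / (2*(nt + nc))
        weights.append(1/var)
        weighted.append(d / var)
    pooled = sum(weighted) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Invented per-study SMDs and group sizes.
d, se = pooled_smd([0.41, 0.52, 0.17], [20, 25, 18], [20, 24, 18])
print(f"pooled SMD = {d:.2f} (SE {se:.2f})")
```

Because the weights are inverse variances, larger studies pull the pooled estimate toward their own SMD; a random-effects model would additionally add a between-study variance component to each weight.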
The AlpArray seismic network
(2018)
The AlpArray programme is a multinational, European consortium to advance our understanding of orogenesis and its relationship to mantle dynamics, plate reorganizations, surface processes and seismic hazard in the Alps-Apennines-Carpathians-Dinarides orogenic system. The AlpArray Seismic Network has been deployed with contributions from 36 institutions from 11 countries to map physical properties of the lithosphere and asthenosphere in 3D and thus to obtain new, high-resolution geophysical images of structures from the surface down to the base of the mantle transition zone. With over 600 broadband stations operated for 2 years, this seismic experiment is one of the largest simultaneously operated seismological networks in the academic domain, employing hexagonal coverage with station spacing at less than 52 km. This dense and regularly spaced experiment is made possible by the coordinated coeval deployment of temporary stations from numerous national pools, including ocean-bottom seismometers, which were funded by different national agencies. They combine with permanent networks, which also required the cooperation of many different operators. Together these stations ultimately fill coverage gaps. Following a short overview of previous large-scale seismological experiments in the Alpine region, we here present the goals, construction, deployment, characteristics and data management of the AlpArray Seismic Network, which will provide data that is expected to be unprecedented in quality to image the complex Alpine mountains at depth.
As a tumor suppressor and the most frequently mutated gene in cancer, p53 is among the best-described molecules in medical research. As cancer is in most cases an age-related disease, it seems paradoxical that p53 is so strongly conserved from early multicellular organisms to humans. A function not directly related to tumor suppression, such as the regulation of metabolism in nontransformed cells, could explain this selective pressure. While this role of p53 in cellular metabolism is gradually emerging, it is imperative to dissect the tissue- and cell-specific actions of p53 and its downstream signaling pathways. In this review, we focus on studies reporting p53's impact on adipocyte development, function, and maintenance, as well as the causes and consequences of altered p53 levels in white and brown adipose tissue (AT) with respect to systemic energy homeostasis. While whole body p53 knockout mice gain less weight and fat mass under a high-fat diet owing to increased energy expenditure, modifying p53 expression specifically in adipocytes yields more refined insights: (1) p53 is a negative regulator of in vitro adipogenesis; (2) p53 levels in white AT are increased in diet-induced and genetic obesity mouse models and in obese humans; (3) functionally, elevated p53 in white AT increases senescence and chronic inflammation, aggravating systemic insulin resistance; (4) p53 is not required for normal development of brown AT; and (5) when p53 is activated in brown AT in mice fed a high-fat diet, it increases brown AT temperature and brown AT marker gene expression, thereby contributing to reduced fat mass accumulation. In addition, p53 is increasingly being recognized as a crucial player in nutrient-sensing pathways. Hence, despite the existence of contradictory findings and a varying density of evidence, several functions of p53 in adipocytes and ATs have emerged, positioning p53 as an essential regulatory hub in ATs.
Future studies need to make use of more sophisticated in vivo model systems and should identify an AT-specific set of p53 target genes and downstream pathways under different (nutrient) challenges in order to identify novel therapeutic targets to curb metabolic diseases.
(1) Background: Sexual violence (SV) is a major public health problem, with negative socio-economic, physical, mental, sexual, and reproductive health consequences. Migrants, applicants for international protection, and refugees (MARs) are vulnerable to SV. Since many European countries are seeing high migratory pressure, the development of prevention strategies and care paths focusing on victimised MARs is highly needed. To this end, this study reviews evidence on the prevalence of SV among MAR groups in Europe and the challenges encountered in research on this topic. (2) Methods: A critical interpretive synthesis of 25 peer-reviewed academic studies and 22 relevant grey literature documents was conducted based on a socio-ecological model. (3) Results: Evidence shows that SV is highly frequent in MARs in Europe, yet comparison with other groups is still difficult. Methodologically and ethically sound representative studies comparing between populations are still lacking. Challenges in researching SV in MARs are located at the intrapersonal, interpersonal, community, societal, and policy levels. (4) Conclusions: Future research should start with a clear definition of the concerned population and acts of SV to generate comparable data. Participatory qualitative research approaches could be applied to better grasp the complexity of interplaying determinants of SV in MARs.
Background Effects and dose-response relationships of balance training on measures of balance are well-documented for healthy young and old adults. However, this has not been systematically studied in youth. Objectives The objectives of this systematic review and meta-analysis were to quantify effects of balance training (BT) on measures of static and dynamic balance in healthy children and adolescents. Additionally, dose-response relations for BT modalities (e.g. training period, frequency, volume) were quantified through the analysis of controlled trials. Data Sources A computerized systematic literature search was conducted in the electronic databases PubMed and Web of Science from January 1986 until June 2017 to identify articles related to BT in healthy trained and untrained children and adolescents. Study Eligibility Criteria A systematic approach was used to evaluate articles that examined the effects of BT on balance outcomes in youth. Controlled trials with pre- and post-measures were included if they examined healthy youth with a mean age of 6-19 years and assessed at least one measure of balance (i.e. static/dynamic steady-state balance, reactive balance, proactive balance) with behavioural (e.g. time during single-leg stance) or biomechanical (e.g. centre of pressure displacements during single-leg stance) test methods. Study Appraisal and Synthesis Methods The included studies were coded for the following criteria: training modalities (i.e. training period, frequency, volume), balance outcomes (i.e. static and dynamic balance) as well as chronological age, sex (male vs. female), training status (trained vs. untrained), setting (school vs. club), and testing method (biomechanical vs. physical fitness test). Weighted mean standardized mean differences (SMDwm) were calculated using a random-effects model to compute overall intervention effects relative to active and passive control groups. 
Between-study heterogeneity was assessed using I² and χ² statistics. A multivariate random effects meta-regression was computed to explain the influence of key training modalities (i.e. training period, training frequency, total number of training sessions, duration of training sessions, and total duration of training per week) on the effectiveness of BT on measures of balance performance. Further, subgroup univariate analyses were computed for each training modality. Additionally, dose-response relationships were characterized independently by interpreting the modality-specific magnitude of effect sizes. Methodological quality of the included studies was rated with the help of the Physiotherapy Evidence Database (PEDro) Scale. Results Overall, our literature search revealed 198 hits, of which 17 studies were eligible for inclusion in this systematic review and meta-analysis. Irrespective of age, sex, training status, sport discipline and training method, moderate to large BT-related effects were found for measures of static (SMDwm = 0.71) and dynamic (SMDwm = 1.03) balance in youth. However, our subgroup analyses did not reveal any statistically significant effects of the moderator variables age, sex, training status, setting and testing method on overall balance (i.e. aggregation of static and dynamic balance). BT-related effects in adolescents were moderate to large for measures of static (SMDwm = 0.61) and dynamic (SMDwm = 0.86) balance. With regard to the dose-response relationships, findings from the multivariate random effects meta-regression revealed that none of the examined training modalities predicted the effects of BT on balance performance in adolescents (R² = 0.00). In addition, results from the univariate analyses have to be interpreted with caution because training modalities were computed as single factors irrespective of potential between-modality interactions. For training period, 12 weeks of training achieved the largest effect (SMDwm = 1.40).
For training frequency, the largest effect was found for two sessions per week (SMDwm = 1.29). For total number of training sessions, the largest effect was observed for 24-36 sessions (SMDwm = 1.58). For the modality duration of a single training session, 4-15 min reached the largest effect (SMDwm = 1.03). Finally, for the modality training per week, a total duration of 31-60 min per week (SMDwm = 1.33) provided the largest effects on overall balance in adolescents. Methodological quality of the studies was rated as moderate with a median PEDro score of 6.0. Limitations Dose-response relationships were calculated independently for training modalities (i.e. modality-specific) and not interdependently. Training intensity was not considered for the calculation of dose-response relationships because the included studies did not report this training modality. Further, the number of included studies allowed the characterization of dose-response relationships in adolescents for overall balance only. In addition, our analyses revealed considerable between-study heterogeneity (I² = 66-83%). The results of this meta-analysis have to be interpreted with caution due to their preliminary status. Conclusions BT is a highly effective means to improve balance performance, with moderate to large effects on static and dynamic balance in healthy youth irrespective of age, sex, training status, setting and testing method. The examined training modalities did not have a moderating effect on balance performance in healthy adolescents. Thus, we conclude that an additional, so far unidentified training modality may have a major effect on balance performance that was not assessed in our analysis. Training intensity could be a promising candidate. However, future studies are needed to find appropriate methods to assess BT intensity.
Evaluating climate geoengineering proposals in the context of the Paris Agreement temperature goals
(2018)
Current mitigation efforts and existing future commitments are inadequate to accomplish the Paris Agreement temperature goals. In light of this, research and debate are intensifying on the possibilities of additionally employing proposed climate geoengineering technologies, either through atmospheric carbon dioxide removal or farther-reaching interventions altering the Earth’s radiative energy budget. Although research indicates that several techniques may eventually have the physical potential to contribute to limiting climate change, all are in early stages of development, involve substantial uncertainties and risks, and raise ethical and governance dilemmas. Based on present knowledge, climate geoengineering techniques cannot be relied on to significantly contribute to meeting the Paris Agreement temperature goals.
Several meta-analyses have been published summarizing the associations of the Mediterranean diet (MedDiet) with chronic diseases. We evaluated the quality and credibility of evidence from these meta-analyses, characterized the different indices used to define the MedDiet, and re-calculated the associations with the different indices identified. We conducted an umbrella review of meta-analyses of cohort studies evaluating the association of the MedDiet with type 2 diabetes, cardiovascular disease, cancer and cognitive-related diseases. We used the AMSTAR (A MeaSurement Tool to Assess systematic Reviews) checklist to evaluate the methodological quality of the meta-analyses, and the NutriGrade scoring system to evaluate the credibility of evidence. We also identified different indices used to define the MedDiet; tests for subgroup differences were performed to compare the associations with the different indices when at least 2 studies were available for different definitions. Fourteen publications were identified, and within them 27 meta-analyses based on 70 primary studies. Almost all meta-analyses reported inverse associations between the MedDiet and risk of chronic disease, but the credibility of evidence was rated low to moderate. Moreover, substantial heterogeneity was observed in the use of the indices assessing adherence to the MedDiet, but two indices were the most frequently used [Trichopoulou MedDiet (tMedDiet) and alternative MedDiet (aMedDiet)]. Overall, we observed little difference in risk associations comparing different MedDiet indices in the subgroup meta-analyses. Future prospective cohort studies are advised to use more homogeneous definitions of the MedDiet to improve comparability across meta-analyses.
Objective Depression after stroke and myocardial infarction (MI) is common but often assumed to be undertreated without reliable evidence being available. Thus, we aimed to determine treatment rates and investigate the application of guidelines in these conditions. Methods Databases MEDLINE, EMBASE, PsycInfo, Web of Science, CINAHL, and Scopus were systematically searched without language restriction from inception to June 30, 2017. Prospective observational studies with consecutive recruitment reporting any antidepressant treatment in adults with depression after stroke or MI were included. Random-effects models were used to calculate pooled estimates of treatment rates. Results Fifty-five studies reported 32 stroke cohorts (n = 8938; pooled frequency of depression = 34%, 95% confidence interval [CI] = 29%-38%) and 17 MI cohorts (n = 10,767; pooled frequency of depression = 24%, 95% CI = 20%-28%). In 29 stroke cohorts, 24% (95% CI = 20%-27%) of 2280 depressed people used antidepressant medication. In 15 MI cohorts, 14% (95% CI = 8%-19%) of 2381 depressed people used antidepressant medication indicating a lower treatment rate than in stroke. Two studies reported use of psychosocial interventions, indicating that less than 10% of participants were treated. Conclusions Despite the high frequency of depression after stroke and MI and the existence of efficacious treatment strategies, people often remain untreated. Innovative strategies are needed to increase the use of effective antidepressive interventions in patients with cardiovascular disease.
Knowledge of the present-day crustal in-situ stress field is key to understanding geodynamic processes such as global plate tectonics and earthquakes. It is also essential for the management of geo-reservoirs and underground storage sites for energy and waste. Since 1986, the World Stress Map (WSM) project has systematically compiled the orientation of maximum horizontal stress (S-Hmax). For the 30th anniversary of the project, the WSM database has been updated significantly to 42,870 data records, double the amount of data in the 2008 database release. The update focuses on areas with previously sparse data coverage in order to resolve the stress pattern on different spatial scales. In this paper, we present details of the new WSM database release 2016 and an analysis of global and regional stress patterns. With the higher data density, we can now resolve stress pattern heterogeneities from plate-wide to local scales. In particular, we show two examples of 40°-60° S-Hmax rotations within 70 km. These rotations can be used as proxies to better understand the relative importance of the plate boundary forces that control the long-wavelength pattern in comparison with regional and local controls on the crustal stress state. In the new WSM project phase IV that started in 2017, we will continue to refine the information on the S-Hmax orientation and the stress regime. However, we will also focus on the compilation of stress magnitude data, as this information is essential for the calibration of geomechanical-numerical models. This enables us to derive a 3-D continuous description of the stress tensor from the point-wise and incomplete stress tensor information provided with the WSM database. Such forward models are required for the safety assessment of anthropogenic activities in the underground and for a better understanding of tectonic processes such as the earthquake cycle.
Organic semiconductors are of great interest for a broad range of optoelectronic applications due to their solution processability, chemical tunability, highly scalable fabrication, and mechanical flexibility. In contrast to traditional inorganic semiconductors, organic semiconductors are intrinsically disordered systems and therefore exhibit much lower charge carrier mobilities, the Achilles heel of organic photovoltaic cells. In this progress review, the authors discuss recent important developments on the impact of charge carrier mobility on charge transfer state dissociation, and on the interplay of free charge extraction and recombination. By comparing the mobilities on different timescales obtained by different techniques, the authors highlight the dispersive nature of these materials and how this reflects on the key processes defining the efficiency of organic photovoltaics.
Reviews and syntheses
(2018)
The cycling of carbon (C) between the Earth surface and the atmosphere is controlled by biological and abiotic processes that regulate C storage in biogeochemical compartments and release to the atmosphere. This partitioning is quantified using various forms of C-use efficiency (CUE) - the ratio of C remaining in a system to C entering that system. Biological CUE is the fraction of C taken up that is allocated to biosynthesis. In soils and sediments, C storage depends also on abiotic processes, so the term C-storage efficiency (CSE) can be used. Here we first review and reconcile CUE and CSE definitions proposed for autotrophic and heterotrophic organisms and communities, food webs, whole ecosystems and watersheds, and soils and sediments, using a common mathematical framework. Second, we identify general CUE patterns; for example, actual CUE increases with improving growth conditions, while apparent CUE decreases with increasing turnover. We then synthesize > 5000 CUE estimates, showing that CUE decreases with increasing biological and ecological organization - from unicellular to multicellular organisms and from individuals to ecosystems. We conclude that CUE is an emergent property of coupled biological-abiotic systems, and it should be regarded as a flexible and scale-dependent index of the capacity of a given system to effectively retain C.
We review the evidence for a putative early 21st-century divergence between global mean surface temperature (GMST) and Coupled Model Intercomparison Project Phase 5 (CMIP5) projections. We provide a systematic comparison between temperatures and projections using historical versions of GMST products and historical versions of model projections that existed at the times when claims about a divergence were made. The comparisons are conducted with a variety of statistical techniques that correct for problems in previous work, including using continuous trends and a Monte Carlo approach to simulate internal variability. The results show that there is no robust statistical evidence for a divergence between models and observations. The impression of a divergence early in the 21st century was caused by various biases in model interpretation and in the observations, and was unsupported by robust statistics.
Methodological and technological advances have recently paved the way for metabolic flux profiling in higher organisms such as plants. However, in comparison with omics technologies, flux profiling has yet to provide comprehensive differential flux maps at a genome scale and in different cell types, tissues, and organs. Here we highlight recent advances in technologies to gather metabolic labeling patterns and in flux profiling approaches. We offer an opinion on how recent local flux profiling approaches can be used in conjunction with the constraint-based modeling framework to arrive at genome-scale flux maps. In addition, we point to approaches that use metabolomics data without the introduction of label to predict either non-steady-state fluxes in a time-series experiment or flux changes in different experimental scenarios. The combination of these developments allows an experimentally feasible approach for flux-based large-scale systems biology studies.
The primary function of leaves is to provide an interface between plants and their environment for gas exchange, light exposure and thermoregulation. Leaves therefore make a central contribution to plant fitness by allowing efficient absorption of sunlight energy through photosynthesis to ensure optimal growth. Their final geometry results from a balance between the need to maximize energy uptake and the need to minimize the damage caused by environmental stresses. This intimate relationship between the leaf and its surroundings has led to an enormous diversification in leaf forms. Leaf shape varies between species, populations, individuals or even within identical genotypes when these are subjected to different environmental conditions. For instance, the extent of leaf margin dissection has long been found to correlate inversely with the mean annual temperature, such that paleobotanists have used models based on leaf shape to predict the paleoclimate from fossil flora. Leaf growth is not only dependent on temperature but is also regulated by many other environmental factors, such as light quality and intensity or ambient humidity. This raises the question of how the different signals are integrated at the molecular level and converted into clear developmental decisions. Several recent studies have started to shed light on the molecular mechanisms that connect environmental sensing with organ growth and patterning. In this review, we discuss the current knowledge of the influence of different environmental signals on leaf size and shape, their integration, as well as their importance for plant adaptation.
In vitro three-dimensional (3D) tissue culture of skin has been pursued for almost a century. From skin biopsies in organ culture to vascularized organotypic full-thickness reconstructed human skin equivalents, in vitro tissue regeneration of 3D skin has reached a golden era. However, the reconstruction of 3D skin still has room to grow and develop. Reproducible methodology, physiological structure and tissue architecture, and perfusable vasculature are only recently becoming a reality, and the addition of more complex structures such as glands and tactile corpuscles requires advanced technologies. In this review, we discuss the current methodology for biofabrication of 3D skin models and highlight the advantages and disadvantages of the existing systems, as well as emphasize how new techniques can aid in the production of a truly physiologically relevant skin construct for preclinical innovation.
Can't remember to forget you
(2017)
In nature, plants are exposed to frequent changes in their abiotic and biotic environment. While some environmental cues are used to gauge the environment and align growth and development, others lie beyond the regularly encountered spectrum of a species and trigger stress responses. Such stressful conditions pose a potential threat to survival and integrity. Plants adapt to extreme environmental conditions through physiological adaptations that are usually transient and are maintained until the stressful environment subsides. It is increasingly appreciated that in some cases environmental cues activate a stress memory that persists for some time after the extreme condition has subsided. Recent research has shown that this stress-induced environmental memory is mediated by epigenetic and chromatin-based mechanisms; both histone methylation and nucleosome occupancy are associated with it.
Cardiogenesis is a complex developmental process involving multiple overlapping stages of cell fate specification, proliferation, differentiation, and morphogenesis. Precise spatiotemporal coordination between the different cardiogenic processes is ensured by intercellular signalling crosstalk and tissue-tissue interactions. Notch is an intercellular signalling pathway crucial for cell fate decisions during multicellular organismal development and is aptly positioned to coordinate the complex signalling crosstalk required for progressive cell lineage restriction during cardiogenesis. In this Review, we describe the role of Notch signalling and the crosstalk with other signalling pathways during the differentiation and patterning of the different cardiac tissues and in cardiac valve and ventricular chamber development. We examine how perturbation of Notch signalling activity is linked to congenital heart diseases affecting the neonate and adult, and discuss studies that shed light on the role of Notch signalling in heart regeneration and repair after injury.
The book by Božena Bednaříková, Slovo a jeho konverze (‘BSJK’), was originally published in 2009. In our view, however, it has not yet received due consideration, and certainly not recognition, as opening genuinely new territory in word formation. This is the reason for writing a short review: to give this book the consideration it has long deserved. In this book, two theoretically interesting working hypotheses are presented and supported by numerous examples from contemporary Czech: (i) conversion, not derivation, is the central process, and (ii) conversion belongs to morphology and not (just) to word formation.
The book is divided into 9 sections. Section 1 (p. 13–14) gives the road map of the book; section 2 (p. 15–42) establishes the central concern, the position of the word as a central unit of morphology (form formation). In this chapter, the traditional views of Czech descriptive and Academy grammars, but also manuals, handbooks and teachers’ books for high schools, are reviewed. In most of them, word formation is considered a part of lexicology and not an integral part of morphology or, better, form formation. The review serves not only to promote a unified grammatical terminology in academic circles (universities and the Academy of Sciences) but also to improve the quality of teaching at elementary and high schools (cf. 2.6., p. 31–42: Školský exkurz). Bednaříková is famous for her leading role as a link between academia and the consumers of grammars. In chapter 3, entitled Návrat slova ‘The return of the word’ (into morphology, p. 43–54), arguments in favor of a morphological approach are raised. In this important methodological chapter, the main reasons are given why the word must be a central part of form formation (morphology/grammar) and not of lexicology. In addition, key terms such as stem, root and affix are subjected to revision. The chapter is very brief but very precise in its reasoning and arguments, in which formal teaching is assigned a central supporting role in the context of conversion and transposition. In chapter 4, Slovo jako slovní druh (‘The word as a pars orationis’, p. 55–70), the syntactic function of the transposition of one pars orationis to another by means of conversion is considered. In chapter 5, the central role of morphology for word formation is analyzed, taking as its starting point Mel’čuk’s theory, which is understood as the analysis of morphological processes (cf. Mel’čuk, I. 2000. Morphological processes. In G. Booij, Ch. Lehmann, J. Mugdan, & S. 
Skopeteas (eds.), Morphologie/Morphology. Vol. 1, 523–535. Berlin & New York). The innovative part of the book is without any doubt chapters 6–9, in which the internal structure of the word is introduced (chapter 6, p. 79–122), followed by part-of-speech transfer (or PS transfer) including conversion (chapter 7, p. 123–149), then once more transposition, understood as the shift from one part of speech to another and concentrating on nouns, verbs and adjectives (chapter 8, p. 150–201), and, finally, transflexion, “transflexe” (chapter 9, p. 203–219), which belongs rather to the domain of derivation than to a new type of word formation, and which involves not the transposition from one part of speech to another but rather the transition from one declension class to another. It is to be criticized, however, that some chapters lack a certain systematicity (expressed, for example, in the repetition of the same phenomenon in several places), and that the illustrations in the form of derivation trees and the abbreviations are not always transparent and explicitly defined. It took me a very long time to find out what the abbreviation “S” stands for.
I would now like to give a short assessment of the innovative potential and the contribution of the book itself as compared to the western standard on the same topic. At the beginning of the monograph, the author raises the central concerns of her two hypotheses. In her study, she is concerned with the bases of the morphemic analysis of word formation and with the function of the syntagma. In terms of methodology, two central acts of actualization are, following Mathesius’ terminology, under review: first, the category called “pojmenovávací” (naming), and second, the category called “usouvztažňovací” (relating) (cf. also Mathesius, V. 1982. Jazyk, kultura a slovesnost; Daneš, F. 1991. Mathesiova koncepce funkční gramatiky v kontextu dnešní jazykovědy. In SaS 52. 161–174; and Panevová, J. 2010. Kategorie pojmenovávací a usouvztažňovací [Jak František Daneš rozvíjí Viléma Mathesia]. In S. Čmejrková & J. Hoffmannová et al. [eds.], Užívání a prožívání jazyka, 21–26). Her major concern is thus to establish a missing link between the analysis of word formation and form formation (morphology). Her morphemic analysis of word-formation processes aims to “combat traditional school views of word formation as a (mechanical) connection of the root, prefix, and suffix”. In doing so, she analyzes the relationship between transfer, transposition (as a change of partes orationis) and conversion (as the operational process serving transposition). In the final chapter, BSJK re-introduces and refines the term transflection (BSJK 2009, 13).
This book is important for its consistent and satisfying treatment of the term conversion as a morphological process in the Czech tradition; still, we cannot confirm that in the European context this topic is “seriously under-researched” (cited from the introduction, chapter 1, p. 13). The contrary is true: in the context of English word formation, besides the most influential work by Marchand (Marchand, H. 1969. The categories and types of present-day English word-formation: A synchronic-diachronic approach. 2nd ed. München), conversion as the most productive process of word formation has recently become perhaps the most researched object. To mention just a few influential monographs: Martsa, S. 2013. Conversion in English: A Cognitive Semantic Approach. Cambridge; Vogel, P. M. 1996. Wortarten und Wortartenwechsel. Berlin & New York.
The word-formation process called conversion is originally at home in analytic languages such as English and French. English in particular is a language in which the derivation of a noun from a verb and vice versa yields a considerably large number of homonymous forms in the dictionary, and of course this is not just a problem for morphology but especially for any theoretical approach to language acquisition, cognitive semantics or even generative morphosyntax. Thus, in his seminal book The Language Instinct (1995), Steven Pinker argues persuasively that prescriptive grammar rules disallowing, among other things, the sentence-final use of prepositions, the splitting of infinitives and the conversion of nouns to verbs are both useless and nonsensical (371–379). As regards the conversion of nouns to verbs, he says: “[i]n fact the easy conversion of nouns to verbs has been part of English grammar for centuries; it is one of the processes that make English English” (ibidem: 379). To illustrate the ease characterizing this type of conversion, he lists verbs converted from nouns designating human body parts, some of which are reproduced in (1):
(1) head a committee, scalp a missionary, eye a babe, nose around the office, mouth the lyrics, tongue each note on the flute, neck in the back seat, back the candidate, arm the militia, shoulder the burden, elbow your way in, finger the culprit, knuckle under, thumb a ride, belly up to the bar, stomach someone’s complaints, knee the goalie, leg it across the town, foot the bill, toe the line (cf. Pinker, S. 1995. The Language Instinct. New York, 379–380 and Pinker, S. 1996. Language learnability and language development. Cambridge MA)
Pinker estimates that approximately a fifth of English verbs originate from nouns, which, as documented extensively in Clark & Clark (Clark, E. V. & H. H. Clark. 1979. When nouns surface as verbs. In Language 55. 767–811), may also have to do with the fact that new or innovative verbs in English arise predominantly from conversion of nouns to verbs.
Without questioning the dominance of noun-to-verb conversion, I shall claim in this review that it is not only the easy conversion of verbs from nouns but, more broadly, conversion as a word-formation process that makes English English. Consider, for instance, (2) below, demonstrating that the ease of forming conversion verbs equally characterizes, though to a lesser degree, the conversion of nouns from verbs. The expressions given in (2) are modelled on Pinker’s examples above in the seminal work of Sándor Martsa (2013. Conversion in English: A Cognitive Semantic Approach. Cambridge), and they contain nouns converted from verbs designating actions functionally related to different parts of the human body.
(2) have your say, give a shout, let out a shriek / a cry, give a talk, take a look at the notes, keep a close watch, down the whisky with a swallow, have a chew on it, have a smell of this cheese, with a smile, the touch of her fingers, Hey! Nice catch! go for a run, it’s worth a go, go for a walk
Thus, the major difference between the term konverze as introduced and defended in BSJK (2009, 149), on the one hand, and the English type of conversion, mostly called “zero-derivation” by means of a zero morpheme (as Marchand 1969, op. cit., has called it), on the other, is to be found in the two quite different systems of word formation.
Czech very rarely allows for pure zero derivation such as that demonstrated in the English examples (1)–(2).
Despite this major difference, Czech, although still a highly inflectional language with a rich case, number and declension system and with agreement, increasingly allows for word formations typical of English, with true zero affixation, e.g. English tunnel > to tunnel : Czech tunel > tunelovat. This process is an integral part of the grammar, since it even includes the category of verbal aspect, also deriving perfective forms and negated verbs such as nevytunelovalo peníze:
ve snaze “politicky korektně” uctít Havlovu památku jednotliví ministři české vlády přislíbili, že přestanou tento stát vykrádat a tunelovat, tedy alespoň do začátku příštího roku; Nové vedení Obce spisovatelů a jeho sekretariát nevytunelovalo peníze Obce spisovatelů, vždyť nebylo ani co tunelovat, naopak zachránilo tuto organizaci před téměř nezvratným koncem (ČNK. Last accessed July 10, 2018).
Thus conversion is becoming an increasingly important process in the system of Czech word and form formation.
One critical observation remains to be mentioned: the book is solid, but in a certain sense restricted to functional approaches, without considering the important contribution of alternative approaches in formal linguistics. Thus, mainstream generative syntax, based on the theory of government and binding or on minimalism (introduced by Noam Chomsky in 1981 and 1995), is not reviewed in this book, even though there are many allusions to the important role of syntax for word formation (an important demand on any theory of word formation, cf. also Dokulil, M. 1962. K vzájemnému poměru slovotvorby a skladby. In Acta Universitatis Carolinae: SLAVICA PRAGENSIA IV, 369–375. UK, Praha).
Most of the recent work devoted to the theoretical approach of minimalism considers conversion a “syntactic decomposition” based on root semantics (cf. e.g. Borer, H. 2005. In name only: Structuring sense Vol. I & The normal course of events: Structuring sense Vol. II. Oxford; Harley, H. & R. Noyer. 1999. State-of-the-article: Distributed Morphology. In GLOT 4. 3–9; Halle, M. & A. Marantz. 1993. Distributed morphology and the pieces of inflection. In Keyser, S. J. & K. Hale (eds.), The view from Building 20, 111–176. Cambridge). A recent development in minimalism is the concept of roots and categorial features (cf. Panagiotidis, Ph. 2014. Categorial Features: A Generative Theory of Word Class Categories. Cambridge). This theory differentiates between truly denominal verbs (tape-type verbs) and verbs directly derived from a root (hammer-type). The structural differences between them are argued by Panagiotidis (2014: 63) “to account for the idiosyncratic meaning of the latter, as opposed to the predictable and systematic meaning of the former”. The two types are demonstrated under (3) vs. (4):
(3)      nP                       vP
        /  \                     /  \
       n    √HAMMER             v    xP
                                    /  \
                              √HAMMER   x
(Panagiotidis op. cit., 2014: 63)
In the left-hand tree in (3), the nominalizer head n takes a root complement, nominalizing it syntactically. In the right-hand tree, the root √HAMMER is a manner adjunct to an xP (schematically rendered) inside the vP. On the other hand, verbs like tape behave differently. They seem to be truly denominal, formed by converting a noun into a verb, by recategorizing the noun and not by categorizing a root. By hypothesis, the verbalizing head takes as its complement a structure that already contains a noun – that is, an nP in which the root √TAPE has already been nominalized:
(4) [nP n TAPE]   vs.   [vP v [xP [nP n TAPE] x]]
(Panagiotidis 2014: 63)
As opposed to this approach, the present monograph uses the term “transpozice” (‘transposition’) for the change of part of speech between different classes by means of konverze (‘conversion’) (chapter 8, 151–201). We will mention just one typical class or type of such conversions, given under (5) and (6):
(5) kapří: kapř + í
(BSJK, 156)
(6) výlov [vylovit]: vý [vy] + lov [lovit]
(BSJK, 180)
In summary, I see the great merit of the publication especially in its fresh view of long-studied phenomena. Additionally, the work excels in a thorough multi-level analysis of conversion, transposition and transflexion, including consideration of morphonological alternations and differences of semantic interpretation arising from adding or removing a specific onomasiological feature (according to the onomasiological word-formation theory of Dokulil, M. 1962. Tvoření slov v češtině. Teorie odvozování slov. Praha.).
Above all, I value the book for its consistent insistence on the role of shaping for conversion as a part of morphology (form formation). I also think that conversion will play an increasingly important role in the further development of the Czech language, both for system-external reasons, as a contact phenomenon with English, and for system-inherent reasons, triggered and flanked by the tendency towards analytism and simplification and, finally, the gradual reduction of the complex inflectional system of nouns and verbs.
For the theoretical linguist, this book may not be a substitute for word-formation theories such as Marchand, op. cit. (1969) or Dokulil, op. cit. (1962, 1968), but it is a very stimulating and original study. A more thorough reading than can be given here could lead to a differentiated view, showing more clearly the differences between a true zero-derivation language such as English – described in the more elaborated morpho-syntactic generative theory of root semantics by Panagiotidis (2014), in which the term conversion is understood very differently from Bednaříková’s book (see examples (1) and (2)) – and a derivational language such as Czech, with its additional affixes and other word-forming means.
The author is to be commended for bridging the gap between traditional (and, in my view, by no means negligible) theories and newer views. The work deserves a place on every Slavist’s and Bohemist’s bookshelf.
Background
Jump training (JT) can be used to enhance the ability of skeletal muscle to exert maximal force in as short a time as possible. Despite its usefulness as a method of performance enhancement in athletes, only a small number of studies have investigated its effects on muscle power in older adults.
Objectives
The aims of this meta-analysis were to measure the effect of JT on muscular power in older adults (≥ 50 years), and to establish appropriate programming guidelines for this population.
Data Sources
The data sources utilised were Google Scholar, PubMed, and Microsoft Academic.
Study Eligibility Criteria
Studies were eligible for inclusion if they comprised JT interventions in healthy adults (≥ 50 years) who were free of any medical condition that could impair movement.
Study Appraisal and Synthesis Methods
The inverse variance random-effects model for meta-analyses was used because it allocates a proportionate weight to trials based on the size of their individual standard errors and facilitates analysis while accounting for heterogeneity across studies. Effect sizes (ESs), calculated from a measure of muscular power, were represented by the standardised mean difference and were presented alongside 95% confidence intervals (CIs).
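As a sketch of how such a model pools study results, the following minimal implementation uses the DerSimonian–Laird estimator of the between-study variance, a common choice for inverse-variance random-effects meta-analysis; the effect sizes and variances in the example are illustrative, not data from this review.

```python
import math

def random_effects_meta(effects, variances):
    """Inverse-variance random-effects pooling (DerSimonian-Laird).

    effects   : per-study standardised mean differences
    variances : their squared standard errors
    Returns (pooled effect, 95% CI lower, 95% CI upper, tau^2).
    """
    k = len(effects)
    # Fixed-effect weights and mean, needed for the heterogeneity statistic Q
    w = [1.0 / v for v in variances]
    fixed_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, effects))
    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each study's sampling variance,
    # so precise studies dominate less than under a fixed-effect model
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# Illustrative effect sizes and variances for four hypothetical training groups
es, lo, hi, tau2 = random_effects_meta([0.5, 0.9, 0.3, 0.8],
                                       [0.04, 0.05, 0.06, 0.04])
print(round(es, 2), round(lo, 2), round(hi, 2))
```

The weighting is why heterogeneous studies still receive proportionate influence: each study's weight is the inverse of its total (within- plus between-study) variance.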
Results
Thirteen training groups across nine studies were included in this meta-analysis. The magnitude of the main effect was ‘moderate’ (0.66, 95% CI 0.33, 0.98). ESs were larger in non-obese participants (body mass index [BMI] < 30 vs. ≥ 30 kg/m2; 1.03 [95% CI 0.34, 1.73] vs. 0.53 [95% CI − 0.03, 1.09]). Among the studies included in this review, just one reported an acute injury, which did not result in the participant ceasing their involvement. JT was more effective in programmes with more than one exercise (range 1–4 exercises; ES = 0.74 [95% CI − 0.49, 1.96] vs. 0.53 [95% CI 0.29, 0.78]), more than two sets per exercise (range 1–4 sets; ES = 0.91 [95% CI 0.04, 1.77] vs. 0.68 [95% CI 0.15, 1.21]), more than three jumps per set (range 1–14 jumps; ES = 1.02 [95% CI 0.16, 1.87] vs. 0.53 [95% CI − 0.03, 1.09]) and more than 25 jumps per session (range 6–200 jumps; ES = 0.88 [95% CI 0.05, 1.70] vs. 0.49 [95% CI 0.14, 0.83]).
Conclusions
JT is safe and effective in older adults. Practitioners should construct varied JT programmes that include more than one exercise and comprise more than two sets per exercise, more than three jumps per set, and 60 s of recovery between sets. An upper limit of three sets per exercise and ten jumps per set is recommended. Up to three training sessions per week can be performed.
Ecological communities change in time and space, but long-term dynamics at the century-to-millennia scale are poorly documented due to lack of relevant data sets. Nevertheless, understanding long-term dynamics is important for explaining present-day biodiversity patterns and placing conservation goals in a historical context. Here, we use recent examples and new perspectives to highlight how environmental DNA (eDNA) is starting to provide a powerful new source of temporal data for research questions that have so far been overlooked, by helping to resolve the ecological dynamics of populations, communities, and ecosystems over hundreds to thousands of years. We give examples of hypotheses that may be addressed by temporal eDNA biodiversity data, discuss possible research directions, and outline related challenges.
Renewable energy power generation capacity has been increasing rapidly in China in recent years. Meanwhile, the contradiction between power supply and demand is becoming increasingly prominent due to the intermittence of renewable energies. At the same time, the mitigation of carbon dioxide (CO2) emissions in China needs immediate attention. Power-to-Gas (PtG), a chemical energy storage technology, can convert surplus electricity into combustible gases. Subsurface energy storage can meet the requirements of long-term storage with its large capacity. This paper provides a discussion of the entire PtG energy storage technology process and the current research progress. Based on a comparative study of different geological storage schemes for synthetic methane, their respective research progress and limitations are noted. In addition, a full investigation of the distribution and implementation of global PtG and CO2 capture and storage (CCS) demonstration projects is performed. Subsequently, the opportunities and challenges of developing this technology in China are discussed on the basis of a techno-economic and ecological analysis. While PtG is expected to be a revolutionary technology that will replace traditional power systems, the main issues of site selection, energy efficiency and economics still need to be adequately addressed. Finally, based on a comprehensive discussion of the results of the analysis, implementation strategies for power-to-gas and subsurface energy storage in China, as well as an outlook, are presented.
Children’s motor competence is known to have a determinant role in learning and engaging later in complex motor skills and, thus, in physical activity. The development of adequate motor competence is a central aim of physical education, and assuring that pupils are learning and developing motor competence depends on accurate assessment protocols. The MOBAK 1 test battery is a recent instrument developed to assess motor competence in primary physical education. This study used the MOBAK 1 to explore motor competence levels and gender differences among 249 (Mage = 6.3, SD = 0.5 years; 127 girls and 122 boys) Grade 1 primary school Portuguese children. On independent sample t tests, boys presented higher object movement motor competence than girls (boys: M = 5.8, SD = 1.7; girls: M = 4.0, SD = 1.7; p < .001), while girls were more proficient among self-movement skills (girls: M = 5.1, SD = 1.8; boys: M = 4.3, SD = 1.7; p < .01). On “total motor competence,” boys (M = 10.3, SD = 2.6) averaged one point ahead of girls (M = 9.1, SD = 2.9). The percentage of girls in the first quartile of object movement was 18.9%, while, for “self movement,” the percentage of boys in the first quartile was almost double that of girls (30.3% and 17.3%, respectively). The confirmatory model to test for construct validity confirmed the assumed theoretical two-factor structure of MOBAK 1 test items in this Portuguese sample. These results support the MOBAK 1 instrument for assessing motor competence and highlighted gender differences, of relevance to intervention efforts.
Objectives
The use of simulated and standardized patients (SP) is widely accepted in the medical field and, from there, is beginning to disseminate into clinical psychology and psychotherapy. The purpose of this study was therefore to systematically review barriers and facilitators that should be considered in the implementation of SP interventions specific to clinical psychology and psychotherapy.
Methods
Following current guidelines, a scoping review was conducted. The literature search focused on the MEDLINE, PsycINFO and Web of Science databases, including Dissertation Abstracts International. After screening for titles and abstracts, full texts were screened independently and in duplicate according to our inclusion criteria. For data extraction, a pre-defined form was piloted and used. Units of meaning with respect to barriers and facilitators were extracted and categorized inductively using content-analysis techniques. From the results, a matrix of interconnections and a network graph were compiled.
Results
The 41 included publications were mainly in the fields of psychiatry and mental health nursing, as well as in training and education. The detailed category system contrasts four supercategories, i.e., which organizational and economic aspects to consider, which persons to include as eligible SPs, how to develop adequate scenarios, and how to authentically and consistently portray mental health patients.
Conclusions
Publications focused especially on the interrelation between authenticity and consistency of portrayals, on how to evoke empathy in learners, and on economic and training aspects. A variety of recommendations for implementing SP programs, from planning to training, monitoring, and debriefing, is provided, for example, ethical screening of and ongoing support for SPs.
This work reviews the literature on an alleged global warming 'pause' in global mean surface temperature (GMST) to determine how it has been defined, what time intervals are used to characterise it, what data are used to measure it, and what methods are used to assess it. We test for 'pauses', both in the normally understood sense of the term, meaning no warming trend, and for a 'pause' defined as a substantially slower trend in GMST. The tests are carried out with the historical versions of GMST that existed for each pause-interval tested, and with current versions of each of the GMST datasets. The tests are conducted following the common (but questionable) practice of breaking the linear fit at the start of the trend interval ('broken' trends), and also with trends that are continuous with the data bordering the trend interval. We also compare results when appropriate allowance is made for the selection bias problem. The results show that there is little or no statistical evidence for a lack of trend or slower trend in GMST using either the historical data or the current data. The perception that there was a 'pause' in GMST was bolstered by earlier biases in the data in combination with incomplete statistical testing.
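The difference between 'broken' and continuous trend fits can be illustrated on synthetic data (a steady warming rate plus noise, not the GMST series analysed here): a broken trend fits the candidate pause interval in isolation, while a continuous fit allows the slope to change at the start of the interval but forbids a jump in level.

```python
import random

random.seed(0)
x = [float(i) for i in range(45)]                        # years since 1970 (1970-2014)
y = [0.018 * xi + random.gauss(0.0, 0.05) for xi in x]   # steady warming + noise

t0 = 28.0  # 1998: candidate start of the 'pause' interval

def ols_slope(xs, ys):
    """Slope of a simple least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

# 'Broken' trend: ordinary least squares on the sub-interval alone,
# ignoring where the pre-1998 data leave off
sub_x = [a for a in x if a >= t0]
sub_y = [b for a, b in zip(x, y) if a >= t0]
slope_broken = ols_slope(sub_x, sub_y)

# Continuous trend: one fit over all the data that lets the slope change
# at t0 but forbids a level jump, via the basis [1, x, max(x - t0, 0)]
X = [[1.0, a, max(a - t0, 0.0)] for a in x]
A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
for i in range(3):                       # Gauss-Jordan on the normal equations
    p = A[i][i]
    A[i] = [v / p for v in A[i]]
    b[i] /= p
    for k in range(3):
        if k != i:
            f = A[k][i]
            A[k] = [vk - f * vi for vk, vi in zip(A[k], A[i])]
            b[k] -= f * b[i]
slope_continuous = b[1] + b[2]           # post-1998 slope of the joined fit

print(slope_broken, slope_continuous)
```

Because the broken fit discards the constraint that the line must meet the preceding data, it has more freedom to produce a spuriously flat slope over short, cherry-picked intervals, which is the selection-bias concern the review raises.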
Cardiovascular complications are commonly associated with obesity. However, a subgroup of obese individuals may not be at an increased risk for cardiovascular complications; these individuals are said to have metabolically healthy obesity (MHO). In contrast, metabolically unhealthy individuals are at high risk of cardiovascular disease (CVD), irrespective of BMI; thus, this group can include individuals within the normal weight category (BMI 18.5–24.9 kg/m2). This review provides a summary of prospective studies on MHO and metabolically unhealthy normal-weight (MUHNW) phenotypes. Notably, there is ongoing dispute surrounding the concept of MHO, including the lack of a uniform definition and the potentially transient nature of metabolic health status. This review highlights the relevance of alternative measures of body fatness, specifically measures of fat distribution, for determining MHO and MUHNW. It also highlights alternative approaches of risk stratification, which account for the continuum of risk in relation to CVD, which is observable for most risk factors. Moreover, studies evaluating the transition from metabolically healthy to unhealthy phenotypes and potential determinants for such conversions are discussed. Finally, the review proposes several strategies for the use of epidemiological research to further inform the current debate on metabolic health and its determination across different stages of body fatness.
Over a lifetime, rhythmic contractions of the heart provide a continuous flow of blood throughout the body. An essential morphogenetic process during cardiac development which ensures unidirectional blood flow is the formation of cardiac valves. These structures are largely composed of extracellular matrix and of endocardial cells, a specialized population of endothelial cells that line the interior of the heart and that are subjected to changing hemodynamic forces. Recent studies have significantly expanded our understanding of this morphogenetic process. They highlight the importance of the mechanobiology of cardiac valve formation and show how biophysical forces due to blood flow drive biochemical and electrical signaling required for the differentiation of cells to produce cardiac valves.
The identification of buried soil horizons in agricultural landscapes helps to quantify sediment budgets and erosion-related carbon dynamics. High-resolution mapping of buried horizons using conventional soil surveys is destructive and time consuming. Geoelectrical sensors can offer a fast and non-destructive alternative for determining horizon positions and properties. In this paper, we compare the suitability of several geoelectrical methods for measuring the depth to buried horizons (Apb, Ahb and Hab) in the hummocky ground moraine landscape of northeastern Germany. Soil profile descriptions were developed for 269 locations within a 6-ha experimental field "CarboZALF-D". A stepwise linear discriminant analysis (LDA) estimated the lateral position of the buried horizons using electromagnetic induction data and terrain attributes. To predict the depth of a buried horizon, multiple linear regression (MLR) was used for both a 120-m transect and a 0.2-ha pseudo-three-dimensional (3D) area. At these scales, apparent electrical conductivity (ECa), electrical resistivity (ER) and terrain attributes were used as independent variables. The LDA accurately predicted Apb- and Ahb-horizons (a correct classification of 93%). The LDA of the Hab-horizon had a misclassification of 24%, which was probably related to the smaller test set and the greater depth of this horizon. The MLR predicted the depth of the Apb-, Ahb- and Hab-horizons with relative root mean square errors (RMSEs) of 7, 3 and 13%, respectively, in the pseudo-3D area. MLR had a lower accuracy for the 2D transect compared to the pseudo-3D area. Overall, the use of LDA and MLR has been an efficient methodological approach for predicting buried horizon positions.
Highlights
- The suitability of geoelectrical measurements for digital modelling of diagnostic buried soil horizons was determined.
- LDA and MLR were used to detect multiple horizons with geoelectrical devices and terrain attributes.
- Geoelectrical variables were significant predictors of the position of the target soil horizons.
- The use of these tested digital technologies gives an opportunity to develop high-resolution soil mapping procedures.
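A minimal sketch of the depth-prediction step: synthetic stand-in values for ECa and observed horizon depth (not data from the study) are fitted by least squares, and the fit is scored with a relative RMSE, here assumed to mean the RMSE expressed as a percentage of the mean observed depth.

```python
import math
from statistics import mean

# Synthetic stand-ins: apparent electrical conductivity (mS/m) and observed
# depth (cm) to a buried horizon at a few hypothetical profile locations
eca   = [12.0, 15.5, 18.0, 21.0, 24.5, 27.0, 30.5, 33.0]
depth = [35.0, 41.0, 46.0, 52.0, 59.0, 63.0, 71.0, 75.0]

# Simple least-squares fit of depth on ECa (the paper additionally uses ER
# and terrain attributes as predictors in a multiple regression)
mx, my = mean(eca), mean(depth)
slope = sum((x - mx) * (y - my) for x, y in zip(eca, depth)) / \
        sum((x - mx) ** 2 for x in eca)
intercept = my - slope * mx
predicted = [intercept + slope * x for x in eca]

rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, depth)) / len(depth))
# Relative RMSE: RMSE as a percentage of the mean observed depth
rel_rmse = 100.0 * rmse / my
print(round(rel_rmse, 1))
```

Expressing the error relative to mean depth, as in the reported 3–13% values, makes accuracies comparable across horizons buried at different depths.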
When I took up the task of writing a review of the Routledge handbook of international local government, it occurred to me, as a member of the generation of the 1950s, that I had never considered whether such compendiums are still necessary in times of easy internet searching. This review will look at whether that is indeed the case.
Social-science handbooks naturally are very broad. This also applies to the particular handbook under review. It comprises six content-thematic parts with 33 chapters by 73 authors from 21 countries, with the UK and USA dominant. The focal points, discussed in more detail below, are local elections and local governance, local governments in different jurisdictions, the challenges of local government services, citizen engagement in local affairs, and local authorities in multi-level finance systems that shape how municipal governments ‘get and spend’ public money. These are exactly the topics actually discussed in the international community of political scientists.
As a preliminary, the editors work out the theoretical-methodological foundations of the topic. They define ‘the local’ as ‘geographically defined sub-national state administrative or political divisions’ (p. 3). As next steps, they analyse the difference between government and governance, and investigate whether local government is globally important and relevant. Fortunately, they conclude that this is indeed the case.
Part I of the handbook illustrates ‘substantive variations’ in the local electoral systems and ‘notable divergences in the values and assumptions of local governance among democratic countries’ (p. 23). That topic is indeed central to local authorities’ legitimacy in democratic political systems. The focus of this part of the handbook is on current research and debates around local electoral systems, the challenges of local political leadership and the councillor’s role in modern local policy. Current trends at the local level are analysed from the actors’ perspectives or from an economic point of view by comparing institutionalised differences in city managers, mayors and council members across different jurisdictions. Sections that investigate traditional leadership and local government in Pacific Island countries are of particular interest to most Western readers, because in Europe and North America we know too little about such issues in that part of the world.
Part II of the handbook presents current development processes and challenges in various local government systems. The chapters are territorially oriented around nation states or sub-national regions. This part of the handbook deals with local government in the Pacific Islands, Latin America, New Zealand and the Caribbean. However, the rationale behind country selection is not always clear; important countries like China, India and Nigeria, just to name a few, are absent. Unfortunately, there is no summary article highlighting similarities and differences, as well as the challenges in local government, relating to the countries studied in the book.
The development of local services is the focus of Part III of the handbook; however, the definition of local services remains highly controversial, and their scope varies widely between countries. From the 1980s onwards, there was a long-term trend towards the marketisation and economisation of local politics, but since the turn of the millennium there has been a counter-trend: the return of municipalities and the third sector in the fields of local public services (Wollmann 2018). The book analyses the US and Georgia as case studies for development trends, finding that local government entrepreneurship remains an important factor in promoting economic development and strengthening capacities.
I was pleased to see that Part IV, the next and most extensive part of the handbook, deals with citizen engagement, because the future of local self-government across the world depends not only on top-down activities by local governing elites, but above all on the commitment of the inhabitants of cities and municipalities. Practices and challenges of citizen participation in local government are analysed in inspiring case studies of mid-sized cities in Russia and the United States. The contribution on urban governance of austerity in Europe is also of particular interest. The 2008 global financial crash and the subsequent severe budgetary pressure on municipalities in many countries was a key event in the history and development of local self-government, confronting municipalities with ‘the harsh realities of political economy’ (p. 293). Several articles analyse the causes of the declining confidence of citizens in local authorities in some countries. In contrast, the open budget tool in Brazil is presented as a positive example of collaborative stakeholder engagement.
Part V deals with multi-level governance. With the exception of Australia, it is all about Europe, especially the role of municipalities in the EU’s multilevel system. The authors conclude that ‘local authorities are essential for executing EU legislation, and this in turn allows them to shape EU policies’ (p. 401). This part of the handbook includes the issue of local territorial reforms, which are central to local autonomy, combined with analyses of redesigning regional government and local-level Europeanisation. Subsequently, by comparing the local government systems of Southern Europe (France, Italy, Portugal and Spain), the authors underline convincingly the role of traditions, identity, legal frameworks and institutions in local government.
Part VI of the book deals with the financial dimension of local self-government under the heading ‘Getting and spending’. This is indeed the ‘key source of dispute between local and central government’ (p. 467) and the crucial factor shaping true local autonomy. Meritoriously, this part also contains a chapter on the fight against corruption and unethical behaviour by public servants. Based on research linking corruption to transparency and accountability, two case studies describe how Tbilisi (Georgia) and Lviv (Ukraine) try to reduce corruption in government budgeting and procurement. Enhancing Value-For-Money audit in local government highlights another important side of local finance. An interesting comparison reveals significant differences in local government revenues in European Union member states between 2000 and 2014.
Of course, even in a 530-page book, some important aspects remain underexposed. Above all, I would have liked more attention on some of the enormous future challenges facing democratic systems and with them local governments all over the world, such as digitisation (e.g. in smart cities), the integration of migrants or climate change. The international networking of municipalities should also be given greater prominence.
To sum up, The Routledge Handbook on International Local Government is indeed ‘ambitiously titled’, as the editors underline. Yet, despite my critical objections about its focus on current issues rather than future challenges, they largely fulfil this promise, and their general approach has worked well. Across continents and political-administrative cultures, illustrated with many new research findings, they have created an outstanding publication focusing on the challenges and policy of local self-governmental authorities and other local stakeholders. There is a good chance that this handbook will in future rank among the standard social-science works on local issues and be included in academic political science teaching. May the publisher’s wish come true: that this book stimulates its readers to develop further research ideas.
Finally, I come back to my initial question. ‘Old fashioned’ printed handbooks like these continue to make sense, even in modern digital times.
Winter is an important season for many limnological processes, which can range from biogeochemical transformations to ecological interactions. Interest in the structure and function of lake ecosystems under ice is on the rise. Although limnologists working at polar latitudes have a long history of winter work, the required knowledge to successfully sample under winter conditions is not widely available and relatively few limnologists receive formal training. In particular, the deployment and operation of equipment in below 0 degrees C temperatures pose considerable logistical and methodological challenges, as do the safety risks of sampling during the ice-covered period. Here, we consolidate information on winter lake sampling and describe effective methods to measure physical, chemical, and biological variables in and under ice. We describe variation in snow and ice conditions and discuss implications for sampling logistics and safety. We outline commonly encountered methodological challenges and make recommendations for best practices to maximize safety and efficiency when sampling through ice or deploying instruments in ice-covered lakes. Application of such practices over a broad range of ice-covered lakes will contribute to a better understanding of the factors that regulate lakes during winter and how winter conditions affect the subsequent ice-free period.
A challenge for eco-evolutionary research is to better understand the effect of climate and landscape changes on species and their distribution. Populations of species can respond to changes in their environment through local genetic adaptation or plasticity, dispersal, or local extinction. The individual-based modeling (IBM) approach has been repeatedly applied to assess organismic responses to environmental changes. IBMs simulate emerging adaptive behaviors from the basic entities upon which both ecological and evolutionary mechanisms act. The objective of this review is to summarize the state of the art of eco-evolutionary IBMs and to explore to what degree they already address the key responses of organisms to environmental change. In this, we identify promising approaches and potential knowledge gaps in the implementation of eco-evolutionary mechanisms to motivate future research. Using mainly the ISI Web of Science, we reveal that most of the progress in eco-evolutionary IBMs in the last decades was achieved for genetic adaptation to novel local environmental conditions. There is, however, not a single eco-evolutionary IBM addressing the three potential adaptive responses simultaneously. Additionally, IBMs implementing adaptive phenotypic plasticity are rare. Most commonly, plasticity was implemented as random noise or reaction norms. Our review further identifies a current lack of models where plasticity is an evolving trait. Future eco-evolutionary models should consider dispersal and plasticity as evolving traits with their associated costs and benefits. Such an integrated approach could help to identify conditions promoting population persistence depending on the life history strategy of organisms and the environment they experience.
Langmuir monolayers provide a fast and elegant route to analyze the degradation behavior of biodegradable polymer materials. In contrast to bulk materials, diffusive transport of reactants and reaction products in the (partially degraded) material can be neglected at the air-water interface, allowing for the study of molecular degradation kinetics in experiments taking less than a day and in some cases just a few minutes, in contrast to experiments with bulk materials that can take years. Several aspects of the biodegradation behavior of polymer materials, such as the interaction with biomolecules and degradation products, are directly observable. Expanding the technique with surface-sensitive instrumental techniques enables evaluating the evolution of the morphology, chemical composition, and the mechanical properties of the degrading material in situ. The potential of the Langmuir monolayer degradation technique as a predictive tool for implant degradation when combined with computational methods is outlined, and related open questions and strategies to overcome these challenges are pointed out.
Purpose
The purpose of this paper is to investigate whether and how evolving ideas about management control (MC) emerge in research about public sector performance management (PSPM).
Design/methodology/approach
This is a literature review on PSPM research through using a set of key terms derived from a review of recent developments in MC.
Findings
MC research, originating in the management accounting discipline, is largely disconnected from PSPM research as part of public administration and public management disciplines. Overlaps between MC and PSPM research are visible in a cybernetic control approach, control variety and contingency-based reasoning. Both academic communities share an understanding of certain issues, although under diverging labels, especially enabling controls or, in a more general sense, usable performance controls, horizontal controls and control packaging. Specific MC concepts are valuable for future PSPM research, i.e. trust as a complement of performance-based controls in complex settings, and strategy as a variable in contingency-based studies.
Research limitations/implications
Breaking the boundaries between two currently remote research disciplines, on the one hand, might dismantle “would-be” innovations in one of these disciplines, and, on the other hand, may provide a fertile soil for mutual transfer of knowledge. A limitation of the authors’ review of PSPM research is that it may insufficiently cover research published in the public sector accounting journals, which could be an outlet for MC-inspired PSPM research.
Originality/value
The paper unravels the “apparent” and “real” differences between MC and PSPM research, and, in doing so, takes the detected “real” differences as a starting point for discussing in what ways PSPM research can benefit from MC achievements.
Narcissists are assumed to lack the motivation and ability to share and understand the mental states of others. Prior empirical research, however, has yielded inconclusive findings and has differed with respect to the specific aspects of narcissism and socioemotional cognition that have been examined. Here, we propose a differentiated facet approach that can be applied across research traditions and that distinguishes between facets of narcissism (agentic vs. antagonistic) on the one hand, and facets of socioemotional cognition ability (SECA; self-perceived vs. actual) on the other. Using five nonclinical samples in two studies (total N = 602), we investigated the effect of facets of grandiose narcissism on aspects of socioemotional cognition across measures of affective and cognitive empathy, Theory of Mind, and emotional intelligence, while also controlling for general reasoning ability. Across both studies, agentic facets of narcissism were found to be positively related to perceived SECA, whereas antagonistic facets of narcissism were found to be negatively related to perceived SECA. However, both narcissism facets were negatively related to actual SECA. Exploratory condition-based regression analyses further showed that agentic narcissists had a higher directed discrepancy between perceived and actual SECA: they self-enhanced their socioemotional capacities. Implications of these results for the multifaceted theoretical understanding of the narcissism-SECA link are discussed.
The ability of an organism to change its phenotype in response to different environments, termed plasticity, is a particularly important characteristic to enable sessile plants to adapt to rapid changes in their surroundings. Plasticity is a quantitative trait that can provide a fitness advantage and mitigate negative effects due to environmental perturbations. Yet, its genetic basis is not fully understood. Alongside technological limitations, the main challenge in studying plasticity has been the selection of suitable approaches for quantification of phenotypic plasticity. Here, we propose a categorization of the existing quantitative measures of phenotypic plasticity into nominal and relative approaches. Moreover, we highlight the recent advances in the understanding of the genetic architecture underlying phenotypic plasticity in plants. We identify four pillars for future research to uncover the genetic basis of phenotypic plasticity, with emphasis on development of computational approaches and theories. These developments will allow us to perform specific experiments to validate the causal genes for plasticity and to discover their role in plant fitness and evolution.
Work has become more precarious in recent years. Although this claim is more or less uncontested among social scientists, there are still many questions that have not yet been conclusively answered. What exactly constitutes precariousness? How should it be operationalized and measured? How does the character of precarious employment vary across organizations, occupations, demographic groups, and countries?
The edited volume by Arne Kalleberg and Steven Vallas seeks to provide answers to these and related questions. Sociologists from around the world employed different methodologies in a broad range of economic sectors and countries to identify the origins, manifestations, and consequences of precarious work. The different contributions not only illustrate the great heterogeneity that exists within precarious employment but also point to some central features of precarious work independent of the geographical context in which it occurs. Moreover, they highlight some challenges for the study of precarious work.
First, drawing on their earlier work, Kalleberg and Vallas conceptualize precarious employment as work that is characterized by uncertainty and insecurity with regard to pay and the stability of the work arrangement; workers in precarious jobs only have limited access to social benefits and statutory protections and bear the entrepreneurial risk of the employment relationship. This broad definition not only captures various forms of nonstandard employment, such as temporary employment, part-time work, or one-person businesses, but also covers informal workers or workers who are at risk of losing their jobs. Nonetheless, this definition does not seem to be broad enough or specific enough to fit the needs of all types of research and to appropriately capture the multifaceted nature of precarious work. Kiersztyn, for example, shows the necessity to distinguish between objective and subjective insecurities when measuring precarious work. Likewise, Rogan et al. point out that the concept of “precarious employment” has little resonance in the developing world, where most of the workforce is at or near poverty and informal work is the default employment type.
Second, the book repeatedly illustrates that the increase in precarious work can be attributed to the rise of neoliberal doctrines and practices, the deinstitutionalization of organized workers, and the dismantling of the welfare state. This applies not only to the United States, where market logics have often been equated with economic freedom, but also to countries like Germany with its corporatist tradition and a strong welfare state (Brady and Biegert) as well as to emerging economies like India (Sapkal and Sundar). In the opening chapter, Pulignano, moreover, convincingly argues that the institutional determinants of precariousness should not only be sought at the national level but that the supranational context plays a major role when it comes to explaining precarity.
Third, by focusing on different aspects of precariousness and employment, the book shows the need for differentiation when studying precarious work. This is nicely illustrated by the following three chapters, which draw different conclusions on the gendered nature of precarious employment. Wallace and Kwak study the rise of “bad jobs” in U.S. metropolitan areas and show that men’s work became more precarious during the Great Financial Crisis. By contrast, Banch and Hanley, who have investigated the prevalence of different forms of nonstandard work since the 1980s in the United States, show that the risk of working in precarious jobs has declined over time for men. Likewise, Witteveen shows that the employment trajectories of young men are less precarious than those of young women in the United States. These seemingly contradictory claims stem from the fact that the authors focused on different aspects of precariousness, used different methodologies and datasets, and examined slightly different populations and time frames. Research on precarious work is hence not yet done.
Fourth, precarious work is certainly no longer a characteristic only of those with low levels of education but has increasingly become common among professional and technical workers as well. It may come in disguise and is oftentimes perceived as an opportunity, a means of career advancement, and a personal choice. These disguises and perceptions are evident in the chapter by Zukin and Papadantonakis on the unpaid work performed by programmers in hackathons, the chapter by Rao on young professionals in international organizations, and, to some degree, the chapter by Williams on professional female workers in the oil and gas industry.
These insights (and more that are not mentioned here) make the book relevant and interesting to read. A summary chapter synthesizing the diverse findings, and potentially also outlining some of the methodological challenges in the study of precarious work, would have been a fitting close to the book. Furthermore, such a summation would have been the place to speculate about the consequences of recent changes in the world of work, such as the rise of the gig economy and cloud or crowd work, which add new forms of precarity to the ones that we have known thus far.
Although it has primarily been written for an academic audience, the book is a highly commendable and enjoyable read for both social scientists and practitioners such as labor activists, human resources managers, and policy makers. Moreover, the book is certainly a valuable teaching resource suitable for graduate and master’s seminars in sociology due to its broad coverage of various aspects of precariousness, geographical regions, and methodological approaches.
Inducible defences against predation are widespread in the natural world, allowing prey to economise on the costs of defence when predation risk varies over time or is spatially structured. Through interspecific interactions, inducible defences have major impacts on ecological dynamics, particularly predator-prey stability and phase lag. Researchers have developed multiple distinct modelling approaches, each reflecting assumptions appropriate for particular ecological communities. Yet the impact of inducible defences on ecological dynamics can be highly sensitive to the modelling approach used, making the choice of model a critical decision that affects interpretation of the dynamical consequences of inducible defences. Here, we review three existing approaches to modelling inducible defences: Switching Function, Fitness Gradient and Optimal Trait. We assess when and how the dynamical outcomes of these approaches differ from each other, from classic predator-prey dynamics, and from commonly observed eco-evolutionary dynamics with evolving, but non-inducible, prey defences. We point out that Switching Function models tend to stabilise population dynamics, and that Fitness Gradient models should be used with care, as their differences from true evolutionary dynamics can be important. We discuss the advantages of each approach for applications to ecological systems with particular features, with the goal of providing guidelines for future researchers to build on.
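The Switching Function idea described above can be made concrete with a minimal numerical sketch: a logistic prey-predator model in which the prey's defence level follows a sigmoidal function of predator density, reducing the predator's attack rate at a cost to prey growth. All parameter values and the function `switching_defence` are illustrative assumptions, not taken from the review.

```python
import math

def switching_defence(predators, threshold=1.0, steepness=5.0):
    # Sigmoidal switching function (illustrative assumption): defence
    # rises from 0 to 1 as predator density crosses the threshold.
    return 1.0 / (1.0 + math.exp(-steepness * (predators - threshold)))

def simulate(n0=1.0, p0=0.5, steps=20000, dt=0.01):
    """Euler-integrate prey (n) and predator (p) densities, with an
    induced defence that lowers the attack rate at a growth cost."""
    r, k = 1.0, 3.0          # prey growth rate and carrying capacity
    a_max, cost = 1.2, 0.3   # maximum attack rate, growth cost of defence
    e, m = 0.5, 0.3          # conversion efficiency, predator mortality
    n, p = n0, p0
    for _ in range(steps):
        d = switching_defence(p)            # current defence level
        attack = a_max * (1.0 - 0.8 * d)    # defence lowers attack rate...
        growth = r * (1.0 - cost * d)       # ...at a cost to growth
        dn = growth * n * (1.0 - n / k) - attack * n * p
        dp = e * attack * n * p - m * p
        n = max(n + dn * dt, 0.0)
        p = max(p + dp * dt, 0.0)
    return n, p

n, p = simulate()
```

Replacing `switching_defence` with a fitness-gradient update of a continuous trait, or with an instantaneous optimisation, would yield the Fitness Gradient and Optimal Trait variants discussed in the review.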
We summarize the current state of observations of circumplanetary dust populations, including both dilute and dense rings and tori around the giant planets, ejecta clouds engulfing airless moons, and rings around smaller planetary bodies throughout the Solar System. We also discuss the theoretical models that enable these observations to be understood in terms of the sources, sinks and transport of various dust populations. The dynamics and resulting transport of the particles can be quite complex, due to the fact that their motion is influenced by neutral and plasma drag, radiation pressure, and electromagnetic forces, all in addition to gravity. The relative importance of these forces depends on the environment, as well as the makeup and size of the particles. Possible dust sources include the generation of ejecta particles by impacts, active volcanoes and geysers, and the capture of exogenous particles. Possible dust sinks include collisions with moons, rings, or the central planet, erosion due to sublimation and sputtering, and even ejection and escape from the circumplanetary environment.
There is an increasing need for an assessment of the impacts of land use and land use change (LUCC). In this context, simulation models are valuable tools for investigating the impacts of stakeholder actions or policy decisions. Agricultural landscape generators (ALGs), which systematically and automatically generate realistic but simplified representations of land cover in agricultural landscapes, can provide the input for LUCC models. We reviewed existing ALGs in terms of their objectives, design and scope. We found eight ALGs that met our definition. They were based either on generic mathematical algorithms (pattern-based) or on representations of ecological or land use processes (process-based). Most ALGs integrate only a few landscape metrics, which limits the design of the landscape pattern and thus the range of applications. For example, only a few specific farming systems have been implemented. We conclude that existing ALGs contain useful approaches that can be used for specific purposes, but that ideally, generic modular ALGs should be developed that can be used for a wide range of scenarios, regions and model types. We have compiled features of such generic ALGs and propose a possible software architecture. Considerable joint efforts are required to develop such generic ALGs, but the benefits in terms of a better understanding and the development of more efficient agricultural policies would be high.
We review our current knowledge of comet 67P/Churyumov–Gerasimenko nucleus composition as inferred from measurements made by remote sensing and in-situ instruments aboard the Rosetta orbiter and the Philae lander. Spectrophotometric properties (albedos, color indexes and Hapke parameters) of 67P/CG derived by Rosetta are discussed in the context of other comets previously explored by space missions. Composed of an assemblage of ices, organic materials and minerals, cometary nuclei exhibit very dark and red surfaces which can be described by means of spectrophotometric quantities and reproduced with laboratory measurements. The presence of surface water and carbon dioxide ices was found by Rosetta to occur at localized sites where activity driven by solar input, gaseous condensation or exposure of pristine inner layers can maintain these species on the surface. Apart from these specific areas, 67P/CG’s surface appears remarkably uniform in composition, with a predominance of organic materials and minerals. The organic compounds contain abundant hydroxyl groups and a refractory macromolecular material bearing aliphatic and aromatic hydrocarbons. The mineral components are compatible with a mixture of silicates and fine-grained opaques, including Fe-sulfides, like troilite and pyrrhotite, and ammoniated salts. In the vicinity of perihelion, several active phenomena, including the erosion of surface layers, localized activity in cliffs, fractures and pits, the collapse of overhangs and walls, and the transfer and redeposition of dust, cause the evolution of the different regions of the nucleus by inducing changes in color, composition and texture.
Analytical epigenetics (2018)
The field of epigenetics describes the relationship between genotype and phenotype that arises from the regulation of gene expression without changes to the canonical base sequence of DNA. It deals with molecular genomic information that is encoded by a rich repertoire of chemical modifications and molecular interactions. This regulation involves DNA, RNA and proteins that are enzymatically tagged with small molecular groups that alter their physical and chemical properties. It is now clear that epigenetic alterations are involved in development and disease, and thus, are the focus of intensive research. The ability to record epigenetic changes and quantify them in rare medical samples is critical for next-generation diagnostics. Optical detection offers the ultimate single-molecule sensitivity and the potential for spectral multiplexing. Here we review recent progress in ultrasensitive optical detection of DNA and histone modifications.
The protein fractions of cocoa have been implicated in influencing both the bioactive potential and the sensory properties of cocoa and cocoa products. The objective of the present review is to show the impact of different stages of cultivation and processing with regard to the changes induced in the protein fractions. Special focus has been laid on the major seed storage proteins throughout the different stages of processing. The review starts with a classical introduction of the extraction and characterization methods used, while addressing classification approaches for cocoa proteins that have evolved over time. The changes in protein composition during ripening and maturation of cocoa seeds, together with the possible modifications during post-harvest processing (fermentation, drying, and roasting), have been documented. Finally, the bioactive potential arising directly or indirectly from cocoa proteins has been elucidated. The state of the art suggests that exploration of other potentially bioactive components in cocoa needs to be undertaken, while considering the complexity of reaction products occurring during the roasting phase of post-harvest processing. Finally, the utilization of partially processed cocoa beans (e.g., fermented, with mild thermal treatment) can be recommended, providing a large reservoir of bioactive potential arising from the protein components that could be instrumental in functionalizing foods.
Flood disasters severely impact human subjective well-being (SWB). Nevertheless, few studies have examined the influence of flood events on individual well-being and how such impacts may be limited by flood protection measures. This study estimates the long-term impacts on individual subjective well-being of flood experiences, individual subjective flood risk perceptions, and household flood preparedness decisions. These effects are monetised and placed in context through a comparison with the impacts of other adverse events on well-being. We collected data from households in flood-prone areas in France. The results indicate that experiencing a flood has a large negative impact on subjective well-being that is incompletely attenuated over time. Moreover, individuals do not need to be directly affected by floods to suffer SWB losses, since subjective well-being is lower for those who expect their flood risk to increase or who have seen a neighbour being flooded. Floodplain inhabitants who prepared for flooding by elevating their home have a higher subjective well-being. A monetisation of the aforementioned well-being impacts shows that a flood requires EUR 150,000 in immediate compensation to attenuate SWB losses. The decomposition of the monetised impacts of flood experience into tangible losses and intangible effects on SWB shows that the intangible effects are about twice as large as the tangible direct monetary flood losses. Investments in flood protection infrastructure may be underfunded if the intangible SWB benefits of flood protection are not taken into account.
Accumulating behavioral and neurophysiological evidence supports the idea of language being grounded in sensorimotor processes, with indications of a functional role of motor, sensory and emotional systems in processing both concrete and abstract linguistic concepts. However, most of the available studies focused on native language speakers (L1), with only a limited number of investigations testing embodied language processing in the case of a second language (L2). In this paper we review the available evidence on embodied effects in L2 and discuss their possible integration into existing models of linguistic processing in L1 and L2. Finally, we discuss possible avenues for future research towards an integrated model of L1 and L2 sensorimotor and emotional grounding.
Sortases are enzymes occurring in the cell wall of Gram-positive bacteria. Sortase A (SrtA), the best-studied sortase class, plays a key role in covalently anchoring surface proteins with the recognition sequence LPXTG to oligoglycine units of the bacterial cell wall. This unique transpeptidase activity renders SrtA attractive for various purposes and has motivated researchers to study multiple in vivo and in vitro ligations in recent decades. This ligation technique, known as sortase-mediated ligation (SML) or sortagging, has developed into a frequently used method in basic research. The advantages are manifold: extremely high substrate specificity, simple access to substrates and enzyme, and the robust nature and easy handling of sortase A. In addition to the ligation of two proteins or peptides, early studies already included at least one artificial (peptide-equipped) substrate in sortagging reactions, which demonstrates the versatility and broad applicability of SML. Thus, SML is not only a biology-related technique but has found prominence as a major interdisciplinary research tool. In this review, we provide an overview of the use of sortase A in interdisciplinary research, mainly for protein modification, synthesis of protein-polymer conjugates and immobilization of proteins on surfaces.
Microplastics (MP) provide a unique and extensive surface for microbial colonization in aquatic ecosystems. The formation of microorganism-microplastic complexes, such as biofilms, maximizes the degradation of organic matter and horizontal gene transfer. In this context, MP affect the structure and function of microbial communities, which in turn determine the physical and chemical fate of MP. This new paradigm generates challenges for microbiology, ecology, and ecotoxicology. Dispersal of MP is concomitant with that of their associated microorganisms and their mobile genetic elements, including antibiotic resistance genes, islands of pathogenicity, and diverse metabolic pathways. Functional changes in aquatic microbiomes can alter carbon metabolism and food webs, with unknown consequences for higher organisms or human microbiomes and hence health. Here, we examine a variety of effects of MP pollution from the microbial ecology perspective, whose repercussions on aquatic ecosystems are beginning to be unraveled.
For successful growth and development, plants constantly have to gauge their environment. Plants are capable of monitoring their current environmental conditions, and they are also able to integrate environmental conditions over time and store the information induced by these cues. In a developmental context, such an environmental memory is used to align developmental transitions with favourable environmental conditions. One temperature-related example of this is the transition to flowering after experiencing winter conditions, that is, vernalization. In the context of adaptation to stress, such an environmental memory is used to improve stress adaptation even when the stress cues are intermittent. A somatic stress memory has now been described for various stresses, including extreme temperatures, drought, and pathogen infection. At the molecular level, such a memory of the environment is often mediated by epigenetic and chromatin modifications. Histone modifications in particular play an important role. In this review, we will discuss and compare different types of temperature memory and the histone modifications, as well as the reader, writer, and eraser proteins involved.
Background: Although the efficacy of Internet- and mobile-based interventions (IMIs) for anxiety is established, little is known about the intervention components responsible for therapeutic change. We conducted the first comprehensive meta-analytic review of intervention components of IMIs for adult anxiety disorders. Methods: Randomized controlled trials (RCTs) comparing IMIs for anxiety disorders to active online control groups, or IMIs to dismantled variations of the same intervention (specific components), were identified by a systematic literature search in six databases. Outcomes were validated observer-rated or self-report measures of anxiety symptom severity and treatment adherence (number of completed modules and completer rate). This meta-analytic review is registered with PROSPERO (CRD42017068268). Results: We extracted the data of 34 RCTs (with 3,724 participants) and rated the risk of bias independently by two reviewers. Random-effects meta-analyses were performed on 19 comparisons of intervention components (inter alia, different psychotherapeutic orientations, disorder-specific vs. transdiagnostic approaches, and guidance factors). IMIs had a large effect on symptom severity when compared to active online controls (standardized mean difference [SMD] of -1.67 [95% CI: -2.93, -0.42]; P = 0.009). Moreover, guided IMIs were superior to unguided interventions on symptom severity (SMD of -0.39 [95% CI: -0.59, -0.18]; P = 0.0002) and adherence (SMD of 0.38 [95% CI: 0.10, 0.66]; P = 0.007). Conclusions: Overall, the results of this meta-analysis lend further support to the efficacy of IMIs for anxiety, pointing to their potential to augment service supplies. Still, future research is needed to determine which ingredients are essential, as this meta-analytic review found no evidence for incremental effects of several single intervention components apart from guidance.
Growing attention to phytoplankton mixotrophy as a trophic strategy has led to significant revisions of traditional pelagic food web models and ecosystem functioning. Although some empirical estimates of mixotrophy do exist, a much broader set of in situ measurements are required to (i) identify which organisms are acting as mixotrophs in real time and to (ii) assess the contribution of their heterotrophy to biogeochemical cycling. Estimates are needed through time and across space to evaluate which environmental conditions or habitats favour mixotrophy: conditions still largely unknown. We review methodologies currently available to plankton ecologists to undertake estimates of plankton mixotrophy, in particular nanophytoplankton phago-mixotrophy. Methods are based largely on fluorescent or isotopic tracers, but also take advantage of genomics to identify phylotypes and function. We also suggest novel methods on the cusp of use for phago-mixotrophy assessment, including single-cell measurements improving our capacity to estimate mixotrophic activity and rates in wild plankton communities down to the single-cell level. Future methods will benefit from advances in nanotechnology, micromanipulation and microscopy combined with stable isotope and genomic methodologies. Improved estimates of mixotrophy will enable more reliable models to predict changes in food web structure and biogeochemical flows in a rapidly changing world.
We evaluated the effectiveness and acceptability of metacognitive interventions for mental disorders. We searched electronic databases and included randomized and nonrandomized controlled trials comparing metacognitive interventions with other treatments in adults with mental disorders. Primary effectiveness and acceptability outcomes were symptom severity and dropout, respectively. We performed random-effects meta-analyses. We identified Metacognitive Training (MCTrain), Metacognitive Therapy (MCTherap), and Metacognition Reflection and Insight Therapy (MERIT). We included 49 trials with 2,609 patients. In patients with schizophrenia, MCTrain was more effective than a psychological treatment (cognitive remediation, SMD = -0.39). It bordered on significance when compared with standard or other psychological treatments. In a post hoc analysis across all studies, the pooled effect was significant (SMD = -0.31). MCTrain was more effective than standard treatment in patients with obsessive-compulsive disorder (SMD = -0.40). MCTherap was more effective than a waitlist in patients with depression (SMD = -2.80) and posttraumatic stress disorder (SMD = -2.36), and more effective than psychological (cognitive-behavioural) treatments in patients with anxiety (SMD = -0.46). In patients with depression, MCTherap was not superior to psychological (cognitive-behavioural) treatment. For MERIT, the database was too small to allow solid conclusions. Acceptability of metacognitive interventions among patients was high on average. Methodological quality was mostly unclear or moderate. Metacognitive interventions are likely to be effective in alleviating symptom severity in mental disorders. Although their add-on value against existing psychological interventions remains to be established, potential advantages are their low threshold and economy.
Carbon nanomaterials doped with other lightweight elements were recently described as powerful, heterogeneous, metal-free organocatalysts, adding to their high performance in electrocatalysis. Here, recent observations in traditional catalysis are reviewed, and the underlying reaction mechanisms of the catalyzed organic transformations are explored. In some cases, these are due to specific active functional sites, but more generally the catalytic activity relates to collective properties of the conjugated nanocarbon frameworks and the electron transfer from and to the catalytic centers and substrates. It is shown that the learnings are tightly related to those of electrocatalysis; i.e., the search for better electrocatalysts also improves chemocatalysis, and vice versa. Carbon-carbon heterojunction effects and some perspectives on future possibilities are discussed at the end.
Ecological effects of alien species can be dramatic, but management and prevention of negative impacts are often hindered by crypticity of the species or their ecological functions. Ecological functions can change dramatically over time, or manifest after long periods of an innocuous presence. Such cryptic processes may lead to an underestimation of long-term impacts and constrain management effectiveness. Here, we present a conceptual framework of crypticity in biological invasions. We identify the underlying mechanisms, provide evidence of their importance, and illustrate this phenomenon with case studies. This framework has potential to improve the recognition of the full risks and impacts of invasive species.
Sexual aggression is a major public health issue worldwide, but most knowledge is derived from studies conducted in North America and Western Europe. Little research has been conducted on the prevalence of sexual aggression in developing countries, including Chile. This article presents the first systematic review of the evidence on the prevalence of sexual aggression victimization and perpetration among women and men in Chile. Furthermore, it reports differences in prevalence rates in relation to victim and perpetrator characteristics and victim–perpetrator relationships. A total of N = 28 studies were identified by a three-stage literature search, including the screening of academic databases, publications of Chilean institutions, and reference lists. A great heterogeneity was found for prevalence rates of sexual victimization, ranging between 1.0% and 51.9% for women and 0.4% and 48.0% for men. Only four studies provided perpetration rates, which varied between 0.8% and 26.8% for men and 0.0% and 16.5% for women. No consistent evidence emerged for differences in victimization rates in relation to victims’ gender, age, and education. Perpetrators were more likely to be persons known to the victim. Conceptual and methodological differences between the studies are discussed as reasons for the great variability in prevalence rates, and recommendations are provided for a more harmonized and gender-inclusive approach for future research on sexual aggression in Chile.
Electrochemical synthesis and signal generation dominate among the almost 1200 articles published annually on protein-imprinted polymers. Such polymers can be easily prepared directly on the electrode surface, and the polymer thickness can be precisely adjusted to the size of the target to enable its free exchange. In this architecture, the molecularly imprinted polymer (MIP) layer represents only one ‘separation plate’; thus, the selectivity does not reach the values of ‘bulk’ measurements. The binding of target proteins can be detected straightforwardly by their modulating effect on the diffusional permeability of a redox marker through the thin MIP films. However, this generates an ‘overall apparent’ signal, which may include nonspecific interactions in the polymer layer and at the electrode surface. Certain targets, such as enzymes or redox-active proteins, enable a more specific direct quantification of their binding to MIPs by in situ determination of enzyme activity or direct electron transfer, respectively.
Background: Small-sided games have been suggested as a viable alternative to conventional endurance training to enhance endurance performance in youth soccer players. This has important implications for long-term athlete development because it suggests that players can increase aerobic endurance through activities that closely resemble their sport of choice. Data Sources: The data sources utilised were Google Scholar, PubMed and Microsoft Academic. Study Eligibility Criteria: Studies were eligible for inclusion if interventions were carried out in male soccer players (aged < 18 years) and compared the effects of small-sided games and conventional endurance training on aerobic endurance performance. We defined small-sided games as modified [soccer] games played on reduced pitch areas, often using adapted rules and involving a smaller number of players than traditional games. We defined conventional endurance training as continuous running or extensive interval training consisting of work durations > 3 min. Study Appraisal and Synthesis Methods: The inverse-variance random-effects model for meta-analyses was used because it allocates a proportionate weight to trials based on the size of their individual standard errors and facilitates analysis whilst accounting for heterogeneity across studies. Effect sizes were represented by the standardised mean difference and presented alongside 95% confidence intervals. Results: Seven studies were included in this meta-analysis. Both modes of training were effective in increasing endurance performance. Within-mode effect sizes were both of moderate magnitude [small-sided games: 0.82 (95% confidence interval 0.05, 1.60), Z = 2.07 (p = 0.04); conventional endurance training: 0.89 (95% confidence interval 0.06, 1.72), Z = 2.10 (p = 0.04)]. There were only trivial differences [0.04 (95% confidence interval -0.36, 0.43), Z = 0.18 (p = 0.86)] between the effects on aerobic endurance performance of small-sided games and conventional endurance training.
Subgroup analyses showed mostly trivial differences between the training methods across key programming variables such as set duration (>= or < 4 min) and recovery period between sets (>= or < 3 min). Programmes that were longer than 8 weeks favoured small-sided games [effect size = 0.45 (95% confidence interval -0.12, 1.02), Z = 1.54 (p = 0.12)], with the opposite being true for conventional endurance training [effect size = -0.33 (95% confidence interval -0.79, 0.14), Z = 1.39 (p = 0.16)]. Programmes with more than 4 sets per session favoured small-sided games [effect size = 0.53 (95% confidence interval -0.52, 1.58), Z = 0.98 (p = 0.33)], with only a trivial difference between those with 4 or fewer sets [effect size = -0.13 (95% confidence interval -0.52, 0.26), Z = 0.65 (p = 0.52)]. Conclusions: Small-sided games are as effective as conventional endurance training for increasing aerobic endurance performance in male youth soccer players. This is important for practitioners, as it means that small-sided games can allow both endurance and skills training to be carried out simultaneously, thus providing a more efficient training stimulus. Small-sided games offer the same benefits as conventional endurance training; two sessions per week, with 4 sets of 4 min of activity interspersed with recovery periods of 3 min, are recommended in this population.
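The inverse-variance random-effects model described in the methods above can be sketched in a few lines. The following is a generic DerSimonian-Laird implementation with made-up effect sizes and variances, not the data of the seven included studies.

```python
def dersimonian_laird(effects, variances):
    """Pool study effect sizes with the DerSimonian-Laird random-effects
    model: each study is weighted by the inverse of its variance plus the
    between-study variance tau^2."""
    w = [1.0 / v for v in variances]            # fixed-effect weights
    sum_w = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum_w
    # Cochran's Q and the method-of-moments estimate of tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights: heterogeneity flattens the weighting
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Hypothetical standardised mean differences and their variances
smd, var = [0.9, 0.6, 1.1, 0.7], [0.10, 0.08, 0.15, 0.12]
pooled, se, tau2 = dersimonian_laird(smd, var)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the between-study variance tau^2 is zero, the weights reduce to the fixed-effect inverse variances; larger tau^2 pulls the weights toward equality, which is how the model "accounts for heterogeneity across studies".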
Objective: To critically review developments over the first fifty years of research (1967-2017) on (a) how people feel when they participate in exercise and physical activity, and (b) the implications of these responses for their willingness to become and remain active. Design: Non-systematic narrative review. Method: Representative sources were selected through a combination of computer searches and cross-referencing. Results: For over three decades, exercise psychology exhibited a fixation on the idea that exercise and physical activity make people feel better. This notion, however, seemed to contrast with evidence that most adults in industrialized countries exhibit low levels of activity. In the last two decades, a critical examination and overhaul of the methodological platform resulted in the delineation of a dose-response pattern that encompasses positive as well as negative affective responses, and revealed marked interindividual differences. An emerging literature is aimed at refining and testing integrative dual-process models that can offer specific predictions about the behaviors that may result from the interaction of automatic processes (theorized to be heavily influenced by past affective experiences) and deliberative processes (such as cognitive appraisals). Conclusions: Affective responses to exercise and physical activity are more complex than the long-popularized "feel-better" effect, encompassing both pleasant and unpleasant experiences and exhibiting marked interindividual variation. The potential of affective experiences to influence subsequent behavior offers an opportunity for an expanded theoretical perspective in exercise psychology.
In virtue of the rising demand for metal-free polymeric materials, organocatalytic polymerization has emerged and blossomed unprecedentedly in the past 15 years into an appealing research area and a powerful arsenal for polymer synthesis. In addition to the inherent merit of being metal-free, small-molecule organocatalysts have also provided opportunities to develop alternative and, in many cases, more expedient synthetic approaches toward macromolecular architectures, which play a crucial role in shaping the properties of the obtained polymers. A majority of preliminary studies exploring new catalysts, catalytic mechanisms and optimized polymerization conditions have been extended to the application of these catalytic systems to the rational design and controlled synthesis of various macromolecular architectures. Such endeavors are described in this review, categorized by architectural elements including chain structure (types, sequence and composition of monomeric units constituting the polymer chains), topological structure (the fashion in which different polymer chains are covalently attached to each other within the macromolecule) and functionality (position and amount of functional groups that endow the entire macromolecule with specific chemical, physico-chemical or biological properties). (C) 2017 Published by Elsevier B.V.
Patient involvement (PI) in research is increasingly required as a means to improve the relevance and meaningfulness of research results. PI has been widely promoted by the National Institute for Health Research in England in recent years. In Germany, widespread involvement of patients in research is still missing. The methods used to realize PI have been developed mainly in English research contexts, and detailed information on how to involve patients in systematic reviews is rare. Therefore, the aim of the study was for patients to contribute and prioritize clinically relevant outcomes for a systematic review on meta-cognitive interventions, and to evaluate a patient workshop as well as patients' perceptions of research involvement. Seven patients with experience in psychiatric care participated in our workshop. They focused on outcomes pre-defined in the review protocol (e.g., meta-cognitive or cognitive changes, symptomatology, quality of life), neglected other outcomes (like satisfaction with treatment, acceptability), and added relevant new ones (e.g., scope of action/autonomy, applicability). Overall, they rated the workshop participation positively. However, some suggested involving patients at an earlier stage and adapting the amount of information given. Further systematic reviews would benefit from the involvement of patients in the definition of other components of the review question (like patients or interventions), in the interpretation of key findings or in drafting a lay summary.
Arboreal epiphytes (plants residing in forest canopies) are present across all major climate zones and play important roles in forest biogeochemistry. The substantial water storage capacity per unit area of the epiphyte "bucket" is a key attribute underlying their capability to influence forest hydrological processes and their related mass and energy flows. It is commonly assumed that the epiphyte bucket remains saturated, or near-saturated, most of the time; thus, epiphytes (particularly vascular epiphytes) can store little precipitation, limiting their impact on the forest canopy water budget. We present evidence that contradicts this common assumption from (i) an examination of past research; (ii) new datasets on vascular epiphyte and epi-soil water relations at a tropical montane cloud forest (Monteverde, Costa Rica); and (iii) a global evaluation of non-vascular epiphyte saturation state using a process-based vegetation model, LiBry. All analyses found that the external and internal water storage capacity of epiphyte communities is highly dynamic and frequently available to intercept precipitation. Globally, non-vascular epiphytes spend <20% of their time near saturation, and regionally, including the humid tropics, model results found that non-vascular epiphytes spend roughly one-third of their time in the dry state (0-10% of water storage capacity). Even data from Costa Rican cloud forest sites found the epiphyte community was saturated only one-third of the time and that internal leaf water storage was temporally dynamic enough to aid in precipitation interception. Analysis of the epi-soils associated with epiphytes further revealed the extent to which the epiphyte bucket emptied, as even the canopy soils were often <50% saturated (29-53% of all days observed).
Results clearly show that the epiphyte bucket is more dynamic than currently assumed, meriting further research on epiphyte roles in precipitation interception, redistribution to the surface and chemical composition of "net" precipitation waters reaching the surface.
Fungi in aquatic ecosystems
(2019)
Fungi are phylogenetically and functionally diverse ubiquitous components of almost all ecosystems on Earth, including aquatic environments stretching from high montane lakes down to the deep ocean. Aquatic ecosystems, however, remain frequently overlooked as fungal habitats, although fungi potentially hold important roles for organic matter cycling and food web dynamics. Recent methodological improvements have facilitated a greater appreciation of the importance of fungi in many aquatic systems, yet a conceptual framework is still missing. In this Review, we conceptualize the spatiotemporal dimensions, diversity, functions and organismic interactions of fungi in structuring aquatic food webs. We focus on currently unexplored fungal diversity, highlighting poorly understood ecosystems, including emerging artificial aquatic habitats.
Purpose: The aims of this systematic review are to provide a critical overview of short-term memory (STM) and working memory (WM) treatments in stroke aphasia and to systematically evaluate the internal and external validity of STM/WM treatments. Method: A systematic search was conducted in February 2014 and then updated in December 2016 using 13 electronic databases. We provided descriptive characteristics of the included studies and assessed their methodological quality using the Risk of Bias in N-of-1 Trials quantitative scale (Tate et al., 2015), which was completed by 2 independent raters. Results: The systematic search and inclusion/exclusion procedure yielded 17 single-case or case-series studies with 37 participants for inclusion. Nine studies targeted auditory STM consisting of repetition and/or recognition tasks, whereas 8 targeted attention and WM, such as attention process training including n-back tasks with shapes and clock faces as well as mental math tasks. In terms of their methodological quality, quality scores on the Risk of Bias in N-of-1 Trials scale ranged from 4 to 17 (M = 9.5) on a 0-30 scale, indicating a high risk of bias in the reviewed studies. Effects of treatment were most frequently assessed on STM, WM, and spoken language comprehension. Transfer effects on communication and memory in activities of daily living were tested in only 5 studies. Conclusions: Methodological limitations of the reviewed studies make it difficult, at present, to draw firm conclusions about the effects of STM/WM treatments in poststroke aphasia. Further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of this type of intervention. To understand the underlying mechanisms of STM/WM treatment effects and how they relate to language functioning, a careful choice of outcome measures and specific hypotheses about potential improvements on these measures are required. 
Future studies need to include outcome measures of memory functioning in everyday life and psychosocial functioning more generally to demonstrate the ecological validity of STM and WM treatments.
Large earthquakes initiate chains of surface processes that last much longer than the brief moments of strong shaking. Most moderate‐ and large‐magnitude earthquakes trigger landslides, ranging from small failures in the soil cover to massive, devastating rock avalanches. Some landslides dam rivers and impound lakes, which can collapse days to centuries later, and flood mountain valleys for hundreds of kilometers downstream. Landslide deposits on slopes can remobilize during heavy rainfall and evolve into debris flows. Cracks and fractures can form and widen on mountain crests and flanks, promoting increased frequency of landslides that lasts for decades. More gradual impacts involve the flushing of excess debris downstream by rivers, which can generate bank erosion and floodplain accretion as well as channel avulsions that affect flooding frequency, settlements, ecosystems, and infrastructure. Ultimately, earthquake sequences and their geomorphic consequences alter mountain landscapes over both human and geologic time scales. Two recent events have attracted intense research into earthquake‐induced landslides and their consequences: the magnitude M 7.6 Chi‐Chi, Taiwan earthquake of 1999, and the M 7.9 Wenchuan, China earthquake of 2008. Using data and insights from these and several other earthquakes, we analyze how such events initiate processes that change mountain landscapes, highlight research gaps, and suggest pathways toward a more complete understanding of the seismic effects on the Earth's surface.
In this volume, Egeberg and Trondal put forward an ‘organizational approach to public governance’ (p. 1) that, in their view, complements existing explanations for organizational change and behaviour in governance processes (‘Understanding’) and produces relevant advice for practitioners, specifically anyone involved in reorganizing public administration (‘Design’). Following the authors’ introduction of the theoretical reasoning behind their approach (chapter 1), they present supporting findings that are based on new material (chapters 2 and 9), but mainly draw on six previously published research articles (chapters 3–8). Egeberg and Trondal conclude with possible ‘design implications’ of said findings (chapter 9). Their ‘organizational approach’ focuses on the impact of selected organizational characteristics on decision‐making in and on behalf of government organizations in policy‐making generally (‘public governance’) and administrative politics more specifically (‘meta‐governance’). The authors concentrate on three sets of ‘classical’ organizational characteristics: structure (mainly vertical and horizontal specialization), demography (personnel composition), and locus (geographical location). The conceptual part of the volume convincingly summarizes ‘formal organization matters’—arguments from the literature for each of the individual organizational factors. Their main, already well‐established argument is that the way an organization is formally set up makes some (reform) decisions more likely than others—a line of reasoning that the authors present as neglected in governance literature.
In the following five empirical chapters, the authors show that aspects of horizontal and vertical specialization—mainly operationalized by Gulick's principles of horizontal specialization and the idea of primary versus secondary affiliation of staff—affect organizational behaviour. Readers learn that whether government levels are organized according to a territorial or non‐territorial principle impacts the power relationship between levels: non‐territorial organization at the supranational level tends to empower the centre against lower levels of government. There are two chapters on the decision‐making behaviour of commissioners and officials in the European Commission, both showing that organizational affiliation trumps demographic background factors such as nationality, even with temporary staff.
Chapter 5 addresses coordination dynamics in the European multi‐level system and finds that coordination at the territorially organized national level thwarts non‐territorially organized coordination at the supranational level, resulting in the phenomenon of ‘direct’ national administration bypassing their national executives. Further, the authors show that vertical specialization—while controlling for other factors such as issue salience—has an effect on officials’ behaviour at the national level: agency officials in Norway report significantly less sensitivity towards political signals from the political executive than their colleagues in ministries. Chapter 7 discusses the relevance of geographical location for the relationship between subordinated organizations and their political executive. The authors find that the site of Norwegian agencies does not significantly affect their autonomy, influence, or inter‐institutional coordination with the superior ministry.
The last empirical chapter focuses on the effect of formal organization on meta‐governance, that is, administrative politics. Based on a qualitative case study of a reorganization process in Norway in 2003 involving the synchronized relocation of several agencies after many failed attempts, the authors conclude that administrative reforms can be politically steered and controlled through the organization of the reform process. They argue that amongst other factors the strategic exclusion of opposing actors from the reform process as well as the deliberate increase in situations demanding quick decisions (‘action rationality’, p. 119) by political leaders helps explain the reform's unexpected success. The last chapter is dedicated to the synthesis of the results and to design implications. Supported by new data from a 2016 survey among Norwegian public officials, the authors conclude that organizational position is the most important influencer of decision‐making behaviour, with educational background and previous job experience also playing a large role (p. 135). Consequently, their suggestions for practitioners involved in meta‐governance processes concentrate on aspects of the deliberate crafting of organizational specialization to shape organizational positions, and spend less time discussing location and employee demographics. The authors illustrate and contextualize their recommendations with the help of three empirical examples: organizing good governance by balancing political control and independence in the case of agencification, organizing for coping with boundary‐spanning challenges such as climate change through inter‐organizational structural arrangements, and designing permanent organizational structures for innovative reforms in the public sector (pp. 137 ff.).
This volume is an excellent compilation of theoretically informed applications of the all too often undefined ‘organization matters’ argument. It juxtaposes—particularly in the theory chapter and in the last chapter on design implications—organizational arguments against other explanations of organizational change like historical institutionalism or the garbage can model of decision‐making. However, two major aspects of the book's approach are less convincing. First, supplementary explanations such as the garbage can model that are discussed in the reflections on meta‐governance are neither argumentatively nor empirically applied to public governance; why should, for example, the ‘solutions in search of a problem’ idea only be applicable to decisions on reform policy, but not to decisions in all other policy areas? Similarly, it would have been nice to read more on the authors’ idea on the interaction between organizational factors and between them and other explanations in the empirical cases on public governance—this would have allowed the reader to get a better idea about how much formal organization matters. The view on bureaucrats’ demographic background is slightly confusing: it is presented as a competing approach (p. 7), but also as one of the main organizational factors (p. 12).
Second, as the authors themselves state, the concept of governance is about ‘steering through collective action’ (p. 3) and focuses on interactive processes, and explicitly includes non‐governmental actors in the policy‐making equation. Against this background it seems unfortunate that most of the work presented in the book takes an exclusively governmental perspective and the justification for it remains rather superficial. It would be preferable and even necessary to see the organizational arguments—at least theoretically or through discussing appropriate literature—applied to interactive governance processes involving other actors and/or to non‐bureaucratic organizations.
Regarding its methodology, the specifics of the proposed approach deserve to be addressed more systematically and critically in the book. Except for chapters 2, 3 and 5 (literature‐based studies) as well as chapter 8 (single case study), the empirical studies follow a quantitative logic and are informed by data on self‐reported behaviour through large‐N panel surveys with public officials. In terms of analysis, descriptive statistics or basic inferential statistics (linear regression) are employed. Certainly, the authors are aware of the limitations of their data sources, such as the results being possibly affected by social desirability, and they discuss and justify them in the chapters individually (e.g., on pp. 47, 89). Still, their approach could be strengthened with a more cautious account of the extent to which their choice of data and methods is able to uncover the 'causal impact of organizational factors in public governance processes' (p. 131, emphasis added) and with some suggestions for widening their methodological toolbox in the future. On this note, the survey method presented as new on p. 135 is not a particularly convincing choice. The authors do not lay out a research agenda, which is a surprising omission. This is, however, somewhat made up for by the concluding chapter's stimulating discussion of the possible real‐world implications of their findings and perspective, skilfully using organization theory as a 'craft' (p. 29).
A wide variety of processes controls the time of occurrence, duration, extent, and severity of river floods. Classifying flood events by their causative processes may assist in enhancing the accuracy of local and regional flood frequency estimates and support the detection and interpretation of any changes in flood occurrence and magnitudes. This paper provides a critical review of existing causative classifications of instrumental and preinstrumental series of flood events, discusses their validity and applications, and identifies opportunities for moving toward more comprehensive approaches. So far no unified definition of causative mechanisms of flood events exists. Existing frameworks for classification of instrumental and preinstrumental series of flood events adopt different perspectives: hydroclimatic (large-scale circulation patterns and atmospheric state at the time of the event), hydrological (catchment-scale precipitation patterns and antecedent catchment state), and hydrograph-based (indirectly considering generating mechanisms through their effects on hydrograph characteristics). All of these approaches intend to capture the flood generating mechanisms and are useful for characterizing the flood processes at various spatial and temporal scales. However, uncertainty analyses with respect to indicators, classification methods, and data to assess the robustness of the classification are rarely performed, which limits the transferability across different geographic regions. It is argued that more rigorous testing is needed. There are opportunities for extending classification methods to include indicators of space-time dynamics of rainfall, antecedent wetness, and routing effects, which will make the classification schemes even more useful for understanding and estimating floods. This article is categorized under: Science of Water > Water Extremes; Science of Water > Hydrological Processes; Science of Water > Methods.
The 2015 Paris Agreement (PA) has been widely hailed as a diplomatic triumph and a breakthrough in global climate cooperation. However, it is commonly accepted that the PA's collective goal—keeping global warming “well below” 2°C above preindustrial levels—remains ambitious. Making matters even more challenging, in 2017, global CO2 emissions resumed growth after 3 years of near standstill. In 2018, this growth accelerated. It is therefore extremely important that the PA's institutional architecture meet expectations concerning its ability to induce member countries to promise and deliver emissions reductions. This study offers a review of the rapidly growing literature on the PA, to assess its strengths and weaknesses, its significance, and its prospects. We focus on evaluations of its institutional structure and its ability to induce member countries to implement policies. We frame the issues as a trilemma: the challenge of simultaneously satisfying all three main conditions for effectiveness—broad participation, deep commitments, and satisfactory compliance rates. Based on our review, we conclude that the key challenge for the PA will likely be to facilitate sufficiently fast ratcheting‐up of nationally determined contributions, while keeping compliance rates high.
Malnutrition is widespread in older people and represents a major geriatric syndrome with multifactorial etiology and severe consequences for health outcomes and quality of life. The aim of the present paper is to describe current approaches and evidence regarding malnutrition treatment and to highlight relevant knowledge gaps that need to be addressed. Recently published guidelines of the European Society for Clinical Nutrition and Metabolism (ESPEN) provide a summary of the available evidence and highlight the wide range of different measures that can be taken—from the identification and elimination of potential causes to enteral and parenteral nutrition—depending on the patient's abilities and needs. However, more than half of the recommendations therein are based on expert consensus because of a lack of evidence, and only three concern patient-centred outcomes. Future research should further clarify the etiology of malnutrition and identify the most relevant causes in order to prevent malnutrition. Based on limited and partly conflicting evidence and the limitations of existing studies, it remains unclear which interventions are most effective in which patient groups, and whether specific situations, diseases or etiologies of malnutrition require specific approaches. Patient-relevant outcomes such as functionality and quality of life need more attention, and research methodology should be harmonised to allow for the comparability of studies.
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in high-dimensional systems in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters and ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts of developing particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, including pseudo-code, and generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
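The weighting and resampling steps at the core of the particle filters reviewed above can be sketched for a toy one-dimensional state-space model. This is a minimal bootstrap particle filter with illustrative parameters (AR(1) dynamics, Gaussian noise, multinomial resampling), not any of the high-dimensional geoscience schemes discussed in the review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D state-space model (all parameters illustrative):
#   x_t = 0.9 * x_{t-1} + process noise,  y_t = x_t + observation noise.
T, N = 50, 500                     # time steps, particles
sig_proc, sig_obs = 0.5, 0.5

# Simulate a synthetic truth and noisy observations of it.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + sig_proc * rng.standard_normal()
y_obs = x_true + sig_obs * rng.standard_normal(T)

# Bootstrap particle filter: propagate particles with the model (the
# simplest proposal density), weight by the observation likelihood,
# then resample to combat weight degeneracy.
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + sig_proc * rng.standard_normal(N)
    log_w = -0.5 * ((y_obs[t] - particles) / sig_obs) ** 2
    w = np.exp(log_w - log_w.max())    # subtract max for numerical stability
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(f"filter RMSE: {rmse:.2f}")
```

The inefficiency in high dimensions that the review discusses stems from this same weighting step: with many independent observations, a single likelihood evaluation concentrates nearly all weight on one particle, which is what better proposal densities, localization, and adaptive resampling are designed to avoid.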
Aggression Replacement Training (ART) is a multimodal intervention for chronically aggressive youth. The program has been frequently administered in a variety of samples in the original form or in modified versions. This review examines evaluations of the efficacy of ART on aggressive behavior and secondary outcomes in children and youth, including modifications of ART and evaluations of the original version not covered by earlier reviews. Method: Scholarly databases were searched to identify 10 articles reporting 11 independent studies evaluating the efficacy of ART in reducing aggressive behavior and improving anger control, social skills, and moral reasoning in children and youth. Results: The majority of studies found positive effects of ART on aggression and other outcomes related to anger control, social skills, and moral reasoning. However, most studies were based on small samples, and few included a control group to evaluate intervention success. Conclusions: The studies reviewed in this paper tentatively suggest that ART is an efficacious intervention to reduce aggressive behavior and improve anger control, social skills, and moral reasoning in at-risk children and youth. However, this conclusion is qualified by a number of methodological limitations that highlight the need for further, more rigorous evaluation studies.
Alcohol in the aging brain
(2019)
As our society grows older, new challenges for medicine and healthcare emerge. Age-related changes of the body have been observed in essential body functions, particularly in the locomotor system, in the cardiovascular system and in cognitive functions concerning both brain plasticity and changes in behavior. Nutrition and lifestyle, such as nicotine intake and chronic alcohol consumption, also contribute to biological changes in the brain. This review addresses the effect of alcohol consumption on cognitive decline, changes in brain plasticity in the aging brain and on cardiovascular health in aging. Thus, studies on the interplay of chronic alcohol intake and either cognitive decline or cognitive preservation are outlined. Because of the inconsistency in the literature as to whether alcohol consumption preserves cognitive functions in the aging brain or accelerates cognitive decline, it is crucial to consider individual contributing factors such as culture, health and lifestyle in future studies.