The purpose of this conceptual article is to advance theory and research on one critical aspect of the context of ethnic–racial identity (ERI) development: ethnic–racial settings, or the objective and subjective nature of group representation within an individual's context. We present a new conceptual framework that consists of four dimensions: (1) perspective (that settings can be understood in both objective and subjective terms); (2) differentiation (how groups are defined in a setting); (3) heterogeneity (the range of groups in a setting); and (4) proximity (the distance between the individual and the setting). Clarifying this complexity is crucial for advancing a more coherent understanding of how ethnic–racial settings are related to ERI development.
Knowledge of the present-day crustal in-situ stress field is key to understanding geodynamic processes such as global plate tectonics and earthquakes. It is also essential for the management of geo-reservoirs and underground storage sites for energy and waste. Since 1986, the World Stress Map (WSM) project has systematically compiled the orientation of the maximum horizontal stress (S_Hmax). For the 30th anniversary of the project, the WSM database has been updated significantly to 42,870 data records, double the amount of data in the 2008 database release. The update focuses on areas with previously sparse data coverage in order to resolve the stress pattern on different spatial scales. In this paper, we present details of the new WSM database release 2016 and an analysis of global and regional stress patterns. With the higher data density, we can now resolve stress pattern heterogeneities from plate-wide to local scales. In particular, we show two examples of 40°-60° S_Hmax rotations within 70 km. These rotations can be used as proxies to better understand the relative importance of the plate boundary forces that control the long-wavelength pattern compared with regional and local controls on the crustal stress state. In the new WSM project phase IV, which started in 2017, we will continue to refine the information on the S_Hmax orientation and the stress regime. However, we will also focus on the compilation of stress magnitude data, as this information is essential for the calibration of geomechanical-numerical models. Such models enable us to derive a continuous 3-D description of the stress tensor from the point-wise and incomplete stress tensor information provided in the WSM database. These forward models are required for the safety assessment of anthropogenic activities in the underground and for a better understanding of tectonic processes such as the earthquake cycle.
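Because S_Hmax azimuths are axial data (defined only modulo 180°), a mean orientation cannot be taken as a plain arithmetic average; the standard remedy is to double the angles before vector averaging. A minimal sketch of this idea (the function name and sample azimuths are illustrative, not taken from the WSM database):

```python
import math

def mean_shmax_orientation(azimuths_deg):
    """Mean orientation of axial data (e.g. S_Hmax azimuths, defined mod 180 deg).

    Axial data are doubled before vector averaging so that 10 deg and 170 deg,
    which describe nearly the same axis, do not cancel each other out.
    """
    # Double the angles to map the 0-180 deg axial range onto the full circle.
    s = sum(math.sin(math.radians(2 * a)) for a in azimuths_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in azimuths_deg)
    mean = math.degrees(math.atan2(s, c)) / 2.0  # halve to undo the doubling
    return mean % 180.0

# Example: tightly clustered azimuths straddling the 0/180 deg wrap-around;
# a naive average would give ~90 deg, the axial mean stays near 0/180 deg.
print(mean_shmax_orientation([170.0, 175.0, 5.0, 10.0]))
```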
Introduction:
We aim to highlight the utility of this model in analyzing the psycho-behavioral implications of familial cancer, by presenting the scientific literature that has used Leventhal's model as its theoretical framework.
Material and methods:
A systematic search was performed in six databases (EBSCO, ScienceDirect, PubMed Central, ProQuest, Scopus, and Web of Science) with empirical studies published between 2006 and 2015 in English with regard to the Common Sense Model of Self-Regulation (CSMR) and familial/hereditary cancer. The key words used were: illness representations, common sense model, self regulatory model, familial/hereditary/genetic cancer, genetic cancer counseling. The selection of studies followed the PRISMA-P guidelines (Moher et al., 2009; Shamseer et al., 2015), which suggest a three-stage procedure.
Results:
When their health is threatened, individuals create their own cognitive and emotional representations of the disease; these representations are influenced by the presence of a family history of cancer and determine whether or not a salutogenic behavior is adopted. Disease representations, particularly the cognitive ones, can be predictors of responses to health threats that determine different health behaviors. Age, family history of cancer, and worry about the disease are factors associated with undergoing screening. No consensus has been reached as to which factors act as predictors of compliance with cancer screening programs.
Conclusions:
This model can generate interventions that are conceptually clear as well as useful in regulating individuals' behaviors, both by reducing the risk of developing the disease and by managing health and/or disease as favorably as possible.
We analyze a general class of self-adjoint difference operators H_ε = T_ε + V_ε on ℓ²((εℤ)^d), where V_ε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest to a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian H_ε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H_ε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows by microlocal techniques that the first n eigenvalues of H_ε converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝ^d located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H_ε. These are obtained from eigenfunctions or quasimodes for the operator H_ε acting on L²(ℝ^d), via restriction to the lattice (εℤ)^d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ²-estimates for the difference between eigenfunctions of the Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for the interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on the phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
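Schematically, the two-well tunneling picture described above can be summarized as follows; the symbols w_ε, S_0, and E_0 are our shorthand for the interaction term, the Finsler distance between the two wells, and the unperturbed low-lying eigenvalue, and are not fixed in the abstract itself:

```latex
% Exponential decay of Dirichlet eigenfunctions in the Finsler distance d
% induced by H_epsilon, with x_j denoting a well:
\[
  |u_{\varepsilon}(x)| \leq C \, e^{-d(x,\,x_j)/\varepsilon},
\]
% and, for a symmetric double well with minima x_1, x_2 and
% S_0 = d(x_1, x_2), the splitting of the two lowest eigenvalues:
\[
  E_{\pm} = E_0 \pm |w_{\varepsilon}|
            + \mathcal{O}\!\left(e^{-2S_0/\varepsilon}\right),
  \qquad
  w_{\varepsilon} \sim e^{-S_0/\varepsilon} \sum_{k \geq 0} a_k \varepsilon^{k},
\]
% so the remainder is exponentially small and roughly quadratic relative
% to the interaction term, as stated in the abstract.
```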
Organic semiconductors are of great interest for a broad range of optoelectronic applications due to their solution processability, chemical tunability, highly scalable fabrication, and mechanical flexibility. In contrast to traditional inorganic semiconductors, organic semiconductors are intrinsically disordered systems and therefore exhibit much lower charge carrier mobilities, the Achilles heel of organic photovoltaic cells. In this progress review, the authors discuss recent important developments concerning the impact of charge carrier mobility on charge transfer state dissociation and on the interplay of free charge extraction and recombination. By comparing the mobilities obtained on different timescales by different techniques, the authors highlight the dispersive nature of transport in these materials and how this is reflected in the key processes defining the efficiency of organic photovoltaics.
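As a concrete example of how a mobility is extracted on one particular timescale, a time-of-flight estimate uses only the film thickness, the applied bias, and the measured transit time. The numbers below are hypothetical, and, as the review stresses, dispersive transport makes the extracted value depend on the timescale probed:

```python
def tof_mobility(thickness_m, voltage_v, transit_time_s):
    """Time-of-flight mobility estimate: mu = d^2 / (V * t_transit).

    Drift velocity v = d / t_transit and applied field E = V / d give
    mu = v / E = d^2 / (V * t_transit). Assumes a uniform field and
    non-dispersive transport; in disordered organic films the extracted
    value depends on the timescale probed.
    """
    return thickness_m ** 2 / (voltage_v * transit_time_s)

# Hypothetical numbers: 1 um film, 10 V bias, 2 us transit time.
mu = tof_mobility(1e-6, 10.0, 2e-6)
print(f"{mu:.1e} m^2/(V.s)")  # 5.0e-08 m^2/(V.s), i.e. 5e-4 cm^2/(V.s)
```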
Serious knee pain and related disability have an annual prevalence of approximately 25% among those over the age of 55 years. As curative treatments for the common knee problems are not available to date, knee pathologies typically progress and often lead to osteoarthritis (OA). While the roles that the meniscus plays in knee biomechanics are well characterized, the biological mechanisms underlying meniscus pathophysiology and its roles in knee pain and OA progression are not fully clear. Experimental treatments for knee disorders that are successful in animal models often produce unsatisfactory results in humans, owing to species differences or the inability to fully replicate disease progression in experimental animals. The use of animals with spontaneous knee pathologies, such as dogs, can significantly help to address this issue. As the microscopic and macroscopic anatomy of the canine and human menisci are similar, spontaneous meniscal pathologies in canine patients are thought to be highly relevant for translational medicine. However, it is not clear whether the biomolecular mechanisms of pain, degradation of the extracellular matrix, and inflammatory responses are species dependent. The aims of this review are (1) to provide an overview of the anatomy, physiology, and pathology of the human and canine meniscus; (2) to compare the known signaling pathways involved in spontaneous meniscus pathology between the two species; and (3) to assess the relevance of dogs with spontaneous meniscal pathology as a translational model. Understanding these mechanisms in the human and canine meniscus can help to advance diagnostic and therapeutic strategies for painful knee disorders and improve clinical decision making.
Over a lifetime, rhythmic contractions of the heart provide a continuous flow of blood throughout the body. An essential morphogenetic process during cardiac development which ensures unidirectional blood flow is the formation of cardiac valves. These structures are largely composed of extracellular matrix and of endocardial cells, a specialized population of endothelial cells that line the interior of the heart and that are subjected to changing hemodynamic forces. Recent studies have significantly expanded our understanding of this morphogenetic process. They highlight the importance of the mechanobiology of cardiac valve formation and show how biophysical forces due to blood flow drive biochemical and electrical signaling required for the differentiation of cells to produce cardiac valves.
Combining training of muscle strength and cardiorespiratory fitness within a training cycle could increase athletic performance more than single-mode training. However, the physiological effects produced by each training modality could also interfere with each other, improving athletic performance less than single-mode training. Because anthropometric, physiological, and biomechanical differences between young and adult athletes can affect the responses to exercise training, young athletes might respond differently to concurrent training (CT) compared with adults. Thus, the aim of the present systematic review with meta-analysis was to determine the effects of concurrent strength and endurance training on selected physical fitness components and athletic performance in youth. A systematic literature search of PubMed and Web of Science identified 886 records. The studies included in the analyses examined children (girls age 6-11 years, boys age 6-13 years) or adolescents (girls age 12-18 years, boys age 14-18 years), compared CT with single-mode endurance (ET) or strength training (ST), and reported at least one strength/power- (e.g., jump height), endurance- (e.g., peak VO2, exercise economy), or performance-related (e.g., time trial) outcome. We calculated weighted standardized mean differences (SMDs). CT compared to ET produced small effects in favor of CT on athletic performance (n = 11 studies, SMD = 0.41, p = 0.04) and trivial effects on cardiorespiratory endurance (n = 4 studies, SMD = 0.04, p = 0.86) and exercise economy (n = 5 studies, SMD = 0.16, p = 0.49) in young athletes. A sub-analysis of chronological age revealed a trend toward larger effects of CT vs. ET on athletic performance in adolescents (SMD = 0.52) compared with children (SMD = 0.17). CT compared with ST had small effects in favor of CT on muscle power (n = 4 studies, SMD = 0.23, p = 0.04).
In conclusion, CT is more effective than single-mode ET or ST in improving selected measures of physical fitness and athletic performance in youth. Specifically, CT compared with ET improved athletic performance in children and particularly adolescents. Finally, CT was more effective than ST in improving muscle power in youth.
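The pooled-effect computation behind such SMD values can be sketched as follows. This is a simplified, fixed-effect-style summary that weights by sample size rather than by inverse variance, and all group means, SDs, and sample sizes below are invented for illustration:

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    # Pooled standard deviation of the two groups.
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' small-sample correction factor
    return d * j

def weighted_smd(gs, ns):
    """Summary effect: weight each study's g by its total sample size
    (a simplification of the usual inverse-variance weighting)."""
    return sum(g * n for g, n in zip(gs, ns)) / sum(ns)

# Hypothetical jump-height data (cm) from two studies: (CT group vs. ET group).
g1 = hedges_g(32.1, 4.0, 15, 30.0, 4.2, 15)
g2 = hedges_g(28.5, 3.5, 12, 27.9, 3.8, 12)
print(round(weighted_smd([g1, g2], [30, 24]), 2))  # small positive pooled effect
```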
Nowadays, the role of trace elements (TE) is of growing interest, because dyshomeostasis of selenium (Se), manganese (Mn), zinc (Zn), and copper (Cu) is thought to be a risk factor for several diseases. Research therefore focuses on identifying new biomarkers of TE status that would allow for a more reliable description of an individual's TE and health status. This review reveals a lack of well-defined, sensitive, and selective biomarkers and summarizes the technical limitations of measuring them. As a consequence, the capacity to assess the relationship between dietary TE intake, homeostasis, and health is restricted, although such assessments would provide the basis for defining adequate intake levels of single TE in both healthy and diseased humans. Beyond that, our knowledge is even more limited with respect to the real-life situation of combined TE intake and putative interactions between single TE.
The AlpArray seismic network
(2018)
The AlpArray programme is a multinational, European consortium to advance our understanding of orogenesis and its relationship to mantle dynamics, plate reorganizations, surface processes and seismic hazard in the Alps-Apennines-Carpathians-Dinarides orogenic system. The AlpArray Seismic Network has been deployed with contributions from 36 institutions from 11 countries to map physical properties of the lithosphere and asthenosphere in 3D and thus to obtain new, high-resolution geophysical images of structures from the surface down to the base of the mantle transition zone. With over 600 broadband stations operated for 2 years, this seismic experiment is one of the largest simultaneously operated seismological networks in the academic domain, employing hexagonal coverage with station spacing at less than 52 km. This dense and regularly spaced experiment is made possible by the coordinated coeval deployment of temporary stations from numerous national pools, including ocean-bottom seismometers, which were funded by different national agencies. They combine with permanent networks, which also required the cooperation of many different operators. Together these stations ultimately fill coverage gaps. Following a short overview of previous large-scale seismological experiments in the Alpine region, we here present the goals, construction, deployment, characteristics and data management of the AlpArray Seismic Network, which will provide data that is expected to be unprecedented in quality to image the complex Alpine mountains at depth.
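The station count implied by hexagonal coverage can be sanity-checked with simple lattice geometry: in a triangular lattice with spacing a, each station serves a hexagonal cell of area (√3/2)a². The footprint area below is our rough guess for illustration only, not an AlpArray figure:

```python
import math

def stations_for_hex_coverage(area_km2, spacing_km):
    """Rough station count for a triangular (hexagonal-coverage) lattice.

    Each station 'owns' a hexagonal cell of area (sqrt(3)/2) * spacing^2,
    so the count is simply the total area divided by the cell area.
    """
    cell_area = (math.sqrt(3) / 2.0) * spacing_km**2
    return math.ceil(area_km2 / cell_area)

# Illustrative numbers only: a ~1,400,000 km^2 footprint at 52 km spacing.
print(stations_for_hex_coverage(1_400_000, 52.0))  # 598, near the >600 deployed
```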
We review the evidence for a putative early 21st-century divergence between global mean surface temperature (GMST) and Coupled Model Intercomparison Project Phase 5 (CMIP5) projections. We provide a systematic comparison between temperatures and projections using historical versions of GMST products and historical versions of model projections that existed at the times when claims about a divergence were made. The comparisons are conducted with a variety of statistical techniques that correct for problems in previous work, including using continuous trends and a Monte Carlo approach to simulate internal variability. The results show that there is no robust statistical evidence for a divergence between models and observations. The impression of a divergence early in the 21st century was caused by various biases in model interpretation and in the observations, and was unsupported by robust statistics.
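A Monte Carlo check of the kind described can be sketched as follows: superimpose AR(1) "internal variability" on a forced trend many times, then ask whether an observed trend falls inside the simulated spread. All parameter values (forced trend, AR(1) coefficient, noise level, observed trend) are illustrative choices of ours, not the paper's:

```python
import random
import statistics

def ols_trend(y):
    """Least-squares slope of y against time steps 0..n-1."""
    n = len(y)
    xm, ym = (n - 1) / 2.0, statistics.fmean(y)
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def simulated_trends(forced_slope, n_years, n_sims, phi=0.6, sigma=0.1, seed=1):
    """Monte Carlo trend distribution: forced trend + AR(1) internal variability."""
    rng = random.Random(seed)
    trends = []
    for _ in range(n_sims):
        noise, series = 0.0, []
        for t in range(n_years):
            noise = phi * noise + rng.gauss(0.0, sigma)  # AR(1) noise
            series.append(forced_slope * t + noise)
        trends.append(ols_trend(series))
    return trends

# Hypothetical numbers: 0.02 K/yr forced trend over 15 years, 1000 realizations.
trends = sorted(simulated_trends(0.02, 15, 1000))
lo, hi = trends[25], trends[974]  # approximate central 95% range
obs = 0.005  # a hypothetical 'slowdown' observed trend
print(lo <= obs <= hi)
```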
Unravelling the spatiotemporal evolution of the Cenozoic Andean (Altiplano-Puna) plateau has been one of the most intriguing problems of South American geology. Despite a number of investigations, the early deformation and uplift history of this area remained largely enigmatic. This paper analyses the Paleogene tectono-sedimentary history of the Casa Grande Basin, in the present-day transition zone between the northern sector of the Puna Plateau and the northern part of the Argentine Eastern Cordillera. Our detailed mapping of synsedimentary structures records the onset of regional contractional deformation during the middle Eocene, revealing reactivation of Cretaceous extensional structures and the development of doubly vergent thrusts. This is in agreement with records from other southern parts of the Puna Plateau and the Eastern Cordillera. These observations indicate the existence of an Eocene broken foreland setting within the region, characterized by low-lying compressional basins and ranges with spatially disparate sectors of deformation, which was subsequently subjected to regional uplift resulting in the attainment of present-day elevations during the Neogene.
Aim:
To scrutinize to what extent modern ideas about nutrition effects on growth are supported by historic observations in European populations.
Method:
We reviewed 19th and early 20th century paediatric journals in the Staatsbibliothek zu Berlin, the third largest European library, which holds an almost complete collection of the German medical literature. During a three-day visit, we inspected 15 bookshelf meters of literature not available in electronic format.
Results:
Late 19th and early 20th century breastfed European infants and children, independent of social strata, grew far below World Health Organisation (WHO) standards, and 15-30% of adequately fed children would be classified as stunted by the WHO standards. Historic sources indicate that growth in height is largely independent of the extent and nature of the diet. Height catch-up after starvation was greater than the catch-up reported in modern nutrition intervention studies and allowed for unimpaired adult height.
Conclusion:
Historical studies are indispensable for understanding why stunting does not equate with undernutrition and why modern diet interventions frequently fail to prevent stunting. The appropriateness and effect size of modern nutrition interventions on growth need revision.
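The WHO classification mentioned here is a simple z-score rule: a child counts as stunted when height-for-age falls more than two standard deviations below the reference median. A minimal sketch, where the reference median and SD are illustrative stand-ins rather than values from the actual WHO tables:

```python
def height_for_age_z(height_cm, median_cm, sd_cm):
    """Height-for-age z-score (HAZ) against a reference population."""
    return (height_cm - median_cm) / sd_cm

def is_stunted(haz):
    """WHO convention: stunting is height-for-age below -2 SD of the standard."""
    return haz < -2.0

# Hypothetical 5-year-old, 100.5 cm, against illustrative reference values
# (median 110.0 cm, SD 4.6 cm -- stand-ins, not actual WHO table entries).
haz = height_for_age_z(100.5, 110.0, 4.6)
print(is_stunted(haz))  # True: HAZ is about -2.07, just below the -2 cut-off
```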
Objectives:
The use of simulated and standardized patients (SP) is widely accepted in the medical field and, from there, is beginning to disseminate into clinical psychology and psychotherapy. The purpose of this study was therefore to systematically review barriers and facilitators that should be considered in the implementation of SP interventions specific to clinical psychology and psychotherapy.
Methods:
Following current guidelines, a scoping review was conducted. The literature search focused on the MEDLINE, PsycINFO and Web of Science databases, including Dissertation Abstracts International. After screening for titles and abstracts, full texts were screened independently and in duplicate according to our inclusion criteria. For data extraction, a pre-defined form was piloted and used. Units of meaning with respect to barriers and facilitators were extracted and categorized inductively using content-analysis techniques. From the results, a matrix of interconnections and a network graph were compiled.
Results:
The 41 included publications were mainly in the fields of psychiatry and mental health nursing, as well as in training and education. The detailed category system contrasts four supercategories, i.e., which organizational and economic aspects to consider, which persons to include as eligible SPs, how to develop adequate scenarios, and how to authentically and consistently portray mental health patients.
Conclusions:
Publications focused especially on the interrelation between authenticity and consistency of portrayals, on how to evoke empathy in learners, and on economic and training aspects. A variety of recommendations for implementing SP programs, from planning to training, monitoring, and debriefing, is provided, for example, ethical screening of and ongoing support for SPs.
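The "matrix of interconnections" compiled from extracted categories amounts to counting pair-wise co-occurrences, which then serve as edge weights of a network graph. A minimal sketch using invented shorthand labels for the four supercategories; the actual category names and counts are in the review, not reproduced here:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_matrix(publications):
    """Count how often pairs of categories are coded in the same publication.

    `publications` is a list of category sets; the resulting pair counts are
    the edge weights of a category network (nodes = categories).
    """
    pairs = Counter()
    for cats in publications:
        for a, b in combinations(sorted(cats), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy coding of three publications with shorthand supercategory labels.
coded = [
    {"organization", "scenario"},
    {"portrayal", "scenario", "sp-selection"},
    {"portrayal", "scenario"},
]
edges = cooccurrence_matrix(coded)
print(edges[("portrayal", "scenario")])  # -> 2
```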
Studies over the past several years have demonstrated the important role of sphingolipids in cystic fibrosis (CF), chronic obstructive pulmonary disease and acute lung injury. Ceramide is increased in airway epithelial cells and alveolar macrophages of CF mice and humans, while sphingosine is dramatically decreased. This increase in ceramide results in chronic inflammation, increased death of epithelial cells, release of DNA into the bronchial lumen and thereby an impairment of mucociliary clearance; while the lack of sphingosine in airway epithelial cells causes high infection susceptibility in CF mice and possibly patients. The increase in ceramide mediates an ectopic expression of beta 1-integrins in the luminal membrane of CF epithelial cells, which results, via an unknown mechanism, in a down-regulation of acid ceramidase. It is predominantly this down-regulation of acid ceramidase that results in the imbalance of ceramide and sphingosine in CF cells. Correction of ceramide and sphingosine levels can be achieved by inhalation of functional acid sphingomyelinase inhibitors, recombinant acid ceramidase or by normalization of beta 1-integrin expression and subsequent re-expression of endogenous acid ceramidase. These treatments correct pulmonary inflammation and prevent or treat, respectively, acute and chronic pulmonary infections in CF mice with Staphylococcus aureus and mucoid or non-mucoid Pseudomonas aeruginosa. Inhalation of sphingosine corrects sphingosine levels only and seems to mainly act against the infection. Many antidepressants are functional inhibitors of the acid sphingomyelinase and were designed for systemic treatment of major depression. These drugs could be repurposed to treat CF by inhalation.
The book by Božena Bednaříková, Slovo a jeho konverze (‘BSJK’), was originally published in 2009. In our view, however, it has not yet been given due consideration, and certainly not recognition, as a genuine new territory of word formation. This is the reason for writing a short review: to give this book the consideration it has by and large deserved. For in this book, two theoretically interesting working hypotheses are presented and supported by numerous examples from contemporary Czech: (i) conversion, not derivation, is the central process, and (ii) conversion belongs to morphology and not (just) to word formation.
The book is divided into 9 sections. Section 1 (p. 13–14) gives the road map of the book; in section 2 (p. 15–42), the central concern, the position of the word as a central unit of morphology (form formation), is established. In this chapter, the traditional views of Czech descriptive and Academic grammars, but also manuals, handbooks, and teacher's books for high schools, are reviewed. In most of them, word formation is considered to be a part of lexicology, and not an integral part of morphology or, better, form formation. The review serves not only to advance a unified grammatical terminology in academic circles (universities and the academy of sciences) but should also improve the quality of teaching at elementary and high schools (cf. 2.6., p. 31–42: Školský exkurz). Bednaříková is well known for her leading role as a missing link between academia and the consumers of grammars. In chapter 3, entitled Návrat slova (‘The return of the word’ into morphology, p. 43–54), arguments in favor of a morphological approach are raised. In this important methodological chapter, the main reasons are given why the word must be a central part of form formation (morphology/grammar) and not of lexicology. In addition, key terms such as stem, root, and affix are subjected to revision. The chapter is very brief, but very precise in its reasoning and arguments, in which formal teaching is assigned a central supporting role in the context of conversion and transposition. In chapter 4, Slovo jako slovní druh (‘The word as a pars orationis’, p. 55–70), the syntactic function of the transposition of one pars orationis to another by means of conversion is considered. In chapter 5, the central role of morphology for word formation is analyzed, taking as its starting point Mel’čuk's theory, understood as the analysis of morphological processes (cf. Mel’čuk, I. 2000. Morphological processes. In G. Booij, Ch. Lehmann, J. Mugdan, & S.
Skopeteas (eds.), Morphologie/Morphology. Vol. 1, 523–535. Berlin & New York). The innovative part of the book is without any doubt chapters 6–9, in which the internal structure of the word is introduced (chapter 6, p. 79–122); furthermore the part-of-speech transfer (or PS transfer), including conversion (chapter 7, p. 123–149); once more transposition, understood as the shift from one part of speech to another and concentrating on nouns, verbs, and adjectives (chapter 8, p. 150–201); and, finally, transflexion, “transflexe” (chapter 9, p. 203–219), which belongs to the domain of derivation rather than constituting a new type of word formation, and which does not involve the transposition from one part of speech to another but rather the transition from one declension class to another. It is to be criticized, however, that in some chapters a certain systematicity is missing (expressed, for example, in the repetition of the same phenomenon in several places), and that the illustrations in the form of derivation trees and the abbreviations are not always transparent and explicitly defined. It took me a very long time to find an explanation of the abbreviation “S”.
I would now like to comment briefly on the innovative potential and the contribution of the book itself as compared to the western standard works on the same topic. At the beginning of the monograph, the author raises the central concerns of her two hypotheses. In her study, she is concerned with the bases of the morphemic analysis of word formation and with the function of the syntagma. With regard to methodology, two central acts of actualization are, following Mathesius’ terminology, under review: first, the category called “pojmenovávací” (naming), and second, the category called “usouvztažňovací” (relating) (cf. also Mathesius, V. 1982. Jazyk, kultura a slovesnost; Daneš, F. 1991. Mathesiova koncepce funkční gramatiky v kontextu dnešní jazykovědy. In SaS 52. 161–174; and Panevová, J. 2010. Kategorie pojmenovávací a usouvztažňovací [Jak František Daneš rozvíjí Viléma Mathesia]. In S. Čmejrková & J. Hoffmannová et al. [eds.], Užívání a prožívání jazyka, 21–26.). Her major concern is thus to establish the missing link between the analysis of word formation and form formation (morphology). Her morphemic analysis of word-formation processes aims to “combat traditional school views of word formation as a (mechanical) connection of the root, prefix, and suffix”. In doing so, she analyzes the relationship between transfer, transposition (as a change of partes orationis), and conversion (as the operational process serving transposition). In the final chapter, BSJK re-introduces and refines the term transflection (BSJK 2009, 13).
This book is important for its consistent and satisfactory treatment of the term conversion as a morphological process in the Czech tradition; still, we cannot confirm that in the European context this topic is “seriously under-researched” (cited from the introduction, chapter 1, p. 13). The contrary is true: in the context of English word formation, besides the most influential work by Marchand (Marchand, H. 1969. The categories and types of present-day English word-formation: A synchronic-diachronic approach. 2nd ed. München), conversion, as the most productive process of word formation, has recently become perhaps the most researched topic. To mention just a few influential monographs: Martsa, S. 2013. Conversion in English: A Cognitive Semantic Approach. Cambridge; Vogel, P. M. 1996. Wortarten und Wortartenwechsel. Berlin & New York.
The word-formation process called conversion originally comes from analytic languages such as English and French. English in particular is a language in which the derivation of a noun from a verb and vice versa produces a considerably large number of homonymous forms in the dictionary, and of course this is not just a problem for morphology but especially a problem for any theoretical approach to language acquisition, cognitive semantics, or even generative morphosyntax. Thus, in his seminal book The Language Instinct (1995), Steven Pinker argues persuasively that prescriptive grammar rules disallowing, among other things, the sentence-final use of prepositions, the splitting of infinitives, and the conversion of nouns to verbs are both useless and nonsensical (371–379). As regards the conversion of nouns to verbs, he says: “[i]n fact the easy conversion of nouns to verbs has been part of English grammar for centuries; it is one of the processes that make English English” (ibidem: 379). To illustrate the easiness characterizing this type of conversion, he lists verbs converted from nouns designating human body parts, some of which are reproduced in (1):
(1) head a committee, scalp a missionary, eye a babe, nose around the office, mouth the lyrics, tongue each note on the flute, neck in the back seat, back the candidate, arm the militia, shoulder the burden, elbow your way in, finger the culprit, knuckle under, thumb a ride, belly up to the bar, stomach someone’s complaints, knee the goalie, leg it across the town, foot the bill, toe the line (cf. Pinker, S. 1995. The Language Instinct. New York, 379–380 and Pinker, S. 1996. Language learnability and language development. Cambridge MA)
Pinker estimates that approximately a fifth of English verbs originate from nouns, which, as documented extensively in Clark & Clark (Clark, E. V. & H. H. Clark. 1979. When nouns surface as verbs. In Language 55. 767–811), may also have to do with the fact that new or innovative verbs in English arise predominantly from conversion of nouns to verbs.
Without questioning the dominance of noun-to-verb conversion, I shall claim in this review that it is not only the easy conversion of verbs from nouns but, more broadly, conversion as a word-formation process that makes English English. Consider, for instance, (2) below, demonstrating that the easiness of forming conversion verbs equally characterizes, though to a lesser degree, the conversion of nouns from verbs. The expressions given in (2) are modelled on Pinker's examples above, following the seminal work of Sándor Martsa (2013. Conversion in English: A Cognitive Semantic Approach. Cambridge), and they contain nouns converted from verbs designating actions functionally related to different parts of the human body.
(2) have your say, give a shout, let out a shriek / a cry, give a talk, take a look at the notes, keep a close watch, down the whisky with a swallow, have a chew on it, have a smell of this cheese, with a smile, the touch of her fingers, Hey! Nice catch! go for a run, it’s worth a go, go for a walk
Thus, the major difference between the term konverze, as introduced and defended in BSJK (2009, 149), on the one hand, and the English type of conversion, mostly called “zero-derivation” by a zero morpheme (as Marchand 1969, op. cit., has called it), on the other, is to be found within the two quite different systems of word formation.
Czech very rarely allows for pure zero derivation such as that demonstrated in the English examples (1)–(2).
Despite this major difference, Czech, although still a highly inflectional language with a rich case, number, and declension system and with agreement, increasingly allows for word formations typical of English, with true zero affixation, e.g. English tunnel > to tunnel : Czech tunel > tunelovat. This process is an integral part of the grammar, since it even involves the category of verbal aspect, deriving perfective forms and negated verbs such as nevytunelovalo peníze:
ve snaze “politicky korektně” uctít Havlovu památku jednotliví ministři české vlády přislíbili, že přestanou tento stát vykrádat a tunelovat, tedy alespoň do začátku příštího roku ‘in an effort to honour Havel’s memory in a “politically correct” way, the individual ministers of the Czech government promised to stop robbing and tunnelling [asset-stripping] this state, at least until the beginning of next year’; Nové vedení Obce spisovatelů a jeho sekretariát nevytunelovalo peníze Obce spisovatelů, vždyť nebylo ani co tunelovat, naopak zachránilo tuto organizaci před téměř nezvratným koncem ‘the new leadership of the Writers’ Guild and its secretariat did not tunnel away the Guild’s money; there was nothing left to tunnel anyway; on the contrary, it saved the organization from an almost irreversible end’ (ČNK. Last accessed July 10, 2018).
Conversion is thus becoming an increasingly important process of word and form formation in the system of Czech word formation and morphology.
One critical observation remains to be mentioned: the book is solid, but in a certain sense restricted to functional approaches, neither considering nor even mentioning the important contribution of alternative approaches in formal linguistics. Thus, mainstream generative syntax, based on the theory of government and binding or on minimalism (introduced by Noam Chomsky in 1981 and 1995), is not reviewed in this book, even though there are many allusions to it, including the important role of syntax for word formation (an important demand on any theory of word formation; cf. also Dokulil, M. 1962. K vzájemnému poměru slovotvorby a skladby. In Acta Universitatis Carolinae: SLAVICA PRAGENSIA IV, 369–375. UK, Praha).
Most of the recent work devoted to a theoretical approach within minimalism considers conversion a “syntactic decomposition” based on root semantics (cf. e.g. Borer, H. 2005. In name only: Structuring sense Vol. I & The normal course of events: Structuring sense Vol. II. Oxford; Harley, H. & R. Noyer. 1999. State-of-the-Article: Distributed Morphology. In GLOT 4. 3–9; Halle, M. & A. Marantz. 1993. Distributed morphology and the pieces of inflection. In Keyser, S. J. & K. Hale (eds.), The view from Building 20, 111–176. Cambridge.). A recent development in minimalism is the concept of roots and categorial features (cf. Panagiotidis, Ph. 2014. Categorial Features: A Generative Theory of Word Class Categories. Cambridge.). This theory differentiates between truly “denominal” verbs (tape-type verbs) and verbs which are “directly derived from a root” (hammer-type). The structural differences between them are argued by Panagiotidis (2014: 63) “to account for the idiosyncratic meaning of the latter, as opposed to the predictable and systematic meaning of the former”. The two types are demonstrated under (3) vs. (4):
(3) a. [nP n √HAMMER]   b. [vP v [xP √HAMMER x]]
(Panagiotidis, op. cit., 2014: 63)
In (3), the nominalizer head n takes a root complement, nominalizing it syntactically. In the second tree, the root √HAMMER is a manner adjunct to an xP (schematically rendered) inside the vP. On the other hand, verbs like tape behave differently. They seem to be truly denominal, formed by converting a noun into a verb, i.e. by recategorizing the noun and not by categorizing a root. By hypothesis, the verbalizing head takes as its complement a structure that already contains a noun, that is, an nP in which the root √TAPE has already been nominalized:
(4) a. [nP n √TAPE]   b. [vP v [xP [nP n √TAPE] x]]
(Panagiotidis 2014: 63)
As opposed to this approach, the present monograph uses the term “transpozice” (‘transposition’) for the change of part of speech between different classes by means of konverze (‘conversion’) (chapter 8, 151–201). We will mention just one typical class or type of such conversions, given under (5) and (6):
(5) kapří ← kapř + -í
(BSJK, 156)
(6) výlov [← vylovit]: vý- [vy-] + lov [lovit]
(BSJK, 180)
In summary, I see the great merit of the publication especially in its new view of long-familiar phenomena. Additionally, the work excels in a thorough multi-level analysis of conversion, transposition and transflexion, including consideration of morphonological alternations and of differences in semantic interpretation arising from adding or removing a specific onomasiological feature (according to the onomasiological word-formation theory of Dokulil, M. 1962. Tvoření slov v češtině. Teorie odvozování slov. Praha.).
Above all, I value the book for its consistent insistence on the role of shaping for conversion as a part of morphology (form formation). I also think that conversion will play an increasingly important role in the further development of the Czech language, both for system-external reasons, as a language-contact phenomenon with English, and for system-inherent reasons, triggered and flanked by the tendency towards analytism and simplification and, finally, by the gradual reduction of the complex inflectional system of nouns and verbs.
For the theoretical linguist, this book may not be a substitute for word-formation theories such as Marchand, op. cit. (1969) or Dokulil, op. cit. (1962, 1968); but it is a very stimulating and original study. A more thorough reading could lead to a more differentiated view than the one given here, one showing more clearly the differences between a true zero-derivation language such as English, as analysed in the more elaborated morpho-syntactic generative theory of root semantics of Panagiotidis (2014), in which the term conversion means something very different from what is presented in Bednaříková’s book (see examples (1) and (2)), and a derivational language such as Czech, with its additional affixes and other word-forming means.
The author is to be commended for bridging the gap between traditional (and, in my view, by no means negligible) theories and newer views. The work deserves a place on every Slavist’s and Bohemist’s bookshelf.
Work has become more precarious in recent years. Although this claim is more or less uncontested among social scientists, there are still many questions that have not yet been conclusively answered. What exactly constitutes precariousness? How should it be operationalized and measured? How does the character of precarious employment vary across organizations, occupations, demographic groups, and countries?
The edited volume by Arne Kalleberg and Steven Vallas seeks to provide answers to these and related questions. Sociologists from around the world employed different methodologies in a broad range of economic sectors and countries to identify the origins, manifestations, and consequences of precarious work. The different contributions not only illustrate the great heterogeneity that exists within precarious employment but also point to some central features of precarious work independent of the geographical context in which it occurs. Moreover, they highlight some challenges for the study of precarious work.
First, drawing on their earlier work, Kalleberg and Vallas conceptualize precarious employment as work that is characterized by uncertainty and insecurity with regard to pay and the stability of the work arrangement; workers in precarious jobs only have limited access to social benefits and statutory protections and bear the entrepreneurial risk of the employment relationship. This broad definition not only captures various forms of nonstandard employment, such as temporary employment, part-time work, or one-person businesses, but also covers informal workers or workers who are at risk of losing their jobs. Nonetheless, this definition does not seem to be broad enough or specific enough to fit the needs of all types of research and to appropriately capture the multifaceted nature of precarious work. Kiersztyn, for example, shows the necessity to distinguish between objective and subjective insecurities when measuring precarious work. Likewise, Rogan et al. point out that the concept of “precarious employment” has little resonance in the developing world, where most of the workforce is at or near poverty and informal work is the default employment type.
Second, the book repeatedly illustrates that the increase in precarious work can be attributed to the rise of neoliberal doctrines and practices, the deinstitutionalization of organized labor, and the dismantling of the welfare state. This applies not only to the United States, where market logics have often been equated with economic freedom, but also to countries like Germany, with its corporatist tradition and strong welfare state (Brady and Biegert), as well as to emerging economies like India (Sapkal and Sundar). In the opening chapter, Pulignano, moreover, convincingly argues that the institutional determinants of precariousness should not only be sought at the national level, but that the supranational context plays a major role when it comes to explaining precarity.
Third, by focusing on different aspects of precariousness and employment, the book shows the need for differentiation when studying precarious work. This is nicely illustrated by the following three chapters, which draw different conclusions on the gendered nature of precarious employment. Wallace and Kwak study the rise of “bad jobs” in U.S. metropolitan areas and show that men’s work became more precarious during the Great Financial Crisis. By contrast, Banch and Hanley, who investigated the prevalence of different forms of nonstandard work in the United States since the 1980s, show that the risk of working in precarious jobs has declined over time for men. Likewise, Witteveen shows that the employment trajectories of young men are less precarious than those of young women in the United States. These seemingly contradictory claims stem from the fact that the authors focused on different aspects of precariousness, used different methodologies and datasets, and examined slightly different populations and time frames. Research on precarious work is thus far from finished.
Fourth, precarious work is certainly no longer a characteristic only of those with low levels of education but has increasingly become common among professional and technical workers as well. It might come in disguise and is oftentimes perceived as an opportunity, a means of career advancement, or a personal choice. These disguises and perceptions are evident in the chapter by Zukin and Papadantonakis on the unpaid work performed by programmers in hackathons, the chapter by Rao on young professionals in international organizations, and to some degree also the chapter by Williams on professional female workers in the oil and gas industry.
These insights (and more that are not mentioned here) make the book relevant and interesting to read. A summary chapter synthesizing the diverse findings and potentially also outlining some of the methodological challenges in the study of precarious work would have been a nice close to the book. Furthermore, such a summation would have been the place to speculate about the consequences of recent changes in the world of work, such as the rise of the gig economy and cloud or crowd work, which add new forms of precarity to the ones we have known thus far.
Although it has primarily been written for an academic audience, the book is a highly commendable and enjoyable read for both social scientists and practitioners such as labor activists, human resources managers, and policy makers. Moreover, the book is certainly a valuable teaching resource suitable for graduate and master’s seminars in sociology due to its broad coverage of various aspects of precariousness, geographical regions, and methodological approaches.
Reviews and syntheses
(2018)
The cycling of carbon (C) between the Earth's surface and the atmosphere is controlled by biological and abiotic processes that regulate C storage in biogeochemical compartments and release to the atmosphere. This partitioning is quantified using various forms of C-use efficiency (CUE): the ratio of C remaining in a system to C entering that system. Biological CUE is the fraction of C taken up that is allocated to biosynthesis. In soils and sediments, C storage depends also on abiotic processes, so the term C-storage efficiency (CSE) can be used. Here we first review and reconcile CUE and CSE definitions proposed for autotrophic and heterotrophic organisms and communities, food webs, whole ecosystems and watersheds, and soils and sediments, using a common mathematical framework. Second, we identify general CUE patterns; for example, actual CUE increases with improving growth conditions, whereas apparent CUE decreases with increasing turnover. We then synthesize > 5000 CUE estimates, showing that CUE decreases with increasing biological and ecological organization, from unicellular to multicellular organisms and from individuals to ecosystems. We conclude that CUE is an emergent property of coupled biological-abiotic systems, and it should be regarded as a flexible and scale-dependent index of the capacity of a given system to effectively retain C.
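The definitions above can be put in a common form; the symbols below are my own shorthand for illustration, not the paper's notation:

```latex
% Generic efficiency: the fraction of C entering a system that remains in it
\mathrm{CUE} = \frac{C_{\text{retained}}}{C_{\text{in}}}

% Biological CUE of an organism or community: growth G over total uptake U,
% the remainder being respiration R
\mathrm{CUE}_{b} = \frac{G}{U} = \frac{U - R}{U}

% An "apparent" CUE measured after turnover losses T is necessarily lower,
% consistent with the stated pattern that apparent CUE decreases with turnover:
\mathrm{CUE}_{\text{app}} = \frac{G - T}{U} \le \mathrm{CUE}_{b}
```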
In public perception, abnormal animal behavior is widely assumed to be a potential earthquake precursor, in strong contrast to the viewpoint in natural sciences. Proponents of earthquake prediction via animals claim that animals feel and react abnormally to small changes in environmental and physico-chemical parameters related to the earthquake preparation process. In seismology, however, observational evidence for changes of physical parameters before earthquakes is very weak. In this study, we reviewed 180 publications regarding abnormal animal behavior before earthquakes and analyzed and discussed them with respect to (1) magnitude-distance relations, (2) foreshock activity, and (3) the quality and length of the published observations. More than 700 records of claimed animal precursors related to 160 earthquakes are reviewed, with unusual behavior of more than 130 species. The precursor times range from months to seconds prior to the earthquakes, and the distances from a few to hundreds of kilometers. However, only 14 time series were published, whereas all other records are single observations. The time series are often short (the longest is 1 yr), or only small excerpts of the full data set are shown. The probability density of foreshocks and the occurrence of animal precursors are strikingly similar, suggesting that at least some of the reported animal precursors are in fact related to foreshocks. Another major difficulty for a systematic and statistical analysis is the high diversity of the data, which are often only anecdotal and retrospective. The study clearly demonstrates strong weaknesses or even deficits in many of the published reports on possible abnormal animal behavior. To improve research on precursors, we suggest a scheme of yes/no questions to be assessed to ensure the quality of such claims.
We review recent progress in the field of light-responsive soft nano-objects. These are systems whose shape, size, surface area and surface energy can be easily changed by low-intensity external irradiation. Here we shall specifically focus on microgels, DNA molecules, polymer brushes and colloidal particles. One convenient way to render these objects photosensitive is to couple them via ionic and/or hydrophobic interactions with azobenzene-containing surfactants in a non-covalent way. The advantage of this strategy is that these surfactants can make any type of charged object light-responsive without the need for possibly complicated (and irreversible) chemical conjugation. In the following, we will discuss exclusively photosensitive surfactant systems. These contain a charged head and a hydrophobic tail into which an azobenzene group is incorporated; this group can undergo reversible photo-isomerization from a trans- to a cis-configuration under UV illumination. These photo-isomerizations occur on a picosecond timescale and are fully reversible. The two isomers in general possess different polarity, i.e. the trans-state is less polar, with a dipole moment usually close to 0 Debye, while the cis-isomer has a dipole moment of up to 3 Debye or more, depending on additional phenyl ring substituents. As part of the hydrophobic tail of a surfactant molecule, the photo-isomerization also changes the hydrophobicity of the molecule as a whole and hence its solubility, surface energy, and strength of interaction with other substances. Being a molecular actuator that converts optical energy into mechanical work, the azobenzene group incorporated into a surfactant molecule can be utilized to actuate matter on larger time and length scales.
In this paper we show several interesting examples, where azobenzene containing surfactants play the role of a transducer mediating between different states of size, shape, surface energy and spatial arrangement of various nanoscale soft-material systems.
Singlet oxygen can be released in the dark in nearly quantitative yield from endoperoxides of naphthalenes, anthracenes and pyridones as an alternative to its generation by photosensitization. Recently, new donor systems have been designed which operate at very low temperatures but which are prepared from their parent forms at acceptable rates. Enhancement of the reactivity of donors is conveniently achieved by the design of the substitution pattern or through the use of plasmonic heating of nanoparticle-bound donors. The most important aim of these donor molecules is to transfer singlet oxygen in a controlled and directed manner to a target. Low temperatures and the linking between donors and acceptors reduce the random walk of oxygen and may force an attack at the desired position. By using chiral donor systems, new stereocenters might be introduced into prochiral acceptors.
Psychosocial risk factors for chronic back pain in the general population and in competitive sports
(2018)
Lumbar back pain and the high risk of chronic complaints are an important health concern not only in the general population but also in high-performance athletes. In contrast to non-athletes, there is a lack of research into psychosocial risk factors in athletes. Moreover, the development of psychosocial screening questionnaires qualified to detect athletes with a high risk of chronicity is in its early stages. The purpose of this review is to give an overview of research into psychosocial risk factors in both populations and to evaluate the performance of screening instruments in non-athletes. The databases MEDLINE, PubMed, and PsycINFO were searched from March to June 2016 using the keywords "psychosocial screening", "low back pain", "sciatica", "prognosis", and "athletes". We included prospective studies conducted in patients with low back pain with and without radiation to the legs, aged ≥ 18 years, and with a follow-up of at least 3 months. We identified 16 eligible studies, all of them conducted in samples of non-athletes. Among the most frequently published screening questionnaires, the Örebro Musculoskeletal Pain Screening Questionnaire (ÖMPSQ) demonstrated sufficient early prediction of return to work, and the STarT Back Screening Tool (SBT) revealed acceptable performance in predicting pain-related impairment. The prediction of future pain was sufficient with the Risk Analysis of Back Pain Chronification (RISC-BP) and the Heidelberg Short Questionnaire (HKF). Psychosocial risk factors for chronic back pain, such as chronic stress, depressive mood, and maladaptive pain processing, are increasingly recognized in competitive sports. Screening instruments that have been shown to be predictive in the general population are currently being tested for suitability in the German MiSpEx research consortium.
(1) Background: Sexual violence (SV) is a major public health problem, with negative socio-economic, physical, mental, sexual, and reproductive health consequences. Migrants, applicants for international protection, and refugees (MARs) are vulnerable to SV. Since many European countries are seeing high migratory pressure, the development of prevention strategies and care paths focusing on victimised MARs is highly needed. To this end, this study reviews evidence on the prevalence of SV among MAR groups in Europe and the challenges encountered in research on this topic. (2) Methods: A critical interpretive synthesis of 25 peer-reviewed academic studies and 22 relevant grey literature documents was conducted based on a socio-ecological model. (3) Results: Evidence shows that SV is highly frequent in MARs in Europe, yet comparison with other groups is still difficult. Methodologically and ethically sound representative studies comparing between populations are still lacking. Challenges in researching SV in MARs are located at the intrapersonal, interpersonal, community, societal, and policy levels. (4) Conclusions: Future research should start with a clear definition of the concerned population and acts of SV to generate comparable data. Participatory qualitative research approaches could be applied to better grasp the complexity of interplaying determinants of SV in MARs.
Renewable energy power generation capacity has been increasing rapidly in China in recent years. Meanwhile, the contradiction between power supply and demand is becoming increasingly prominent due to the intermittence of renewable energies. On the other hand, the mitigation of carbon dioxide (CO2) emissions in China needs immediate attention. Power-to-Gas (PtG), a chemical energy storage technology, can convert surplus electricity into combustible gases. Subsurface energy storage can meet the requirements of long-term storage with its large capacity. This paper provides a discussion of the entire PtG energy storage technology process and the current research progress. Based on a comparative study of different geological storage schemes for synthetic methane, their respective research progress and limitations are noted. In addition, a full investigation of the distribution and implementation of global PtG and CO2 capture and storage (CCS) demonstration projects is performed. Subsequently, the opportunities and challenges of developing this technology in China are discussed on the basis of a techno-economic and ecological analysis. While PtG is expected to be a revolutionary technology that will replace traditional power systems, the main issues of site selection, energy efficiency and economics still need to be adequately addressed. Finally, based on a comprehensive discussion of the results of the analysis, implementation strategies for power-to-gas and subsurface energy storage in China, as well as an outlook, are presented.
Second harmonic generation (SHG) is a nonlinear optical process that inherently generates signal in non-centrosymmetric materials, such as starch granules, and therefore can be used for label-free imaging. Both intensity and polarization of SHG are determined by material properties that are characterized by the second-order nonlinear susceptibility tensor, χ(2). Examination of the tensor is performed for each focal volume of the image by measuring the outgoing polarization state of the SHG signal for a set of incoming laser beam polarizations. Mapping of nonlinear properties expressed as the susceptibility ratio reveals structural features including the organization of crystalline material within a single starch granule, and the distribution of structural properties in a population of granules. Isolated granules, as well as in situ starch, can be analyzed using polarimetric SHG microscopy. Due to the fast sample preparation and short imaging times, polarimetric SHG microscopy allows for a quick assessment of starch structure and permits rapid feedback for bioengineering applications. This article presents the basics of SHG theory and microscopy applications for starch-containing materials. Quantification of ultrastructural features within individual starch granules is described. New results obtained by polarization-resolved SHG microscopy of starch granules are presented for various maize genotypes, revealing heterogeneity within a single starch particle and between various granules.
As a tumor suppressor and the most frequently mutated gene in cancer, p53 is among the best-described molecules in medical research. As cancer is in most cases an age-related disease, it seems paradoxical that p53 is so strongly conserved from early multicellular organisms to humans. A function not directly related to tumor suppression, such as the regulation of metabolism in nontransformed cells, could explain this selective pressure. While this role of p53 in cellular metabolism is gradually emerging, it is imperative to dissect the tissue- and cell-specific actions of p53 and its downstream signaling pathways. In this review, we focus on studies reporting p53's impact on adipocyte development, function, and maintenance, as well as the causes and consequences of altered p53 levels in white and brown adipose tissue (AT) with respect to systemic energy homeostasis. While whole-body p53 knockout mice gain less weight and fat mass under a high-fat diet owing to increased energy expenditure, modifying p53 expression specifically in adipocytes yields more refined insights: (1) p53 is a negative regulator of in vitro adipogenesis; (2) p53 levels in white AT are increased in diet-induced and genetic obesity mouse models and in obese humans; (3) functionally, elevated p53 in white AT increases senescence and chronic inflammation, aggravating systemic insulin resistance; (4) p53 is not required for normal development of brown AT; and (5) when p53 is activated in brown AT in mice fed a high-fat diet, it increases brown AT temperature and brown AT marker gene expression, thereby contributing to reduced fat mass accumulation. In addition, p53 is increasingly being recognized as a crucial player in nutrient-sensing pathways. Hence, despite the existence of contradictory findings and a varying density of evidence, several functions of p53 in adipocytes and ATs have emerged, positioning p53 as an essential regulatory hub in ATs.
Future studies need to make use of more sophisticated in vivo model systems and should identify an AT-specific set of p53 target genes and downstream pathways under different (nutrient) challenges in order to identify novel therapeutic targets to curb metabolic diseases.
Lifestyle-related disorders, such as the metabolic syndrome, have become a primary risk factor for the development of liver pathologies that can progress from hepatic steatosis, hepatic insulin resistance, steatohepatitis, fibrosis and cirrhosis, to the most severe condition of hepatocellular carcinoma (HCC). While the prevalence of liver pathologies is steadily increasing in modern societies, there are currently no approved drugs other than chemotherapeutic intervention in late-stage HCC. Hence, there is a pressing need to identify and investigate causative molecular pathways that can yield new therapeutic avenues. The transcription factor p53 is well established as a tumor suppressor and has recently been described as a central metabolic player both in physiological and pathological settings. Given that the liver is a dynamic tissue with direct exposure to ingested nutrients, hepatic p53, by integrating cellular stress response, metabolism and cell cycle regulation, has emerged as an important regulator of liver homeostasis and dysfunction. The underlying evidence is reviewed herein, with a focus on clinical data and animal studies that highlight a direct influence of p53 activity on different stages of liver diseases. Based on current literature showing that activation of p53 signaling can either attenuate or fuel liver disease, we herein discuss the hypothesis that, while hyper-activation or loss of function can cause disease, moderate induction of hepatic p53 within physiological margins could be beneficial in the prevention and treatment of liver pathologies. Hence, stimuli that lead to a moderate and temporary p53 activation could present new therapeutic approaches through several entry points in the cascade from hepatic steatosis to HCC.
Plant roots control uptake of water and nutrients and cope with environmental challenges. The root epidermis provides the first selective interface for nutrient absorption, while the endodermis produces the main apoplastic diffusion barrier in the form of a structure called the Casparian strip. The positioning of root hairs on epidermal cells, and of the Casparian strip around endodermal cells, requires asymmetries along cellular axes (cell polarity). Cell polarity is termed planar polarity, when coordinated within the plane of a given tissue layer. Here, we review recent molecular advances towards understanding both the polar positioning of the proteo-lipid membrane domain instructing root hair initiation, and the cytoskeletal, trafficking and polar tethering requirements of proteins at outer or inner plasma membrane domains. Finally, we highlight progress towards understanding mechanisms of Casparian strip formation and underlying endodermal cell polarity.
Up to now pathological health anxiety has been classified primarily as a somatoform disorder or a somatic symptom disorder in ICD and DSM. Theoretical and empirical evidence, however, suggest that pathological health anxiety basically represents an anxiety disorder. In this paper, it is argued that deficits in the treatment and perception of patients with pathological health anxiety as "difficult patients" are partly attributable to a lack of clarity in terms of nosology and with respect to central mechanisms of etiology and pathogenesis. Based on novel theoretical approaches for the explanation of pathological health anxiety, suggestions for an improved therapeutic practice are outlined. This approach focuses on a more intensive use of exposure-based treatment elements that are oriented to the inhibitory learning approach, which has already proven its effectiveness for other anxiety disorders.
Numerical knowledge, including number concepts and arithmetic procedures, seems to be a clear-cut case for abstract symbol manipulation. Yet, evidence from perceptual and motor behaviour reveals that natural number knowledge and simple arithmetic also remain closely associated with modal experiences. Following a review of behavioural, animal and neuroscience studies of number processing, we propose a revised understanding of psychological number concepts as grounded in physical constraints, embodied in experience and situated through task-specific intentions. The idea that number concepts occupy a range of positions on the continuum between abstract and modal conceptual knowledge also accounts for systematic heuristics and biases in mental arithmetic, thus inviting psychological approaches to the study of the mathematical mind.
Cardiogenesis is a complex developmental process involving multiple overlapping stages of cell fate specification, proliferation, differentiation, and morphogenesis. Precise spatiotemporal coordination between the different cardiogenic processes is ensured by intercellular signalling crosstalk and tissue-tissue interactions. Notch is an intercellular signalling pathway crucial for cell fate decisions during multicellular organismal development and is aptly positioned to coordinate the complex signalling crosstalk required for progressive cell lineage restriction during cardiogenesis. In this Review, we describe the role of Notch signalling and the crosstalk with other signalling pathways during the differentiation and patterning of the different cardiac tissues and in cardiac valve and ventricular chamber development. We examine how perturbation of Notch signalling activity is linked to congenital heart diseases affecting the neonate and adult, and discuss studies that shed light on the role of Notch signalling in heart regeneration and repair after injury.
Moving Beyond ERP Components
(2018)
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low-cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or "components" derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements and will help inform our understanding of the brain dynamics that contribute to moment-to-moment cognitive function.
Children’s motor competence is known to have a determinant role in learning and engaging later in complex motor skills and, thus, in physical activity. The development of adequate motor competence is a central aim of physical education, and assuring that pupils are learning and developing motor competence depends on accurate assessment protocols. The MOBAK 1 test battery is a recent instrument developed to assess motor competence in primary physical education. This study used the MOBAK 1 to explore motor competence levels and gender differences among 249 Grade 1 Portuguese primary school children (Mage = 6.3 years, SD = 0.5; 127 girls and 122 boys). In independent-samples t tests, boys presented higher object-movement motor competence than girls (boys: M = 5.8, SD = 1.7; girls: M = 4.0, SD = 1.7; p < .001), while girls were more proficient in self-movement skills (girls: M = 5.1, SD = 1.8; boys: M = 4.3, SD = 1.7; p < .01). On “total motor competence,” boys (M = 10.3, SD = 2.6) averaged one point ahead of girls (M = 9.1, SD = 2.9). The percentage of girls in the first quartile of object movement was 18.9%, while, for self movement, the percentage of boys in the first quartile was almost double that of girls (30.3% and 17.3%, respectively). The confirmatory model used to test construct validity confirmed the assumed theoretical two-factor structure of the MOBAK 1 test items in this Portuguese sample. These results support the MOBAK 1 instrument for assessing motor competence and highlight gender differences of relevance to intervention efforts.
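The group comparison reported in this abstract rests on the independent-samples t test. As a hedged illustration, the sketch below simulates scores matching the reported means and SDs for the object-movement comparison (the data are synthetic, not the actual MOBAK 1 sample) and computes the pooled-variance t statistic:

```python
import numpy as np

def independent_t(a, b):
    """Student's independent-samples t statistic with pooled variance.

    t = (mean_a - mean_b) / sqrt(sp^2 * (1/n_a + 1/n_b)), where sp^2 is
    the pooled sample variance. This is a generic textbook formula, not
    the study's analysis code.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = (((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
           / (na + nb - 2))  # pooled variance
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

rng = np.random.default_rng(1)
boys = rng.normal(5.8, 1.7, 122)   # simulated from reported M, SD, n
girls = rng.normal(4.0, 1.7, 127)  # simulated from reported M, SD, n
t = independent_t(boys, girls)
```

With a mean difference of 1.8 points and SDs of 1.7 at these sample sizes, the t statistic comes out large (roughly 8), consistent with the reported p < .001.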
Within the wealth of molecules constituting marine dissolved organic matter, carbohydrates make up the largest coherent and quantifiable fraction. Their main sources are from primary producers, which release large amounts of photosynthetic products – mainly polysaccharides – directly into the surrounding water via passive and active exudation. The organic carbon and other nutrients derived from these photosynthates enrich the ‘phycosphere’ and attract heterotrophic bacteria. The rapid uptake and remineralization of dissolved free monosaccharides by heterotrophic bacteria account for the barely detectable levels of these compounds. By contrast, dissolved combined polysaccharides can reach high concentrations, especially during phytoplankton blooms. Polysaccharides are too large to be taken up directly by heterotrophic bacteria, instead requiring hydrolytic cleavage to smaller oligo- or monomers by bacteria with a suitable set of exoenzymes. The release of diverse polysaccharides by various phytoplankton taxa is generally interpreted as the deposition of excess organic material. However, these molecules likely also fulfil distinct, yet not fully understood functions, as inferred from their active modulation in terms of quality and quantity when phytoplankton becomes nutrient limited or is exposed to heterotrophic bacteria. This minireview summarizes current knowledge regarding the exudation and composition of phytoplankton-derived exopolysaccharides and acquisition of these compounds by heterotrophic bacteria.
Recently, there has been a proliferation of published articles on the effect of plyometric jump training, including several review articles and meta-analyses. However, these types of research articles are generally of narrow scope. Furthermore, methodological limitations among studies (e.g., a lack of active/passive control groups) prevent the generalization of results, and these factors need to be addressed by researchers. On that basis, the aims of this scoping review were to (1) characterize the main elements of plyometric jump training studies (e.g., training protocols) and (2) provide future directions for research. From 648 potentially relevant articles, 242 were eligible for inclusion in this review. The main issues identified related to an insufficient number of studies conducted in females, youths, and individual sports (~ 24.0, ~ 37.0, and ~ 12.0% of overall studies, respectively); insufficient reporting of effect size values and training prescription (~ 34.0 and ~ 55.0% of overall studies, respectively); and studies missing an active/passive control group and randomization (~ 40.0 and ~ 20.0% of overall studies, respectively). Furthermore, plyometric jump training was often combined with other training methods and added to participants’ daily training routines (~ 47.0 and ~ 39.0% of overall studies, respectively), thus distorting conclusions on its independent effects. Additionally, most studies lasted no longer than 7 weeks. In the future, researchers are advised to conduct plyometric training studies of high methodological quality (e.g., randomized controlled trials). More research is needed in females, youths, and individual sports. Finally, the identification of specific dose-response relationships following plyometric training is needed to specifically tailor intervention programs, particularly in the long term.
Cardiovascular complications are commonly associated with obesity. However, a subgroup of obese individuals may not be at an increased risk for cardiovascular complications; these individuals are said to have metabolically healthy obesity (MHO). In contrast, metabolically unhealthy individuals are at high risk of cardiovascular disease (CVD), irrespective of BMI; thus, this group can include individuals within the normal weight category (BMI 18.5–24.9 kg/m²). This review provides a summary of prospective studies on MHO and metabolically unhealthy normal-weight (MUHNW) phenotypes. Notably, there is ongoing dispute surrounding the concept of MHO, including the lack of a uniform definition and the potentially transient nature of metabolic health status. This review highlights the relevance of alternative measures of body fatness, specifically measures of fat distribution, for determining MHO and MUHNW. It also highlights alternative approaches of risk stratification, which account for the continuum of risk in relation to CVD, which is observable for most risk factors. Moreover, studies evaluating the transition from metabolically healthy to unhealthy phenotypes and potential determinants for such conversions are discussed. Finally, the review proposes several strategies for the use of epidemiological research to further inform the current debate on metabolic health and its determination across different stages of body fatness.
The primary function of leaves is to provide an interface between plants and their environment for gas exchange, light exposure, and thermoregulation. Leaves therefore make a central contribution to plant fitness by enabling efficient absorption of sunlight energy through photosynthesis to ensure optimal growth. Their final geometry results from a balance between the need to maximize energy uptake and the need to minimize damage caused by environmental stresses. This intimate relationship between the leaf and its surroundings has led to an enormous diversification of leaf forms. Leaf shape varies between species, populations, and individuals, and even within identical genotypes when those are subjected to different environmental conditions. For instance, the extent of leaf-margin dissection has long been found to correlate inversely with mean annual temperature, such that paleobotanists have used models based on leaf shape to predict the paleoclimate from fossil flora. Leaf growth depends not only on temperature but is also regulated by many other environmental factors, such as light quality and intensity or ambient humidity. This raises the question of how the different signals are integrated at the molecular level and converted into clear developmental decisions. Several recent studies have started to shed light on the molecular mechanisms that connect environmental sensing with organ growth and patterning. In this review, we discuss current knowledge on the influence of different environmental signals on leaf size and shape, how these signals are integrated, and their importance for plant adaptation.
Purpose
The purpose of this paper is to investigate whether and how evolving ideas about management control (MC) emerge in research about public sector performance management (PSPM).
Design/methodology/approach
This paper is a literature review of PSPM research, conducted using a set of key terms derived from a review of recent developments in MC.
Findings
MC research, originating in the management accounting discipline, is largely disconnected from PSPM research as part of the public administration and public management disciplines. Overlaps between MC and PSPM research are visible in a cybernetic control approach, control variety and contingency-based reasoning. Both academic communities share an understanding of certain issues, although under diverging labels, especially enabling controls or, in a more general sense, usable performance controls, horizontal controls and control packaging. Specific MC concepts are valuable for future PSPM research, i.e., trust as a complement to performance-based controls in complex settings, and strategy as a variable in contingency-based studies.
Research limitations/implications
Breaking the boundaries between two currently remote research disciplines, on the one hand, might dismantle “would-be” innovations in one of these disciplines, and, on the other hand, may provide a fertile soil for mutual transfer of knowledge. A limitation of the authors’ review of PSPM research is that it may insufficiently cover research published in the public sector accounting journals, which could be an outlet for MC-inspired PSPM research.
Originality/value
The paper unravels the “apparent” and “real” differences between MC and PSPM research, and, in doing so, takes the detected “real” differences as a starting point for discussing in what ways PSPM research can benefit from MC achievements.
Langmuir monolayers provide a fast and elegant route to analyze the degradation behavior of biodegradable polymer materials. In contrast to bulk materials, diffusive transport of reactants and reaction products in the (partially degraded) material can be neglected at the air-water interface, allowing for the study of molecular degradation kinetics in experiments taking less than a day and in some cases just a few minutes, in contrast to experiments with bulk materials that can take years. Several aspects of the biodegradation behavior of polymer materials, such as the interaction with biomolecules and degradation products, are directly observable. Expanding the technique with surface-sensitive instrumental techniques enables evaluating the evolution of the morphology, chemical composition, and the mechanical properties of the degrading material in situ. The potential of the Langmuir monolayer degradation technique as a predictive tool for implant degradation when combined with computational methods is outlined, and related open questions and strategies to overcome these challenges are pointed out.
This article presents some insights into the German developments of studying Judaism and the Jewish tradition and relates them to the ongoing development of the subject at universities in the Nordic countries in general and Norway in particular. It also aims to present some conclusions concerning why it might be interesting for Norwegian society to intensify the study of Judaism at its universities.