Immunodeficient mice are crucial models to evaluate the efficacy of monoclonal antibodies (mAbs). When studying mAb pharmacokinetics (PK), protection from elimination by binding to the neonatal Fc receptor (FcRn) is known to be a major process influencing the unspecific clearance of endogenous and therapeutic IgG. The concentration of endogenous IgG in immunodeficient mice, however, is reduced, and the effect of this on the FcRn protection mechanism, and subsequently on unspecific mAb clearance, is unknown, yet of great importance for the interpretation of mAb PK data. We used a PBPK modelling approach to elucidate the influence of altered endogenous IgG concentrations on unspecific mAb clearance. To this end, we used PK data from immunodeficient mice, i.e. nude and severe combined immunodeficiency mice. To avoid the impact of target-mediated clearance processes, we focussed on mAbs without affinity to a target antigen in these mice. In addition, intravenous immunoglobulin (IVIG) data from immunocompetent mice were used to study the impact of increased total IgG concentrations on unspecific therapeutic antibody clearance. The unspecific clearance is linear whenever therapeutic IgG concentrations (i.e. mAb and IVIG concentrations) are lower than the FcRn concentration; it can be non-linear if therapeutic IgG concentrations exceed the FcRn and endogenous IgG concentrations (e.g., under IVIG therapy). Unspecific mAb clearance in immunodeficient mice is effectively linear (at mAb doses typically used in humans). Studying the impact of reduced endogenous IgG concentrations on unspecific mAb clearance is of great relevance for the extrapolation to clinical species, e.g., when predicting mAb PK in immunosuppressed cancer patients.
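The linear/non-linear clearance distinction described above can be sketched with a toy saturable FcRn-salvage model. This is a deliberate simplification, not the authors' PBPK model, and all parameter values below are invented for illustration:

```python
def mab_elimination_rate(c_mab, c_endo=2.0, fcrn=5.0, k_up=0.1):
    """Toy FcRn-salvage model (illustrative only; not the authors' PBPK model).

    A fraction k_up of plasma IgG is taken up into endosomes per unit time;
    the fraction rescued by FcRn binding saturates with the *total* IgG load
    (endogenous + therapeutic), so high IgG concentrations reduce protection.
    Returns the apparent first-order elimination rate of the mAb.
    """
    total_igg = c_mab + c_endo
    frac_salvaged = fcrn / (fcrn + total_igg)   # saturable FcRn protection
    return k_up * (1.0 - frac_salvaged)         # net degradation rate

# At typical therapeutic mAb concentrations the rate barely changes with dose
# (effectively linear clearance); at IVIG-like total IgG loads FcRn saturates
# and clearance becomes non-linear:
rate_low = mab_elimination_rate(0.1)
rate_high = mab_elimination_rate(50.0)
# Reduced endogenous IgG (as in immunodeficient mice) leaves more FcRn free,
# slowing unspecific mAb clearance:
rate_nude = mab_elimination_rate(0.1, c_endo=0.2)
```

The qualitative ordering (rate_nude < rate_low < rate_high) mirrors the abstract's conclusions; the actual model resolves these processes physiologically across organs.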
The term Adaptive Force (AF) describes the capability of the nerve-muscle system to adapt to externally applied forces during isometric and eccentric muscle action. This ability plays an important role in everyday movements as well as in sports. The focus of this paper is on the specific, and arguably innovative, method for measuring this neuromuscular action. A measuring system based on compressed air was constructed and evaluated for this neuromuscular function. The force level at which the subject deviates from the quasi-isometric position and merges into eccentric muscle action depends on his or her physical condition. In contrast to isokinetic systems, the device enables strength to be measured without forced motion. The scientific quality criteria of the device were evaluated through measurements of intra- and interrater reliability, test-retest reliability and fatigue. The pneumatic device was also compared with a dynamometer. In the mechanical evaluation, the results show a high level of consistency (r² = 0.94 to 0.96). The parallel-test reliability shows a very high and significant correlation (ρ = 0.976; p < 0.001). Including the biological system, the concordance of three different raters is very high (p = 0.001, Cronbach's alpha α = 0.987). A test-retest with 4 subjects over five weeks showed no statistically significant differences, supporting the reliability of the device. These evaluations indicate that the scientific evaluation criteria are fulfilled. The specific feature of this system is that an isometric position can be maintained while the externally impacting force rises. Moreover, the device can capture concentric, static and eccentric strength values. Fields of application are performance diagnostics in sports and medicine.
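For the interrater concordance reported above (Cronbach's alpha = 0.987), a minimal re-implementation of the statistic might look as follows; the rating data here are synthetic and only illustrate the computation:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_subjects, n_raters) array of ratings.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of row sums)
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    item_vars = ratings.var(axis=0, ddof=1).sum()    # per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)      # variance of subject sums
    return k / (k - 1) * (1 - item_vars / total_var)

# Three raters, highly concordant force readings in N (made-up data):
data = [[100, 101, 99],
        [120, 119, 121],
        [140, 141, 139],
        [160, 158, 161]]
alpha = cronbach_alpha(data)
```

High agreement between raters drives alpha toward 1, as in the reported evaluation.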
The aim of our study was to examine the extent to which linguistic approaches to sentence comprehension deficits in aphasia can account for differential impairment patterns in the comprehension of wh-questions in bilingual persons with aphasia (PWA). We investigated the comprehension of subject and object wh-questions in both Turkish, a wh-in-situ language, and German, a wh-fronting language, in two bilingual PWA using a sentence-to-picture matching task. Both PWA showed differential impairment patterns in their two languages. SK, an early bilingual PWA, had particular difficulty comprehending subject which-questions in Turkish but performed normally across all conditions in German. CT, a late bilingual PWA, performed more poorly on object which-questions in German than in all other conditions, whilst in Turkish his accuracy was at chance level across all conditions. We conclude that the observed patterns of selective cross-linguistic impairments cannot solely be attributed either to difficulty with wh-movement or to problems with the integration of discourse-level information. Instead, our results suggest that differences between our PWA's individual bilingualism profiles (e.g. onset of bilingualism, premorbid language dominance) considerably affected the nature and extent of their impairments.
Aims and objectives: Our study addresses the following research questions: To what extent is L2 comprehenders’ online sensitivity to morphosyntactic disambiguation cues affected by L1 background? Does noticing the error signal trigger successful reanalysis in both L1 and L2 comprehension? Can previous findings suggesting that case is a better reanalysis cue than agreement be replicated and extended to L2 processing when using closely matched materials? Design/methodology/approach: We carried out a self-paced reading study using temporarily ambiguous object-initial sentences in German. These were disambiguated either by number marking on the verb or by nominative case marking on the subject. End-of-trial comprehension questions probed whether or not our participants ultimately succeeded in computing the correct interpretation. Data and analysis: We tested a total of 121 participants (25 Italian, 32 Russian, 32 Korean and 32 native German speakers), measuring their word-by-word reading times and comprehension accuracy. The data were analysed using linear mixed-effects and logistic regression modelling. Findings/conclusions: All three learner groups showed online sensitivity to both case and agreement disambiguation cues. Noticing case disambiguations did not necessarily lead to a correct interpretation, whereas noticing agreement disambiguations did. We conclude that intermediate to advanced learners are sensitive to morphosyntactic interpretation cues during online processing regardless of whether or not corresponding grammatical distinctions exist in their L1. Our results also suggest that case is not generally a better reanalysis cue than agreement. Significance/implications: L1 influence on L2 processing is more limited than might be expected. Contra previous findings, even intermediate learners show sensitivity to both agreement and case information during processing.
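As a much-simplified stand-in for the reported linear mixed-effects analysis, the basic logic of an online disambiguation effect (longer reading times at the critical region when a morphosyntactic cue forces reanalysis) can be sketched with paired comparisons on per-participant means. The group size and all reading times below are invented:

```python
import numpy as np
from scipy import stats

# Synthetic per-participant mean reading times (ms) at the disambiguating
# region; a slowdown relative to an unambiguous baseline indicates that the
# cue was noticed. (A proper analysis would use trial-level mixed models.)
rng = np.random.default_rng(1)
n = 32                                        # e.g. one learner group
baseline = rng.normal(450, 40, n)             # unambiguous control sentences
rt_case = baseline + rng.normal(60, 15, n)    # case-disambiguated condition
rt_agr = baseline + rng.normal(55, 15, n)     # agreement-disambiguated condition

# Paired comparison of each cue condition against the baseline
t_case, p_case = stats.ttest_rel(rt_case, baseline)
t_agr, p_agr = stats.ttest_rel(rt_agr, baseline)
```

A reliable slowdown in both conditions corresponds to the finding that learners were sensitive to both case and agreement cues online, regardless of L1.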
How do late proficient bilinguals process morphosyntactic and lexical-semantic information in their non-native language (L2)? How is this information represented in the L2 mental lexicon? And what are the neural signatures of L2 morphosyntactic and lexical-semantic processing? We addressed these questions in one behavioral and two ERP priming experiments on inflected German adjectives testing a group of advanced late Russian learners of German in comparison to native speaker (L1) controls. While in the behavioral experiment, the L2 learners performed native-like, the ERP data revealed clear L1/L2 differences with respect to the temporal dynamics of grammatical processing. Specifically, our results show that L2 morphosyntactic processing yielded temporally and spatially extended brain responses relative to L1 processing, indicating that grammatical processing of inflected words in an L2 is more demanding and less automatic than in the L1. However, this group of advanced L2 learners showed native-like lexical-semantic processing.
Sensitivity to parasitic gaps inside subject islands in native and non-native sentence processing
(2017)
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during the sudden, repetitive loading of everyday tasks, as well as during athletic performance, is important to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). An altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk, is evident in back pain patients (BPP). These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse the three-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy controls (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation. Lifting of objects and stumbling while walking are adequate representative tasks. Neuromuscular activity of the muscles encompassing the trunk was assessed with a 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability in healthy controls and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion patterns could be observed in healthy adults during continuous loading (lifting of objects) with different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to increased neuromuscular activity and range of motion (ROM) of the trunk. Moreover, BPP showed a delayed muscle response time and an extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait seems suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation patterns in BPP can be interpreted as increased spine loading and reduced trunk stability in these patients. This novel assessment of trunk stability is therefore suitable for identifying deficits in BPP. Assigning affected BPP to therapy interventions that focus on stabilising the trunk, with the aim of improving neuromuscular control in dynamic situations, is implied. Hence, sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
Background
Recently, the incidence rate of back pain (BP) in adolescents has been reported at 21%. However, the development of BP in adolescent athletes is unclear. Hence, the purpose of this study was to examine the incidence of BP in young elite athletes in relation to gender and type of sport practiced.
Methods
Subjective BP was assessed in 321 elite adolescent athletes (m/f 57%/43%; 13.2 ± 1.4 years; 163.4 ± 11.4 cm; 52.6 ± 12.6 kg; 5.0 ± 2.6 training years; 7.6 ± 5.3 training h/week). Initially, all athletes were free of pain. The main outcome criterion was the incidence of back pain [%], analyzed in terms of pain development from the first measurement day (M1) to the second measurement day (M2) after 2.0 ± 1.0 years. Participants were classified into athletes who developed back pain (BPD) and athletes who did not develop back pain (nBPD). BP (acute or within the last 7 days) was assessed with a 5-step face scale (face 1–2 = no pain; face 3–5 = pain). BPD included all athletes who reported face 1 or 2 at M1 and face 3 to 5 at M2. nBPD were all athletes who reported face 1 or 2 at both M1 and M2. Data were analyzed descriptively. Additionally, a Chi² test was used to analyze gender- and sport-specific differences (significance level α = 0.05).
Results
Thirty-two athletes were categorized as BPD (10%). The gender difference was 5% (m/f: 12%/7%) but did not reach statistical significance (p = 0.15). The incidence of BP ranged between 6 and 15% for the different sport categories. Game sports (15%) showed the highest, and explosive strength sports (6%) the lowest incidence. Anthropometrics and training characteristics did not significantly influence BPD (p = 0.14 for gender to p = 0.90 for sports; r² = 0.0825).
Conclusions
BP incidence was lower in adolescent athletes compared to young non-athletes and even to the general adult population. Consequently, it can be concluded that high-performance sports do not lead to an additional increase in back pain incidence during early adolescence. Nevertheless, back pain prevention programs should be implemented into daily training routines for sport categories identified as showing high incidence rates.
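The gender comparison in this study can be reproduced in outline with a Chi² test on a 2x2 contingency table. The counts below are reconstructed approximately from the reported percentages (57% of 321 = 183 males, 138 females, 32 BPD in total) and are illustrative only:

```python
from scipy.stats import chi2_contingency

#            BPD  nBPD
table = [[22, 161],    # male   (~12% developed back pain)
         [10, 128]]    # female (~7% developed back pain)

# correction=False gives the plain Pearson Chi2 statistic
# (no Yates continuity correction)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

With these approximate counts the test stays above the 0.05 threshold, consistent with the reported non-significant gender difference (p = 0.15).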
Organic matter deposited in ancient, ice-rich permafrost sediments is vulnerable to climate change and may contribute to the future release of greenhouse gases; it is thus important to get a better characterization of the plant organic matter within such sediments. From a Late Quaternary permafrost sediment core from the Buor Khaya Peninsula, we analysed plant-derived sedimentary ancient DNA (sedaDNA) to identify the taxonomic composition of plant organic matter, and undertook palynological analysis to assess the environmental conditions during deposition. Using sedaDNA, we identified 154 taxa and from pollen and non-pollen palynomorphs we identified 83 taxa. In the deposits dated between 54 and 51 kyr BP, sedaDNA records a diverse low-centred polygon plant community including recurring aquatic pond vegetation while from the pollen record we infer terrestrial open-land vegetation with relatively dry environmental conditions at a regional scale. A fluctuating dominance of either terrestrial or swamp and aquatic taxa in both proxies allowed the local hydrological development of the polygon to be traced. In deposits dated between 11.4 and 9.7 kyr BP (13.4-11.1 cal kyr BP), sedaDNA shows a taxonomic turnover to moist shrub tundra and a lower taxonomic richness compared to the older samples. Pollen also records a shrub tundra community, mostly seen as changes in relative proportions of the most dominant taxa, while a decrease in taxonomic richness was less pronounced compared to sedaDNA. Our results show the advantages of using sedaDNA in combination with palynological analyses when macrofossils are rarely preserved. The high resolution of the sedaDNA record provides a detailed picture of the taxonomic composition of plant-derived organic matter throughout the core, and palynological analyses prove valuable by allowing for inferences of regional environmental conditions.
The large variety of atmospheric circulation systems affecting the eastern Asian climate is reflected in the complex Asian vegetation distribution. Particularly in the transition zones of these circulation systems, vegetation is supposed to be very sensitive to climate change. Since proxy records are scarce, a mechanistic understanding of the past spatio-temporal climate-vegetation relationship has hitherto been lacking. To assess Holocene vegetation change and to obtain an ensemble of potential mid-Holocene biome distributions for eastern Asia, we forced the diagnostic biome model BIOME4 with climate anomalies from different transient Holocene climate simulations performed with coupled atmosphere-ocean(-vegetation) models. The simulated biome changes are compared with pollen-based biome records for different key regions.
In all simulations, substantial biome shifts during the last 6000 years are confined to the high northern latitudes and the monsoon-westerly wind transition zone, but the temporal evolution and amplitude of change strongly depend on the climate forcing. Large parts of the southern tundra are replaced by taiga during the mid-Holocene due to a warmer growing season, and the boreal treeline in northern Asia is shifted northward by approx. 4 degrees in the ensemble mean (ranging from 1.5 to 6 degrees in the individual simulations). This simulated treeline shift is in agreement with pollen-based reconstructions from northern Siberia. The desert fraction in the transition zone is reduced by 21% during the mid-Holocene compared to pre-industrial times due to enhanced precipitation. The desert-steppe margin is shifted westward by 5 degrees (1-9 degrees in the individual simulations). The forest biomes are expanded north-westward by 2 degrees, ranging from 0 to 4 degrees in the individual simulations. These results corroborate pollen-based reconstructions indicating an extended forest area in north-central China during the mid-Holocene. According to the model, the forest-to-non-forest and steppe-to-desert changes in the climate transition zones have been spatially non-uniform and non-linear since the mid-Holocene.
Reliable information on past and present vegetation is important to project future changes, especially for rapidly transitioning areas such as the boreal treeline. To study past vegetation, pollen analysis is common, while current vegetation is usually assessed by field surveys. Application of detailed sedimentary DNA (sedDNA) records has the potential to enhance our understanding of vegetation changes, but studies systematically investigating the power of this proxy are rare to date. This study compares sedDNA metabarcoding and pollen records from surface sediments of 31 lakes along a north-south gradient of increasing forest cover in northern Siberia (Taymyr peninsula) with data from field surveys in the surroundings of the lakes. sedDNA metabarcoding recorded 114 plant taxa, about half of them to species level, while pollen analyses identified 43 taxa, both exceeding the 31 taxa found by vegetation field surveys. Increasing Larix percentages from north to south were consistently recorded by all three methods and principal component analyses based on percentage data of vegetation surveys and DNA sequences separated tundra from forested sites. Comparisons of the ordinations using procrustes and protest analyses show a significant fit among all compared pairs of records. Despite similarities of sedDNA and pollen records, certain idiosyncrasies, such as high percentages of Alnus and Betula in all pollen and high percentages of Salix in all sedDNA spectra, are observable. Our results from the tundra to single-tree tundra transition zone show that sedDNA analyses perform better than pollen in recording site-specific richness (i.e., presence/absence of taxa in the vicinity of the lake) and perform as well as pollen in tracing vegetation composition.
Early agriculture can be detected in palaeovegetation records, but quantification of the relative importance of climate and land use in influencing regional vegetation composition since the onset of agriculture is a topic that is rarely addressed. We present a novel approach that combines pollen-based REVEALS estimates of plant cover with climate, anthropogenic land-cover and dynamic vegetation modelling results. This is used to quantify the relative impacts of land use and climate on Holocene vegetation at a sub-continental scale, i.e. northern and western Europe north of the Alps. We use redundancy analysis and variation partitioning to quantify the percentage of variation in vegetation composition explained by the climate and land-use variables, and Monte Carlo permutation tests to assess the statistical significance of each variable. We further use a similarity index to combine pollen based REVEALS estimates with climate-driven dynamic vegetation modelling results. The overall results indicate that climate is the major driver of vegetation when the Holocene is considered as a whole and at the sub-continental scale, although land use is important regionally. Four critical phases of land-use effects on vegetation are identified. The first phase (from 7000 to 6500 BP) corresponds to the early impacts on vegetation of farming and Neolithic forest clearance and to the dominance of climate as a driver of vegetation change. During the second phase (from 4500 to 4000 BP), land use becomes a major control of vegetation. Climate is still the principal driver, although its influence decreases gradually. The third phase (from 2000 to 1500 BP) is characterised by the continued role of climate on vegetation as a consequence of late-Holocene climate shifts and specific climate events that influence vegetation as well as land use. 
The last phase (from 500 to 350 BP) shows an acceleration of vegetation changes, in particular during the last century, caused by new farming practices and forestry in response to population growth and industrialization. This is a unique signature of anthropogenic impact within the Holocene but European vegetation remains climatically sensitive and thus may continue to respond to ongoing climate change.
Star formation is a hierarchical process, forming young stellar structures of star clusters, associations, and complexes over a wide range of scales. The star-forming complex in the bar region of the Large Magellanic Cloud is investigated with upper main-sequence stars observed by the VISTA Survey of the Magellanic Clouds. The upper main-sequence stars exhibit highly nonuniform distributions. Young stellar structures inside the complex are identified from the stellar density map as density enhancements of different significance levels. We find that these structures are hierarchically organized such that larger, lower-density structures contain one or several smaller, higher-density ones. They follow power-law size and mass distributions, as well as a lognormal surface density distribution. All these results support a scenario of hierarchical star formation regulated by turbulence. The temporal evolution of young stellar structures is explored by using subsamples of upper main-sequence stars with different magnitude and age ranges. While the youngest subsample, with a median age of log(τ/yr) = 7.2, contains the most substructure, progressively older ones are less and less substructured. The oldest subsample, with a median age of log(τ/yr) = 8.0, is almost indistinguishable from a uniform distribution on spatial scales of 30-300 pc, suggesting that the young stellar structures are completely dispersed on a timescale of ~100 Myr. These results are consistent with the characteristics of the 30 Doradus complex and the entire Large Magellanic Cloud, suggesting no significant environmental effects. We further point out that the fractal dimension may be method dependent for stellar samples with significant age spreads.
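The power-law size distribution mentioned above is commonly checked by fitting the complementary cumulative distribution in log-log space. The sketch below does this on synthetic Pareto-distributed structure sizes (the exponent and all sizes are invented; it only illustrates the fitting step):

```python
import numpy as np

# Synthetic structure sizes drawn from a classic Pareto distribution with
# CCDF N(>S) ~ S^-a (minimum size 3 pc; exponent a is an assumption).
rng = np.random.default_rng(0)
a = 1.5
sizes = (rng.pareto(a, 2000) + 1.0) * 3.0     # pc

# Empirical complementary cumulative distribution (CCDF)
s = np.sort(sizes)
ccdf = 1.0 - np.arange(1, s.size + 1) / s.size
mask = ccdf > 0                                # drop the last point (log(0))

# A straight-line fit in log-log space recovers the power-law slope -a
slope, intercept = np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)
```

A least-squares fit on the CCDF is only a rough estimator; dedicated studies typically use maximum-likelihood exponent estimates instead.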
Climatic and limnological changes at Lake Karakul (Tajikistan) during the last ~29 cal ka
(2017)
We present results of analyses of a sediment core from Lake Karakul, located in the eastern Pamir Mountains, Tajikistan. The core spans the last ~29 cal ka. We investigated and assessed processes internal and external to the lake to infer changes in past moisture availability. Among the variables used to infer lake-external processes, high values of grain-size end-member (EM) 3 (a wide grain-size distribution that reflects fluvial input) and high Sr/Rb and Zr/Rb ratios (coinciding with coarse grain sizes) are indicative of moister conditions. High values of EM1 and EM2 (peaks of small grain sizes that reflect long-distance dust transport or fine, glacially derived clastic input) and TiO2 (terrigenous input) are thought to reflect a greater influence of dry air masses, most likely of Westerly origin. High input of dust from distant sources, beginning before the Last Glacial Maximum (LGM) and continuing into the late glacial, reflects the influence of dry Westerlies, whereas peaks in fluvial input suggest increased moisture availability. The early to early-middle Holocene is characterised by coarse mean grain sizes, indicating constant, high fluvial input and moister conditions in the region. A steady increase in terrigenous dust and a decrease in fluvial input from 6.6 cal ka BP onwards point to the Westerlies as the predominant atmospheric circulation through to the present, and mark a return to drier and even arid conditions in the area. Proxies for productivity (TOC, TOC/TN, TOCBr), redox potential (Fe/Mn) and changes in endogenic carbonate precipitation (TIC, δ¹⁸O_carb) indicate changes within the lake. Low productivity characterised the lake from the late Pleistocene until 6.6 cal ka BP, and increased rapidly afterwards. The lake level remained low until the LGM, but water depth increased to a maximum during the late glacial and remained high into the early Holocene. Subsequently, the water level decreased to its present stage.
Today the lake system is mainly climatically controlled, but the depositional regime is also driven by internal limnogeological processes.
Arctic and alpine treelines worldwide differ in their reactions to climate change. A northward advance of, or densification within, the treeline ecotone will likely influence climate-vegetation feedback mechanisms. In our study, conducted in the Taimyr Depression in the North Siberian Lowlands, we present a combined field- and model-based approach that helps us to better understand the population processes involved in the responses of the whole treeline ecotone, spanning from closed forest to single-tree tundra, to climate warming. Using information on stand structure, tree age, and seed quality and quantity from seven sites, we investigate the effects of intra-specific competition and seed availability on the specific impact of recent climate warming on larch stands. Field data show that tree density is highest in the forest-tundra, and average tree size decreases from closed forest to single-tree tundra. Age-structure analyses indicate that the trees in the closed forest and forest-tundra have been present for at least ~240 yr. At all sites except the most southerly ones, past establishment is positively correlated with the regional temperature increase. In the single-tree tundra, however, a change in growth form from krummholz to erect trees, beginning ~130 yr ago, rather than establishment date has been recorded. Seed mass decreases from south to north, while seed quantity increases. Simulations with LAVESI (Larix Vegetation Simulator) further suggest that relative density changes strongly in response to a warming signal in the forest-tundra, while intra-specific competition limits densification in the closed forest and seed limitation hinders densification in the single-tree tundra. We find striking differences in the strength and timing of responses to recent climate warming. While forest-tundra stands have recently densified, recruitment is almost non-existent at the southern and northern ends of the ecotone due to autecological processes.
Palaeo-treelines may therefore be inappropriate to infer past temperature changes at a fine scale. Moreover, a lagged treeline response to past warming will, via feedback mechanisms, influence climate change in the future.
Retrogressive thaw slumps (RTSs) are among the most active landforms in the Arctic; their number has increased significantly over the past decades. While the processes initiating discrete RTSs are well identified, the major terrain controls on the development of coastal RTSs at a regional scale are not yet defined. Our research reveals the main geomorphic factors that determine the development of RTSs along a 238 km segment of the Yukon Coast, Canada. We (1) show the current extent of RTSs, (2) ascertain the factors controlling their activity and initiation, and (3) explain the spatial differences in the density and areal coverage of RTSs. We mapped and classified 287 RTSs using high-resolution satellite images acquired in 2011. We highlighted the main terrain controls on their development using a univariate regression tree model. Coastal geomorphology influenced both the activity and initiation of RTSs: active RTSs and RTSs initiated after 1972 occurred primarily on terrains with slope angles greater than 3.9 degrees and 5.9 degrees, respectively. The density and areal coverage of RTSs were constrained by the volume and thickness of massive ice bodies. Differences in rates of coastal change along the coast did not affect the model. We infer that rates of coastal change averaged over a 39-year period are unable to reflect the complex relationship between RTSs and coastline dynamics. We emphasize the need for large-scale studies of RTSs to evaluate their impact on the ecosystem and to measure their contribution to the global carbon budget.
Plain Language Summary: Retrogressive thaw slumps (henceforth, slumps) are a type of landslide that occurs when permafrost thaws. Slumps are active landforms: they develop quickly and extend over several hectares. Satellite imagery makes it possible to map such slumps over large areas. Our research shows where slumps develop along a 238 km segment of the Yukon Coast in Canada and explains which environments are most suitable for slump occurrence.
We found that active and newly developed slumps were triggered where coastal slopes were greater than 3.9 degrees and 5.9 degrees, respectively. We explain that coastal erosion influences the development of slumps by modifying coastal slopes. We found that the highest density of slumps as well as the largest slumps occurred on terrains with high amounts of ice bodies in the ground. This study provides tools to better identify areas in the Arctic that are prone to slump development.
Radiocarbon and optically stimulated luminescence dating of sediments from Lake Karakul, Tajikistan
(2017)
Lake Karakul in the eastern Pamirs is a large and closed-basin lake in a partly glaciated catchment. Two parallel sediment cores were collected from 12 m water depth. The cores were correlated using XRF analysis and dated using radiocarbon and OSL techniques. The age results of the two dating methods are generally in agreement. The correlated composite core of 12.26 m length represents continuous accumulation of sediments in the lake basin since 31 ka. The lake reservoir effect (LRE) remained relatively constant over this period. High sediment accumulation rates (SedARs) were recorded before 23 ka and after 6.5 ka. The relatively close position of the coring location near the eastern shore of the lake implies that high SedARs resulted from low lake levels. Thus, high SedARs and lower lake levels before 23 ka probably reflect cold and dry climate conditions that inhibited the arrival of moist air at high elevation in the eastern Pamirs. Low lake levels after 6.5 ka were probably caused by declining temperatures after the warmer early Holocene, which had caused a reduction in water resources stored as snow, ice and frozen ground in the catchment. Low SedARs during 23-6.5 ka suggest increased lake levels in Lake Karakul. A short-lived increase of SedARs at 15 ka probably corresponds to the rapid melting of glaciers in the Karakul catchment during Greenland Interstadial 1e, shortly after glaciers in the catchment had reached their maximum extents. The sediment cores from Lake Karakul represent an important climate archive with a robust chronology for the last glacial-interglacial cycle from Central Asia.
We analyzed chlorophyll-a and Colored Dissolved Organic Matter (CDOM) dynamics from field measurements and assessed the potential of multispectral satellite data for retrieving water-quality parameters in three small surface reservoirs in the Brazilian semiarid region. More specifically, this work comprises: (i) analysis of Chl-a and trophic dynamics; (ii) characterization of CDOM; (iii) estimation of Chl-a and CDOM from OLI/Landsat-8 and RapidEye imagery. The monitoring lasted 20 months within a multi-year drought, which contributed to water-quality deterioration. Chl-a and trophic state analysis showed a highly eutrophic status for the perennial reservoir during the entire study period, while the non-perennial reservoirs ranged from oligotrophic to eutrophic, with changes associated with the first events of the rainy season. CDOM characterization suggests that the perennial reservoir is mostly influenced by autochthonous sources, while allochthonous sources dominate the non-perennial ones. Spectral-group classification assigned the perennial reservoir as a CDOM-moderate and highly eutrophic reservoir, whereas the non-perennial ones were assigned as CDOM-rich and oligotrophic-dystrophic reservoirs. The remote sensing initiative was partially successful: Chl-a was best modelled using RapidEye for the perennial reservoir, whereas CDOM performed best with Landsat-8 for the non-perennial reservoirs. This investigation showed potential for retrieving water-quality parameters in dry areas with small reservoirs.
Extreme weather events can pervasively influence ecosystems. Observations in lakes indicate that severe storms in particular can have pronounced ecosystem-scale consequences, but the underlying mechanisms have not been rigorously assessed in experiments. One major effect of storms on lakes is the redistribution of mineral resources and plankton communities as a result of abrupt thermocline deepening. We aimed at elucidating the importance of this effect by mimicking in replicated large enclosures (each 9 m in diameter, ca. 20 m deep, ca. 1300 m³ in volume) a mixing event caused by a severe natural storm that was previously observed in a deep clear-water lake. Metabolic rates were derived from diel changes in vertical profiles of dissolved oxygen concentrations using a Bayesian modelling approach, based on high-frequency measurements. Experimental thermocline deepening stimulated daily gross primary production (GPP) in surface waters by an average of 63% for > 4 weeks even though thermal stratification re-established within 5 days. Ecosystem respiration (ER) was tightly coupled to GPP, exceeding that in control enclosures by 53% over the same period. As GPP responded more strongly than ER, net ecosystem productivity (NEP) of the entire water column was also increased. These protracted increases in ecosystem metabolism and autotrophy were driven by a proliferation of inedible filamentous cyanobacteria released from light and nutrient limitation after they were entrained from below the thermocline into the surface water. Thus, thermocline deepening by a single severe storm can induce prolonged responses of lake ecosystem metabolism independent of other storm-induced effects, such as inputs of terrestrial materials by increased catchment run-off.
This highlights that future shifts in frequency, severity or timing of storms are an important component of climate change, whose impacts on lake thermal structure will superimpose upon climate trends to influence algal dynamics and organic matter cycling in clear-water lakes. Keywords: climate variability, ecosystem productivity, extreme events, gross primary production, mesocosm, respiration, stratified lakes
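The study derives GPP, ER and NEP from diel dissolved-oxygen profiles with a Bayesian model. As a rough illustration of the underlying bookkeeping only, here is a minimal free-water sketch with hypothetical oxygen values; it ignores atmospheric gas exchange and mixing, which the study's model does account for.

```python
def metabolism_from_diel_o2(o2_dawn, o2_dusk, o2_next_dawn,
                            day_hours=12.0, night_hours=12.0):
    """Return (GPP, ER, NEP) in mg O2 per litre per day.

    Simplifying assumptions: respiration runs at the same rate day
    and night, and air-water gas exchange is negligible.
    """
    # Nighttime O2 decline -> hourly ecosystem respiration rate
    er_rate = (o2_dusk - o2_next_dawn) / night_hours   # mg O2 / L / h
    er = er_rate * 24.0                                # daily ER
    # Daytime net O2 change = GPP minus daytime respiration
    net_day = o2_dusk - o2_dawn
    gpp = net_day + er_rate * day_hours
    nep = gpp - er
    return gpp, er, nep

# Hypothetical dawn/dusk/next-dawn concentrations (mg O2 / L)
gpp, er, nep = metabolism_from_diel_o2(8.0, 10.4, 8.2)
```

With these invented numbers the water column is slightly autotrophic (GPP exceeds ER), mirroring the positive NEP reported for the treated enclosures.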
Hypolimnetic oxygen demand in lakes is often assumed to be driven mainly by sediment microbial processes, while the role of Chaoborus larvae, which are prevalent in eutrophic lakes with hypoxic to anoxic bottoms, has been overlooked. We experimentally measured the respiration rates of C. flavicans at different temperatures, yielding a Q(10) of 1.44-1.71 and a respiratory quotient of 0.84-0.98. Applying the experimental data in a system analytical approach, we showed that migrating Chaoborus larvae can significantly add to the water column and sediment oxygen demand, and contribute to the observed linear relationship between water column respiration and depth. The estimated phosphorus excretion by Chaoborus in sediment is comparable in magnitude to the required phosphorus loading for eutrophication. Migrating Chaoborus larvae thereby essentially trap nutrients between the water column and the sediment, and this continuous internal loading of nutrients would delay lake remediation even when external inputs are stopped.
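A Q(10) of 1.44-1.71 means that the larval respiration rate rises by roughly 44-71% for every 10 °C of warming. The standard Q10 scaling rule can be sketched as follows (illustrative values, not the study's data):

```python
def respiration_at_temp(r_ref, t_ref, t, q10):
    """Scale a respiration rate measured at t_ref (degrees C)
    to temperature t using the exponential Q10 rule."""
    return r_ref * q10 ** ((t - t_ref) / 10.0)

# e.g. a rate of 1.0 (arbitrary units) measured at 10 degrees C,
# scaled to 20 degrees C with a Q10 of 1.6
r20 = respiration_at_temp(1.0, 10.0, 20.0, 1.6)
```

Over one full 10 °C interval the rate is multiplied by exactly Q10; intermediate temperatures interpolate exponentially.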
The Large and Small Magellanic Clouds are unique local laboratories for studying the formation and evolution of small galaxies in exquisite detail. The Survey of the MAgellanic Stellar History (SMASH) is an NOAO community Dark Energy Camera (DECam) survey of the Clouds mapping 480 deg² (distributed over ~2400 square degrees at ~20% filling factor) to ~24th mag in ugriz. The primary goals of SMASH are to identify low surface brightness stellar populations associated with the stellar halos and tidal debris of the Clouds, and to derive spatially resolved star formation histories. Here, we present a summary of the survey, its data reduction, and a description of the first public Data Release (DR1). The SMASH DECam data have been reduced with a combination of the NOAO Community Pipeline, the PHOTRED automated point-spread-function photometry pipeline, and custom calibration software. The astrometric precision is ~15 mas and the accuracy is ~2 mas with respect to the Gaia reference frame. The photometric precision is ~0.5%-0.7% in griz and ~1% in u, with a calibration accuracy of ~1.3% in all bands. The median 5σ point source depths in ugriz are 23.9, 24.8, 24.5, 24.2, and 23.5 mag. The SMASH data have already been used to discover the Hydra II Milky Way satellite, the SMASH 1 old globular cluster likely associated with the LMC, and extended stellar populations around the LMC out to R ~ 18.4 kpc. SMASH DR1 contains measurements of ~100 million objects distributed in 61 fields. A prototype version of the NOAO Data Lab provides data access and exploration tools.
The large variety of atmospheric circulation systems affecting the eastern Asian climate is reflected by the complex Asian vegetation distribution. Particularly in the transition zones of these circulation systems, vegetation is supposed to be very sensitive to climate change. Since proxy records are scarce, a mechanistic understanding of the past spatio-temporal climate-vegetation relationship has hitherto been lacking. To assess the Holocene vegetation change and to obtain an ensemble of potential mid-Holocene biome distributions for eastern Asia, we forced the diagnostic biome model BIOME4 with climate anomalies of different transient Holocene climate simulations performed in coupled atmosphere-ocean(-vegetation) models. The simulated biome changes are compared with pollen-based biome records for different key regions. In all simulations, substantial biome shifts during the last 6000 years are confined to the high northern latitudes and the monsoon-westerly wind transition zone, but the temporal evolution and amplitude of change strongly depend on the climate forcing. Large parts of the southern tundra are replaced by taiga during the mid-Holocene due to a warmer growing season, and the boreal treeline in northern Asia is shifted northward by approx. 4 degrees in the ensemble mean, ranging from 1.5 to 6 degrees in the individual simulations. This simulated treeline shift is in agreement with pollen-based reconstructions from northern Siberia. The desert fraction in the transition zone is reduced by 21% during the mid-Holocene compared to pre-industrial due to enhanced precipitation. The desert-steppe margin is shifted westward by 5 degrees (1-9 degrees in the individual simulations). The forest biomes are expanded north-westward by 2 degrees, ranging from 0 to 4 degrees in the single simulations. These results corroborate pollen-based reconstructions indicating an extended forest area in north-central China during the mid-Holocene.
According to the model, the forest-to-non-forest and steppe-to-desert changes in the climate transition zones have been neither spatially uniform nor linear since the mid-Holocene.
Temporal and spatial stability of the vegetation-climate relationship is a basic ecological assumption for pollen-based quantitative inferences of past climate change and for predicting future vegetation. We explore this assumption for the Holocene in eastern continental Asia (China, Mongolia). Boosted regression trees (BRT) between fossil pollen taxa percentages (Abies, Artemisia, Betula, Chenopodiaceae, Cyperaceae, Ephedra, Picea, Pinus, Poaceae and Quercus) and climate model outputs of mean annual precipitation (P-ann) and mean temperature of the warmest month (Mt(wa)) for 9 and 6 ka (ka = thousand years before present) were set up and the results compared to those obtained from relating modern pollen to modern climate. Overall, our results reveal only slight temporal differences in the pollen-climate relationships. Our analyses suggest that the importance of P-ann compared with Mt(wa) for taxa distribution is higher today than it was at 6 ka and 9 ka. In particular, the relevance of P-ann for Picea and Pinus increases and has become the main determinant. This change in the climate-tree pollen relationship parallels a widespread tree pollen decrease in north-central China and on the eastern Tibetan Plateau. We assume that this is at least partly related to a vegetation-climate disequilibrium originating from human impact. Increased atmospheric CO2 concentration may have permitted the expansion of moisture-loving herb taxa (Cyperaceae and Poaceae) into arid/semi-arid areas during the late Holocene. We furthermore find that the pollen-climate relationship between north-central China and the eastern Tibetan Plateau is generally similar, but that regional differences are larger than temporal differences. In summary, vegetation-climate relationships in China are generally stable in space and time, and pollen-based climate reconstructions can be applied to the Holocene. Regional differences imply that the calibration set should be restricted spatially.
Background Low back pain (LBP) is a common pain syndrome in athletes, responsible for 28% of missed training days/year. Psychosocial factors contribute to chronic pain development. This study aims to investigate the transferability of psychosocial screening tools developed in the general population to athletes and to define athlete-specific thresholds.
Methods Data from a prospective multicentre study on LBP were collected at baseline and 1-year follow-up (n=52 athletes, n=289 recreational athletes and n=246 non-athletes). Pain was assessed using the Chronic Pain Grade questionnaire. The psychosocial Risk Stratification Index (RSI) was used to obtain prognostic information regarding the risk of chronic LBP (CLBP). The individual psychosocial risk profile was obtained with the Risk Prevention Index – Social (RPI-S). Differences between groups were calculated using general linear models and planned contrasts. Discrimination thresholds for athletes were defined with receiver operating characteristic (ROC) curves.
Results Athletes and recreational athletes showed significantly lower psychosocial risk profiles and prognostic risk for CLBP than non-athletes. ROC curves suggested that discrimination thresholds for athletes differ from those for non-athletes. Both screenings demonstrated very good sensitivity (RSI: 100%; RPI-S: 75%–100%) and specificity (RSI: 76%–93%; RPI-S: 71%–93%). RSI revealed two risk classes for pain intensity (area under the curve (AUC) 0.92 (95% CI 0.85 to 1.0)) and pain disability (AUC 0.88 (95% CI 0.71 to 1.0)).
Conclusions Both screening tools can be used for athletes. Athlete-specific thresholds will improve physicians’ decision making and allow stratified treatment and prevention.
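Athlete-specific cutoffs of the kind derived above are typically read off the ROC curve, for example at the point maximizing Youden's J = sensitivity + specificity - 1. A minimal sketch with made-up screening scores (not the study's data):

```python
def youden_threshold(scores, labels):
    """Return (threshold, J) maximizing Youden's J over all
    candidate cutoffs; labels are 1 (case) or 0 (control),
    and a score >= threshold counts as screen-positive."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented risk scores and chronicity outcomes
t, j = youden_threshold([0.1, 0.2, 0.4, 0.6, 0.8, 0.9],
                        [0, 0, 0, 1, 1, 1])
```

With perfectly separated groups, as in this toy example, the cutoff lands between the two clusters and J reaches its maximum of 1.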
Background
Total hip or knee replacement is one of the most frequently performed surgical procedures. Physical rehabilitation following total hip or knee replacement is an essential part of the therapy to improve functional outcomes and quality of life. After discharge from inpatient rehabilitation, a subsequent postoperative exercise therapy is needed to maintain functional mobility. Telerehabilitation may be a potential innovative treatment approach. We aim to investigate the superiority of an interactive telerehabilitation intervention for patients after total hip or knee replacement, in comparison to usual care, regarding physical performance, functional mobility, quality of life and pain.
Methods/design
This is an open, randomized controlled, multicenter superiority study with two prospective arms. One hundred and ten eligible and consenting participants with total knee or hip replacement will be recruited at admission to subsequent inpatient rehabilitation. After comprehensive, 3-week, inpatient rehabilitation, the intervention group performs a 3-month, interactive, home-based exercise training with a telerehabilitation system. For this purpose, the physiotherapist creates an individual training plan out of 38 different strength and balance exercises which were implemented in the system. Data about the quality and frequency of training are transmitted to the physiotherapist for further adjustment. Communication between patient and physiotherapist is possible with the system. The control group receives voluntary, usual aftercare programs. Baseline assessments are conducted after discharge from rehabilitation; final assessments follow 3 months later. The primary outcome is the difference in improvement between intervention and control group in 6-minute walk distance after 3 months. Secondary outcomes include differences in the Timed Up and Go Test, the Five-Times-Sit-to-Stand Test, the Stair Ascend Test, the Short-Form 36, the Western Ontario and McMaster Universities Osteoarthritis Index, the International Physical Activity Questionnaire, and postural control as well as gait and kinematic parameters of the lower limbs. Baseline-adjusted analysis of covariance models will be used to test for group differences in the primary and secondary endpoints.
Discussion
We expect the intervention group to benefit from the interactive, home-based exercise training in many respects represented by the study endpoints. If successful, this approach could be used to enhance the access to aftercare programs, especially in structurally weak areas.
Introduction: Carbohydrate (CHO) and fat are the main substrates to fuel prolonged endurance exercise, each having its oxidation patterns regulated by several factors such as intensity, duration and mode of the activity, dietary intake pattern, muscle glycogen concentrations, gender and training status. Exercising at intensities where fat oxidation rates are high has been shown to induce metabolic benefits in recreational and health-oriented sportsmen. The exercise intensity (Fatpeak) eliciting peak fat oxidation rates is therefore of particular interest when aiming to prescribe exercise for the purpose of fat oxidation and related metabolic effects. Although running and walking are feasible and popular among the target population, no reliable protocols are available to assess Fatpeak as well as its actual velocity (VPFO) during treadmill ergometry. Moreover, to date, it remains unclear how pre-exercise CHO availability modulates the oxidative regulation of substrates when exercise is conducted at the intensity where the individual anaerobic threshold (IAT) is located (VIAT). The IAT is a metabolic marker representing the upper limit at which constant-load endurance exercise can be sustained, and is commonly used to guide athletic training and in performance diagnostics. The research objectives of the current thesis were therefore: 1) to assess the reliability and day-to-day variability of VPFO and Fatpeak during treadmill ergometry running; 2) to assess the impact of high-CHO (HC) vs. low-CHO (LC) diets (where on the LC day a combination of a low-CHO diet and a glycogen-depleting exercise was implemented) on the oxidative regulation of CHO and fat while exercise is conducted at VIAT. Methods: Research objective 1: Sixteen recreational athletes (f=7, m=9; 25 ± 3 y; 1.76 ± 0.09 m; 68.3 ± 13.7 kg; 23.1 ± 2.9 kg/m²) performed 2 different running protocols on 3 different days with standardized nutrition the day before testing.
At day 1, peak oxygen uptake (VO2peak) and the velocities at the aerobic threshold (VLT) and respiratory exchange ratio (RER) of 1.00 (VRER) were assessed. At days 2 and 3, subjects ran an identical submaximal incremental test (Fat-peak test) composed of a 10 min warm-up (70% VLT) followed by 5 stages of 6 min with equal increments (stage 1 = VLT, stage 5 = VRER). Breath-by-breath gas exchange data was measured continuously and used to determine fat oxidation rates. A third order polynomial function was used to identify VPFO and subsequently Fatpeak. The reproducibility and variability of variables was verified with an intraclass correlation coefficient (ICC), Pearson’s correlation coefficient, coefficient of variation (CV) and the mean differences (bias) ± 95% limits of agreement (LoA). Research objective 2: Sixteen recreational runners (m=8, f=8; 28 ± 3 y; 1.76 ± 0.09 m; 72 ± 13 kg; 23 ± 2 kg/m²) performed 3 different running protocols, each allocated on a different day. At day 1, a maximal stepwise incremental test was implemented to assess the IAT and VIAT. During days 2 and 3, participants ran a constant-pace bout (30 min) at VIAT that was combined with randomly assigned HC (7 g/kg/d) or LC (3 g/kg/d) diets for the 24 h before testing. Breath-by-breath gas exchange data was measured continuously and used to determine substrate oxidation. Dietary data and differences in substrate oxidation were analyzed with a paired t-test. A two-way ANOVA tested the diet × gender interaction (α = 0.05). Results: Research objective 1: ICC, Pearson’s correlation and CV for VPFO and Fatpeak were 0.98, 0.97, 5.0%; and 0.90, 0.81, 7.0%, respectively. Bias ± 95% LoA was -0.3 ± 0.9 km/h for VPFO and -2 ± 8% of VO2peak for Fatpeak. Research objective 2: Overall, the IAT and VIAT were 2.74 ± 0.39 mmol/l and 11.1 ± 1.4 km/h, respectively. CHO oxidation was 3.45 ± 0.08 and 2.90 ± 0.07 g/min during HC and LC bouts respectively (P < 0.05).
Likewise, fat oxidation was 0.13 ± 0.03 and 0.36 ± 0.03 g/min (P < 0.05). Females had 14% (P < 0.05) and 12% (P > 0.05) greater fat oxidation compared to males during HC and LC bouts, respectively. Conclusions: Research objective 1: In summary, relative and absolute reliability indicators for VPFO and Fatpeak were found to be excellent. The observed LoA may now serve as a basis for future training prescriptions, although fat oxidation rates at prolonged exercise bouts at this intensity still need to be investigated. Research objective 2: Twenty-four hours of high CHO consumption results in concurrently higher CHO oxidation rates and overall utilization, whereas maintaining a low systemic CHO availability significantly increases the contribution of fat to the overall energy metabolism. The observed gender differences underline the necessity of individualized dietary planning before exercising at intensities associated with performance exercise. Ultimately, future research should establish how these findings can be extrapolated to training and competitive situations and with that provide trainers and nutritionists with improved data to derive training prescriptions.
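The bias ± 95% limits of agreement reported for VPFO and Fatpeak follow the standard Bland-Altman test-retest calculation: the mean of the day-to-day differences plus or minus 1.96 times their standard deviation. A minimal sketch with invented day-to-day velocities (not the thesis data):

```python
import statistics

def limits_of_agreement(day1, day2):
    """Bland-Altman bias and 95% limits of agreement
    for paired test-retest measurements."""
    diffs = [a - b for a, b in zip(day1, day2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)   # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical VPFO values (km/h) from two test days
bias, (lo, hi) = limits_of_agreement([9.0, 10.0, 11.0, 12.0],
                                     [9.2, 9.9, 11.3, 12.2])
```

If a new measurement pair differs by more than these limits, the change likely exceeds day-to-day measurement noise, which is how such LoA support training prescriptions.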
On the basis of many years of personal experience the paper describes Buddhist meditation (Zazen, Vipassanā) as a mystical practice. After a short discussion of the role of some central concepts (longing, suffering, and love) in Buddhism, William James’ concept of religious experience is used to explain the goal of meditators as the achievement of a special kind of an experience of this kind. Systematically, its main point is to explain the difference between (on the one hand) a craving for pleasant ‘mental events’ in the sense of short-term moods, and (on the other) the long-term project of achieving a deep change in one’s attitude to life as a whole, a change that allows the acceptance of suffering and death. The last part argues that there is no reason to call the discussed practice irrational in a negative sense. Changes of attitude of the discussed kind cannot be brought about by argument alone. Therefore, a considered use of age-old practices like meditation should be seen as an addition, not as an undermining of reason.
5-Jahres-Verlauf der LRS
(2017)
Objective: This study examines the course of children with reading and spelling disorder (LRS) over a good five years, taking into account the influence of the affected children's sex. In addition, the effects of LRS on later written-language skills and school success are examined. Method: Initially, 995 students between 6 and 16 years of age were assessed. A subset of these children was re-examined after 43 and after 63 months. LRS was diagnosed when, for reading or spelling, the double discrepancy criterion of 1.5 standard deviations relative to nonverbal intelligence and to the class-level mean was met, and no intellectual impairment was present. Results: LRS shows a high persistence of almost 70% over a period of 63 months. The 5-year course of mean reading and spelling performance was not influenced by sex. Despite average intelligence, students with LRS lagged at least one standard deviation behind children of average intelligence and about 0.5 standard deviation units behind children of below-average intelligence in written language. The school success of students with LRS resembled that of children with below-average intelligence and was markedly worse than that of control children of average intelligence. Conclusions: LRS constitutes a considerable developmental risk, which calls for early diagnostic and therapeutic measures. For this, reliable and generally accepted diagnostic criteria that yield sensible prevalence rates are essential.
Novel metal-doped bacteriostatic hybrid clay composites for point-of-use disinfection of water
(2017)
This study reports the facile microwave-assisted thermal preparation of novel metal-doped hybrid clay composite adsorbents consisting of kaolinite clay, Carica papaya seeds and/or plantain peels (Musa paradisiaca) and ZnCl2. Fourier transform IR spectroscopy, X-ray diffraction, scanning electron microscopy and Brunauer-Emmett-Teller (BET) analysis were employed to characterize these composite adsorbents. The physicochemical analysis of these composites suggests that they act as bacteriostatic rather than bactericidal agents. This bacteriostatic action is induced by the ZnO phase in the composites, whose amount correlates with the efficacy of the composite. The composite prepared with papaya seeds (PS-HYCA) provides the best disinfection efficacy (compared with the composite prepared with Musa paradisiaca peels, PP-HYCA) against gram-negative enteric bacteria, with breakthrough times of 400 and 700 min for the removal of 1.5 × 10^6 cfu/mL S. typhi and V. cholerae from water, respectively. At 10^3 cfu/mL of each bacterium in solution, 2 g of both composite adsorbents kept the levels of the bacteria in effluent solutions at zero for up to 24 h. Steam regeneration of 2 g of the bacteria-loaded Carica papaya composite adsorbent shows a loss of ca. 31% of its capacity even after the 3rd regeneration cycle of 25 h of service time. The composite adsorbent prepared with Carica papaya seeds will be useful for developing simple point-of-use water treatment systems for water disinfection. This composite adsorbent performs comparatively well, sustains relatively long hydraulic contact times and is expected to reduce reliance on energy-intensive traditional treatment processes.
New hybrid clay materials with good affinity for phosphate ions were developed from a combination of biomass (Carica papaya seeds, PS, and Musa paradisiaca plantain peels, PP), ZnCl2 and kaolinite clay to produce the iPS-HYCA and iPP-HYCA composite adsorbents, respectively. Functionalization of these adsorbents with an organosilane produced the NPS-HYCA and NPP-HYCA composite adsorbents. The pH(pzc) values of the adsorbents were 7.83, 6.91, 7.66 and 6.55 for iPS-HYCA, NPS-HYCA, iPP-HYCA and NPP-HYCA, respectively. The Brouers-Sotolongo isotherm model best predicted the phosphate adsorption capacities of the iPP-HYCA, iPS-HYCA, NPP-HYCA and NPS-HYCA composite adsorbents. When compared with some commercial resins, the amino-functionalized adsorbents had better adsorption capacities. Furthermore, the amino-functionalized adsorbents showed improved adsorption capacity and rate of phosphate uptake (as much as 40-fold), and retained 94% (NPS-HYCA) and 84.1% (NPP-HYCA) efficiency for phosphate adsorption after 5 adsorption-desorption cycles (96 h of adsorption time with 100 mg/L of phosphate ions), as against 37.5% (iPS-HYCA) and 35% (iPP-HYCA) under similar conditions. Desorption of phosphate ions attained equilibrium within 25 min. These new amino-functionalized hybrid clay composite adsorbents, prepared by simple and sustainable means, have potential for the efficient capture of phosphate ions from aqueous solution. They are quickly recovered from aqueous solution and non-biodegradable (unlike many biosorbents), with potential to replace expensive adsorbents in the future. They have the further advantage of being useful in the recovery of phosphate for use in agriculture, which could positively impact the global food security programme.
Battle of plates
(2017)
Objective: Approach-avoidance training (AAT) is a promising approach in obesity treatment. The present study examines whether an AAT is feasible and able to influence approach tendencies in children and adolescents, comparing implicit and explicit training approaches. Design/Setting/Subjects: Fifty-nine overweight children and adolescents (aged 8-16 years; twenty-six boys) participated in an AAT for food cues, learning to reject snack items and approach vegetable items. Reaction times in the AAT and an implicit association test (IAT) were assessed pre- and post-intervention. Results: A significant increase in the AAT compatibility scores with a large effect (η² = 0.18) was found. No differences between the implicit and explicit training approaches and no change in the IAT scores were observed. Conclusions: Automatic tendencies in children can be trained, too. The implementation of AAT in the treatment of obesity might support the modification of unhealthy nutrition behaviour patterns. Further data from randomized controlled clinical trials are needed.
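AAT compatibility scores of the kind analysed above are conventionally reaction-time differences. One common convention, used here only as an illustrative assumption rather than this study's exact scoring, takes the median avoid RT minus the median approach RT per cue category:

```python
import statistics

def aat_compatibility(rt_avoid_ms, rt_approach_ms):
    """Compatibility score for one cue category (e.g. snacks):
    median avoid RT minus median approach RT, in milliseconds.
    Positive values indicate a relative approach tendency
    (approaching is faster than avoiding)."""
    return (statistics.median(rt_avoid_ms)
            - statistics.median(rt_approach_ms))

# Hypothetical reaction times (ms) for vegetable cues
score = aat_compatibility([620, 640, 660], [580, 600, 610])
```

Training that strengthens approach toward vegetables would then show up as an increase in this score from pre- to post-intervention.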
X-ray reflectivity measurements of femtosecond laser-induced transient gratings (TG) are applied to demonstrate the spatiotemporal coherent control of thermally induced surface deformations on ultrafast time scales. Using grazing incidence x-ray diffraction we unambiguously measure the amplitude of transient surface deformations with sub-angstrom resolution. Understanding the dynamics of femtosecond TG excitations in terms of a superposition of acoustic and thermal gratings makes it possible to develop new ways of coherent control in x-ray diffraction experiments. Being the dominant source of the TG signal, the long-living thermal grating with spatial period Λ can be canceled by a second, time-delayed TG excitation shifted by Λ/2. The ultimate speed limits of such an ultrafast x-ray shutter are inferred from the detailed analysis of thermal and acoustic dynamics in TG experiments.
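The Λ/2 cancellation follows directly from superposing two equal-amplitude sinusoidal thermal gratings shifted by half a spatial period:

```latex
T_1(x) = \Delta T \cos\!\left(\frac{2\pi x}{\Lambda}\right), \qquad
T_2(x) = \Delta T \cos\!\left(\frac{2\pi (x-\Lambda/2)}{\Lambda}\right)
       = -\Delta T \cos\!\left(\frac{2\pi x}{\Lambda}\right),
\quad\Rightarrow\quad T_1(x) + T_2(x) = 0 .
```

The shifted excitation thus erases the standing thermal modulation; in practice the achievable shutter speed is limited by the acoustic dynamics discussed in the abstract.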
We present time-resolved x-ray reflectivity measurements on laser excited coherent and incoherent surface deformations of thin metallic films. Based on a kinematical diffraction model, we derive the surface amplitude from the diffracted x-ray intensity and resolve transient surface excursions with sub-angstrom spatial precision and 70 ps temporal resolution. The analysis allows for decomposition of the surface amplitude into multiple coherent acoustic modes and a substantial contribution from incoherent phonons which constitute the sample heating.
Depressive symptoms have been related to anxious rejection sensitivity, but little is known about relations with angry rejection sensitivity and justice sensitivity. We measured rejection sensitivity, justice sensitivity, and depressive symptoms in 1,665 9- to 21-year-olds at two points of measurement. Participants with high T1 levels of depressive symptoms reported higher anxious and angry rejection sensitivity and higher justice sensitivity than controls at T1 and T2. T1 rejection sensitivity, but not justice sensitivity, predicted T2 depressive symptoms; high victim justice sensitivity, however, added to the stabilization of depressive symptoms. T1 depressive symptoms positively predicted T2 anxious and angry rejection sensitivity and victim justice sensitivity. Hence, sensitivity toward negative social cues may be cause and consequence of depressive symptoms and requires consideration in cognitive-behavioral treatment of depression.
We have analyzed the recently developed pan-European strong motion database, RESORCE-2012: spectral parameters such as stress drop (stress parameter, Δσ), anelastic attenuation (Q), near-surface attenuation (κ0) and site amplification have been estimated from observed strong motion recordings. The selected dataset exhibits a bilinear distance-dependent Q model with an average κ0 value of 0.0308 s. Strong regional variations in anelastic attenuation were also observed: frequency-independent Q0 values of 1462 and 601 were estimated for Turkish and Italian data, respectively. Due to the strong coupling between Q and κ0, the regional variations in Q have a strong impact on the estimation of the near-surface attenuation κ0, which was estimated as 0.0457 and 0.0261 s for Turkey and Italy, respectively. Furthermore, a detailed analysis of the variability in estimated κ0 revealed significant within-station variability. The linear site amplification factors were constrained from residual analysis at each station and site-class type. Using the regional Q0 model and a site-class-specific κ0, seismic moments (M0) and source corner frequencies (fc) were estimated from the site-corrected empirical Fourier spectra. Δσ did not exhibit magnitude dependence. The median Δσ value was obtained as 5.75 and 5.65 MPa from inverted and database magnitudes, respectively. A comparison of response spectra from the stochastic model (derived herein) with those from (regional) ground motion prediction equations (GMPEs) suggests that the presented seismological parameters can be used to represent the corresponding seismological attributes of the regional GMPEs in a host-to-target adjustment framework. The analysis presented herein can be considered an update of that undertaken for the previous Euro-Mediterranean strong motion database presented by Edwards and Fäh (Geophys J Int 194(2):1190-1202, 2013a).
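Stress drop estimates of this kind are commonly obtained from the seismic moment and corner frequency via the Brune (1970) source model. The following is a sketch of the standard textbook relation, not the paper's exact inversion:

```python
def brune_stress_drop(m0, fc, beta):
    """Brune-model stress drop [Pa] from seismic moment m0 [N m],
    corner frequency fc [Hz] and shear-wave velocity beta [m/s].

    Source radius: r = 0.37 * beta / fc
    Stress drop:   delta_sigma = (7/16) * m0 / r**3
    """
    r = 0.37 * beta / fc
    return 7.0 * m0 / (16.0 * r ** 3)

# e.g. M0 = 1e16 N m (roughly Mw 4.6), fc = 1 Hz, beta = 3.5 km/s
ds = brune_stress_drop(1e16, 1.0, 3500.0)   # about 2 MPa
```

The cubic dependence on fc is why regional trade-offs between Q, κ0 and the estimated corner frequency propagate so strongly into Δσ.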
Receiver functions (RF) have been used for several decades to study structures beneath seismic stations. Although most available stations are deployed on shore, the number of ocean bottom station (OBS) experiments has increased in recent years. Almost all OBSs have to deal with higher noise levels and a limited deployment time (≈1 year), resulting in a small number of usable records of teleseismic earthquakes. Here we use OBSs deployed as a mid-aperture array in the deep ocean (4.5-5.5 km water depth) of the eastern mid-Atlantic. We use evaluation criteria for OBS data and beamforming to enhance the quality of the RFs. Although some stations show reverberations caused by sedimentary cover, we are able to identify the Moho signal, indicating a normal thickness (5-8 km) of oceanic crust. Observations at single stations with thin sediments (300-400 m) indicate that a probable sharp lithosphere-asthenosphere boundary (LAB) might exist at a depth of ≈70-80 km, which is in line with LAB depth estimates for similar lithospheric ages in the Pacific. The mantle discontinuities at ≈410 km and ≈660 km are clearly identifiable. Their delay times are in agreement with PREM. Overall, the usage of beam-formed earthquake recordings for OBS RF analysis is an excellent way to increase the signal quality and the number of usable events.
Hybrid lead halide perovskites are introduced as charge generation layers (CGLs) for the accurate determination of electron mobilities in thin organic semiconductors. Such hybrid perovskites have become a widely studied photovoltaic material in their own right, for their high efficiencies, ease of processing from solution, strong absorption, and efficient photogeneration of charge. Time-of-flight (ToF) measurements on bilayer samples consisting of the perovskite CGL and an organic semiconductor layer of different thickness are shown to be determined by the carrier motion through the organic material, consistent with the much higher charge carrier mobility in the perovskite. Together with the efficient photon-to-electron conversion in the perovskite, this high mobility imbalance enables electron-only mobility measurement on relatively thin application-relevant organic films, which would not be possible with traditional ToF measurements. This architecture enables electron-selective mobility measurements in single components as well as bulk-heterojunction films as demonstrated in the prototypical polymer/fullerene blends. To further demonstrate the potential of this approach, electron mobilities were measured as a function of electric field and temperature in an only 127 nm thick layer of a prototypical electron-transporting perylene diimide-based polymer, and found to be consistent with an exponential trap distribution of ca. 60 meV. Our study furthermore highlights the importance of high mobility charge transporting layers when designing perovskite solar cells.
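In a ToF measurement of the kind described, the drift mobility follows from the carrier transit time through the organic layer; a minimal sketch with hypothetical numbers (not data from the paper):

```python
def tof_mobility(thickness_m, voltage_v, transit_time_s):
    """Time-of-flight drift mobility mu = d**2 / (V * t_tr), assuming a
    uniform field E = V/d across the layer (SI units, m^2/(V*s))."""
    return thickness_m ** 2 / (voltage_v * transit_time_s)

# Hypothetical values: a 127 nm film, 2 V applied, 1 microsecond transit
mu = tof_mobility(127e-9, 2.0, 1e-6)   # ~8.1e-9 m^2/(V*s)
mu_cm2 = mu * 1e4                      # ~8.1e-5 cm^2/(V*s)
```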
Alternative electron acceptors are being actively explored in order to advance the development of bulk-heterojunction (BHJ) organic solar cells (OSCs). The indene-C-60 bisadduct (ICBA) has been regarded as a promising candidate, as it provides high open-circuit voltage in BHJ solar cells; however, the photovoltaic performance of such ICBA-based devices is often inferior when compared to cells with the omnipresent PCBM electron acceptor. Here, by pairing the high performance polymer (FTAZ) as the donor with either PCBM or ICBA as the acceptor, we explore the physical mechanism behind the reduced performance of the ICBA-based device. Time delayed collection field (TDCF) experiments reveal reduced, yet field-independent free charge generation in the FTAZ:ICBA system, explaining the overall lower photocurrent in these cells. Through the analysis of the photoluminescence, photogeneration, and electroluminescence, we find that the lower generation efficiency is neither caused by inefficient exciton splitting, nor do we find evidence for significant energy back-transfer from the CT state to singlet excitons. In fact, the increase in open circuit voltage when replacing PCBM by ICBA is entirely caused by the increase in the CT energy, related to the shift in the LUMO energy, while changes in the radiative and nonradiative recombination losses are nearly absent. On the other hand, space charge limited current (SCLC) and bias-assisted charge extraction (BACE) measurements consistently reveal a severely lower electron mobility in the FTAZ:ICBA blend.
Studies of the blends with resonant soft X-ray scattering (R-SoXS), grazing incident wide-angle X-ray scattering (GIWAXS), and scanning transmission X-ray microscopy (STXM) reveal very little difference in the mesoscopic morphology but significantly less nanoscale molecular ordering of the fullerene domains in the ICBA-based blends, which we propose as the main cause for the lower generation efficiency and smaller electron mobility. Calculations of the JV curves with an analytical model, using measured values, show good agreement with the experimentally determined JV characteristics, showing that these devices suffer from slow carrier extraction, resulting in significant bimolecular recombination losses. Therefore, this study highlights the importance of high charge carrier mobility for newly synthesized acceptor materials, in addition to having suitable energy levels.
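The SCLC mobility mentioned above is conventionally extracted by fitting the trap-free Mott-Gurney law to single-carrier diode currents; a hedged sketch with purely illustrative parameter values:

```python
def mott_gurney_current(mu, v, thickness, eps_r=3.5, eps0=8.854e-12):
    """Trap-free space-charge-limited current density (A/m^2),
    J = (9/8) * eps0 * eps_r * mu * V**2 / L**3, the relation commonly
    fitted to extract SCLC mobilities (SI units throughout)."""
    return 9.0 / 8.0 * eps0 * eps_r * mu * v ** 2 / thickness ** 3

# Hypothetical organic-semiconductor values: mu = 1e-8 m^2/(V*s)
# (= 1e-4 cm^2/(V*s)), 2 V across a 100 nm film
j = mott_gurney_current(1e-8, 2.0, 100e-9)   # ~1.4e3 A/m^2
```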
Two nuclear explosions were carried out by the Democratic People's Republic of Korea (North Korea) in January and September 2016. Epicenters were located close to those of the previous explosions in 2006, 2009, and 2013. We perform a seismological analysis of the 2016 events, combining the analysis of full waveforms at regional distances and seismic array beams at teleseismic distances. We estimate the most relevant source parameters, such as source depth, moment release, and the full moment tensor (MT). The best MT solution can be decomposed into an isotropic source, directly related to the explosion, and an additional deviatoric term, likely due to near-source interactions with topographic features and/or underground facilities. We additionally perform an accurate resolution test to assess source parameter uncertainties and trade-offs. This analysis sheds light on source parameter inconsistencies among studies of previous shallow explosive sources. The resolution of the true MT is hindered by strong source parameter trade-offs, so that a broad range of well-fitting MT solutions can be found, spanning from a dominant positive isotropic term to a dominant negative vertical compensated linear vector dipole. The true mechanism can be discriminated by additionally modeling first-motion polarities at seismic arrays at teleseismic distances. A comparative assessment of the 2016 explosions with earlier nuclear tests documents similar vertical waveforms but a significant increase in amplitude for the 2016 explosions, which shows that the 9 September 2016 event was the largest nuclear explosion performed in North Korea to date, with a magnitude Mw 4.9 and a shallow depth of less than 2 km, although there is no proof of a fusion explosion. Modeling transversal-component waveforms suggests variable size and orientation of the double-couple components of the 2009, 2013, and 2016 sources.
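The isotropic/deviatoric decomposition used in such MT analyses is a simple linear-algebra split; a sketch with an illustrative explosion-like tensor (not the inverted 2016 solution):

```python
import numpy as np

def decompose_mt(m):
    """Split a full moment tensor into an isotropic part,
    m_iso = (tr(M)/3) * I, and a trace-free deviatoric remainder."""
    m = np.asarray(m, dtype=float)
    iso = np.trace(m) / 3.0 * np.eye(3)
    return iso, m - iso

# Dominant positive trace (explosive volume change) plus a small
# deviatoric perturbation, e.g. from near-source interactions
m = np.array([[2.0, 0.1, 0.0],
              [0.1, 1.8, 0.0],
              [0.0, 0.0, 2.2]])
iso, dev = decompose_mt(m)
# np.trace(iso) / 3 equals 2.0; np.trace(dev) vanishes
```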
Organic solar cells demonstrate external quantum efficiencies and fill factors approaching those of conventional photovoltaic technologies. However, as compared with the optical gap of the absorber materials, their open-circuit voltage is much lower, largely due to the presence of significant non-radiative recombination. Here, we study a large data set of published and new material combinations and find that non-radiative voltage losses decrease with increasing charge-transfer-state energies. This observation is explained by considering non-radiative charge-transfer-state decay as electron transfer in the Marcus inverted regime, being facilitated by a common skeletal molecular vibrational mode. Our results suggest an intrinsic link between non-radiative voltage losses and electron-vibration coupling, indicating that these losses are unavoidable. Accordingly, the theoretical upper limit for the power conversion efficiency of single-junction organic solar cells would be reduced to about 25.5% and the optimal optical gap increases to (1.45-1.65) eV, that is, (0.2-0.3) eV higher than for technologies with minimized non-radiative voltage losses.
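Treating CT-state decay as Marcus electron transfer in the inverted regime, the non-radiative rate falls as the CT energy rises, which is the trend reported above; a sketch with the semiclassical Marcus rate and illustrative (assumed) couplings and energies:

```python
import math

KB = 8.617e-5      # Boltzmann constant (eV/K)
HBAR = 6.582e-16   # reduced Planck constant (eV*s)

def marcus_rate(dg, lam, v=0.010, temp=300.0):
    """Semiclassical Marcus electron-transfer rate (1/s); dg is the
    free-energy change (eV, here dg = -E_CT for CT-to-ground decay),
    lam the reorganization energy and v the electronic coupling (eV).
    All parameter values used below are illustrative assumptions."""
    pref = (2.0 * math.pi / HBAR) * v ** 2 / math.sqrt(4.0 * math.pi * lam * KB * temp)
    return pref * math.exp(-(dg + lam) ** 2 / (4.0 * lam * KB * temp))

# Inverted regime (-dg > lam): raising the CT energy slows the decay,
# i.e. reduces the non-radiative voltage loss
k_low_ect = marcus_rate(dg=-1.2, lam=0.3)
k_high_ect = marcus_rate(dg=-1.6, lam=0.3)
```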
Charge carrier recombination in organic disordered semiconductors is strongly influenced by the thermalization of charge carriers in the density of states (DOS). Measurements of recombination dynamics, conducted under transient or steady-state conditions, can easily be misinterpreted when a detailed understanding of the interplay of thermalization and recombination is missing. To enable adequate measurement analysis, we solve the multiple-trapping problem for recombining charge carriers and analyze it in the transient and steady excitation paradigms for different DOS distributions. We show that recombination rates measured after pulsed excitation are inherently time dependent since recombination gradually slows down as carriers relax in the DOS. When measuring the recombination order after pulsed excitation, this leads to an apparent high-order recombination at short times. As time goes on, the recombination order approaches an asymptotic value. For the Gaussian and the exponential DOS distributions, this asymptotic value equals the recombination order of the equilibrated system under steady excitation. For a more general DOS distribution, the recombination order can also depend on the carrier density, under both transient and steady-state conditions. We conclude that transient experiments can provide rich information about recombination in and out of equilibrium and the underlying DOS occupation provided that consistent modeling of the system is performed.
In this Letter, we study the role of the donor:acceptor interface nanostructure upon charge separation and recombination in organic photovoltaic devices and blend films, using mixtures of PBTTT and two different fullerene derivatives (PC70BM and ICTA) as models for intercalated and nonintercalated morphologies, respectively. Thermodynamic simulations show that while the completely intercalated system exhibits a large free-energy barrier for charge separation, this barrier is significantly lower in the nonintercalated system and almost vanishes when energetic disorder is included in the model. Despite these differences, both femtosecond-resolved transient absorption spectroscopy (TAS) and time-delayed collection field (TDCF) exhibit extensive first-order losses in both systems, suggesting that geminate pairs are the primary product of photoexcitation. In contrast, the system that comprises a combination of fully intercalated polymer:fullerene areas and fullerene-aggregated domains (1:4 PBTTT:PC70BM) is the only one that shows slow, second-order recombination of free charges, resulting in devices with an overall higher short-circuit current and fill factor. This study therefore provides a novel consideration of the role of the interfacial nanostructure and the nature of bound charges and their impact upon charge generation and recombination.
High photon energy losses limit the open-circuit voltage (V-OC) and power conversion efficiency of organic solar cells (OSCs). In this work, an optimization route is presented which increases the V-OC by reducing the interfacial area between donor (D) and acceptor (A). This optimization route concerns a cascade device architecture in which the introduction of discontinuous interlayers between alpha-sexithiophene (alpha-6T) (D) and chloroboron subnaphthalocyanine (SubNc) (A) increases the V-OC of an alpha-6T/SubNc/SubPc fullerene-free cascade OSC from 0.98 V to 1.16 V. This increase of 0.18 V is attributed solely to the suppression of nonradiative recombination at the D-A interface. By accurately measuring the optical gap (E-opt) and the energy of the charge-transfer state (E-CT) of the studied OSC, a detailed analysis of the overall voltage losses is performed. E-opt - qV(OC) losses of 0.58 eV, which are among the lowest observed for OSCs, are obtained. Most importantly, for the V-OC-optimized devices, the low-energy (700 nm) external quantum efficiency (EQE) peak remains high at 79%, despite a minimal driving force for charge separation of less than 10 meV. This work shows that low-voltage losses can be combined with a high EQE in organic photovoltaic devices.
The increasing number of recordings at individual sites allows the quantification of empirical linear site-response adjustment factors (delta S2S(s)) from the ground motion prediction equation (GMPE) residuals. The delta S2S(s) are then used to linearly scale the ergodic GMPE predictions to obtain site-specific ground motion predictions in a partially non-ergodic Probabilistic Seismic Hazard Assessment (PSHA). To address key statistical and conceptual issues in current practice, we introduce a novel empirical region- and site-specific PSHA methodology wherein (1) site-to-site variability (phi(S2S)) is first estimated as a random variance in a mixed-effects GMPE regression, (2) delta S2S(s) at new sites with strong motion data are estimated using the a priori phi(S2S), and (3) the GMPE site-specific single-site aleatory variability sigma(ss,s) is replaced with a generic site-corrected aleatory variability sigma(0). Comparison of region- and site-specific hazard curves from our method against the traditional ergodic estimates at 225 sites in Europe and the Middle East shows an approximately 50% difference in predicted ground motions over a range of hazard levels, a strong motivation to increase seismological monitoring of critical facilities and enrich regional ground motion data sets.
Site-Corrected Magnitude- and Region-Dependent Correlations of Horizontal Peak Spectral Amplitudes
(2017)
Empirical correlations of horizontal peak spectral amplitudes (PSA) are modeled using the total residuals obtained in a ground motion prediction equation (GMPE) regression. Recent GMPEs have moved toward partially non-ergodic region- and site-specific predictions, while the residual correlation models have remained largely ergodic. Using mixed-effects regression, we decompose the total residuals of a pan-European GMPE into between-event, between-site, and event-and-site corrected residuals to investigate the ergodicity in empirical PSA correlations. We first observe that the between-event correlations are magnitude-dependent, partially due to differences in source spectra and the influence of the stress-drop parameter on small and large events. Next, removing the between-site residuals from the within-event residuals yields the event-and-site corrected residuals, which are found to be region-dependent, possibly due to regional differences in the distance-decay of short-period PSAs. Using our site-corrected magnitude- and region-dependent correlations, and the between-site residuals as empirical site-specific ground motion adjustments, we compute partially non-ergodic conditional mean spectra at four well-recorded sites in Europe and the Middle East.
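The residual decomposition described here can be illustrated with a crude moment-based split; a mixed-effects regression, as used in the study, estimates these variance components jointly, and the residuals below are synthetic:

```python
import pandas as pd

def decompose_residuals(df):
    """Split total GMPE residuals into between-event (dBe), between-site
    (dS2S) and event-and-site corrected (dWSes) parts by successive
    group means; by construction resid = dBe + dS2S + dWSes."""
    out = df.copy()
    out["dBe"] = out.groupby("event")["resid"].transform("mean")
    within = out["resid"] - out["dBe"]
    out["dS2S"] = within.groupby(out["site"]).transform("mean")
    out["dWSes"] = within - out["dS2S"]
    return out

# Tiny synthetic residual set (not RESORCE data)
df = pd.DataFrame({"event": ["e1", "e1", "e2", "e2"],
                   "site":  ["s1", "s2", "s1", "s2"],
                   "resid": [0.4, 0.2, -0.1, -0.3]})
out = decompose_residuals(df)
```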
Background
In isometric muscle function, there are subjectively two different modes of performance: one can either hold isometrically – thus resist an impacting force – or push isometrically – therefore work against a stable resistance. The purpose of this study is to investigate whether or not two different isometric muscle actions – the holding vs. pushing one (HIMA vs PIMA) – can be distinguished by objective parameters.
Methods
Ten subjects performed two different measuring modes at 80% of MVC realized by a special pneumatic system. During HIMA the subject had to resist the defined impacting force of the pneumatic system in an isometric position, whereby the force of the cylinder works in direction of elbow flexion against the subject. During PIMA the subject worked isometrically in direction of elbow extension against a stable position of the system. The signals of pressure, force, acceleration and mechanomyography/-tendography (MMG/MTG) of the elbow extensor (MMGtri/MTGtri) and the abdominal muscle (MMGobl) were recorded and evaluated concerning the duration of maintaining the force level (force endurance) and the characteristics of MMG-/MTG-signals. Statistical group differences comparing HIMA vs. PIMA were estimated using SPSS.
Results
Significant differences between HIMA and PIMA were especially apparent regarding force endurance: during HIMA the subjects showed a markedly shorter time of stable isometric position (19 ± 8 s) in comparison with PIMA (41 ± 24 s; p = .005). In addition, during PIMA the longest isometric plateau amounted to 59.4% of the overall duration of isometric measuring, whereas during HIMA it lasted 31.6% (p < .001). The frequency of MMG/MTG did not show significant differences. The power in the frequency ranges of 8–15 Hz and 10–29 Hz was significantly higher in the MTGtri performing HIMA compared to PIMA (but not for the MMGs). The amplitude of MMG/MTG did not show any significant difference over the whole measurement. However, looking only at the last 10% of the duration (exhaustion), the MMGtri showed significantly higher amplitudes during PIMA.
Conclusion
The results suggest that under holding isometric conditions muscles exhaust earlier. That means that there are probably two forms of isometric muscle action. We hypothesize two potential reasons for faster yielding during HIMA: (1) earlier metabolic fatigue of the muscle fibers and (2) the complexity of neural control strategies.
The term Adaptive Force (AF) describes the capability of the nerve-muscle system to adapt to externally applied forces during isometric and eccentric muscle action. This ability plays an important role in everyday motions as well as in sports. The focus of this paper is the specific measurement method for this neuromuscular action, which can be regarded as innovative. A measuring system based on compressed air was constructed and evaluated for this neuromuscular function. The force level at which the subject deviates from the quasi-isometric position and merges into eccentric muscle action depends on the subject's physical condition. In contrast to isokinetic systems, the device enables a measurement of strength without forced motion. The scientific quality criteria of the device were evaluated by measurements of intra- and interrater reliability, test-retest reliability, and fatigue, as well as by comparisons of the pneumatic device with a dynamometer. Regarding the mechanical evaluation, the results show a high level of consistency (r² = 0.94 to 0.96). The parallel test reliability delivers a very high and significant correlation (ρ = 0.976; p < .001). Including the biological system, the concordance of three different raters is very high (p = 0.001, Cronbach's alpha α = 0.987). The test-retest with four subjects over five weeks supports the reliability of the device, showing no statistically significant differences. These evaluations indicate that the scientific quality criteria are fulfilled. The specific feature of this system is that an isometric position can be maintained while the externally impacting force rises. Moreover, the device can capture concentric, static and eccentric strength values. Fields of application are performance diagnostics in sports and medicine.
Photonic sensing in highly concentrated biotechnical processes by photon density wave spectroscopy
(2017)
Photon Density Wave (PDW) spectroscopy is introduced as a new approach to photonic sensing in highly concentrated biotechnical processes. It independently quantifies the absorption and reduced scattering coefficients, calibration-free and as a function of time, thus describing the optical properties of the biomaterial in the vis/NIR range during processing. As examples of industrial relevance, enzymatic milk coagulation, beer mashing, and algae cultivation in photobioreactors are discussed.
Altered scapular muscle activity is mostly described under unloaded and submaximal loaded conditions in impingement patients. However, there is no clear evidence on muscle activity with respect to movement phases under maximum load in healthy subjects. Therefore, this study aimed to investigate scapular muscle activity under unloaded and maximum loaded isokinetic shoulder flexion and extension in regard to the movement phase. Fourteen adults performed unloaded (continuous passive motion [CPM]) as well as maximum loaded (concentric [CON], eccentric [ECC]) isokinetic shoulder flexion (Flex) and extension (Ext). Simultaneously, scapular muscle activity was measured by EMG. Root mean square was calculated for the whole ROM and four movement phases. Data were analyzed descriptively and by two-way repeated measures ANOVA. CPMFlex resulted in a linear increase of muscle activity for all muscles. Muscle activity during CONFlex and ECCFlex resulted in either constant activity levels or in an initial increase followed by a plateau in the second half of movement. CPMExt decreased with the progression of movement, whereas CONExt and ECCExt initially decreased and either levelled off or increased in the second half of movement. Scapular muscle activity of unloaded shoulder flexion and extension changed under maximum load showing increased activity levels and an altered pattern over the course of movement.
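The windowed root-mean-square used above to quantify EMG amplitude per movement phase can be sketched as follows (synthetic signal, window length assumed for illustration):

```python
import numpy as np

def windowed_rms(signal, fs, win_s=0.1):
    """Root mean square of an EMG signal over consecutive,
    non-overlapping windows of win_s seconds (fs in Hz)."""
    n = int(win_s * fs)
    usable = (len(signal) // n) * n
    windows = np.asarray(signal[:usable], dtype=float).reshape(-1, n)
    return np.sqrt((windows ** 2).mean(axis=1))

# Synthetic sanity check: a sine of amplitude a has RMS a / sqrt(2)
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
emg = 0.5 * np.sin(2.0 * np.pi * 50.0 * t)
r = windowed_rms(emg, fs)   # 10 windows, each close to 0.354
```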
Paclitaxel is a commonly used cytotoxic anticancer drug with potentially life-threatening toxicity at therapeutic doses and high interindividual pharmacokinetic variability. Thus, drug and effect monitoring is indicated to control dose-limiting neutropenia. Joerger et al. (2016) developed a dose individualization algorithm based on a pharmacokinetic (PK)/pharmacodynamic (PD) model describing paclitaxel and neutrophil concentrations. The algorithm was prospectively compared in a clinical trial against standard dosing (Central European Society for Anticancer Drug Research Study of Paclitaxel Therapeutic Drug Monitoring; 365 patients, 720 cycles) but did not substantially reduce neutropenia. This might be caused by misspecifications in the PK/PD model underlying the algorithm, in particular the lack of consideration of the observed cumulative pattern of neutropenia and of the platinum-based combination therapy, both of which impact neutropenia. This work aimed to externally evaluate the original PK/PD model for potential misspecifications and to refine it while considering the cumulative neutropenia pattern and the combination therapy. An underprediction was observed for the PK (658 samples), and the PK parameters were therefore re-estimated using the original estimates as prior information. Neutrophil concentrations (3274 samples) were overpredicted by the PK/PD model, especially for later treatment cycles in which the cumulative pattern aggravated neutropenia. Three different modeling approaches (two from the literature and one newly developed) were investigated. The newly developed model, which implemented the bone marrow hypothesis semiphysiologically, was superior; it further included an additive effect for the toxicity of the carboplatin combination therapy. Overall, a physiologically plausible PK/PD model was developed that can be used for dose adaptation simulations and prospective studies to further improve paclitaxel/carboplatin combination therapy.
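The semiphysiological transit-compartment structure underlying such neutropenia models goes back to the widely used Friberg model; a minimal sketch with textbook-like, illustrative parameter values (not the values re-estimated in this work):

```python
from scipy.integrate import solve_ivp

def friberg_rhs(t, y, ktr, gamma, circ0, edrug):
    """Friberg-type semimechanistic neutropenia model: a proliferating
    pool, three transit compartments and circulating neutrophils; the
    drug effect edrug(t) inhibits proliferation, and the feedback term
    (circ0/circ)**gamma speeds up recovery at low counts."""
    prol, t1, t2, t3, circ = y
    feedback = (circ0 / max(circ, 1e-9)) ** gamma
    return [ktr * prol * ((1.0 - edrug(t)) * feedback - 1.0),
            ktr * (prol - t1),
            ktr * (t1 - t2),
            ktr * (t2 - t3),
            ktr * (t3 - circ)]

# Illustrative run: 90% proliferation inhibition for the first 24 h
ktr, gamma, circ0 = 4.0 / 125.0, 0.17, 5.0   # 1/h, -, 1e9 cells/L
edrug = lambda t: 0.9 if t < 24.0 else 0.0
sol = solve_ivp(friberg_rhs, (0.0, 500.0), [circ0] * 5,
                args=(ktr, gamma, circ0, edrug), max_step=1.0)
nadir = sol.y[4].min()   # transient drop below baseline, then recovery
```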
Background: Infliximab (IFX), an anti-TNF monoclonal antibody approved for the treatment of inflammatory bowel disease, is dosed per kg body weight (BW). However, the rationale for body size adjustment has not been unequivocally demonstrated [1], and first attempts to improve IFX therapy have been undertaken [2]. The aim of our study was to assess the impact of different dosing strategies (i.e. body size-adjusted and fixed dosing) on drug exposure and pharmacokinetic (PK) target attainment. For this purpose, a comprehensive simulation study was performed, using patient characteristics (n=116) from an in-house clinical database.
Methods: IFX concentration-time profiles of 1000 virtual, clinically representative patients were generated using a previously published PK model for IFX in patients with Crohn's disease [3]. For each patient, 1000 profiles accounting for PK variability were considered. The IFX exposure during maintenance treatment was compared for the following dosing strategies: (i) fixed dose, and dosing per (ii) BW, (iii) lean BW (LBW), (iv) body surface area (BSA), (v) height (HT), (vi) body mass index (BMI) and (vii) fat-free mass (FFM). For each dosing strategy, the variability in maximum concentration Cmax, minimum concentration Cmin (= C8weeks) and area under the concentration-time curve (AUC), as well as the percentage of patients achieving the PK target Cmin ≥ 3 μg/mL [4], were assessed.
Results: For all dosing strategies, the variability of Cmin (CV ≈ 110%) was highest compared to Cmax and AUC, and was of similar extent regardless of dosing strategy. The proportion of patients reaching the PK target (≈ ⅓) was approximately equal for all dosing strategies.
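The body-size metrics compared in this simulation study can be computed from standard formulas (Du Bois for BSA, Janmahasatian et al. 2005 for FFM); a sketch with hypothetical patient values, not the study's simulation code:

```python
def size_metrics(wt_kg, ht_cm, male=True):
    """Body-size descriptors used for dose scaling: BMI, BSA via the
    Du Bois formula, and fat-free mass via Janmahasatian et al. (2005)."""
    bmi = wt_kg / (ht_cm / 100.0) ** 2
    bsa = 0.007184 * wt_kg ** 0.425 * ht_cm ** 0.725      # m^2
    if male:
        ffm = 9.27e3 * wt_kg / (6.68e3 + 216.0 * bmi)     # kg
    else:
        ffm = 9.27e3 * wt_kg / (8.78e3 + 244.0 * bmi)
    return bmi, bsa, ffm

# A hypothetical 70 kg, 175 cm male: per-kg dosing at 5 mg/kg gives
# 350 mg, whereas a fixed-dose scheme gives every patient the same amount
bmi, bsa, ffm = size_metrics(70.0, 175.0)
dose_per_kg = 5.0 * 70.0   # mg
```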
As an agent that is potentially toxic to the nervous system and bone, the safety of aluminium exposure from adjuvants in vaccines and subcutaneous immune therapy (SCIT) products has to be continuously re-evaluated, especially regarding concomitant administration. For this purpose, knowledge of the absorption and disposition of aluminium in plasma and tissues is essential. Pharmacokinetic data after vaccination in humans, however, are not available and are difficult to obtain for methodological and ethical reasons. To overcome these limitations, we discuss the possibility of an in vitro-in silico approach combining a toxicokinetic model for aluminium disposition with biorelevant kinetic absorption parameters for adjuvants. We critically review available kinetic aluminium-26 data for model building and, on the basis of a reparameterized toxicokinetic model (Nolte et al., 2001), identify the main modelling gaps. The potential of in vitro dissolution experiments for the prediction of intramuscular absorption kinetics of aluminium after vaccination is explored. It becomes apparent that detailed in vitro dissolution and in vivo absorption data are needed to establish an in vitro-in vivo correlation (IVIVC) for aluminium adjuvants. We conclude that a combination of new experimental data and further refinement of the Nolte model has the potential to fill a gap in aluminium risk assessment. (C) 2017 Elsevier Inc. All rights reserved.
Broad-spectrum antibiotic combination therapy is frequently applied due to the increasing resistance of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies, consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER), against Staphylococcus aureus (S. aureus) as the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life cycle of S. aureus was developed based on time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed, which revealed that the combination VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER.
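At the core of such time-kill PK-PD models is bacterial growth opposed by a concentration-dependent kill term; a one-state sketch with an Emax kill model (the published model uses a richer multi-state life cycle, and all parameters here are purely illustrative):

```python
from scipy.integrate import solve_ivp

def growth_kill(t, n, kg, kdmax, ec50, conc, nmax):
    """Minimal time-kill model: logistic bacterial growth (rate kg,
    capacity nmax) minus an Emax-type drug kill driven by the
    concentration-time course conc(t)."""
    c = conc(t)
    return [(kg * (1.0 - n[0] / nmax) - kdmax * c / (ec50 + c)) * n[0]]

# Static time-kill setting: growth control vs a constant 4x EC50 exposure
ctrl = solve_ivp(growth_kill, (0.0, 24.0), [1e6],
                 args=(1.0, 3.0, 1.0, lambda t: 0.0, 1e9), max_step=0.1)
trt = solve_ivp(growth_kill, (0.0, 24.0), [1e6],
                args=(1.0, 3.0, 1.0, lambda t: 4.0, 1e9), max_step=0.1)
# control grows toward nmax; treated declines at kg - kdmax*0.8 = -1.4/h
```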
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify the underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology are exploited in mechanistic physiologically based PK (PBPK) modeling approaches; they are, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate the relevant and known physiological processes that contribute to the overall process of interest. In comparison to data-driven models, they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, so that reliable parameter estimation becomes more difficult and partly impossible. As a consequence, the integration of powerful statistical estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data is rather used as a confirmatory input instead of one that informs and builds the model.
A further obstacle to an integrated approach is that the details of the NLME methodology are hard to access, which makes it difficult to adapt these approaches to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of its mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. The available literature usually covers only selected parts of the mathematical methodology; sometimes, important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
In the present thesis, the mathematical methodology of NLME modeling is therefore systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. New insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated, and similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine the estimates of an NLME model is described.
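To make the EM idea concrete, the following minimal sketch (not the thesis's algorithm, and with purely synthetic parameter values) runs EM on the simplest mixed-effects model, a linear random-intercept model y_ij = mu + b_i + e_ij, for which the E- and M-steps have closed forms:

```python
import numpy as np

# synthetic population: mu = 10, between-subject SD omega = 2, residual SD sigma = 1
rng = np.random.default_rng(1)
n_subj, n_obs = 200, 10
b = rng.normal(0.0, 2.0, n_subj)                       # random effects b_i
y = 10.0 + b[:, None] + rng.normal(0.0, 1.0, (n_subj, n_obs))

mu, om2, s2 = y.mean(), 1.0, 1.0                       # initial estimates
for _ in range(100):
    # E-step: posterior mean m_i and variance v of each b_i given the data y_i
    v = om2 * s2 / (n_obs * om2 + s2)
    m = n_obs * om2 * (y.mean(axis=1) - mu) / (n_obs * om2 + s2)
    # M-step: maximize the expected complete-data log-likelihood
    mu = (y - m[:, None]).mean()
    om2 = (m**2).mean() + v
    s2 = ((y - mu - m[:, None]) ** 2).mean() + v
```

In a full NLME model the E-step has no closed form, which is exactly where the approximation methods discussed above, and their interpretation, come into play.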
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the resulting model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, can be incorporated to drive the model development further mechanistically. Finally, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues of the body. The resulting PBPK model for levofloxacin is therefore used to predict drug concentrations and their variability within soft tissues, the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby approximates the concentration in the interstitial fluid. Because no established framework exists so far for comparing human in vivo tissue PK with PBPK predictions, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can serve as the basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye-movement data. Our approach gives new insights into the interpretation of these experiments and the processes behind them.
Mathematical models of bacterial growth have been successfully applied to study the relationship between antibiotic drug exposure and the antibacterial effect. Since these models typically lack a representation of cellular processes and cell physiology, drug action cannot be integrated mechanistically at the cellular level. The cellular mechanisms of drug action, however, are particularly relevant for the prediction, analysis and understanding of interactions between antibiotics. Such interactions are also studied experimentally; however, the lack of consensus on the experimental protocol hinders direct comparison of results. As a consequence, contradictory classifications as additive, synergistic or antagonistic are reported in the literature.
In the present thesis we developed a novel mathematical model for bacterial growth that integrates cell-level processes into the population growth level. The scope of the model is to predict bacterial growth under antimicrobial perturbation by multiple antibiotics in vitro.
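For orientation, a conventional population-level time-kill model, the kind of model the cell-level approach moves beyond, couples logistic growth to an Emax-type kill term; all parameter values below are illustrative, not those of the thesis:

```python
def time_kill(conc, ec50, kmax, kg=1.5, n_max=1e9, n0=1e6, t_end=24.0, dt=0.01):
    """Euler integration of dN/dt = [kg*(1 - N/Nmax) - kmax*C/(EC50 + C)] * N,
    i.e. logistic growth minus an Emax-type kill rate (rates in 1/h, N in CFU/mL)."""
    n = n0
    for _ in range(int(t_end / dt)):
        kill = kmax * conc / (ec50 + conc)
        n += dt * (kg * (1.0 - n / n_max) - kill) * n
        n = max(n, 1.0)                      # do not drop below a single cell
    return n

control = time_kill(conc=0.0, ec50=1.0, kmax=3.0)   # untreated culture grows
treated = time_kill(conc=10.0, ec50=1.0, kmax=3.0)  # supra-EC50 exposure kills
```

A two-drug combination could be added here, e.g. by summing two such kill terms, which is precisely where purely population-level models remain agnostic about mechanism and where a cell-level representation provides mechanistic grounding.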
To this end, we combined cell-level data from literature with population growth data for Bacillus subtilis, Escherichia coli and Staphylococcus aureus. The cell-level data described growth-determining characteristics of a reference cell, including the ribosomal concentration and efficiency. The population growth data comprised extensive time-kill curves for clinically relevant antibiotics (tetracycline, chloramphenicol, vancomycin, meropenem, linezolid, including dual combinations).
The new cell-level approach allowed, for the first time, the simultaneous description of single and combined effects of the aforementioned antibiotics for different experimental protocols, in particular different growth phases (lag and exponential phase). Consideration of ribosomal dynamics and persisting sub-populations explained the decreased potency of linezolid against cultures in the lag phase compared to exponential-phase cultures. The model captured growth-rate-dependent killing and auto-inhibition of meropenem and, also under vancomycin exposure, regrowth of the bacterial cultures due to adaptive resistance development. Stochastic interaction surface analysis demonstrated that the pronounced antagonism between meropenem and linezolid is robust against variation in the growth phase and the pharmacodynamic endpoint definition, but sensitive to a change in the experimental duration.
Furthermore, the developed approach included a detailed representation of the bacterial cell cycle. We used this representation to describe septation dynamics during the transition of a bacterial culture from the exponential to the stationary growth phase. Based on a new mechanistic understanding of the transition processes, we explained the lag time between the increase in cell number and bacterial biomass during the transition from the lag to the exponential growth phase. Furthermore, our model reproduced the increased intracellular RNA mass fraction during long-term exposure of bacteria to chloramphenicol.
In summary, we contribute a new approach to disentangle the impact of drug effects, assay readout and experimental protocol on antibiotic interactions. In the absence of a consensus on the corresponding experimental protocols, this disentanglement is key to translate information between heterogeneous experiments and also ultimately to the clinical setting.
Judging the animacy of words
(2017)
The age at which members of a semantic category are learned (age of acquisition), the typicality they demonstrate within their corresponding category, and the semantic domain to which they belong (living, non-living) are known to influence the speed and accuracy of lexical/semantic processing. So far, only a few studies have looked at the origin of age of acquisition and its interdependence with typicality and semantic domain within the same experimental design. Twenty adult participants performed an animacy decision task in which nouns were classified according to their semantic domain as being living or non-living. Response times were influenced by the independent main effects of each parameter: typicality, age of acquisition, semantic domain, and frequency. However, there were no interactions. The results are discussed with respect to recent models concerning the origin of age of acquisition effects.
Background: Severe bacterial infections remain a major challenge in intensive care units because of their high prevalence and mortality. Adequate antibiotic exposure has been associated with clinical success in critically ill patients. The objective of this study was to investigate the target attainment of standard meropenem dosing in a heterogeneous critically ill population, to quantify the impact of the full renal function spectrum on meropenem exposure and target attainment, and ultimately to translate the findings into a tool for practical application. Methods: A prospective observational single-centre study was performed with critically ill patients with severe infections receiving standard dosing of meropenem. Serial blood samples were drawn over 4 study days to determine meropenem serum concentrations. Renal function was assessed by creatinine clearance according to the Cockcroft and Gault equation (CLCRCG). Variability in meropenem serum concentrations was quantified at the middle and end of each monitored dosing interval. The attainment of two pharmacokinetic/pharmacodynamic targets (100% T>MIC, 50% T>4xMIC) was evaluated for minimum inhibitory concentration (MIC) values of 2 mg/L and 8 mg/L and standard meropenem dosing (1000 mg, 30-minute infusion, every 8 h). Furthermore, we assessed the impact of CLCRCG on meropenem concentrations and target attainment and developed a tool for risk assessment of target non-attainment. Results: Large inter- and intra-patient variability in meropenem concentrations was observed in the critically ill population (n = 48). Attainment of the target 100% T>MIC was merely 48.4% and 20.6%, given MIC values of 2 mg/L and 8 mg/L, respectively, and similar for the target 50% T>4xMIC. A hyperbolic relationship between CLCRCG (25-255 ml/minute) and meropenem serum concentrations at the end of the dosing interval (C8h) was derived.
For infections with pathogens of MIC 2 mg/L, mild renal impairment up to augmented renal function was identified as a risk factor for target non-attainment (for MIC 8 mg/L, additionally, moderate renal impairment). Conclusions: The investigated standard meropenem dosing regimen appeared to result in insufficient meropenem exposure in a considerable fraction of critically ill patients. An easy- and free-to-use tool (the MeroRisk Calculator) for assessing the risk of target non-attainment for a given renal function and MIC value was developed.
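The two building blocks of such a risk tool can be sketched as follows. The Cockcroft-Gault estimate of creatinine clearance is the standard formula; the one-compartment meropenem model below, by contrast, uses an assumed volume of distribution and an assumed linear clearance scaling for illustration only and does not reproduce the actual parameterization behind the MeroRisk Calculator:

```python
import numpy as np

def clcr_cg(age_yr, weight_kg, scr_mg_dl, female=False):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    clcr = (140.0 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return 0.85 * clcr if female else clcr

def frac_t_above_mic(clcr, mic, dose_mg=1000.0, tinf_h=0.5, tau_h=8.0, v_l=25.0):
    """Fraction of the dosing interval with C > MIC after a 30-min infusion,
    one-compartment model; CL = 0.1 * CLCR is an assumed, illustrative scaling."""
    cl_l_h = 0.1 * clcr                 # assumption: CL [L/h] from CLCR [mL/min]
    ke = cl_l_h / v_l                   # elimination rate constant [1/h]
    r0 = dose_mg / tinf_h               # infusion rate [mg/h]
    t = np.linspace(0.0, tau_h, 2001)
    c_end_inf = r0 / cl_l_h * (1.0 - np.exp(-ke * tinf_h))
    c = np.where(t <= tinf_h,
                 r0 / cl_l_h * (1.0 - np.exp(-ke * t)),
                 c_end_inf * np.exp(-ke * (t - tinf_h)))
    return float((c > mic).mean())
```

Even under these simplifying assumptions the qualitative finding of the study emerges: the higher the renal function, the lower the attained fraction of the dosing interval above a given MIC.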
This paper is focused on the temperature-dependent synthesis of gold nanotriangles in a vesicular template phase, containing phosphatidylcholine and AOT, by adding the strongly alternating polyampholyte PalPhBisCarb.
UV-vis absorption spectra, in combination with TEM micrographs, show that flat gold nanoplatelets are formed predominantly in the presence of the polyampholyte at 45°C. The formation of triangular and hexagonal nanoplatelets can be influenced directly by the kinetic approach, i.e., by varying the polyampholyte dosage rate at 45°C. Corresponding zeta potential measurements indicate that a temperature-dependent adsorption of the polyampholyte on the {111} faces induces the symmetry-breaking effect responsible for the kinetically controlled hindered vertical and preferred lateral growth of the nanoplatelets.
Electron flux in the Earth’s outer radiation belt is highly variable due to a delicate balance between competing acceleration and loss processes. It has long been recognized that electromagnetic ion cyclotron (EMIC) waves may play a crucial role in the loss of radiation belt electrons. Previous theoretical studies proposed that EMIC waves may account for the loss of the relativistic electron population. However, recent observations showed that while EMIC waves are responsible for significant loss of ultra-relativistic electrons, the relativistic electron population is almost unaffected. In this study, we provide a theoretical explanation for this discrepancy between previous theoretical studies and recent observations. We demonstrate that EMIC waves mainly contribute to the loss of ultra-relativistic electrons. This study significantly improves the current understanding of electron dynamics in the Earth’s radiation belt and can also help us understand the radiation environments of exoplanets and the outer planets.
In this paper, we address the question of whether the spacecraft potential depends on the ambient electron density. In Maxwellian space plasmas, the onset of spacecraft charging does not depend on the ambient electron density: at the onset, the incoming electron current balances the outgoing secondary electron current. The onset is controlled by the critical or anticritical temperature of the ambient electrons, not by the electron density. Above the critical temperature, charging to negative potential occurs. If the energy of the incoming electrons increases to well beyond the second crossing point of the secondary electron yield (SEY), the SEY decreases to well below unity. When the secondary electron current is negligible compared with the primary electron current, the spacecraft potential is governed solely by the balance of the incoming electrons with the sum of the currents of the repelled electrons and the attracted ions. In neutral space plasma, the electron and ion charges cancel each other; if the plasma deviates from neutrality, however, the densities can have an effect on the spacecraft potential. If the ambient plasma deviates significantly from equilibrium, a non-Maxwellian electron distribution may result. For a kappa distribution, one can show that the spacecraft charging level is independent of the ambient electron density; for a double Maxwellian distribution, the charging level depends on the electron densities. For a conducting spacecraft charging in sunlight, the charging level is low and positive, and it also depends on the ambient electron density. For a dielectric spacecraft in sunlight, the high-level negative-voltage charging on the shadowed side may extend to the sunlit side and block the photoelectrons trying to escape from the sunlit side; in this case, the charging level does not depend on the ambient electron density.
Using coordinated environmental and spacecraft charging data obtained from the Los Alamos National Laboratory geosynchronous satellites, we showed some results confirming that spacecraft potential is indeed often independent of the ambient electron density.
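The density independence for a Maxwellian plasma can be made explicit with a toy current-balance calculation (secondary and photoelectron currents neglected; the eV-to-joule conversion factor cancels between the two thermal fluxes, so temperatures can be kept in eV). Both the electron and the ion current scale linearly with the density n, so n drops out of the balance and the floating potential is unchanged:

```python
import math

def floating_potential(n_m3, te_ev, ti_ev, mi_amu=1.0):
    """Bisection solve of the current balance for a negatively charged surface:
    repelled Maxwellian electrons versus attracted ions (no secondaries)."""
    me, mp = 9.109e-31, 1.673e-27

    def net_current(phi_v):
        # both fluxes are proportional to n_m3, so the root does not depend on it
        je = n_m3 * math.sqrt(te_ev / (2.0 * math.pi * me)) * math.exp(phi_v / te_ev)
        ji = n_m3 * math.sqrt(ti_ev / (2.0 * math.pi * mi_amu * mp)) * (1.0 - phi_v / ti_ev)
        return je - ji

    lo, hi = -50.0 * te_ev, 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if net_current(mid) > 0.0:      # still too much electron current,
            hi = mid                    # so the surface must charge more negative
        else:
            lo = mid
    return 0.5 * (lo + hi)

phi_low_n = floating_potential(1e6, te_ev=10.0, ti_ev=10.0)
phi_high_n = floating_potential(1e7, te_ev=10.0, ti_ev=10.0)
```

A tenfold change in density leaves the computed potential untouched, mirroring the observational result reported above; the density dependence only appears once the electron and ion populations no longer scale together.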
Inner magnetosphere coupling
(2017)
The dynamics of the inner magnetosphere is strongly governed by the interactions between different plasma populations that are coupled through large-scale electric and magnetic fields, currents, and wave-particle interactions. Inner magnetospheric plasma undergoes self-consistent interactions with global electric and magnetic fields. Waves excited in the inner magnetosphere from unstable particle distributions can provide energy exchange between different particle populations in the inner magnetosphere and affect the ring current and radiation belt dynamics. The ionosphere serves as an energy sink and feeds the magnetosphere back through the cold plasma outflow. The precipitating inner magnetospheric particles influence the ionosphere and upper atmospheric chemistry and affect climate. Satellite measurements and theoretical studies have advanced our understanding of the dynamics of various plasma populations in the inner magnetosphere. However, our knowledge of the coupling processes among the plasmasphere, ring current, radiation belts, global magnetic and electric fields, and plasma waves generated within these systems is still incomplete. This special issue incorporates extended papers presented at the Inner Magnetosphere Coupling III conference held 23–27 March 2015 in Los Angeles, California, USA, and includes modeling and observational contributions addressing interactions within different plasma populations in the inner magnetosphere (plasmasphere, ring current, and radiation belts), coupling between fields and plasma populations, as well as effects of the inner magnetosphere on the ionosphere and atmosphere.
Most of the deformation associated with the seismic cycle in subduction zones occurs offshore and has therefore been difficult to quantify with direct observations at millennial timescales. Here we study millennial deformation associated with an active splay-fault system in the Arauco Bay area off south central Chile. We describe hitherto unrecognized drowned shorelines using high-resolution multibeam bathymetry and geomorphic, sedimentologic, and paleontologic observations, and quantify uplift rates using a Landscape Evolution Model. Along a margin-normal profile, uplift rates are 1.3 m/ka near the edge of the continental shelf, 1.5 m/ka at the emerged Santa Maria Island, -0.1 m/ka at the center of the Arauco Bay, and 0.3 m/ka on the mainland. The bathymetry images a complex pattern of folds and faults representing the surface expression of the crustal-scale Santa Maria splay-fault system. We modeled surface deformation using two different structural scenarios: deep-reaching normal faults, and deep-reaching reverse faults with shallow extensional structures. Our preferred model comprises a blind reverse fault extending from 3 km depth down to the plate interface at 16 km that slips at a rate between 3.0 and 3.7 m/ka. If all the splay-fault slip occurs during every great megathrust earthquake, with a recurrence of ~150-200 years, the fault would slip ~0.5 m per event, equivalent to a magnitude ~6.4 earthquake. However, if the splay fault slips only with a megathrust earthquake every ~1000 years, it would slip ~3.7 m per event, equivalent to a magnitude ~7.5 earthquake.
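The slip-per-event arithmetic and the conversion to moment magnitude can be checked with the standard Hanks-Kanamori relation; note that the rupture area used below (~330 km²) and the rigidity are assumed values chosen for illustration, not parameters taken from the study:

```python
import math

def moment_magnitude(slip_m, area_m2, mu_pa=3.0e10):
    """Mw from the seismic moment M0 = mu * A * D (Hanks & Kanamori, 1979)."""
    m0 = mu_pa * area_m2 * slip_m          # seismic moment [N m]
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

slip_rate_m_per_ka = 3.7                   # upper bound of the modeled slip rate
slip_150yr = slip_rate_m_per_ka / 1000.0 * 150.0    # slip per event, 150 yr recurrence
slip_1000yr = slip_rate_m_per_ka / 1000.0 * 1000.0  # slip per event, 1000 yr recurrence
mw_small = moment_magnitude(0.5, 3.3e8)    # ~0.5 m slip on the assumed area
```

With the assumed area, a 0.5 m slip event lands near the quoted Mw ~6.4; larger per-event slip (and, implicitly, a larger rupture area) is needed to reach the Mw ~7.5 scenario.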
We present the PINE (Plasma density in the Inner magnetosphere Neural network‐based Empirical) model ‐ a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural‐network‐based Upper hybrid Resonance Determination) algorithm for the period of 1 October 2012 to 1 July 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2≤L≤6 and all local times. We validate and test the model by measuring its performance on independent data sets withheld from the training set and by comparing the model‐predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). The optimal model is based on the 96 h time history of Kp, AE, SYM‐H, and F10.7 indices. The model successfully reproduces erosion of the plasmasphere on the nightside and plume formation and evolution. We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in situ observations by using machine learning techniques.
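The core design choice of such a model, feeding a fixed window of index time history into a regressor, can be sketched on synthetic data. A plain least-squares fit stands in for the neural network here, and the 32 h window, series length, and noise level are arbitrary choices for the sketch (PINE itself uses a 96 h history and in situ density data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hours, lags = 2000, 32
# smoothed random series standing in for an hourly geomagnetic index
index = np.convolve(rng.normal(size=n_hours + lags), np.ones(10) / 10.0, mode="same")
# synthetic "plasma density" that responds to the mean of the recent index history
density = np.array([index[t - lags:t].mean() for t in range(lags, n_hours + lags)])
density += 0.05 * rng.normal(size=n_hours)

# design matrix: each row holds the preceding `lags` hours of the index
X = np.stack([index[t - lags:t] for t in range(lags, n_hours + lags)])
X = np.hstack([X, np.ones((n_hours, 1))])        # bias column
w = np.linalg.lstsq(X, density, rcond=None)[0]   # linear stand-in for the network
r = np.corrcoef(X @ w, density)[0, 1]            # in-sample correlation
```

The real model replaces the linear fit with a neural network and, as described above, selects the input combination (Kp, AE, SYM-H, F10.7 and their histories) by comparing performance on withheld data rather than in-sample correlation.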
We present a setup combining liquid flatjet sample delivery with an MHz laser system for time-resolved soft X-ray absorption measurements of liquid samples at the high-brilliance undulator beamline UE52-SGM at BESSY II, yielding unprecedented statistics in this spectral range. We demonstrate that the efficient detection of transient absorption changes in transmission mode enables the identification of photoexcited species in dilute samples. With iron(II)-trisbipyridine in aqueous solution as a benchmark system, we present absorption measurements at various edges in the soft X-ray regime. In combination with the wavelength tunability of the laser system, the setup opens up opportunities to study the photochemistry of many systems at low concentrations, relevant to materials science, chemistry, and biology.
We present the dependence of magnetosonic wave amplitudes both outside and inside the plasmapause on the solar wind and the AE index, using Van Allen Probe-A data from 1 October 2012 to 31 December 2015, based on a correlation and regression analysis. The solar wind parameters considered are the southward interplanetary magnetic field (IMF BS), the solar wind number density (NSW), and the bulk speed (VSW). We find that the wave amplitudes outside (inside) the plasmapause are well correlated with the preceding AE, IMF BS, and NSW with time delays of 2-3 h (3-4 h), 4-5 h (3-4 h), and 2-3 h (8-9 h), respectively, while the correlation with VSW is ambiguous both inside and outside the plasmapause. As measured by the correlation coefficient, the IMF BS is the most influential solar wind parameter affecting the dayside wave amplitudes both outside and inside the plasmapause, while NSW contributes to enhancing the duskside waves outside the plasmapause. The AE effect on the wave amplitudes is comparable to that of IMF BS. More interestingly, regression with time histories of the solar wind parameters and the AE index preceding the wave measurements outside the plasmapause shows a significant dependence on IMF BS, NSW, and AE: the region of peak coefficients changes with time delay for IMF BS and AE, while the isolated peaks around the duskside gradually decrease with time for NSW. In addition, the regression for magnetosonic waves inside the plasmapause shows high coefficients around the prenoon sector with preceding IMF BS and VSW.
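The delay estimates above come from correlating the wave amplitude with time-shifted driver series; the principle can be sketched with a synthetic driver-response pair whose built-in delay of 4 samples is recovered (all numbers here are illustrative):

```python
import numpy as np

def best_lag(driver, response, max_lag):
    """Lag (in samples) at which corr(driver[t], response[t + lag]) peaks."""
    corrs = []
    for lag in range(max_lag + 1):
        d = driver[:len(driver) - lag] if lag else driver
        corrs.append(np.corrcoef(d, response[lag:])[0, 1])
    return int(np.argmax(corrs)), corrs

rng = np.random.default_rng(3)
drv = rng.normal(size=500)
resp = np.roll(drv, 4) + 0.2 * rng.normal(size=500)  # response delayed by 4 samples
lag, corrs = best_lag(drv, resp, max_lag=10)
```

In the study, the same logic is applied per magnetic local time sector, which is what produces the delay maps for each driver.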
Electromagnetic ion cyclotron (EMIC) waves play an important role in the dynamics of the ultrarelativistic electron population in the radiation belts. However, as EMIC waves are very sporadic, developing a parameterization of their properties is a challenging task. Currently, there are no dynamic, activity-dependent models of EMIC waves that can be used in long-term (several months) simulations, which makes the quantitative modeling of radiation belt dynamics incomplete. In this study, we investigate the Kp, Dst, and AE indices, the solar wind speed, and the dynamic pressure as possible parameters of EMIC wave presence. The EMIC waves are included in long-term simulations (1 year, covering different levels of geomagnetic activity) performed with the Versatile Electron Radiation Belt code, and we compare the results of the simulation with Van Allen Probes observations. The comparison shows that, among the considered parameterizations, modeling with EMIC waves parameterized by solar wind dynamic pressure provides the best agreement with the observations. The simulation with EMIC waves improves the dynamics of ultrarelativistic fluxes and reproduces the formation of the local minimum in the phase space density profiles.
Up until recently, signatures of the ultrarelativistic electron loss driven by electromagnetic ion cyclotron (EMIC) waves in the Earth's outer radiation belt have been limited to direct or indirect measurements of electron precipitation or the narrowing of normalized pitch angle distributions in the heart of the belt. In this study, we demonstrate additional observational evidence of ultrarelativistic electron loss that can be driven by resonant interaction with EMIC waves. We analyzed the profiles derived from Van Allen Probe particle data as a function of time and three adiabatic invariants between 9 October and 29 November 2012. New local minimums in the profiles are accompanied by the narrowing of normalized pitch angle distributions and ground‐based detection of EMIC waves. Such a correlation may be indicative of ultrarelativistic electron precipitation into the Earth's atmosphere caused by resonance with EMIC waves.
Scientific Objectives of Electron Losses and Fields INvestigation Onboard Lomonosov Satellite
(2017)
The objective of the Electron Losses and Fields INvestigation on board the Lomonosov satellite (ELFIN-L) project is to determine the energy spectrum of precipitating energetic electrons and ions and, together with other polar-orbiting and equatorial missions, to better understand the mechanisms responsible for scattering these particles into the atmosphere. The mission will provide detailed measurements of the radiation environment at low altitudes. The 400-500 km sun-synchronous orbit of Lomonosov is ideal for observing electrons and ions precipitating into the atmosphere, and the mission provides a unique opportunity to test the instruments. A similar suite of instruments will be flown on the future NSF- and NASA-supported spinning CubeSat ELFIN satellites, which will augment the current measurements by providing detailed information on pitch-angle distributions of precipitating and trapped particles.
Significant progress has been made in recent years in understanding acceleration mechanisms in the Earth's radiation belts. In particular, a number of studies demonstrated the importance of local acceleration by analyzing the radial profiles of phase space density (PSD) and observing growing peaks in PSD. In this study, we focus on understanding local loss using very similar tools. Profiles of PSD for various values of the first adiabatic invariant during the previously studied 17 January 2013 storm are presented and discussed. The profiles show clear deepening minima, consistent with scattering by electromagnetic ion cyclotron waves. The long-term evolution shows that local minima in PSD can persist for relatively long times. During the considered interval, deepening minima were observed around L* = 4 during the 17 January 2013 storm and around L* = 3.5 during the 1 March 2013 storm. This study presents a new method that can help identify the location, magnitude, and time of local loss and will help quantify local loss in the future. It also provides additional clear and definitive evidence that local loss plays a major role in the dynamics of multi-MeV electrons.
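The signature discussed here, a deepening local minimum in the radial PSD profile, can be identified with a simple finite-difference check; the profile below is synthetic (a smooth outer-belt peak with an imposed depletion near L* = 4), purely to illustrate the detection step:

```python
import numpy as np

lstar = np.linspace(3.0, 5.5, 26)
# synthetic radial PSD profile: broad peak near L* = 5, depletion imposed near L* = 4
psd = np.exp(-0.5 * (lstar - 5.0) ** 2) - 0.3 * np.exp(-((lstar - 4.0) / 0.3) ** 2)

# interior points that lie below both neighbours are local minima
is_min = (psd[1:-1] < psd[:-2]) & (psd[1:-1] < psd[2:])
min_locations = lstar[1:-1][is_min]
```

Applied to observed profiles at fixed adiabatic invariants, tracking such minima over time gives the location, magnitude, and timing of the local loss described above.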
Simultaneous dynamic characterization of charge and structural motion during ferroelectric switching
(2017)
Monitoring structural changes in ferroelectric thin films during electric-field-induced polarization switching is important for a full microscopic understanding of the coupled motion of charges, atoms, and domain walls in ferroelectric nanostructures. We combine standard ferroelectric test sequences of switching and nonswitching electrical pulses with time-resolved x-ray diffraction to investigate the structural response of a nanoscale Pb(Zr0.2Ti0.8)O3 ferroelectric oxide capacitor upon charging, discharging, and polarization reversal. We observe that a nonlinear piezoelectric response of the ferroelectric layer develops on a much longer time scale than the RC time constant of the device. The complex atomic motion during the ferroelectric polarization reversal starts with a contraction of the lattice, whereas the expansive piezoelectric response sets in only after considerable charge flow due to the applied voltage pulses on the electrodes of the capacitor. Our simultaneous measurements on a working device elucidate and visualize the complex interplay of charge flow and structural motion and challenge theoretical modeling.
In the strong coupling regime, exciton and plasmon excitations are hybridized into combined system excitations. The correct identification of the coupling regime in these systems is currently debated, from both experimental and theoretical perspectives. In this article we show that the extinction spectra may show a large peak splitting, although the energy loss encoded in the absorption spectra clearly rules out the strong coupling regime. We investigate the coupling of J-aggregate excitons to the localized surface plasmon polaritons on gold nanospheres and nanorods by fine-tuning the plasmon resonance via layer-by-layer deposition of polyelectrolytes. While both structures show a characteristic anticrossing in extinction and scattering experiments, the careful assessment of the systems’ light absorption reveals that strong coupling of the plasmon to the exciton is not present in the nanosphere system. In a phenomenological model of two classical coupled oscillators, a Fano-like regime causes only the resonance of the light-driven oscillator to split up, while the other one still dissipates energy at its original frequency. Only in the strong-coupling limit do both oscillators split up the frequencies at which they dissipate energy, qualitatively explaining our experimental finding.
Despite the ongoing progress in nanotechnology and its applications, the development of strategies for connecting nano-scale systems to micro- or macroscale elements is hampered by the lack of structural components that have both nano- and macroscale dimensions. The production of nano-scale wires of macroscale length is one of the most interesting challenges here. There are many strategies to fabricate long nanoscopic stripes made of metals, polymers or ceramics, but none is suitable for the mass production of ordered and dense arrangements of wires in large numbers. In this paper, we report on a technique for producing arrays of ordered, flexible and free-standing polymer nano-wires filled with different types of nano-particles. The process utilizes the strong response of photosensitive polymer brushes to irradiation with UV interference patterns, resulting in a substantial mass redistribution of the polymer material along with local rupturing of polymer chains. The chains can wind up into wires of nano-scale thickness and a length of up to several centimeters. When nano-particles are dispersed within the film, the final arrangement resembles a core-shell geometry, with the nano-particles found mainly in the core region and the polymer forming a dielectric jacket.
Carbohydrate-protein interactions are ubiquitous in nature. They provide the initial molecular contacts in many cell-cell processes, for example immune responses, signal transduction, egg fertilization and the infection processes of pathogenic viruses and bacteria. Furthermore, bacteria themselves are infected by bacteriophages, viruses that can cause bacterial lysis but do not affect other hosts. The infection process of a bacteriophage involves the specific detection and binding of the bacterium, which can be based on a carbohydrate-protein interaction. This mechanism of specific detection of pathogenic bacteria can be useful for the development of bacteria sensors in the food industry or of diagnostic tools.
Bacteriophages of the Podoviridae family use tailspike proteins (TSPs) for the specific detection of enteritis-causing bacteria such as Escherichia coli, Salmonella spp. or Shigella flexneri. The tailspike protein provides the first contact by binding to the carbohydrate-containing O-antigen part of lipopolysaccharide in the Gram-negative cell wall. After binding to O-antigen repeating units, the enzymatic activity of tailspike proteins cleaves the carbohydrate chains, which enables the bacteriophage to approach the bacterial surface for DNA injection. Because of this necessary binding, cleavage and release cycle, tailspike proteins exhibit a relatively low affinity for the oligosaccharide structures of the O-antigen compared, for example, to antibodies. This work aimed to study the determinants that influence carbohydrate affinity in the extended TSP binding grooves, a prerequisite for designing a high-affinity tailspike-protein-based bacteria sensor.
For this purpose the tailspike protein of bacteriophage Sf6 (Sf6 TSP) was used, which specifically binds Shigella flexneri Y O-antigen fragments comprising two tetrasaccharide repeating units at the subunit interfaces of the trimeric β-helix protein. The Sf6 TSP endorhamnosidase cleaves the O-antigen, yielding an octasaccharide as the main product. The binding affinity of inactive Sf6 TSP towards the polysaccharide was characterized by fluorescence titration experiments and surface plasmon resonance (SPR).
Moreover, cysteine mutations were introduced into the Sf6 TSP binding site for covalent thiol-coupling of an environment-sensitive fluorescent label, yielding a sensor for Shigella flexneri Y based on TSP-O-antigen recognition. This sensor showed a more than 100% increase in visible-light fluorescence amplitude upon binding of a polysaccharide test solution. The TSP sensor can be improved further by increasing the tailspike affinity towards the O-antigen. Therefore, molecular dynamics simulations evaluating ligand flexibility, hydrogen-bond occupancies and water-network distributions were used for affinity prediction on the available cysteine mutants of Sf6 TSP. The binding affinities were analyzed experimentally by SPR. This combined computational and experimental set-up for the design of a high-affinity carbohydrate-binding protein successfully distinguished strongly increased from decreased affinities of single-amino-acid mutants.
A thermodynamically and structurally well characterized set of high-affinity mutants of another tailspike protein, HK620 TSP, was used to evaluate the influence of water molecules on binding affinity. The free enthalpy of HK620 TSP-oligosaccharide complex formation derived either from the replacement of a conserved water molecule or from the immobilization of two water molecules upon ligand binding. Furthermore, the enthalpic and entropic contributions of water molecules in a hydrophobic binding pocket could be assigned by free-energy calculations. These findings may help to improve carbohydrate-docking and carbohydrate-binding-protein engineering algorithms in the future.
Purpose: Comparison of the dissociation kinetics of rapid-acting insulins lispro, aspart, glulisine and human insulin under physiologically relevant conditions.
Methods: Dissociation kinetics after dilution were monitored directly in terms of the average molecular mass using combined static and dynamic light scattering. Changes in tertiary structure were detected by near-UV circular dichroism.
Results: Glulisine forms compact hexamers in formulation even in the absence of Zn2+. Upon severe dilution, these rapidly dissociate into monomers in less than 10 s. In contrast, in formulations of lispro and aspart, the presence of Zn2+ and phenolic compounds is essential for formation of compact R6 hexamers. These slowly dissociate in times ranging from seconds to one hour depending on the concentration of phenolic additives. The disadvantage of the long dissociation times of lispro and aspart can be diminished by a rapid depletion of the concentration of phenolic additives independent of the insulin dilution. This is especially important in conditions similar to those after subcutaneous injection, where only minor dilution of the insulins occurs.
Conclusion: Knowledge of the diverging dissociation mechanisms of lispro and aspart compared to glulisine will be helpful for optimizing formulation conditions of rapid-acting insulins.
The research aimed to investigate back pain (BP) prevalence in a large cohort of young athletes with respect to age, gender, and sport discipline. BP (within the last 7 days) was assessed with a face scale (faces 1-2 = no pain; faces 3-5 = pain) in 2116 athletes (m/f 61%/39%; 13.3 +/- 1.7 years; 163.0 +/- 11.8 cm; 52.6 +/- 13.9 kg; 4.9 +/- 2.7 training years; 8.4 +/- 5.7 training h/week). Four sports categories were devised (a: combat sports; b: game sports; c: explosive strength sports; d: endurance sports). Data were analyzed descriptively with regard to age, gender, and sport; in addition, 95% confidence intervals (CI) were calculated. In total, 168 (8%) athletes were allocated to the BP group: 9% of females and 7% of males reported BP. Athletes aged 11-13 years showed a prevalence of 2-4%, which increased to 12-20% in 14- to 17-year-olds. By sport discipline, prevalence ranged from 3% (soccer) to 14% (canoeing); prevalences in weight lifting, judo, wrestling, rowing, and shooting were 10%, and in boxing, soccer, handball, cycling, and horse riding, 6%. The 95% CIs ranged between 0.08 and 0.11. BP exists in adolescent athletes but is uncommon and shows no gender differences. A prevalence increase after age 14 is evident. Differentiated prevention programs in daily training routines might address sport-discipline-specific BP prevalence.
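As an aside, a 95% confidence interval for a prevalence like the one above can be illustrated with a normal-approximation (Wald) interval over the reported totals (168 of 2116). This is a generic sketch, not the study's actual computation (the reported 0.08-0.11 range likely refers to subgroup estimates), so the pooled interval below differs slightly:

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion k/n."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# 168 of 2116 athletes reported back pain
p, lo, hi = wald_ci(168, 2116)
print(f"prevalence = {p:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

For small subgroups (e.g., a single sport discipline), a Wilson or exact interval would be preferable to the Wald approximation.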
The molecular structure and conformational preferences of 1-phenyl-1-X-1-silacyclohexanes C5H10Si(Ph,X) (X = F (3), Cl (4)) were studied by gas-phase electron diffraction, low-temperature NMR spectroscopy, and high-level quantum chemical calculations. In the gas phase, only three (3) and two (4) stable conformers, differing in the axial or equatorial location of the phenyl group and in the angle of rotation about the Si-C(Ph) bond (axi and axo denote the Ph group lying in or out of the X-Si-C(Ph) plane), contribute to the equilibrium. In 3 the ratio Ph-eq:Ph-axo:Ph-axi is 40(12):55(24):5 by experiment and 64:20:16 by theory. In 4 the ratio Ph-eq:Ph-axo is 79(15):21(15) by experiment and 71:29 by theory (M06-2X calculations). The gas-phase electron diffraction parameters are in good agreement with those obtained from theory at the M06-2X/aug-cc-pVTZ and MP2/aug-cc-pVTZ levels. Unlike M06-2X, MP2 calculations indicate that the 3-Ph-eq conformer lies 0.5 kcal/mol higher than the 3-Ph-axo conformer. As follows from QTAIM analysis, the phenyl group is more stable when located in the axial position but destabilizes the silacyclohexane ring. By low-temperature NMR spectroscopy, the six-membered ring interconversion could be frozen at 103 K and the conformational equilibria of 3 and 4 could be determined. The ratio of the conformers is 3-Ph-eq:3-Ph-ax = (75-77):(23-25) and 4-Ph-eq:4-Ph-ax = 82:18.
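For context, conformer ratios frozen out at low temperature translate into small free-energy differences via the Boltzmann relation DeltaG = -RT ln K. The sketch below (not part of the original study) applies this to the reported 3-Ph-eq:3-Ph-ax ratio at 103 K, using the midpoint of the quoted range:

```python
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def free_energy_diff(ratio_major, ratio_minor, temperature):
    """DeltaG (kcal/mol) between two conformers from their population ratio."""
    return -R * temperature * math.log(ratio_minor / ratio_major)

# 3-Ph-eq : 3-Ph-ax = 76 : 24 at 103 K (midpoint of the reported (75-77):(23-25))
dg = free_energy_diff(76, 24, 103.0)
print(f"DeltaG = {dg:.2f} kcal/mol")
```

The resulting difference of roughly a quarter kcal/mol illustrates why such equilibria can only be resolved once ring interconversion is frozen at low temperature.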
The synthesis of new N,N-dimethylcarbamoyl 5-aryloxytetrazoles is reported. Their dynamic H-1 NMR behaviour arising from rotation about the C-N bonds of the urea moiety [a: CO-NMe2 rotation and b: (2-tetrazolyl)N-CO rotation] was studied in CDCl3 (223-333 K) and DMSO (298-363 K). The free energies of activation, 16.5 and 16.9 kcal mol(-1) respectively, were attributed to the conformational isomerization about the Me2N-C=O bond (rotation a). Moreover, the barriers to rotations a and b in 5-(4-methylphenoxy)-N,N-dimethyl-2H-tetrazole-2-carboxamide (P) were also computed at the B3LYP/6-311++G** level. The optimized geometry parameters are in good agreement with the X-ray structure data. The computed energy barriers for a and b were 16.9 and 2.5 kcal mol(-1), respectively; the former fully agrees with the result obtained by dynamic NMR. X-ray structure analysis demonstrates that only the 2-acylated tetrazole was formed in the case of 5-(p-tolyloxy)-N,N-dimethyl-2H-tetrazole-2-carboxamide. The X-ray data also reveal a trigonal-planar orientation of the Me2N group, coplanar with the carbonyl group and with partial C-N double-bond character, and show the synperiplanar position of the C=O group with respect to the tetrazolyl ring. In solution, by contrast, the plane containing the carbonyl bond is on average almost perpendicular to the plane of the tetrazolyl ring (because of steric effects, as confirmed by B3LYP/6-311++G** calculations), while the plane containing the Me2N group remains coplanar with the carbonyl bond; this contrasts with similar urea derivatives and accounts for the unusually high rotational energy barrier of these compounds. (C) 2016 Elsevier B.V. All rights reserved.
Aim: Across the planet, grass-dominated biomes are experiencing shrub encroachment driven by atmospheric CO2 enrichment and land-use change. By altering resource structure and availability, shrub encroachment may have important impacts on vertebrate communities. We sought to determine the magnitude and variability of these effects across climatic gradients, continents, and taxa, and to learn whether shrub thinning restores the structure of vertebrate communities. Location: Worldwide. Time period: Contemporary. Major taxa studied: Terrestrial vertebrates. Methods: We estimated relationships between percentage shrub cover and the structure of terrestrial vertebrate communities (species richness, Shannon diversity and community abundance) in experimentally thinned and unmanipulated shrub-encroached grass-dominated biomes using systematic review and meta-analyses of 43 studies published from 1978 to 2016. We modelled the effects of continent, biome, mean annual precipitation, net primary productivity and the normalized difference vegetation index (NDVI) on the relationship between shrub cover and vertebrate community structure. Results: Species richness, Shannon diversity and total abundance had no consistent relationship with shrub encroachment and experimental thinning did not reverse encroachment effects on vertebrate communities. However, some effects of shrub encroachment on vertebrate communities differed with net primary productivity, amongst vertebrate groups, and across continents. Encroachment had negative effects on vertebrate diversity at low net primary productivity. Mammalian and herpetofaunal diversity decreased with shrub encroachment. Shrub encroachment also had negative effects on species richness and total abundance in Africa but positive effects in North America. Main conclusions: Biodiversity conservation and mitigation efforts responding to shrub encroachment should focus on low-productivity locations, on mammals and herpetofauna, and in Africa. 
However, targeted research in neglected regions such as central Asia and India will be needed to fill important gaps in our knowledge of shrub encroachment effects on vertebrates. Additionally, our findings provide an impetus for determining the mechanisms associated with changes in vertebrate diversity and abundance in shrub-encroached grass-dominated biomes.
A new method is proposed for tracking individual motor units (MUs) across multiple experimental sessions on different days. The technique is based on a novel decomposition approach for high-density surface electromyography and was tested in two experimental studies for reliability and sensitivity. Experiment I (reliability): ten participants performed isometric knee extensions at 10, 30, 50 and 70% of their maximum voluntary contraction (MVC) force in three sessions, each separated by 1 week. Experiment II (sensitivity): seven participants performed 2 weeks of endurance training (cycling) and were tested pre and post intervention during isometric knee extensions at 10 and 30% MVC. The reliability (Experiment I) and sensitivity (Experiment II) of the measured MU properties were compared for the MUs tracked across sessions with respect to all MUs identified in each session. In Experiment I, on average 38.3% and 40.1% of the identified MUs could be tracked across two sessions (1 and 2 weeks apart) for the vastus medialis and vastus lateralis, respectively. Moreover, the properties of the tracked MUs were more reliable across sessions than those of the full set of identified MUs (intra-class correlation coefficients ranged between 0.63-0.99 and 0.39-0.95, respectively). In Experiment II, approximately 40% of the MUs could be tracked before and after the training intervention, and training-induced changes in MU conduction velocity had an effect size of 2.1 (tracked MUs) versus 1.5 (all identified MUs). These results show the possibility of monitoring MU properties longitudinally to document the effect of interventions or the progression of neuromuscular disorders.
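The effect sizes quoted above are Cohen's-d-type statistics. As a generic illustration only (the values below are invented, not the study's conduction-velocity recordings), a pooled-SD Cohen's d for a pre/post comparison can be computed as:

```python
import math

def cohens_d(pre, post):
    """Cohen's d with pooled SD, treating pre and post as two samples."""
    n1, n2 = len(pre), len(post)
    m1 = sum(pre) / n1
    m2 = sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# hypothetical MU conduction velocities (m/s) before and after training
pre = [4.0, 4.2, 4.1, 3.9, 4.3]
post = [4.4, 4.6, 4.5, 4.3, 4.7]
print(f"d = {cohens_d(pre, post):.2f}")
```

Effect sizes around 2, as reported for the tracked MUs, indicate a change much larger than the between-unit variability.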
Background
Foot orthoses are usually assumed to be effective by mechanically optimizing the dynamic rearfoot configuration. However, the scientifically demonstrated effect of foot orthoses on kinematics has been only marginal. The aim of this study was to examine the effect of different heights of medial arch support in foot orthoses on rearfoot motion during gait.
Methods
Nineteen asymptomatic runners (36 ± 11 years; 180 ± 5 cm; 79 ± 10 kg; 41 ± 22 km/week) participated in the study. Trials were recorded at 3.1 mph (5 km/h) on a treadmill. Athletes walked barefoot and with four different non-customized medial arch-supported foot orthoses of various arch heights (N: 0 mm, M: 30 mm, H: 35 mm, E: 40 mm). Six infrared cameras and the 'Oxford Foot Model' were used to capture motion. The average stride in each condition was calculated from 50 gait cycles per condition. Eversion excursion and internal tibia rotation were analyzed. Descriptive statistics included the mean ± SD and 95% CIs. Group differences by condition were analyzed by a one-factor (foot orthoses) repeated-measures ANOVA (α = 0.05).
Results
Eversion excursion revealed the lowest values for N and the highest for H (B: 4.6° ± 2.2°, 95% CI [3.1; 6.2] / N: 4.0° ± 1.7°, [2.9; 5.2] / M: 5.2° ± 2.6°, [3.6; 6.8] / H: 6.2° ± 3.3°, [4.0; 8.5] / E: 5.1° ± 3.5°, [2.8; 7.5]) (p > 0.05). The range of internal tibia rotation was lowest with orthosis H and highest with E (B: 13.3° ± 3.2°, 95% CI [11.0; 15.6] / N: 14.5° ± 7.2°, [9.2; 19.6] / M: 13.8° ± 5.0°, [10.8; 16.8] / H: 12.3° ± 4.3°, [9.0; 15.6] / E: 14.9° ± 5.0°, [11.5; 18.3]) (p > 0.05). Differences between conditions were small and the intra-subject variation was high.
Conclusion
Our results indicate that different arch support heights have no systematic effect on eversion excursion or the range of internal tibia rotation and therefore might not exert a crucial influence on rear foot alignment during gait.
This longitudinal study investigated patterns of developmental problems across depression, aggression, and academic achievement during adolescence, using two measurement points two years apart (N = 1665; age T1: M = 13.14; female = 49.6%). Latent Profile Analyses and Latent Transition Analyses yielded four main findings: (a) a three-type solution provided the best fit to the data: an asymptomatic type (i.e., low problem scores in all three domains), a depressed type (i.e., high scores in depression), and an aggressive type (i.e., high scores in aggression); (b) profile types were invariant over the two data waves but differed between girls and boys, revealing gender-specific patterns of comorbidity; (c) stabilities over time were high for the asymptomatic type and for types that represented problems in one domain, but moderate for comorbid types; (d) differences in demographic variables (i.e., age, socio-economic status) and individual characteristics (i.e., self-esteem, dysfunctional cognitions, cognitive capabilities) predicted profile type memberships and longitudinal transitions between types.
Previous studies contrasted the effects of plyometric training (PT) conducted on stable vs. unstable surfaces on components of physical fitness in child and adolescent soccer players. Depending on the training modality (stable vs. unstable), specific performance improvements were found for jump (stable PT) and balance performances (unstable PT). In an attempt to combine the effects of both training modalities, this study examined the effects of PT on stable surfaces compared with combined PT on stable and unstable surfaces on components of physical fitness in prepubertal male soccer athletes. Thirty-three boys were randomly assigned to either PT on stable surfaces (PTS; n = 17; age = 12.1 +/- 0.5 years; height = 151.6 +/- 5.7 cm; body mass = 39.2 +/- 6.5 kg; maturity offset = -2.3 +/- 0.5 years) or combined PT on stable and unstable surfaces (PTC; n = 16; age = 12.2 +/- 0.6 years; height = 154.6 +/- 8.1 cm; body mass = 38.7 +/- 5.0 kg; maturity offset = -2.2 +/- 0.6 years). Both intervention groups conducted 4 soccer-specific training sessions per week combined with either 2 PTS or 2 PTC sessions. Before and after 8 weeks of training, proxies of muscle power (e.g., countermovement jump [CMJ], standing long jump [SLJ]), muscle strength (e.g., reactive strength index [RSI]), speed (e.g., 20-m sprint test), agility (e.g., modified Illinois change of direction test [MICODT]), static balance (e.g., stable stork balance test [SSBT]), and dynamic balance (unstable stork balance test [USBT]) were tested. An analysis of covariance model was used to test between-group differences (PTS vs. PTC) at posttest using baseline outcomes as covariates. No significant between-group differences at posttest were observed for CMJ (p > 0.05, d = 0.41), SLJ (p > 0.05, d = 0.36), RSI (p > 0.05, d = 0.57), 20-m sprint test (p > 0.05, d = 0.06), MICODT (p > 0.05, d = 0.23), and SSBT (p > 0.05, d = 0.20).
However, statistically significant between-group differences at posttest were noted for the USBT (p < 0.01, d = 1.49) in favor of the PTC group. For most physical fitness tests (except the RSI), significant pre-to-post improvements were observed in both groups (p < 0.01, d = 0.55-3.96). Eight weeks of PTS or PTC resulted in similar performance improvements in components of physical fitness, except for dynamic balance. From a performance-enhancing perspective, PTC is recommended for pediatric strength and conditioning coaches because it produced training effects comparable to PTS on proxies of muscle power, muscle strength, speed, agility, and static balance, with additional effects on dynamic balance.
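The statistical model described above (posttest outcome adjusted for baseline) is an ANCOVA. A minimal numpy sketch with simulated data (group sizes, effect magnitude, and noise level are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
baseline = rng.normal(30, 4, 2 * n)   # e.g., pretest CMJ height (cm), hypothetical
group = np.repeat([0, 1], n)          # 0 = PTS, 1 = PTC
# simulated posttest: baseline carry-over + common gain + extra gain for group 1
post = baseline + 2.0 + 1.5 * group + rng.normal(0, 1.5, 2 * n)

# ANCOVA as OLS: post ~ intercept + group + baseline
X = np.column_stack([np.ones(2 * n), group, baseline])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"baseline-adjusted group difference: {beta[1]:.2f}")
```

The coefficient on `group` estimates the between-group difference at posttest after adjusting for baseline, which is exactly the comparison the d-values above summarize.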
The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as the dependent variable or as a covariate for the other in linear mixed models (LMMs) that also feature psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words), although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: the IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, whereas the OVP effect (at least its left part) is uncovered with the predicted fixation-location covariate, which captures the indirect effects of psycholinguistic covariates. We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research.
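The linking idea can be sketched outside the full LMM machinery: regress fixation location on a psycholinguistic covariate, then enter the predicted and residual parts as separate covariates in the duration model. The toy example below uses plain OLS on simulated data (all variable names and effect sizes are invented; it omits the random effects of the actual LMMs):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
word_freq = rng.normal(0, 1, n)                   # psycholinguistic covariate
noise_loc = rng.normal(0, 1, n)                   # e.g., saccadic error
location = 0.6 * word_freq + noise_loc            # fixation location in word
# durations: indirect covariate effect plus a residual-location effect
duration = 250 - 10 * word_freq + 8 * noise_loc + rng.normal(0, 5, n)

# Stage 1: location ~ covariate; split into predicted + residual
X1 = np.column_stack([np.ones(n), word_freq])
b1, *_ = np.linalg.lstsq(X1, location, rcond=None)
predicted = X1 @ b1
residual = location - predicted

# Stage 2: duration ~ predicted location + residual location
X2 = np.column_stack([np.ones(n), predicted, residual])
b2, *_ = np.linalg.lstsq(X2, duration, rcond=None)
print("effect via predicted location:", round(b2[1], 2))
print("effect via residual location:", round(b2[2], 2))
```

Because predicted and residual location are orthogonal by construction, their coefficients separate the covariate-mediated (indirect) pathway from the oculomotor-error pathway, which is the decomposition the abstract exploits to dissect the IOVP effect.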
Background
Much of the organismal variation we observe in nature is due to differences in organ size. The observation that even closely related species can show large, stably inherited differences in organ size indicates a strong genetic component to the control of organ size. Despite recent progress in identifying factors controlling organ growth in plants, our overall understanding of this process remains limited, partly because the individual factors have not yet been connected into larger regulatory pathways or networks. To begin addressing this gap, we studied the upstream regulation of expression of BIG BROTHER (BB), a central growth-control gene in Arabidopsis thaliana that prevents overgrowth of organs. Final organ size and BB expression levels are tightly correlated, implying the need for precise control of its expression. BB expression mirrors proliferative activity, yet the gene functions to limit proliferation, suggesting that it acts in an incoherent feedforward loop downstream of growth activators to prevent over-proliferation.
Results
To investigate the upstream regulation of BB we combined a promoter deletion analysis with a phylogenetic footprinting approach. We were able to narrow down important, highly conserved, cis-regulatory elements within the BB promoter. Promoter sequences of other Brassicaceae species were able to partially complement the A. thaliana bb-1 mutant, suggesting that at least within the Brassicaceae family the regulatory pathways are conserved.
Conclusions
This work underlines the complexity involved in precise quantitative control of gene expression and lays the foundation for identifying important upstream regulators that determine BB expression levels and thus final organ size.
This dissertation explores whether the processing of ellipsis is affected by changes in the complexity of the antecedent, either due to added linguistic material or to the presence of a temporary ambiguity. Murphy (1985) hypothesized that ellipsis is resolved via a string copying procedure when the antecedent is within the same sentence, and that copying longer strings takes more time. Such an account also implies that the antecedent is copied without its structure, which in turn implies that recomputing its syntax and semantics may be necessary at the ellipsis gap. Alternatively, several accounts predict null effects of antecedent complexity, as well as no reparsing. These either involve a structure copying mechanism that is cost-free and whose finishing time is thus independent of the form of the antecedent (Frazier & Clifton, 2001), treat ellipsis as a pointer into content-addressable memory with direct access (Martin & McElree, 2008, 2009), or assume that one structure is ‘shared’ between antecedent and gap (Frazier & Clifton, 2005).
In a self-paced reading study on German sluicing, temporarily ambiguous garden-path clauses were used as antecedents, but no evidence of reparsing in the form of a slowdown at the ellipsis site was found. Instead, the results suggest that antecedents which had been reanalyzed from an initially incorrect structure were easier to retrieve at the gap. This finding can be explained within the framework of cue-based retrieval parsing (Lewis & Vasishth, 2005), where additional syntactic operations on a structure yield memory reactivation effects.
Two further self-paced reading studies on German bare argument ellipsis and English verb phrase ellipsis investigated if adding linguistic content to the antecedent would increase processing times for the ellipsis, and whether insufficiently demanding comprehension tasks may have been responsible for earlier null results (Frazier & Clifton, 2000; Martin & McElree, 2008). It has also been suggested that increased antecedent complexity should shorten rather than lengthen retrieval times by providing more unique memory features (Hofmeister, 2011). Both experiments failed to yield reliable evidence that antecedent complexity affects ellipsis processing times in either direction, irrespectively of task demands.
Finally, two eye-tracking studies probed more deeply into the proposed reactivation-induced speedup found in the first experiment. The first study used three different kinds of French garden-path sentences as antecedents, with two of them failing to yield evidence for reactivation. Moreover, the third sentence type showed evidence suggesting that having failed to assign a structure to the antecedent leads to a slowdown at the ellipsis site, as well as regressions towards the ambiguous part of the sentence. The second eye-tracking study used the same materials as the initial self-paced reading study on German, with results showing a pattern similar to the one originally observed, with some notable differences.
Overall, the experimental results are compatible with the view that adding linguistic material to the antecedent has no or very little effect on the ease with which ellipsis is resolved, which is consistent with the predictions of cost-free copying, pointer-based approaches and structure sharing. Additionally, effects of the antecedent’s parsing history on ellipsis processing may be due to reactivation, the availability of multiple representations in memory, or complete failure to retrieve a matching target.
Age-related decline in executive functions and postural control due to degenerative processes in the central nervous system has been related to increased fall-risk in old age. Many studies have shown cognitive-postural dual-task interference in old adults, but research on the role of specific executive functions in this context has just begun. In this study, we asked whether postural control is impaired by the coordination of concurrent response-selection processes, related to the compatibility of input and output modality mappings, over and above impairments related to working-memory load (cognitive dual vs. single tasks). Specifically, we measured total center of pressure (CoP) displacements in healthy female participants aged 19–30 and 66–84 years while they performed different versions of a spatial one-back working memory task during semi-tandem stance on an unstable surface (i.e., balance pad) placed on a force plate. The specific working-memory tasks comprised: (i) modality compatible single tasks (i.e., visual-manual or auditory-vocal tasks), (ii) modality compatible dual tasks (i.e., visual-manual and auditory-vocal tasks), (iii) modality incompatible single tasks (i.e., visual-vocal or auditory-manual tasks), and (iv) modality incompatible dual tasks (i.e., visual-vocal and auditory-manual tasks). In addition, participants performed the same tasks while sitting. As expected from previous research, old adults showed generally impaired performance under high working-memory load (i.e., dual vs. single one-back task). In addition, modality compatibility affected one-back performance in dual-task but not in single-task conditions, with strikingly pronounced impairments in old adults. Notably, the modality incompatible dual task also resulted in a selective increase in total CoP displacements compared to the modality compatible dual task in the old but not in the young participants.
These results suggest that in addition to effects of working-memory load, processes related to simultaneously overcoming special linkages between input- and output modalities interfere with postural control in old but not in young female adults. Our preliminary data provide further evidence for the involvement of cognitive control processes in postural tasks.