On Track to Success?
(2022)
Many countries consider expanding vocational curricula in secondary education to boost skills and labour market outcomes among non-university-bound students. However, critics fear this could divert other students from more profitable academic education. We study labour market returns to vocational education in England, where until recently students chose between a vocational track, an academic track and quitting education at age 16. Identification is challenging because self-selection is strong and because students’ next-best alternatives are unknown. Against this backdrop, we leverage multiple instrumental variables to estimate margin-specific treatment effects, i.e., causal returns to vocational education for students at the margin with academic education and, separately, for students at the margin with quitting education. Identification comes from variation in distance to the nearest vocational provider conditional on distance to the nearest academic provider (and vice versa), while controlling for granular student, school and neighbourhood characteristics. The analysis is based on population-wide administrative education data linked to tax records. We find that the vast majority of marginal vocational students are indifferent between vocational and academic education. For them, vocational enrolment substantially decreases earnings at age 30. This earnings penalty grows with age and is due to wages, not employment. However, consistent with comparative advantage, the penalty is smaller for students with higher revealed preferences for the vocational track. For the few students at the margin with no further education, we find merely tentative evidence of increased employment and earnings from vocational enrolment.
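The distance-instrument strategy described above can be illustrated with a toy two-stage least squares estimate. Everything below is a simulated sketch, not the paper's data or specification: the variable names, coefficients, and single-treatment setup are illustrative assumptions, and the paper's margin-specific effects with multiple instruments are deliberately not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical simulation: unobserved ability drives both enrolment and earnings.
ability = rng.normal(size=n)
z_voc = rng.uniform(0, 10, n)    # distance to nearest vocational provider (km)
z_acad = rng.uniform(0, 10, n)   # distance to nearest academic provider (km)

# Enrol in the vocational track if it is close, the academic option is far,
# and ability is low (self-selection).
voc = (0.3*z_acad - 0.3*z_voc - 0.5*ability + rng.normal(size=n) > 0).astype(float)

true_return = -0.10              # assumed causal earnings penalty
log_earn = true_return*voc + 0.4*ability + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS: biased because ability is omitted.
b_ols = ols(np.column_stack([ones, voc]), log_earn)[1]

# 2SLS: project enrolment on both distance instruments, then regress
# earnings on the first-stage fitted values.
Z = np.column_stack([ones, z_voc, z_acad])
voc_hat = Z @ ols(Z, voc)
b_iv = ols(np.column_stack([ones, voc_hat]), log_earn)[1]

print(f"OLS estimate:  {b_ols:.3f}")   # biased by self-selection
print(f"2SLS estimate: {b_iv:.3f}")    # close to the assumed -0.10
```

The point of the sketch is the sign of the selection bias: because lower-ability students sort into the vocational track, OLS overstates the penalty, while the distance instruments recover the assumed causal return.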
The paper investigates the question of sustainability of capacity building initiatives by reporting about the multiplication training in the frame of DIES NMT Programme on quality assurance in Uganda and how it could make use of the social capital within the existing quality assurance network to sustain and address challenges during its implementation. The purpose of the article is to explore the nature of networking (social and institutional) which was established by the Ugandan Universities Quality Assurance Forum (UUQAF) and share the strategies used in this training experience for future sustainable capacity building training initiatives in emerging economies. The paper employed a qualitative research method to describe and analyse the training framework based on primary and secondary documents.
The key to reducing the energy required for specific transformations in a selective manner is the employment of a catalyst, a very small molecular platform that decides which type of energy to use. The field of photocatalysis exploits light energy to transform one type of molecule into other, more valuable and useful ones.
However, many challenges arise in this field: for example, the catalysts employed are usually based on metal derivatives, whose abundance is limited, which cannot be recycled, and which are expensive. Therefore, carbon nitride materials are used in this work to expand the horizons of photocatalysis.
Carbon nitrides are organic materials that can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods and shaped to enable new types of processes.
Indeed, they enabled the creation of a new photocatalytic synthetic strategy, the dichloromethylation of enones by dichloromethyl radicals generated in situ from chloroform, a novel route to building blocks for the production of active pharmaceutical compounds.
Then, the ductility of these materials allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, showing the great potential of such a flexible technology in photocatalysis.
Afterwards, their ability to store charges was exploited in the reduction of organic substrates under dark conditions, yielding new insights into multisite proton-coupled electron transfer processes.
Furthermore, the combination of carbon nitrides with flavins allowed the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials, which may help to unveil further scientific discoveries and to develop a more sustainable future.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes
(2017)
Nanoporous carbon-based materials are of particular interest for both science and industry due to their exceptional properties, such as a large surface area, high pore volume, high electrical conductivity, and high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons have proved useful in various energy- and environment-related applications, including energy storage and conversion, catalysis, gas sorption and separation technologies. The synthesis of nanoporous carbons classically involves thermal carbonization of carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol)) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly, and they make use of hazardous chemicals, hindering their application in large-scale production. Furthermore, control over the properties of the carbon materials is challenging owing to the relatively unpredictable processes at high carbonization temperatures.
In the present thesis, nanoporous carbon-based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy does not require any additional carbon sources or classical hard or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions, including Zn²⁺, Cu²⁺, Ni²⁺ and Co²⁺. Here, the structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of squarate ions to Zn²⁺ yields porous crystalline 3D cube particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids, which evolve at the centers of the low-index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can easily be transformed into the respective carbon-based materials by heat treatment at elevated temperatures in a nitrogen atmosphere, followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m² g⁻¹ and 2.31 cm³ g⁻¹, respectively, are achieved, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure.
Owing to these advantageous properties, the resulting carbon-based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material, showing a specific capacitance of 133 F g⁻¹ in H₂SO₄ at a scan rate of 5 mV s⁻¹ and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s⁻¹.
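The rate-capability figures above can be checked against the standard way of extracting gravimetric capacitance from a cyclic voltammogram, C = Q / (2 m v ΔV). The snippet below uses an idealised rectangular voltammogram as an assumption, not data from the thesis; only the 133 F g⁻¹ and 67% retention figures come from the abstract.

```python
import numpy as np

# Gravimetric capacitance from a CV curve:
#   C = Q / (2 * m * v * dV),  Q = integral of |I| over both sweeps,
# with scan rate v (V/s), electrode mass m (g), potential window dV (V).
v = 0.005                            # scan rate: 5 mV/s
m = 1e-3                             # electrode mass: 1 mg (illustrative)
V = np.linspace(0.0, 1.0, 500)       # potential window (V)
I = np.full_like(V, 133.0 * m * v)   # constant current of an ideal capacitor

# Trapezoidal integration over one sweep, doubled for anodic + cathodic.
Q = 2 * np.sum(0.5 * (np.abs(I[:-1]) + np.abs(I[1:])) * np.diff(V))
C_spec = Q / (2 * m * v * (V[-1] - V[0]))
print(f"specific capacitance: {C_spec:.0f} F/g")   # recovers 133 F/g

# Rate capability quoted in the abstract: 67% retained at 200 mV/s.
print(f"at 200 mV/s: {133.0 * 0.67:.0f} F/g")      # about 89 F/g
```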
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as a high-surface-area support material and decorated with nickel nanoparticles via incipient wetness impregnation. The resulting composite material combines a high surface area and a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, the particles allow for good packing of a fixed-bed flow reactor, along with high column efficiency and a minimized pressure drop throughout the packed reactor. Therefore, the composite is employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the conventional problem of column blocking.
Looking towards the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the targeted introduction of heteroatoms (e.g. N, B, S, P) into the carbon structures in order to alter properties such as wettability, surface polarity and the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open up a platform of highly functional materials for all applications that involve surface processes.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation style. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The superficial expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Amongst the proposed causes for the observed variation are changes in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. Integrating independent data into a consistent model of the lithosphere provides additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn was used to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling verified against the Bouguer gravity field. The model delineates the first-order density contrasts in the uppermost 200 km of the lithosphere and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
In a related study, we demonstrate that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. Applying the method to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion, thereby providing an important tool for the delineation of sub-crustal density trends.
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate, and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. By contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot sufficiently reduce the strength of the lithosphere for them to be reactivated. Therefore, magmatism, which has been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Due to thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlement in attractive yet hazardous flood-prone areas has led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, focusing on the choice of input data, how well can a global flood modeling chain reproduce the flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and the global hydrological models that serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and only for a few events does it depend on the choice of global hydrological model. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
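The abstract mentions an evaluation scheme of performance scores comparing simulated and satellite-derived flood extent. A common choice for binary flood maps is the contingency-table trio of hit rate, false alarm ratio, and critical success index; the grids below are toy values, and the actual scores used in the thesis may differ.

```python
import numpy as np

# Toy inundation grids: True = flooded, False = dry (illustrative only).
simulated = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 0, 0]], dtype=bool)
observed  = np.array([[1, 0, 0, 0],
                      [1, 1, 0, 0],
                      [0, 1, 1, 0]], dtype=bool)

hits         = np.sum(simulated & observed)    # flooded in both maps
false_alarms = np.sum(simulated & ~observed)   # simulated only
misses       = np.sum(~simulated & observed)   # observed only

hit_rate = hits / (hits + misses)
far      = false_alarms / (hits + false_alarms)
csi      = hits / (hits + false_alarms + misses)  # critical success index

print(f"hit rate = {hit_rate:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")
```

A systematic overestimation of flood extent, as reported above for runs without protection, would show up here as a high hit rate paired with a high false alarm ratio.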
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple dependencies on national socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly people, population density and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigation of the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the effect of increasing wind speed being the dominant factor.
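At its core, the storyline attribution above compares displaced population between a factual flood (with climate change) and a counterfactual one (without). A minimal sketch with made-up exposure numbers; only the 3.1–3.5% range comes from the abstract.

```python
# Hypothetical exposure counts for a factual/counterfactual comparison.
factual_displaced = 103_300        # with sea level rise + intensified winds
counterfactual_displaced = 100_000 # counterfactual without climate change

# Attributable relative increase in displacement risk.
increase = (factual_displaced - counterfactual_displaced) / counterfactual_displaced
print(f"attributable increase in displacement risk: {increase:.1%}")  # 3.3%
```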
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or attributing other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
Earthquake modeling is the key to a profound understanding of a rupture. Its kinematics or dynamics are derived from advanced rupture models that make it possible, for example, to reconstruct the direction and velocity of the rupture front or the evolving slip distribution behind it. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time history of slip and rake on each sub-fault is inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints, such as smoothing.
As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry as a physics-based description of an earthquake rupture. The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface guaranteed at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained and develops on its own without additional parameterization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D Eikonal equation, assuming a linear relationship between rupture and shear wave velocity. This allowed us to propose a deeper and faster propagating rupture front, and the resulting upward refraction, as a new possible explanation for the apparent supershear observed at the Earth's surface.
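The 2D Eikonal treatment of the rupture front can be sketched numerically. The snippet below is a hedged illustration, not the authors' implementation (which is part of the Pyrocko/Grond packages): it computes first-arrival times for |∇t| = 1/v_r on a grid with a Dijkstra-style shortest-path scheme, a common graph approximation of fast marching, assuming v_r = 0.8 v_s and a depth-increasing shear-wave speed. With such a gradient, the fastest path dives to depth before resurfacing, which is the upward-refraction effect invoked above for the apparent supershear.

```python
import heapq
import numpy as np

def rupture_onset(vr, src, dx=1.0):
    """First-arrival times of the rupture front on a 2D grid, approximating
    the Eikonal equation |grad t| = 1/vr with a Dijkstra shortest-path
    scheme on an 8-neighbour graph."""
    ny, nx = vr.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t0, (i, j) = heapq.heappop(heap)
        if t0 > t[i, j]:
            continue  # stale heap entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    step = dx * np.hypot(di, dj)
                    # slowness averaged over the two cells
                    t1 = t0 + step * 0.5 * (1/vr[i, j] + 1/vr[ni, nj])
                    if t1 < t[ni, nj]:
                        t[ni, nj] = t1
                        heapq.heappush(heap, (t1, (ni, nj)))
    return t

# Shear-wave speed increasing with depth (km/s); rupture speed 0.8 * vs.
vs = 3.0 + 0.05 * np.arange(50)[:, None] * np.ones((1, 200))
times = rupture_onset(0.8 * vs, src=(0, 0), dx=1.0)
print(f"onset at far end of the shallow row: {times[0, -1]:.1f} s")
```

In this toy setup the arrival at the far shallow node is earlier than a purely shallow path would allow, mimicking an apparently too-fast surface rupture front.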
The thesis investigates three aspects of earthquake inversion using the PDR: (1) to test whether implementing a simplified rupture model with few parameters in a probabilistic Bayesian scheme, without constraining geometric parameters, is feasible, and whether this leads to fast and robust results that can be used for subsequent rapid information systems (e.g., ground motion predictions); (2) to investigate whether combining broadband and strong-motion seismic records with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion; (3) to investigate whether a complex rupture can be represented by the inversion of multiple PDR sources, and for what types of earthquakes this is recommended.
I developed the PDR inversion approach and applied joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple-source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake, with a large, curved and slowly rupturing shallow event bounded by two faster, deeper and smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks and a simple tsunami forward model.
The joint analysis of ground deformation and seismic data within a multiple source inversion also shed light on an earthquake triplet, which occurred in July 2022 in SE Iran. From the inversion and aftershock relocalization, I found indications for a vertical separation between the shallower mainshocks within the sedimentary cover and deeper aftershocks at the sediment-basement interface. The vertical offset could be caused by the ductile response of the evident salt layer to stress perturbations from the mainshocks.
The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion, capturing features of rather different, complex earthquakes. Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences from other finite-fault modeling methods.
Fronting of an infinite VP across a finite main verb (akin to German "VP-topicalization") can also be found in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
One aspect of achieving a more sustainable chemical industry is the minimization of the usage of solvents and chemicals. Thus, optimization and development of chemical processes for large-scale production are favourably performed in small batches. The critical step in this approach is scaling up from the small reaction systems to the large reactors mandatory for cost-efficient production in an industrial environment. Scaling up the bulk volume always increases the surface over which the reaction medium is in contact with the confining vessel. Since volume scales with the cube of the linear dimension while surface scales with its square, their ratio is size-dependent. The influence of the reaction vessel walls can change reaction performance: a number of phenomena occurring at the surface-liquid interface can affect reaction rates and yields, making it difficult to predict and extrapolate from small production scales to large industrial processes. The application of levitated droplets as containerless reaction vessels provides a promising way to avoid the above-mentioned issues.
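The cube-versus-square scaling argument above is easy to make concrete for a spherical batch, where the surface-to-volume ratio reduces to S/V = 3/r. The radii below are arbitrary illustrative values, spanning droplet to plant scale.

```python
import math

# Surface-to-volume ratio of a sphere: S/V = (4*pi*r^2)/((4/3)*pi*r^3) = 3/r,
# so wall effects become relatively weaker as the vessel grows.
for r_cm in (0.1, 1.0, 10.0, 100.0):
    volume = (4/3) * math.pi * r_cm**3
    surface = 4 * math.pi * r_cm**2
    print(f"r = {r_cm:6.1f} cm  ->  S/V = {surface/volume:8.3f} 1/cm")
```

A millimetre-scale droplet thus has a surface-to-volume ratio a thousand times larger than a metre-scale reactor, which is why wall-mediated effects seen in small batches need not carry over to industrial vessels.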
In the presented work, an efficient coupling of acoustically levitated droplets to an ion mobility (IM) spectrometer operating at ambient conditions was designed for real-time monitoring of chemical reactions. The design comprises noncontact sampling and ionization of the droplet, realised by laser desorption/ionization at 2.94 µm. The scope of the work includes fundamental studies of the laser irradiation of droplets confined in an acoustic field. Understanding this phenomenon is crucial to comprehending the temporal and spatial behaviour of the generated ion plume, which influences the resolution of the system.
The set-up includes an acoustic trap, laser irradiation and electrostatic ion-manipulation lenses operating at high voltage at ambient pressure. The complexity of the design needs to be fully considered for effective ion transfer in the interface region between the levitated droplet and the IM spectrometer. For sampling and ionization, two distinct laser pulse lengths were evaluated, ns and µs. Irradiation with µs laser pulses provides several advantages: i) the droplet volume is not extensively impinged, as in the case of ns laser pulses, allowing only a small volume of the droplet to be sampled; ii) the lower fluence results in less pronounced oscillations of the droplet confined in the acoustic field, so the droplet is not dissipated out of the acoustic field, which would lead to loss of the sample; iii) the mild laser irradiation results in better spatial and temporal confinement of the ion plume, leading to better resolution of the detected ion packets. Finally, this knowledge allows the application of the ion optics necessary to induce ion flow between the droplet suspended in the acoustic field and the IM spectrometer. The ion optics, composed of two electrostatic lenses placed in the near vicinity of the droplet, allow effective focusing of the ion plume and its redirection directly to the IM spectrometer entrance. This novel coupling has proved successful for the detection of some simple molecules ionizable at the 2.94 µm wavelength. To further demonstrate the applicability of the system, a proof-of-principle reaction fulfilling the requirements of the system was selected and subjected to a comprehensive investigation of its performance. Herein, the reaction between N-Boc cysteine methyl ester and allyl alcohol was performed in a batch reactor and monitored on-line via 1H NMR to establish reaction propagation.
With this additional assessment, it was confirmed that the thiol-ene coupling proceeds within the first 20 minutes of irradiation with a reaction yield above 50%, proving that the reaction can serve as a study case to assess the possibilities of the developed system.
Science education researchers have developed a refined understanding of the structure of science teachers’ pedagogical content knowledge (PCK), but how to develop applicable and situation-adequate PCK remains largely unclear. A potential problem lies in the diverse conceptualisations of the PCK used in PCK research. This study sought to systematize existing science education research on PCK through the lens of the recently proposed refined consensus model (RCM) of PCK. In this review, the studies’ approaches to investigating PCK and selected findings were characterised and synthesised as an overview comparing research before and after the publication of the RCM. We found that the studies largely employed a qualitative case-study methodology that included specific PCK models and tools. However, in recent years, the studies focused increasingly on quantitative aspects. Furthermore, results of the reviewed studies can mostly be integrated into the RCM. We argue that the RCM can function as a meaningful theoretical lens for conceptualizing links between teaching practice and PCK development by proposing pedagogical reasoning as a mechanism and/or explanation for PCK development in the context of teaching practice.
In numerical processing, the functional role of Spatial-Numerical Associations (SNAs, such as the association of smaller numbers with left space and larger numbers with right space, the Mental Number Line hypothesis) is debated. Most studies demonstrate SNAs with lateralized responses, and there is little evidence that SNAs appear when no response is required. We recorded passive holding grip forces in no-go trials during number processing. In Experiment 1, participants performed a surface numerical decision task (“Is it a number or a letter?”). In Experiment 2, we used a deeper semantic task (“Is this number larger or smaller than five?”). Despite instructions to keep their grip force constant, participants' spontaneous grip force changed in both experiments: Smaller numbers led to a larger force increase in the left than in the right hand in the numerical decision task (500–700 ms after stimulus onset). In the semantic task, smaller numbers again led to a larger force increase in the left hand, and larger numbers increased the right-hand holding force. This effect appeared earlier (180 ms) and lasted longer (until 580 ms after stimulus onset). This is the first demonstration of SNAs with passive holding force. Our results suggest that (1) an explicit motor response is not a prerequisite for SNAs to appear, and (2) the timing and strength of SNAs are task-dependent.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
Parafoveal Load of Word N+1 Modulates Preprocessing Effectiveness of Word N+2 in Chinese Reading
(2010)
Preview benefits (PBs) from two words to the right of the fixated one (i.e., word N+2) and associated parafoveal-on-foveal effects are critical for proposals of distributed lexical processing during reading. This experiment examined parafoveal processing during reading of Chinese sentences, using a boundary manipulation of N+2-word preview with low- and high-frequency words N+1. The main findings were (a) an identity PB for word N+2 that was (b) primarily observed when word N+1 was of high frequency (i.e., an interaction between frequency of word N+1 and PB for word N+2), and (c) a parafoveal-on-foveal frequency effect of word N+1 for fixation durations on word N. We discuss implications for theories of serial attention shifts and parallel distributed processing of words during reading.
The end of culture?
(2000)
Our dynamic Sun manifests its activity through different phenomena: from the 11-year cyclic sunspot pattern to the unpredictable and violent explosions of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated to high energies within the short time scales observed in the radio emission. Because the acceleration site is also extremely small in spatial extent (compared to the solar radius), electron acceleration is regarded as a local process. The aim of this dissertation is the search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process. Two models of electron acceleration in the solar corona are proposed: I. Electron acceleration due to the interaction of solar jets with the background coronal plasma (the jet–plasma interaction). A jet is formed when newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to explain these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with electrostatic fluctuations growing in time for a certain range of initial jet velocities.
During this process, any test electron that happens to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than the range required by the instability, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron–whistler–shock interaction). Coronal shocks are also able to accelerate electrons, as observed via the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferentially at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations, and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented, in which the electrons are accelerated by their interaction with such whistlers. The protons inflowing toward the shock are reflected there, nearly conserving their magnetic moment, so that they gain substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., when the angle between the shock normal and the upstream magnetic field is in the range 50–80 degrees. The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are able to interact resonantly with them.
However, only a part of these electrons fulfills the electron–whistler wave resonance condition. Due to this resonant interaction, the electrons are accelerated in the electric and magnetic wave field within just several whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them is reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases the kinetic energy of protons is transferred to electrons by the action of localized wave structures, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona.
In the old days (pre ∼1990) hot stellar winds were assumed to be smooth, which made life fairly easy and bothered no one. Then, after suspicious behaviour had been revealed, e.g. stochastic temporal variability in broadband polarimetry of single hot stars, it took the emerging CCD technology developed in the preceding decades (∼1970-80’s) to reveal that these winds were far from smooth. It was mainly high-S/N, time-dependent spectroscopy of strong optical recombination emission lines in WR stars, and also a few OB and other stars with strong hot winds, that indicated that all hot stellar winds are likely pervaded by thousands of multiscale (compressible supersonic turbulent?) structures, whose driver is probably some kind of radiative instability. Quantitative estimates of clumping-independent mass-loss rates came from various fronts, mainly diagnostics depending directly on density (e.g. electron-scattering wings of emission lines, UV spectroscopy of weak resonance lines, and binary-star properties including orbital-period changes, electron-scattering, and X-ray fluxes from colliding winds) rather than the more common, easier-to-obtain but clumping-dependent density-squared diagnostics (e.g. free-free emission in the IR/radio and recombination lines, of which the favourite has always been Hα). Many big questions still remain, such as: What do the clumps really look like? Do clumping properties change as one recedes from the mother star? Is clumping universal? Does the relative clumping correction depend on $\dot{M}$ itself?
General Discussion
(2007)
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. 
This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. 
A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea with a total study area of 5,800 km² found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a region.
Hα observations of Rigel obtained on 184 nights during the past ten years with the 1-m telescope and échelle spectrograph of Ritter Observatory are surveyed. The line profiles were classified in terms of morphology. About 1/4 of them are of P Cygni type, about 15% inverse P Cygni, about 25% double-peaked, about 1/3 pure absorption, and a few are single emission lines. Transformation of the profile from one type to another typically takes a few days. Although the line stays in absorption for extended intervals, only one high-velocity absorption event of the intensity reported by Kaufer et al. (1996a) was observed, in late 2006. Late in this event, Hα absorption occurred farther to the red than the red wing of a plausible photospheric absorption component, an indication of infalling material. In general, as the absorption events come to an end, the emission typically returns with an inverse P Cygni profile. The Hα profile class shows no obvious correlation with the radial velocity of C II λ6578, a photospheric absorption line.
The study examined the potential future changes of drought characteristics in the Greater Lake Malawi Basin in Southeast Africa. This region strongly depends on water resources to generate electricity and food. Future projections (considering both moderate and high emission scenarios) of temperature and precipitation from an ensemble of 16 bias-corrected climate model combinations were blended with a scenario-neutral response surface approach to analyze changes in: (i) the meteorological conditions, (ii) the meteorological water balance, and (iii) selected drought characteristics such as drought intensity, drought months, and drought events, which were derived from the Standardized Precipitation and Evapotranspiration Index. Changes were analyzed for a near-term (2021–2050) and far-term period (2071–2100) with reference to 1976–2005. The effect of bias-correction (i.e., empirical quantile mapping) on the ability of the climate model ensemble to reproduce observed drought characteristics as compared to raw climate projections was also investigated. Results suggest that the bias-correction improves the climate models in terms of reproducing temperature and precipitation statistics but not drought characteristics. Still, despite the differences in the internal structures and uncertainties that exist among the climate models, they all agree on an increase of meteorological droughts in the future in terms of higher drought intensity and longer events. Drought intensity is projected to increase between +25 and +50% during 2021–2050 and between +131 and +388% during 2071–2100. This translates into +3 to +5, and +7 to +8 more drought months per year during both periods, respectively. With longer lasting drought events, the number of drought events decreases. Projected droughts based on the high emission scenario are 1.7 times more severe than droughts based on the moderate scenario.
This means that droughts in this region will likely become more severe in the coming decades. Despite the inherent high uncertainties of climate projections, the results provide a basis for planning and water-management activities for climate change adaptation in Malawi. This is of particular relevance for water management issues relating to hydropower generation and food production, both for rain-fed and irrigated agriculture.
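The drought metrics above are derived from the Standardized Precipitation and Evapotranspiration Index (SPEI). As a rough illustration of the idea only (operational SPEI fits a log-logistic distribution to the water balance; here empirical plotting positions are substituted, and all parameter choices are illustrative), a minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def spei_like(precip, pet, scale=3):
    """Simplified SPEI-like index (illustrative sketch, not the paper's code).

    precip, pet : monthly precipitation and potential evapotranspiration.
    scale       : aggregation window in months.
    """
    d = np.asarray(precip, float) - np.asarray(pet, float)   # climatic water balance
    dk = np.convolve(d, np.ones(scale), mode="valid")        # k-month sums
    ranks = dk.argsort().argsort() + 1                       # ranks 1..n (no ties)
    p = (ranks - 0.44) / (len(dk) + 0.12)                    # Gringorten plotting positions
    return norm.ppf(p)                                       # map to standard normal

# drought characteristics then follow from threshold crossings,
# e.g. "drought months" as months with an index below -1
```

Drought events and their intensity would then be counted as runs of consecutive index values below such a threshold.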
The highly conserved protein complex containing the Target of Rapamycin (TOR) kinase is known to integrate intra- and extracellular stimuli controlling nutrient allocation and cellular growth. This thesis describes three studies aimed at understanding how the TOR signaling pathway influences carbon and nitrogen metabolism in Chlamydomonas reinhardtii. The first study presents a time-resolved analysis of the molecular and physiological features across the diurnal cycle. The inhibition of TOR leads to a 50% reduction in growth followed by nonlinear delays in cell cycle progression. The metabolomics analysis showed that the growth repression is mainly driven by differential carbon partitioning between anabolic and catabolic processes. Furthermore, the high accumulation of nitrogen-containing compounds indicated that the TOR kinase controls the carbon-to-nitrogen balance of the cell, which is responsible for biomass accumulation, growth and cell cycle progression. The second study explains the cause of the high accumulation of amino acids. For this purpose, the effect of TOR inhibition on Chlamydomonas was examined under different growth regimes using stable 13C- and 15N-isotope labeling. The data clearly showed that increased nitrogen uptake is induced within minutes after the inhibition of TOR. Interestingly, this increased N-influx is accompanied by increased activities of nitrogen-assimilating enzymes. Accordingly, it was concluded that TOR inhibition induces de novo amino acid synthesis in Chlamydomonas. The recognition of this novel process opened an array of questions regarding potential links between central metabolism and TOR signaling. Therefore, a detailed phosphoproteomics study was conducted to identify the potential substrates of the TOR pathway regulating central metabolism.
Interestingly, some of the key enzymes involved in carbon metabolism as well as amino acid synthesis exhibited significant changes in phosphosite intensities immediately after TOR inhibition. Altogether, these studies (a) provide detailed insights into the metabolic response of Chlamydomonas to TOR inhibition, (b) identify a novel process causing rapid upshifts in amino acid levels upon TOR inhibition and (c) highlight potential targets of TOR signaling regulating changes in central metabolism. Further biochemical and molecular investigations could confirm these observations and advance the understanding of growth signaling in microalgae.
Background: Clinicians often refer anthropometric measures of a child to so-called “growth standards” and “growth references”. More than 140 countries have since adopted the WHO growth standards.
Objectives: The present study was conducted to thoroughly examine the idea of growth standards as a common yardstick for all populations. Weight depends on height. We became interested in whether weight-for-height also depends on height. First, we studied the age-group effect on weight-for-height. Thereafter, we tested the applicability of weight-for-height references in short and in historic populations.
Sample and Methods: We analyzed body height and body weight and weight-for-height of 3795 healthy boys and 3726 healthy girls aged 2 to 5 years measured in East-Germany between 1986 and 1990.
We chose contemporary height and weight charts from Germany, the UK, and the WHO growth chart and compared these with three geographically commensurable growth charts from the end of the 19th century.
Conclusion: Weight-for-height depends on age and sex and, apart from the nutritional state, reflects body proportion and body build, particularly during infancy and early childhood. Populations with a relatively short average height are prone to high values of weight-for-height for arithmetic reasons, independent of the nutritional state.
Background
The association between bivariate variables may not necessarily be homogeneous throughout the whole range of the variables. We present a new technique to describe inhomogeneity in the association of bivariate variables.
Methods
We consider the correlation of two normally distributed random variables. The 45° diagonal through the origin of coordinates represents the line on which all points would lie if the two variables completely agreed. If the two variables do not completely agree, the points will scatter on both sides of the diagonal and form a cloud. In case of a high association between the variables, the band width of this cloud will be narrow; in case of a low association, the band width will be wide. The band width directly relates to the magnitude of the correlation coefficient. We then determine the Euclidean distances between the diagonal and each point of the bivariate correlation, and rotate the coordinate system clockwise by 45°. The standard deviation of all Euclidean distances, named the “global standard deviation”, reflects the band width of all points along the former diagonal. Calculating moving averages of the standard deviation along the former diagonal results in “locally structured standard deviations”, which reflect patterns of “locally structured correlations” (LSC). LSC highlight inhomogeneity of bivariate correlations. We exemplify this technique by analyzing the association between body mass index (BMI) and hip circumference (HC) in 6313 healthy East German adults aged 18 to 70 years.
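The rotate-and-slide procedure described above can be sketched as follows; standardizing both variables first and the window length are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lsc(x, y, window=201):
    """Sketch of locally structured correlations (LSC).

    Rotating the coordinate system by 45 degrees is equivalent to working
    with s (position along the former diagonal) and d (signed Euclidean
    distance to it).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    zx = (x - x.mean()) / x.std()          # put both variables on the same scale
    zy = (y - y.mean()) / y.std()          # (our assumption)
    s = (zx + zy) / np.sqrt(2)             # coordinate along the former diagonal
    d = (zy - zx) / np.sqrt(2)             # signed distance to the diagonal
    global_sd = d.std()                    # band width of the whole cloud
    order = np.argsort(s)
    d_sorted = d[order]
    half = window // 2
    # moving standard deviation along the former diagonal
    local_sd = np.array([d_sorted[max(0, i - half):i + half + 1].std()
                         for i in range(len(d_sorted))])
    return s[order], local_sd, global_sd
```

For standardized variables with sample correlation r, the global standard deviation equals sqrt(1 - r), which is the stated link between band width and correlation coefficient; regions where local_sd falls below or rises above global_sd mark locally higher or lower association.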
Results
The correlation between BMI and HC in healthy adults is not homogeneous. LSC is able to identify regions where the predictive power of the bivariate correlation between BMI and HC increases or decreases, and highlights in our example that slim people have a higher association between BMI and HC than obese people.
Conclusion
Locally structured correlations (LSC) identify regions of higher or lower than average correlation between two normally distributed variables.
Whilst providing a framework for learning and scientific emancipation, a proposal-writing training is confronted with various organisational and didactic challenges, which influence the achievement of the set training objectives. Based on observations made during the workshops for proposal writing organised in Kinshasa, Democratic Republic of Congo, as part of the NMT Programme, the article raises two main questions: (a) How could these challenges be overcome and successfully addressed in the training? (b) What is the level of learning outcomes of the participants at the end of the training? The article shows that the success of the training lies in the relevance of the employed training approaches. The use of a participatory approach encouraged constructive exchanges between participants, trainers, and experts, and enabled all participants to finalise coherent projects to apply for national and international funding.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9–10 February 2006
The widespread use of products containing volatile organic compounds (VOCs) has led to general human exposure to these chemicals in workplaces and homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism for VOC action on the cellular level and establish preventive strategies. For this study, the alveolar epithelial cell line A549 was used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and had been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line to this extent, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phase of the exposure model was analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system was successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, cell toxicity was assessed in order to ensure that most of the concentrations used in the following proteomic approach were not cytotoxic.
Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells were detected following styrene exposure. All proteins were identified using mass spectrometry and their main cellular functions assigned. Validation experiments on the protein and transcript level confirmed the results of the 2-DE experiments. From the results, two main cellular pathways were identified that were induced by styrene: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8-oxide (SO) were identified. In particular, the SO-adducts observed at both reactive centers of thioredoxin reductase 1, which is a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach was carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells were detected following exposure to subtoxic concentrations of CB and 1,2-DCB. All proteins were identified using mass spectrometry and their main cellular functions assigned. As in the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes: cell death signaling and oxidative stress response. The strong induction of pro-apoptotic signaling was confirmed for both treatments by detection of the cleavage of caspase 3.
Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs were investigated (Chapter 6). A similar proportion (4.6–6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Notably, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Expanding public or publicly subsidized childcare has been a top social policy priority in many industrialized countries. It is supposed to increase fertility, promote children’s development and enhance mothers’ labor market attachment. In this paper, we analyze the causal effect of one of the largest expansions of subsidized childcare for children up to three years among industrialized countries on the employment of mothers in Germany. Identification is based on spatial and temporal variation in the expansion of publicly subsidized childcare triggered by two comprehensive childcare policy reforms. The empirical analysis is based on the German Microcensus that is matched to county-level data on childcare availability. Based on our preferred specification, which includes time and county fixed effects, we find that an increase in childcare slots by one percentage point increases mothers’ labor market participation rate by 0.2 percentage points. The overall increase in employment is explained by the rise in part-time employment with relatively long hours (20-35 hours per week). We do not find a change in full-time employment or lower part-time employment that is causally related to the childcare expansion. The effect is almost entirely driven by mothers with medium-level qualifications. Mothers with low education levels do not profit from this reform, calling for a stronger policy focus on particularly disadvantaged groups in coming years.
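A two-way fixed-effects design of this kind can be illustrated with a minimal within estimator on synthetic data; all numbers and variable names below are made up to mirror the reported effect size, and this is not the authors' exact specification (which uses individual Microcensus records and further controls):

```python
import numpy as np

def twoway_fe_beta(y, x):
    """Two-way fixed-effects slope for a balanced county-by-year panel.

    y, x : arrays of shape (n_counties, n_years).
    Double demeaning absorbs additive county and year fixed effects;
    the slope is then plain OLS on the transformed data.
    """
    def demean(a):
        return (a - a.mean(axis=1, keepdims=True)
                  - a.mean(axis=0, keepdims=True) + a.mean())
    xt, yt = demean(x), demean(y)
    return (xt * yt).sum() / (xt ** 2).sum()

# synthetic panel: participation = 0.2 * slots + county FE + year FE + noise
rng = np.random.default_rng(7)
n_c, n_t = 400, 10
slots = rng.uniform(0, 40, size=(n_c, n_t))       # childcare slots (illustrative units)
county_fe = rng.normal(0, 5, size=(n_c, 1))
year_fe = rng.normal(0, 2, size=(1, n_t))
partic = 0.2 * slots + county_fe + year_fe + rng.normal(0, 1, size=(n_c, n_t))
beta = twoway_fe_beta(partic, slots)
```

The estimator recovers the built-in slope of 0.2 because, for a balanced panel, subtracting county means, year means, and adding back the grand mean eliminates both sets of fixed effects exactly.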
In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
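The unidirectional 'traffic jam' can be caricatured with a one-dimensional driven lattice gas (TASEP) in the spirit of the Monte Carlo simulations mentioned above; this is a toy sketch, not the thesis's model (which includes motor binding/unbinding and the tube geometry), and all rates are illustrative:

```python
import numpy as np

def tasep_density(L=80, alpha=0.15, beta=0.1, sweeps=12000, seed=1):
    """Monte Carlo (random sequential update) for an open TASEP.

    Motors enter at the left (cell body) with rate alpha, hop right with
    rate 1, and leave at the right end (tip) with the small rate beta,
    mimicking slow cargo removal at the tip. Returns the time-averaged
    occupation profile over the second half of the run.
    """
    rng = np.random.default_rng(seed)
    lat = np.zeros(L, dtype=int)
    occ = np.zeros(L)
    samples = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = rng.integers(-1, L)
            if i == -1:                              # entry at the left end
                if lat[0] == 0 and rng.random() < alpha:
                    lat[0] = 1
            elif i == L - 1:                         # exit at the tip
                if lat[-1] == 1 and rng.random() < beta:
                    lat[-1] = 0
            elif lat[i] == 1 and lat[i + 1] == 0:    # hop to the right
                lat[i], lat[i + 1] = 0, 1
        if sweep >= sweeps // 2:
            occ += lat
            samples += 1
    return occ / samples

profile = tasep_density()
# with a slow tip exit (beta < alpha, beta < 1/2) the system sits in the
# high-density phase: cargo piles up toward the tip, as described above
```

Varying alpha and beta moves the system between low-density, high-density, and maximal-current phases, which is why the relevant length scales in the tube geometry matter for where the jam forms.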
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can foster the development of resistances in the stables. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrement, combined with a binding material, enables a further potential path of spread into the environment if used as organic manure in agricultural landscapes. As most airborne bacteria are attached to particulate matter, the focus of this work is the atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading from the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35–1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions but accounted for less than 1% of total PM10 emissions due to the dilution of poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although PM10 emissions from incorporation were larger than those from application for the non-dried manure variants.
Wind erosion preferentially detaches manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were observed just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure were applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion because of their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, they could not be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which record information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, as any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. 
The presented results lead to a quantitative explanation of the local-scale (1–500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.
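The effect of diffusional smoothing on temporally uncorrelated noise, identified above as a main source of the temporal variability, can be illustrated with a minimal numerical sketch (all parameter values here are illustrative, not taken from the dissertation): convolving white noise with a Gaussian kernel, a standard idealization of isotope diffusion in firn, damps the variance and introduces serial correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)  # temporally uncorrelated "isotope" noise

# Gaussian smoothing kernel with an illustrative diffusion length of 5 samples
sigma = 5.0
x = np.arange(-30, 31)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

smoothed = np.convolve(noise, kernel, mode="same")

# Diffusion damps the variance and correlates neighbouring values
var_ratio = smoothed.var() / noise.var()
lag1 = np.corrcoef(smoothed[:-1], smoothed[1:])[0, 1]
print(var_ratio, lag1)
```

The smoothed series retains only a small fraction of the original variance while showing strong lag-1 correlation, which is why apparent cyclicalities can emerge from pure noise.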
Japan launched the new Course of Study in April 2012, which has been implemented in elementary and junior high schools; it will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors discuss the role that experts in informatics and information studies education should play in a general education centered on information studies, one meant to help citizens lead active, capable and flexible lives.
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from footwall to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. As rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight on two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system’s erosional efficiency, which represents many factors like lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
Background
Wearables, small portable computer systems worn on the body, can track users' fitness and health data, which can be used to adjust health insurance contributions individually. In particular, insured individuals with a healthy lifestyle can receive a reduction in the contributions they pay. In practice, however, this potential is hardly used.
Objective
This study aims to identify the barriers that impede the use of wearables for assessing individual risk scores for health insurance, despite its technological feasibility, and to rank these barriers by relevance.
Methods
To reach these goals, we conducted a ranking-type Delphi study in three stages. First, we collected possible barrier factors from a panel of 16 experts and consolidated them into a list of 11 barrier categories. Second, the panel was asked to rank the categories by relevance. Third, to enhance panel consensus, the ranking was revealed to the experts, who were then asked to re-rank the barriers.
Results
The results suggest that regulation is the most important barrier. Other relevant barriers are false or inaccurate measurements and application errors caused by the users. Additionally, insurers could lack the required technological competence to use the wearable data appropriately.
Conclusion
A wider use of wearables and health apps could be achieved through regulatory modifications, especially regarding privacy issues. Even after assuring stricter regulations, users’ privacy concerns could partly remain, if the data exchange between wearables manufacturers, health app providers, and health insurers does not become more transparent.
Improvement of a fluorescence immunoassay with a compact diode-pumped solid state laser at 315 nm
(2006)
We demonstrate the improvement of fluorescence immunoassay (FIA) diagnostics by deploying a newly developed compact diode-pumped solid state (DPSS) laser emitting at 315 nm. The laser is based on the quasi-three-level transition in Nd:YAG at 946 nm. Pulsed operation is realized either by an active Q-switch using an electro-optical device or by introducing a Cr4+:YAG saturable absorber as a passive Q-switch element. By extra-cavity second harmonic generation in different nonlinear crystal media we obtained blue light at 473 nm. Subsequent mixing of the fundamental and the second harmonic in a β-barium-borate crystal provided pulsed emission at 315 nm with up to 20 μJ pulse energy and 17 ns pulse duration. Substituting the DPSS laser for the nitrogen laser in an FIA diagnostics system yielded a considerable improvement of the detection limit. Despite significantly lower pulse energies (7 μJ for the DPSS laser versus 150 μJ for the nitrogen laser), preliminary investigations showed that the limit of detection was reduced by a factor of three for a typical FIA.
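The wavelength chain reported above is internally consistent: in sum-frequency mixing, the optical frequencies (proportional to 1/λ) add, so combining the 946 nm fundamental with its 473 nm second harmonic gives 946/3 ≈ 315.3 nm. A quick arithmetic check:

```python
fundamental = 946.0  # nm, quasi-three-level Nd:YAG transition
second_harmonic = fundamental / 2  # frequency doubling -> 473 nm blue light

# Sum-frequency mixing: 1/lambda_sum = 1/lambda_1 + 1/lambda_2
sum_frequency = 1 / (1 / fundamental + 1 / second_harmonic)
print(second_harmonic, round(sum_frequency, 1))  # 473.0 315.3
```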
The optical spectrum of Eta Carinae (η Car) is prominent in H I, He I and Fe II wind lines, all of which vary in both absorption and emission with phase. The phase dependence is a consequence of the interaction between the two objects in the η Car binary (η Car A & B). The binary system is enshrouded by ejecta from previous mass ejection events, and consequently η Car B is not directly observable. We have traced the He I lines over η Car’s spectroscopic period, using HST/STIS data obtained with medium spectral but high angular resolving power, and created a radial velocity curve for the system. The He I lines are formed in the core of the system and appear to be a composite of multiple features formed in spatially separated regions. The sources of their irregular line profiles are still not fully understood, but can be attributed to emission/absorption near the wind-wind interface and/or a direct consequence of η Car A’s massive, clumpy wind. This paper discusses the spectral variability, the narrow emission structure of the He I lines, and how clumpiness of the winds may impede the construction of a reliable radial velocity curve, necessary for the characterization of η Car B in particular.
Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and their hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock at a depth of 410 m at the Äspö Hard Rock Laboratory (Sweden).
I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1).
An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded at the horizontal components of two broad-band seismometers a few tens of meters apart from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding.
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the Äspö mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as depicted by smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).
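The b-values referred to above are the slope of the Gutenberg-Richter frequency-magnitude distribution; a larger b-value means relatively fewer large events. A common estimator, sketched here on synthetic magnitudes (not data from these experiments, and not necessarily the estimator used in the publications), is Aki's maximum-likelihood formula b = log10(e) / (mean(M) − Mc), with Mc the magnitude of completeness:

```python
import math
import random

def b_value(magnitudes, mc):
    """Aki (1965) maximum-likelihood b-value for events with M >= mc."""
    events = [m for m in magnitudes if m >= mc]
    mean_m = sum(events) / len(events)
    return math.log10(math.e) / (mean_m - mc)

# Synthetic catalog drawn from a Gutenberg-Richter law with true b = 1.5;
# AE magnitudes at this scale are strongly negative, hence the low Mc.
random.seed(42)
true_b, mc = 1.5, -4.0
beta = true_b * math.log(10)
catalog = [mc + random.expovariate(beta) for _ in range(5000)]

print(round(b_value(catalog, mc), 2))  # close to the true value 1.5
```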
Injuries in professional soccer are a significant concern for teams and are caused, among other factors, by high training load. This cohort study describes the relationship between workload parameters and the occurrence of non-contact injuries during weeks of high and low workload in professional soccer players throughout the season. Twenty-one professional soccer players aged 28.3 ± 3.9 years who competed in the Iranian Persian Gulf Pro League participated in this 48-week study. The external load was monitored using a global positioning system (GPS, GPSPORTS Systems Pty Ltd) and the type of injury was documented daily by the team's medical staff. Odds ratios (OR) and relative risks (RR) for non-contact injuries were calculated for high- and low-load weeks according to acute workload (AW), chronic workload (CW), acute-to-chronic workload ratio (ACWR), and AW variation (Δ-AW) values. Using a Poisson distribution, the interval between previous and new injuries was estimated. Overall, 12 non-contact injuries occurred during high-load and 9 during low-load weeks. Based on the variables ACWR and Δ-AW, there was a significantly increased risk of sustaining non-contact injuries (p < 0.05) during high-load weeks for ACWR (OR: 4.67) and Δ-AW (OR: 4.07). Finally, the expected time between injuries was significantly shorter in high-load weeks for ACWR (1.25 vs. 3.33, rate ratio time, RRT) and Δ-AW (1.33 vs. 3.45, RRT) compared with low-load weeks. The high OR in high-load weeks indicates a significant relationship between workload and the occurrence of non-contact injuries, and the predicted time to a new injury is shorter in high-load weeks; the frequency of injuries is therefore higher during high-load weeks for ACWR and Δ-AW.
ACWR and Δ-AW appear to be good indicators for estimating injury risk and the time interval between injuries.
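As a rough illustration of the workload metrics discussed above (the windows and names below follow common conventions in the load-monitoring literature and are not necessarily the study's exact definitions): the acute workload is typically the load of the most recent week, the chronic workload a mean over the preceding weeks, ACWR their ratio, and the odds ratio compares injury odds between high- and low-load weeks via a 2x2 table.

```python
def acwr(weekly_loads):
    """Acute:chronic workload ratio for the latest week.

    Acute = most recent week; chronic = mean of the four weeks before it
    (a common convention; the study's exact windows may differ).
    """
    acute = weekly_loads[-1]
    chronic = sum(weekly_loads[-5:-1]) / 4
    return acute / chronic

def odds_ratio(inj_high, noinj_high, inj_low, noinj_low):
    """Odds ratio of injury in high- vs. low-load weeks (2x2 table)."""
    return (inj_high / noinj_high) / (inj_low / noinj_low)

# Example: GPS-derived weekly loads in arbitrary units
loads = [22, 24, 23, 25, 33]
print(round(acwr(loads), 2))  # 33 / 23.5 ≈ 1.4, a spike above chronic load
```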
Background: Network models are useful tools for researchers to simplify and understand the systems they investigate. Yet the assessment of methods for network construction is often uncertain. Random resampling simulations can aid in assessing these methods, provided synthetic data exist for reliable network construction.
Objectives: We implemented a new Monte Carlo algorithm to create simulated data for network reconstruction, tested the influence of adjusted parameters, and used the simulations to select a method for network model estimation based on real-world data. We hypothesized that reconstructions based on Monte Carlo data would be scored at least as well as a benchmark.
Methods: Simulated data were generated in R using the Monte Carlo algorithm of the mcgraph package; benchmark data were created with the huge package. Networks were reconstructed using six estimator functions and scored by four classification metrics. Welch’s t-test was used to compare mean score differences. Network model estimation based on the real-world data was done by stepwise selection.
Samples: Simulated data was generated based on 640 input graphs of various types and sizes. The real-world dataset consisted of 67 medieval skeletons of females and males from the region of Refshale (Lolland) and Nordby (Jutland) in Denmark.
Results: t-tests and confidence intervals (CI95%) show that evaluation scores for network reconstructions based on the mcgraph package were at least as good as those for the benchmark huge; on average, the scores for the mcgraph package were even slightly better.
Conclusion: The results confirmed our hypothesis and suggest that Monte Carlo data can keep up with the benchmark in the applied test framework. The algorithm supports (weighted) undirected and directed graphs and might be useful for assessing methods for network construction.
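The scoring step described above, comparing a reconstructed network against the ground-truth graph the data were simulated from, amounts to edge-wise binary classification. The study used four metrics via R packages; this Python sketch shows one standard example (F1 of edge recovery), with hypothetical function names:

```python
def edge_set(adjacency):
    """Undirected edges (i < j) present in a 0/1 adjacency matrix."""
    n = len(adjacency)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if adjacency[i][j]}

def f1_score(true_adj, est_adj):
    """F1 of edge recovery: harmonic mean of precision and recall."""
    true_e, est_e = edge_set(true_adj), edge_set(est_adj)
    tp = len(true_e & est_e)  # correctly recovered edges
    if tp == 0:
        return 0.0
    precision = tp / len(est_e)
    recall = tp / len(true_e)
    return 2 * precision * recall / (precision + recall)

# Toy 3-node example: one of two true edges recovered, one spurious edge added
true_g = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
est_g  = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(f1_score(true_g, est_g))  # precision = recall = 0.5 -> F1 = 0.5
```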
We present the tool Kato which is, to the best of our knowledge, the first tool for plagiarism detection that is directly tailored for answer-set programming (ASP). Kato aims at finding similarities between (segments of) logic programs to help detecting cases of plagiarism. Currently, the tool is realised for DLV programs but it is designed to handle various logic-programming syntax versions. We review basic features and the underlying methodology of the tool.
Chloroplasts as bioreactors : high-yield production of active bacteriolytic protein antibiotics
(2008)
Plants, or more precisely their chloroplasts with their bacterial-like expression machinery inherited from cyanobacterial ancestors, can potentially offer a cheap expression system for proteinaceous pharmaceuticals. This system is easily scalable and provides appropriate biosafety because chloroplasts are maternally inherited. In this work, it was shown that three phage lytic enzymes (Pal, Cpl-1 and PlyGBS) could be expressed at very high levels and with high stability in tobacco chloroplasts. PlyGBS expression reached a level of foreign protein accumulation (> 70% of total soluble protein, TSP) never obtained before. Although the high expression level of PlyGBS caused a pale green phenotype with retarded growth, presumably due to exhaustion of the plastid protein synthesis capacity, development and seed production were not impaired under greenhouse conditions. Since Pal and Cpl-1 showed toxic effects when expressed in E. coli, a special plastid transformation vector (pTox) was constructed to allow DNA amplification in bacteria. The pTox vector enables recombinase-mediated deletion of an E. coli transcription block in the chloroplast, which increased foreign protein accumulation to up to 40% of TSP for Pal and 20% of TSP for Cpl-1. High dose-dependent bactericidal efficiency was shown for all three plant-derived lytic enzymes against their pathogenic target bacteria S. pyogenes and S. pneumoniae. Specificity was confirmed for the endotoxic proteins Pal and Cpl-1 by application to E. coli cultures. These results establish tobacco chloroplasts as a new cost-efficient and convenient production platform for phage lytic enzymes and address the greatest obstacle to their clinical application. The present study is the first report of lysin production in a non-bacterial system.
The properties of chloroplast-produced lysins described in this work, their stability, high accumulation rate and biological activity make them highly attractive candidates for future antibiotics.
Sulfur is an important element that is incorporated into many biomolecules in humans. Its incorporation and transfer into biomolecules are facilitated by a series of sulfurtransferases, among them the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been implicated in a wide range of physiological processes in the cell, including, but not limited to, molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation, and the generation of H2S as a signaling molecule in both mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we investigate the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified human embryonic kidney cells. By spectrophotometric measurement of sulfite oxidase activity and liquid chromatography quantification of sulfur-modified tRNA, we show that TUM1 is involved in sulfur transfer for molybdenum cofactor synthesis and tRNA thiomodification. Furthermore, we show that TUM1 plays a role in hydrogen sulfide production and cellular bioenergetics.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung (Interdisciplinary Center for Pattern Dynamics and Applied Remote Sensing): workshop of 9–10 February 2006
The development of phonetic codes in memory of 141 pairs of normal and disabled readers from 7.8 to 16.8 years of age was tested with a task adapted from L. S. Mark, D. Shankweiler, I. Y. Liberman, and C. A. Fowler (Memory & Cognition, 1977, 5, 623–629) that measured false-positive errors in recognition memory for foil words which rhymed with words in the memory list versus foil words that did not rhyme. Our younger subjects replicated Mark et al., showing a larger difference between rhyming and nonrhyming false-positive errors for the normal readers. The older disabled readers' phonetic effect was comparable to that of the younger normal readers, suggesting a developmental lag in their use of phonetic coding in memory. Surprisingly, the normal readers' phonetic effect declined with age in the recognition task, but they maintained a significant advantage across age in the auditory WISC-R digit span recall test, and a test of phonological nonword decoding. The normals' decline with age in rhyming confusion may be due to an increase in the precision of their phonetic codes.
Acquiring Syntactic Variability: The Production of Wh-Questions in Children and Adults Speaking Akan
(2020)
This paper investigates the predictions of the Derivational Complexity Hypothesis by studying the acquisition of wh-questions in 4- and 5-year-old Akan-speaking children in an experimental approach using an elicited production and an elicited imitation task. Akan has two types of wh-question structures (wh-in-situ and wh-ex-situ questions), which allows an investigation of children’s acquisition of these two question structures and their preferences for one or the other. Our results show that adults prefer to use wh-ex-situ questions over wh-in-situ questions. The results from the children show that both age groups have the two question structures in their linguistic repertoire. However, they differ in their preferences in usage in the elicited production task: while the 5-year-olds preferred the wh-in-situ structure over the wh-ex-situ structure, the 4-year-olds showed a selective preference for the wh-in-situ structure in who-questions. These findings suggest a developmental change in wh-question preferences in Akan-learning children between 4 and 5 years of age, with a previously unobserved U-shaped developmental pattern. In the elicited imitation task, all groups showed a strong tendency to maintain the structure of in-situ and ex-situ questions when repeating grammatical questions. When repairing ungrammatical ex-situ questions, structural changes to grammatical in-situ questions were rarely observed; instead, the children inserted the missing morphemes while keeping the ex-situ structure. Together, our findings provide only partial support for the Derivational Complexity Hypothesis.
We collect a network dataset of tenured economics faculty in Austria, Germany and Switzerland. We rank the 100 institutions included with a minimum violation ranking. This ranking is positively and significantly correlated with the Times Higher Education ranking of economics institutions. According to the network ranking, individuals on average go down about 23 ranks from their doctoral institution to their employing institution. While the share of females in our dataset is only 15%, we do not observe a significant gender hiring gap (a difference in rank changes between male and female faculty). We conduct a robustness check with the Handelsblatt and the Times Higher Education ranking. According to these rankings, individuals on average go down only about two ranks. We do not observe a significant gender hiring gap using these two rankings (although the dataset underlying this analysis is small and these estimates are likely to be noisy). Finally, we discuss the limitations of the network ranking in our context.
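A minimum violation ranking orders institutions so that as few hires as possible run "up" the hierarchy, i.e. from a lower-ranked doctoral institution to a higher-ranked employer. The objective being minimized, and the rank-change statistic reported above, can be sketched with a toy example (not the paper's data or its exact algorithm):

```python
def violations(rank, placements):
    """Count hires that move 'up' the hierarchy.

    rank: dict mapping institution -> position (1 = best).
    placements: (phd_institution, employing_institution) pairs.
    A violation is a hire whose employer outranks the PhD institution;
    a minimum violation ranking chooses `rank` to minimize this count.
    """
    return sum(1 for phd, job in placements if rank[job] < rank[phd])

def mean_rank_change(rank, placements):
    """Average rank movement from doctoral to employing institution
    (positive = moving down the hierarchy)."""
    return sum(rank[job] - rank[phd] for phd, job in placements) / len(placements)

# Toy placement network of three institutions
placements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B")]
rank = {"A": 1, "B": 2, "C": 3}
print(violations(rank, placements))        # only C -> B goes up: 1
print(mean_rank_change(rank, placements))  # (1 + 2 + 1 - 1) / 4 = 0.75
```

In practice the ranking is found by searching over permutations (e.g. by simulated annealing) for the ordering with the fewest violations; the sketch above only evaluates a given ordering.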
I perform and analyse the first calculations of rotating stellar iron core collapse in 3+1 general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse, and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from the collapse, bounce and early postbounce phases of core collapse supernovae. I supplement my 3+1 GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
Higher education institutions in Guinea face many challenges, including reporting responsibilities, globalisation, and massification. Institutional evaluations of higher education and research institutions in 2013 failed to initiate change processes within the institutions. Recently, however, various initiatives have been started to change this situation, with the purpose of sensitising institutions and raising awareness of, and capabilities for, quality assurance structures in Guinean HEIs. So far, the emphasis has been on quality enhancement in higher education, especially teaching evaluation, curriculum development, and the establishment of quality assurance structures. This article gives an overview of the state of play, takes stock of the activities initiated to set up quality assurance mechanisms in higher education and research institutions, and presents perspectives for the further development of the quality approach in Guinea. The project ‘Quality Assurance Multiplication 2017-2018’ serves as an example to describe approaches and activities in setting up stable quality assurance structures and in strengthening and raising awareness for a ‘quality culture’.
Dynamical simulation of the “velocity-porosity” reduction in observed strength of stellar wind lines
(2007)
I use dynamical simulations of the line-driven instability to examine the potential role of the resulting flow structure in reducing the observed strength of wind absorption lines. Instead of the porosity-length formalism used to model effects on continuum absorption, I suggest that reductions in line strength can be better characterized in terms of a velocity clumping factor that is insensitive to spatial scales. Example dynamic spectra computed directly from instability simulations do exhibit a net reduction in absorption, but only at a modest 10-20% level, well short of the factor of ca. 10 required by recent analyses of PV lines.
In this work, an extension of the CSSR algorithm using Maximum Entropy Models is introduced. Preliminary experiments using this new system for Named Entity Recognition are presented.
In the work presented here we discuss a series of results that are all, in one way or another, connected to the phenomenon of trapping in black hole spacetimes.
First we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we go into a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most well-known fundamental properties of null geodesics can be represented in one plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space.
We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild.
We discuss how this is relevant to the black hole stability problem.
In a further development of these observations we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, their velocity relative to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case.
We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime.
In the last chapter we prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation to the case of real frequencies. The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity and purely ingoing at the horizon, must vanish. This has the consequence that, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation, and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.
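For context, the radial Teukolsky equation referred to above takes the following standard Boyer-Lindquist form (for spin weight s, frequency ω, azimuthal number m, and separation constant λ; this equation is standard background material, not part of the original abstract):

```latex
\Delta^{-s}\frac{d}{dr}\!\left(\Delta^{s+1}\frac{dR}{dr}\right)
  + \left(\frac{K^{2} - 2is(r-M)K}{\Delta} + 4is\omega r - \lambda\right) R = 0,
\qquad K = (r^{2}+a^{2})\omega - am,
\qquad \Delta = r^{2} - 2Mr + a^{2}.
```

Mode stability then asserts that no nontrivial solution of this equation can be simultaneously purely ingoing at the horizon and purely outgoing at infinity for real ω.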
Here, we demonstrate the utility of native membrane derived vesicles (nMVs) as tools for expeditious electrophysiological analysis of membrane proteins. We used a cell-free (CF) and a cell-based (CB) approach for preparing protein-enriched nMVs. We utilized the Chinese Hamster Ovary (CHO) lysate-based cell-free protein synthesis (CFPS) system to enrich ER-derived microsomes in the lysate with the primary human cardiac voltage-gated sodium channel 1.5 (hNaV1.5; SCN5A) in 3 h. Subsequently, CB-nMVs were isolated from fractions of nitrogen-cavitated CHO cells overexpressing the hNaV1.5. In an integrative approach, nMVs were micro-transplanted into Xenopus laevis oocytes. CB-nMVs expressed native lidocaine-sensitive hNaV1.5 currents within 24 h; CF-nMVs did not elicit any response. Both the CB- and CF-nMV preparations evoked single-channel activity on the planar lipid bilayer while retaining sensitivity to lidocaine application. Our findings suggest a high usability of the quick-synthesis CF-nMVs and maintenance-free CB-nMVs as ready-to-use tools for in-vitro analysis of electrogenic membrane proteins and large, voltage-gated ion channels.
There are two common approaches to implement a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages, but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem; it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing object models and execution models of dynamic object-oriented languages to the Maxine VM, and the application of this approach to Squeak/Smalltalk. The representation and execution of dynamic object-oriented languages pose challenges to the Maxine VM, which lacks the variation points necessary for an effortless and straightforward implementation of their execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to unveil such missing variation points.
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We will provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has left its infant state and its specialization is advancing, with a clear focus on patient usability.
Environmental pollution by microplastics has become a severe problem in terrestrial and aquatic ecosystems and, according to current projections, will further increase in the future. Assessing and quantifying the risk for the biota is therefore crucial. Standardized short-term toxicological procedures as well as methods quantifying potential toxic effects over the whole life span of an animal are required. We studied the effect of the microplastic polystyrene on the survival and reproduction of a common freshwater invertebrate, the rotifer Brachionus calyciflorus, at different timescales. We used pristine polystyrene spheres of 1, 3, and 6 µm diameter and fed them to the animals together with food algae in different ratios ranging from 0 to 50% nonfood particles. As a particle control, we used silica to distinguish between a pure particle effect and a plastic effect. After 24 h, no toxic effect was found with either polystyrene or silica. After 96 h, a toxic effect was detectable for both particle types. The size of the particles played a negligible role. Studying the long-term effect in life table experiments, we found reduced reproduction when the animals were fed 3 µm spheres together with similar-sized food algae. We conclude that the fitness reduction is mainly driven by the dilution of food by the nonfood particles rather than by a direct toxic effect.
This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. All pre-library-preparation quality controls for the miRNA samples were successful, and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that all reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal range and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. Four differentially expressed miRNAs passed filtering and were subjected to miRDB for target prediction. Three of these miRNAs were downregulated: hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136, while one was upregulated: hsa-miR-550a-3p. miRNA target prediction suggested that chronic pain in polyneuropathy might result from a combination of miRNA-mediated high blood flow/pressure and dysregulated neural activity. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 in PKA phosphorylation of ARMS, was characterized. To this end, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the interaction rate of TRPV1/ARMS. While phosphorylation of ARMS-T903 does not increase the interaction rate with TRPV1, ARMS-S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the interaction rate of TRPV1/ARMS, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical analgesic strategy to stop ARMS-mediated TRPV1 sensitization.
Cellulose and chitin are the most abundant polymeric organic carbon sources globally. Thus, microbes degrading these polymers significantly influence global carbon cycling and greenhouse gas production. Fungi are recognized as important for cellulose decomposition in terrestrial environments, but are far less studied in marine environments, where bacterial organic matter degradation pathways tend to receive more attention. In this study, we investigated the potential of fungi to degrade kelp detritus, which is a major source of cellulose in marine systems. Given that kelp detritus can be transported considerable distances in the marine environment, we were specifically interested in the capability of endophytic fungi, which are transported with detritus, to ultimately contribute to kelp detritus degradation. We isolated 10 species and two strains of endophytic fungi from the kelp Ecklonia radiata. We then used a dye decolorization assay to assess their ability to degrade organic polymers (lignin, cellulose, and hemicellulose) under both oxic and anoxic conditions and compared their degradation ability with common terrestrial fungi. Under oxic conditions, there was evidence that Ascomycota isolates produced cellulose-degrading extracellular enzymes (associated with manganese peroxidase and sulfur-containing lignin peroxidase), while Mucoromycota isolates appeared to produce both lignin and cellulose-degrading extracellular enzymes, and all Basidiomycota isolates produced lignin-degrading enzymes (associated with laccase and lignin peroxidase). Under anoxic conditions, only three kelp endophytes degraded cellulose. We concluded that kelp fungal endophytes can contribute to cellulose degradation in both oxic and anoxic environments. Thus, endophytic kelp fungi may play a significant role in marine carbon cycling via polymeric organic matter degradation.
Comprior
(2021)
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied on gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e. uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated amongst each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking, especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
4-Phenylphenoxazinones were isolated after biomimetic oxidation, using diphenoloxidases of insect cuticle or mushroom tyrosinase, or after autoxidation of N-acetyldopamine in the presence of β-alanine, β-alanine methyl ester or N-acetyl-L-lysine. They are presumably formed by addition of 2-aminoalkyl-5-alkylphenols to the o-quinone of the biphenyltetrol which, in turn, arises from oxidative coupling. The structures present the first examples of reasonably stable intermediates in the rather complex process of chemical modification of aliphatic amino acid residues by o-quinones.
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages – as far as the development from lexical to grammatical element is concerned – follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We discuss various instances of grammaticalized gestures and also briefly address the issue of the modality-specificity of this phenomenon.
Basic psychological needs theory postulates that a social environment that satisfies individuals' three basic psychological needs of autonomy, competence, and relatedness leads to optimal growth and well-being. The frustration of these needs, in contrast, is associated with ill-being and depressive symptoms, which have foremost been investigated in non-clinical samples; yet there is a paucity of research on need frustration in clinical samples. Survey data were compared between adult individuals with major depressive disorder (MDD; n = 115; 48.69% female; 38.46 years, SD = 10.46) and a non-depressed comparison sample (n = 201; 53.23% female; 30.16 years, SD = 12.81). Need profiles were examined with a linear mixed model (LMM). Individuals with depression reported higher levels of frustration and lower levels of satisfaction in relation to the three basic psychological needs when compared to non-depressed adults. The difference between depressed and non-depressed groups was significantly larger for frustration than for satisfaction regarding the needs for relatedness and competence. LMM correlation parameters confirmed the expected positive correlation between the three needs. This is the first study showing substantial differences in need-based experiences between depressed and non-depressed adults. The results confirm basic assumptions of self-determination theory and have preliminary implications for tailoring therapy for depression.
Deans at Institutions of Higher Education are seldom recipients of effective or specific professional management training, institutional mentorship, and coaching despite an increasing demand on them to play a more dynamic leadership role in the face of ever-changing local and global challenges. To address this deficiency, the inaugural Malaysian Chapter of the International Deans’ Course (MyIDC) was held in three parts over 2019 and 2020. In this paper, findings related to feedback on the programme are presented and discussed. Responses from the participants from two sets of surveys, and written feedback provided by two IDC international trainers involved in MyIDC were analysed. These reveal potential areas of improvement for the forthcoming MyIDC programme, such as in terms of planning and organisation, duration, content, and delivery. The article explores the lessons learnt from the MyIDC 2019/2020 training programme and discusses the improvements that can be made arising from the feedback received.
The emergence of information extraction (IE) oriented pattern engines has been observed during the last decade. Most of them rely heavily on finite-state devices. This paper introduces ExPRESS – a new extraction pattern engine whose rules are regular expressions over flat feature structures. The underlying pattern language is a blend of two previously introduced IE-oriented pattern formalisms, namely JAPE, used in the widely known GATE system, and the unification-based XTDL formalism used in SProUT. A brief technical overview of ExPRESS, its pattern language and the pool of its native linguistic components is given. Furthermore, the implementation of the grammar interpreter is addressed as well.
We present a concept for better integration of practical teaching in student teacher education in Computer Science. As an introduction to the workshop, different possible scenarios are discussed on the basis of examples. Afterwards, workshop participants will have the opportunity to discuss the application of these concepts in other settings.
Background:
Children’s spontaneous focusing on numerosity (SFON) is related to numerical skills. This study aimed to examine (1) the developmental trajectory of SFON and (2) the interrelations between SFON and early numerical skills at pre-school as well as their influence on arithmetical skills at school.
Method:
Overall, 1868 German pre-school children were repeatedly assessed until second grade. Nonverbal intelligence, visual attention, visuospatial working memory, SFON and numerical skills were assessed at age five (M = 63 months, Time 1) and age six (M = 72 months, Time 2), and arithmetic was assessed at second grade (M = 95 months, Time 3).
Results:
SFON increased significantly during pre-school. Path analyses revealed interrelations between SFON and several numerical skills, except number knowledge. Magnitude estimation and basic calculation skills (Time 1 and Time 2), and to a small degree number knowledge (Time 2), contributed directly to arithmetic in second grade. The connection between SFON and arithmetic was fully mediated by magnitude estimation and calculation skills at pre-school.
Conclusion:
Our results indicate that SFON first and foremost influences a deeper understanding of numerical concepts at pre-school and, in contrast to previous findings, affects children's arithmetical development at school only indirectly.
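The full mediation reported in the Results can be illustrated with a toy product-of-coefficients computation on synthetic data (a minimal sketch only; the variable names, sample size, and effect sizes below are hypothetical stand-ins, not the study's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
a, b = 0.5, 0.7  # hypothetical true paths: SFON -> mediator, mediator -> arithmetic

sfon = rng.normal(size=n)                             # predictor (e.g. SFON score)
mediator = a * sfon + 0.5 * rng.normal(size=n)        # e.g. magnitude estimation
arithmetic = b * mediator + 0.5 * rng.normal(size=n)  # no direct path: full mediation

# Path a: regress mediator on SFON (with intercept)
a_hat = np.linalg.lstsq(np.c_[sfon, np.ones(n)], mediator, rcond=None)[0][0]
# Paths b and c': regress outcome on mediator and SFON jointly
coef = np.linalg.lstsq(np.c_[mediator, sfon, np.ones(n)], arithmetic, rcond=None)[0]
b_hat, c_direct = coef[0], coef[1]

indirect = a_hat * b_hat  # indirect (mediated) effect, approx. a*b = 0.35
```

Under full mediation the direct effect c' is near zero and the total SFON effect is carried entirely by the product a*b, which is the pattern the path analysis above reports.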
The Andean Cordillera is a mountain range located at the western South American margin and is part of the Eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage, as well as their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class comprises a series of generic, E-W-oriented, high-resolution 2D subduction models that include flat subduction, in order to investigate the role of the subduction dynamics on the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class comprises a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contribution of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite-element geodynamic code ASPECT.
The first main finding of this work is to suggest that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate while it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in the pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere and thus weakens the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening of the thick sediments covering the shield margin and the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is to suggest that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; the flat-slab thus acts like an indenter that "bulldozes" the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat and the steeper slab segments in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults of past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the "flat-slab conveyor". Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and of the transition from thin- to thick-skinned deformation styles resulting from contraction of the crust in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
Conventional energy sources are non-renewable and diminishing; they take millions of years to form and cause environmental degradation. In the 21st century, we have to aim at achieving a sustainable, environmentally friendly and cheap energy supply by employing renewable energy technologies together with portable energy storage devices. Lithium-ion batteries can repeatedly deliver clean energy from stored materials and reversibly convert electrical into chemical energy. The performance of lithium-ion batteries depends intimately on the properties of their materials. Presently used battery electrodes are expensive to produce; they offer limited energy storage and are unsafe to use in larger dimensions, restraining the diversity of applications, especially in hybrid electric vehicles (HEVs) and electric vehicles (EVs). This thesis presents major progress in the development of LiFePO4 as a cathode material for lithium-ion batteries. Using a simple procedure, a completely novel morphology has been synthesized (mesocrystals of LiFePO4) and excellent electrochemical behavior was recorded (nanostructured LiFePO4). The newly developed reactions for the synthesis of LiFePO4 are single-step processes taking place in an autoclave at a significantly lower temperature (200 °C) compared to the conventional solid-state method (multi-step and up to 800 °C). The use of inexpensive, environmentally benign precursors offers a green manufacturing approach for large-scale production. These newly developed experimental procedures can also be extended to other phospho-olivine materials, such as LiCoPO4 and LiMnPO4. The material with the best electrochemical behavior (nanostructured LiFePO4 with carbon coating) was able to deliver a stable 94% of the theoretical capacity.
Decisions for the conservation of biodiversity and the sustainable management of natural resources are typically made at large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires including both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used and degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.
In this paper, we investigate the continuous version of modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time T is a solution of ‖F(x^δ(T)) − y^δ‖ = τδ₊ for some δ₊ > δ, and an appropriate source condition. We obtain the optimal rate of convergence.
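The discrepancy-principle stopping rule can be sketched in its simplest discrete, linear analogue: a Landweber iteration that stops as soon as the residual drops below τδ. This is an illustration of the stopping rule only, not of the paper's Runge–Kutta-type methods, and the operator, noise level, and parameters below are hypothetical:

```python
import numpy as np

def landweber_discrepancy(A, y_delta, delta, tau=1.5, max_iter=100000):
    """Landweber iteration x_{k+1} = x_k + w * A^T (y_delta - A x_k),
    stopped by the discrepancy principle ||A x_k - y_delta|| <= tau * delta."""
    w = 1.0 / np.linalg.norm(A, 2) ** 2  # step size guaranteeing convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = y_delta - A @ x
        if np.linalg.norm(r) <= tau * delta:
            return x, k                  # stop early to avoid fitting the noise
        x = x + w * (A.T @ r)
    return x, max_iter

# Toy problem with known solution and known noise level delta
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
delta = 0.05
noise = rng.normal(size=20)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

x_rec, stop = landweber_discrepancy(A, y_delta, delta)
```

Stopping at T with residual τδ rather than iterating to convergence is what regularizes the problem: iterating further would fit the noise component of y^δ.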
The mid- to late Holocene interval is characterised by a highly variable climate in response to a gradual change in orbital insolation. The seasonal impact of these changes on the Eifel Maar region is not yet well documented, largely due to uncertainties about the completeness of this archive ("missing varves" in the well-known Lake Holzmaar) and a limited understanding of the factors (e.g. temperature, precipitation) influencing the seasonality archived within the laminations/varves. In this study we approach these challenges from a different perspective. Using detailed microfacies investigations we: (1) demonstrate that the ambiguity about the "missing varves" is related to climate-induced complex biotic and abiotic laminations that led to mis-identification of varves; (2) use a combination of detailed microfacies investigations (varve structure, seasonality of biotic and abiotic signals), lamination quality, varve counts on multiple cores, and published and new radiocarbon dates to develop a continuous master chronology based on a Bayesian modelling approach. The dates of major climatic, volcanic, and archaeological events determined using our model are in good agreement with the independently determined ages of the same events from other archives, confirming the accuracy of our age model; (3) test the sensitivity of the seasonal proxies to the available data on mid-Holocene changes in temperature and precipitation; (4) demonstrate that the changes in lake eutrophicity correlate with temperature changes in NW Europe and were probably triggered by solar variability; and (5) show that the early Iron Age onset of eutrophication in Lake Holzmaar was climate induced and began several decades before the impact of anthropogenic activity was seen in the form of intensified detrital erosion in the catchment area. Our work has implications for understanding the impact of climate change and anthropogenic activities on limnological systems.
Large-scale databases that report the inhibitory capacities of many combinations of candidate drug compounds and cultivated cancer cell lines have driven the development of preclinical drug-sensitivity models based on machine learning. However, cultivated cell lines have diverged from human cancer cells over years or even decades under selective pressure in culture conditions. Moreover, models that have been trained on in vitro data cannot account for interactions with other types of cells. Drug-response data based on patient-derived cell cultures, xenografts, and organoids, on the other hand, are not available in the quantities needed to train high-capacity machine-learning models. We found that pre-training deep neural network models of drug sensitivity on in vitro drug-sensitivity databases before fine-tuning the model parameters on patient-derived data improves both the models' accuracy and the biological plausibility of the features, compared to training only on patient-derived data. From our experiments, we conclude that pre-trained models outperform models trained on the target domains in the vast majority of cases.
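The pre-train-then-fine-tune strategy can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the authors' architecture: a linear model on synthetic data replaces a deep network on drug-sensitivity databases. A model fitted on an abundant "source" dataset initializes training on a small, related "target" dataset and is compared with training from scratch.

```python
import numpy as np

rng = np.random.default_rng(42)

def gd(X, y, w0, lr=0.02, steps=15):
    """A few steps of full-batch gradient descent on squared error."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Large "in vitro"-like source dataset and a small, related "patient-derived" set
w_source = np.array([1.0, 2.0])
w_target = np.array([1.2, 2.2])          # related but shifted task
X_src = rng.normal(size=(1000, 2)); y_src = X_src @ w_source + 0.1 * rng.normal(size=1000)
X_tgt = rng.normal(size=(12, 2));   y_tgt = X_tgt @ w_target + 0.1 * rng.normal(size=12)

# Pre-train on the abundant source data (closed-form least squares)
w_pre = np.linalg.lstsq(X_src, y_src, rcond=None)[0]

# Fine-tune from the pre-trained weights vs. training from scratch
w_finetuned = gd(X_tgt, y_tgt, w_pre)
w_scratch   = gd(X_tgt, y_tgt, np.zeros(2))

err_finetuned = np.linalg.norm(w_finetuned - w_target)
err_scratch   = np.linalg.norm(w_scratch - w_target)
```

With a limited optimization budget on the scarce target data, the pre-trained initialization starts much closer to the target task and ends closer to it, mirroring the transfer effect described above.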
The layer-by-layer (LBL) assembly of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. Control of the permeability of these layers is particularly important given their potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can serve as drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work shows Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of the PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer.
This conclusion was proven with Neutron Reflectivity by showing the decreased presence of D2O in planar polyelectrolyte films after annealing, which creates the barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer, as proven by the increase in recovery time measured with Fluorescence Recovery After Photobleaching (FRAP). In general, two advanced methods, both potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, gives control over the release profile of the molecule of interest.
We exploit time-series FUSE spectroscopy to uniquely probe spatial structure and clumping in the fast wind of the central star of the H-rich planetary nebula NGC 6543 (HD 164963). Episodic and recurrent optical depth enhancements are discovered in the P V absorption troughs, with some evidence for a ~0.17-day modulation time-scale. The characteristics of these features are essentially identical to the 'discrete absorption components' (DACs) commonly seen in the UV lines of massive OB stars, suggesting the temporal structures seen in NGC 6543 likely have a physical origin similar to that operating in massive, luminous stars. The mechanism for forming coherent perturbations in the outflows is therefore apparently operating equally in radiation-pressure-driven winds of widely differing momenta (Ṁ v∞ R⋆^0.5) and flow times, as represented by OB stars and CSPN.
Background
Millions of people in Germany suffer from chronic pain, whose course and intensity are multifactorial. Besides physical injuries, certain psychosocial risk factors are involved in the disease process. The national health care guidelines for the diagnosis and treatment of non-specific low back pain recommend screening for psychosocial risk factors as early as possible, in order to adapt the therapy to patient needs (e.g., unimodal or multimodal). However, such a procedure has been difficult to implement in practice and has not yet been integrated into the rehabilitation care structures across the country.
Methods
The aim of this study is to implement an individualized therapy and aftercare program within the rehabilitation offer of the German Pension Insurance in the area of orthopedics and to examine its success and sustainability in comparison to the previous standard aftercare program.
The study is a multicenter randomized controlled trial including 1204 patients from six orthopedic rehabilitation clinics, with a 2:1 allocation ratio to the intervention group (individualized, home-based rehabilitation aftercare) versus the control group (regular outpatient rehabilitation aftercare). Upon admission to the rehabilitation clinic, participants in the intervention group will be screened according to their psychosocial risk profile. They then receive either unimodal or multimodal aftercare, together with an individualized training program. The program is instructed in the clinic (approximately 3 weeks) and is then continued independently at home for 3 months. The success of the program is examined by means of a total of four surveys. The co-primary outcomes are the Characteristic Pain Intensity and Disability Score assessed by the German version of the Chronic Pain Grade questionnaire (CPG).
Discussion
An improvement in terms of pain, work ability, patient compliance, and acceptance is expected for our intervention program compared to the standard aftercare. The study contributes to providing individualized care to patients living far away from clinical centers.
Trial registration
DRKS, DRKS00020373. Registered on 15 April 2020
Discussion: X-rays
(2007)
Clumping in O-star winds
(2007)
Quantitative geomorphic research depends on accurate topographic data, often collected via remote sensing. Lidar and photogrammetric methods like structure-from-motion provide the highest quality data for generating digital elevation models (DEMs). Unfortunately, these data are restricted to relatively small areas and may be expensive or time-consuming to collect. Global and near-global DEMs with 1 arcsec (∼30 m) ground sampling from spaceborne radar and optical sensors offer an alternative gridded, continuous surface at the cost of resolution and accuracy. Accuracy is typically defined with respect to external datasets, often, but not always, in the form of point or profile measurements from sources like differential Global Navigation Satellite System (GNSS), spaceborne lidar (e.g., ICESat), and other geodetic measurements. Vertical point or profile accuracy metrics can miss the pixel-to-pixel variability (sometimes called DEM noise) that is unrelated to the true topographic signal and instead reflects sensor-, orbital-, and/or processing-related artifacts. This is most concerning when selecting a DEM for geomorphic analysis, as this variability can affect derivatives of elevation (e.g., slope and curvature) and impact flow routing. We use (near) global DEMs at 1 arcsec resolution (SRTM, ASTER, ALOS, TanDEM-X, and the recently released Copernicus) and develop new internal accuracy metrics to assess inter-pixel variability without reference data. Our study area is in the arid, steep Central Andes and is nearly vegetation-free, creating ideal conditions for remote sensing of the bare-earth surface. We use a novel hillshade-filtering approach to detrend long-wavelength topographic signals and accentuate short-wavelength variability.
Fourier transformations of the spatial signal to the frequency domain allow us to quantify: 1) artifacts in the un-projected 1 arcsec DEMs at wavelengths greater than the Nyquist (twice the nominal resolution, so > 2 arcsec); and 2) the relative variance of adjacent pixels in DEMs resampled to 30-m resolution (UTM projected). We translate these results into their impact on hillslope and channel slope calculations, and we highlight the quality of the five DEMs. We find that the Copernicus DEM, which is based on a carefully edited commercial version of the TanDEM-X, provides the highest quality landscape representation and should become the preferred DEM for topographic analysis in areas without sufficient coverage of higher-quality local DEMs.
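The idea of screening a DEM for inter-pixel variability in the frequency domain can be sketched as follows. This is an illustrative simplification, not the authors' hillshade-filtering pipeline: a synthetic smooth surface is compared with the same surface plus pixel-scale artifact noise, using the fraction of FFT power at high radial frequencies as the variability metric. The surface parameters are assumptions for the example.

```python
import numpy as np

def high_freq_power_fraction(dem, cutoff=0.25):
    """Fraction of FFT power at radial frequencies above `cutoff`
    cycles/pixel -- a simple proxy for inter-pixel 'DEM noise'."""
    F = np.fft.fft2(dem - dem.mean())
    power = np.abs(F) ** 2
    fy = np.fft.fftfreq(dem.shape[0])[:, None]
    fx = np.fft.fftfreq(dem.shape[1])[None, :]
    radial = np.sqrt(fx ** 2 + fy ** 2)
    return power[radial > cutoff].sum() / power.sum()

# Synthetic "topography": a smooth long-wavelength surface vs. the same
# surface with pixel-scale artifact noise added
rng = np.random.default_rng(1)
y, x = np.mgrid[0:256, 0:256]
smooth = 50 * np.sin(2 * np.pi * x / 64) * np.cos(2 * np.pi * y / 96)
noisy = smooth + 5 * rng.normal(size=smooth.shape)

frac_smooth = high_freq_power_fraction(smooth)
frac_noisy = high_freq_power_fraction(noisy)
```

A noisy DEM concentrates a visibly larger share of its spectral power near the Nyquist frequency, which is what degrades slope and curvature derivatives.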
Development of chronic pain after a low back pain episode is associated with increased pain sensitivity, altered pain processing mechanisms and the influence of psychosocial factors. Although there is some evidence that multimodal therapy (such as behavioral or motor control therapy) may be an important therapeutic strategy, its long-term effect on pain reduction and psychosocial load is still unclear. Prospective longitudinal designs providing information about the extent of such possible long-term effects are missing. This study aims to investigate the long-term effects of a home-based uni- and multidisciplinary motor control exercise program on low back pain intensity, disability and psychosocial variables. Fourteen months after completion of a multicenter study comparing uni- and multidisciplinary exercise interventions, a sample from one study center (n = 154) was assessed once more. Participants filled in questionnaires regarding their low back pain symptoms (characteristic pain intensity and related disability), stress and vital exhaustion (short version of the Maastricht Vital Exhaustion Questionnaire), anxiety and depression experiences (the Hospital Anxiety and Depression Scale), and pain-related cognitions (the Fear Avoidance Beliefs Questionnaire). Repeated measures mixed ANCOVAs were calculated to determine the long-term effects of the interventions on characteristic pain intensity and disability as well as on the psychosocial variables. Fifty-four percent of the sub-sample responded to the questionnaires (n = 84). Longitudinal analyses revealed a significant long-term effect of the exercise intervention on pain disability. The multidisciplinary group missed statistical significance yet showed a medium-sized long-term effect. The groups did not differ in their changes in the psychosocial variables of interest. There was evidence of long-term effects of the interventions on pain-related disability, but there was no effect on the other variables of interest. This may be partially explained by participants' low comorbidities at baseline. The results are important regarding cost-free home-based alternatives for back pain patients and prevention tasks. Furthermore, this study closes the gap of missing long-term effect analyses in this field.
Both horizontal-to-vertical (H/V) spectral ratios and the spatial autocorrelation method (SPAC) have proven to be valuable tools to gain insight into local site effects from ambient noise measurements. Here, the two methods are employed to assess the subsurface velocity structure at the Piano delle Concazze area on Mt Etna. Volcanic tremor records from an array of 26 broadband seismometers are processed, and a strong variability of H/V ratios during periods of increased volcanic activity is found. From the spatial distribution of H/V peak frequencies, a geologic structure in the north-east of Piano delle Concazze is imaged which is interpreted as the Ellittico caldera rim. The method is extended to include both velocity data from the broadband stations and distributed acoustic sensing data from a co-located 1.5 km long fibre optic cable. High maximum amplitude values of the resulting ratios along the trajectory of the cable coincide with known faults. The outcome also indicates previously unmapped parts of a fault. The geologic interpretation is in good agreement with inversion results from magnetic survey data. Using the neighborhood algorithm, spatial autocorrelation curves obtained from the modified SPAC are inverted, both alone and jointly with the H/V peak frequencies, for 1D shear wave velocity profiles. The obtained models are largely consistent with published models and validate the results from the fibre optic cable.
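A minimal H/V spectral-ratio computation can be sketched as follows, using synthetic three-component records with an assumed 2 Hz horizontal resonance; the sampling rate, record length and resonance frequency are illustrative choices, not values from the study.

```python
import numpy as np

def avg_amp_spectrum(sig, fs, nseg=10):
    """Segment-averaged amplitude spectrum (Welch-style, no overlap)."""
    seg = sig[: len(sig) // nseg * nseg].reshape(nseg, -1)
    freqs = np.fft.rfftfreq(seg.shape[1], 1 / fs)
    return freqs, np.abs(np.fft.rfft(seg, axis=1)).mean(axis=0)

# Synthetic record: horizontals carry a 2 Hz resonance, vertical is broadband
rng = np.random.default_rng(0)
fs = 100.0                               # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)             # 60 s record
f0 = 2.0                                 # assumed site resonance frequency
north = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)
east = np.cos(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)
vert = 0.3 * rng.normal(size=t.size)

freqs, N = avg_amp_spectrum(north, fs)
_, E = avg_amp_spectrum(east, fs)
_, V = avg_amp_spectrum(vert, fs)
hv = np.sqrt((N ** 2 + E ** 2) / 2) / V  # H/V spectral ratio

band = (freqs > 0.5) & (freqs < 20.0)
peak_freq = freqs[band][np.argmax(hv[band])]
```

The H/V peak frequency recovers the horizontal resonance; mapped over an array, such peaks are what delineate structures like the caldera rim described above.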
Recreational exercising and self-reported cardiometabolic diseases in German people living with HIV
(2021)
Exercise is known for its beneficial effects on preventing cardiometabolic diseases (CMDs) in the general population. People living with the human immunodeficiency virus (PLWH) are prone to sedentarism, thus raising their already elevated risk of developing CMDs in comparison to individuals without HIV. The aim of this cross-sectional study was to determine whether exercise is associated with a reduced risk of self-reported CMDs in a German HIV-positive sample (n = 446). Participants completed a self-report survey to assess exercise levels, date of HIV diagnosis, CD4 cell count, antiretroviral therapy, and CMDs. Participants were classified into exercising or sedentary conditions. Generalized linear models with Poisson regression were conducted to assess the prevalence ratio (PR) of PLWH reporting a CMD. Exercising PLWH were less likely to report a heart arrhythmia for every increase in exercise duration (PR: 0.20; 95% CI: 0.10–0.62, p < 0.01) and diabetes mellitus for every increase in exercise sessions per week (PR: 0.40; 95% CI: 0.10–1, p < 0.01). Exercise frequency and duration are associated with a decreased risk of reporting arrhythmia and diabetes mellitus in PLWH. Further studies are needed to elucidate the mechanisms underlying exercise as a protective factor for CMDs in PLWH.
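The prevalence-ratio idea behind results like those above can be illustrated with the simplest case: a PR with a 95% confidence interval computed from a 2×2 table on the log scale. The counts below are hypothetical, not the study's data, and the sketch omits the covariate adjustment a Poisson regression would provide.

```python
import math

def prevalence_ratio(a, n_exposed, c, n_unexposed, z=1.96):
    """Prevalence ratio with a 95% CI on the log scale.
    a / c: cases among exposed / unexposed; n_*: group sizes."""
    p1, p0 = a / n_exposed, c / n_unexposed
    pr = p1 / p0
    # Standard error of log(PR) for binomial proportions
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lo, hi = (math.exp(math.log(pr) + s * z * se) for s in (-1, 1))
    return pr, lo, hi

# Hypothetical counts: 10/100 exercising vs. 20/100 sedentary report a CMD
pr, lo, hi = prevalence_ratio(10, 100, 20, 100)
```

A PR below 1 with a CI excluding 1 indicates a lower prevalence of the outcome in the exercising group.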
Metabolic derangement with poor glycemic control accompanying overweight and obesity is associated with chronic low-grade inflammation and hyperinsulinemia. Macrophages, which represent a very heterogeneous population of cells, play a key role in the maintenance of normal tissue homeostasis, but functional alterations in the resident macrophage pool as well as newly recruited monocyte-derived macrophages are important drivers in the development of low-grade inflammation. While metabolic dysfunction, insulin resistance and tissue damage may trigger or advance pro-inflammatory responses in macrophages, the inflammation itself contributes to the development of insulin resistance and the resulting hyperinsulinemia. Macrophages express insulin receptors whose downstream signaling networks share a number of knots with the signaling pathways of pattern recognition and cytokine receptors, which shape macrophage polarity. The shared knots allow insulin to enhance or attenuate both pro-inflammatory and anti-inflammatory macrophage responses. This supposedly physiological function may be impaired by hyperinsulinemia or insulin resistance in macrophages. This review discusses the ambiguous mutual relationship of low-grade inflammation, insulin resistance, hyperinsulinemia and the insulin-dependent modulation of macrophage activity, with a focus on adipose tissue and liver.
Increase in prostanoid formation in rat liver macrophages (Kupffer cells) by human anaphylatoxin C3a
(1993)
Human anaphylatoxin C3a increases glycogenolysis in perfused rat liver. This action is inhibited by prostanoid synthesis inhibitors and prostanoid antagonists. Because prostanoids, but not anaphylatoxin C3a, can increase glycogenolysis in hepatocytes, it has been proposed that prostanoid formation in nonparenchymal cells represents an important step in the C3a-dependent increase in hepatic glycogenolysis. This study shows that (a) human anaphylatoxin C3a (0.1 to 10 µg/ml) dose-dependently increased prostaglandin D2, thromboxane B2, and prostaglandin F2α formation in rat liver macrophages (Kupffer cells); (b) the C3a-mediated increase in prostanoid formation was maximal after 2 min and showed tachyphylaxis; and (c) the C3a-elicited prostanoid formation could be inhibited specifically by preincubation of C3a with carboxypeptidase B to remove the essential C-terminal arginine or by preincubation of C3a with Fab fragments of a neutralizing monoclonal antibody. These data support the hypothesis that the C3a-dependent activation of hepatic glycogenolysis is mediated by way of a C3a-induced prostanoid production in Kupffer cells.
In the isolated rat liver perfused in situ, stimulation of the nerve bundles around the portal vein and the hepatic artery caused an increase in urate formation that was inhibited by the α1-blocker prazosin and the xanthine oxidase inhibitor allopurinol. Moreover, nerve stimulation increased glucose and lactate output and decreased perfusion flow. Infusion of noradrenaline had similar effects. Compared to nerve stimulation, infusion of glucagon led to a less pronounced increase in urate formation and a twice as large increase in glucose output, but a decrease in lactate release, without affecting the flow rate. Insulin had no effect on any of the parameters studied.
The complement fragments C3a and C5a were purified from zymosan-activated human serum by column chromatographic procedures after the bulk of the proteins had been removed by acidic polyethylene glycol precipitation. In the isolated in situ perfused rat liver C3a increased glucose and lactate output and reduced flow. Its effects were enhanced in the presence of the carboxypeptidase inhibitor DL-mercaptomethyl-3-guanidinoethylthio-propanoic acid (MERGETPA) and abolished by preincubation of the anaphylatoxin with carboxypeptidase B or with Fab fragments of an anti-C3a monoclonal antibody. The C3a effects were partially inhibited by the thromboxane antagonist BM13505. C5a had no effect. It is concluded that locally but not systemically produced C3a may play an important role in the regulation of local metabolism and hemodynamics during inflammatory processes in the liver.
Achilles tendinopathy (AT) is a debilitating injury in athletes, especially for those engaged in repetitive stretch-shortening cycle activities. Clinical risk factors are numerous, but it has been suggested that altered biomechanics might be associated with AT. No systematic review has been conducted investigating these biomechanical alterations specifically in athletic populations. Therefore, the aim of this systematic review was to compare the lower-limb biomechanics of athletes with AT to athletically matched asymptomatic controls. Databases were searched for relevant studies investigating biomechanics during gait activities and other motor tasks such as hopping, isolated strength tasks, and reflex responses. Inclusion criteria for studies were an AT diagnosis in at least one group, cross-sectional or prospective data, at least one outcome comparing biomechanical data between an AT and a healthy group, and athletic populations. Studies were excluded if patients had Achilles tendon rupture/surgery, if participants reported injuries other than AT, or when only within-subject data were available. Effect sizes (Cohen's d) with 95% confidence intervals were calculated for relevant outcomes. The initial search yielded 4,442 studies. After screening, twenty studies (775 total participants) were synthesised, reporting on a wide range of biomechanical outcomes. Females were under-represented, and patients in the AT group were three years older on average. Biomechanical alterations were identified in some studies during running, hopping, jumping, strength tasks and reflex activity. Equally, several biomechanical variables studied were not associated with AT in the included studies, indicating a conflicting picture. Kinematics in AT patients appeared to be altered in the lower limb, potentially indicating a pattern of "medial collapse".
Muscular activity of the calf and hips was different between groups, whereby AT patients exhibited greater calf electromyographic amplitudes despite lower plantar flexor strength. Overall, dynamic maximal strength of the plantar flexors, and isometric strength of the hips might be reduced in the AT group. This systematic review reports on several biomechanical alterations in athletes with AT. With further research, these factors could potentially form treatment targets for clinicians, although clinical approaches should take other contributing health factors into account. The studies included were of low quality, and currently no solid conclusions can be drawn.
Semi-natural habitats (SNHs) are becoming increasingly scarce in modern agricultural landscapes. This may reduce natural ecosystem services such as pest control, with its putatively positive effect on crop production. In agreement with other studies, we recently reported wheat yield reductions at field borders which were linked to the type of SNH and the distance to the border. In this experimental landscape-wide study, we asked whether these yield losses have a biotic origin, analyzing fungal seed and fungal leaf pathogens, herbivory of cereal leaf beetles, and weed cover as hypothesized mediators between SNHs and yield. We established experimental winter wheat plots of a single variety within conventionally managed wheat fields at fixed distances either to a hedgerow or to an in-field kettle hole. For each plot, we recorded the fungal infection rate on seeds, fungal infection and herbivory rates on leaves, and weed cover. Using several generalized linear mixed-effects models as well as a structural equation model, we tested the effects of SNHs at a field scale (SNH type and distance to SNH) and at a landscape scale (percentage and diversity of SNHs within a 1000-m radius). In the dry year of 2016, we detected one putative biotic culprit: weed cover was negatively associated with yield at a 1-m and 5-m distance from the field border with an SNH. None of the fungal and insect pests, however, significantly affected yield, neither solely nor depending on the type of or distance to an SNH. However, the pest groups themselves responded differently to SNHs at the field scale and at the landscape scale. Our findings highlight that crop losses at field borders may be caused by biotic culprits; however, their negative impact seems weak and is putatively reduced by conventional farming practices.
This article explores the multi-directional geographic trajectories and ties of Jews who came to the United States in the 19th century, working to complicate simplistic understandings of “German” Jewish immigration. It focuses on the case study of Henry Cohn, an ordinary Russian-born Jew whose journeys took him to Prussia, New York, Savannah, and California. Once in the United States he returned to Europe twice, the second time permanently, although a grandson ended up in California, where he worked to ensure the preservation of Cohn’s records. This story highlights how Jews navigated and transgressed national boundaries in the 19th century and the limitations of the historical narratives that have been constructed from their experiences.
The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to outline the occurrence of a major increase in both subsidence and sedimentation rates at 5.45–5.33 Ma, leading to the deposition of almost 1500 km3 of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from and north of the SE margin of the Central Anatolian Plateau. A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by the comparison of the Adana Basin subsidence curve with that of the Mut Basin, a mainly Neogene basin located on top of the southern margin of the Central Anatolian Plateau, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau's southern margin. Fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau.
We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.
Systemic inflammation is a hallmark of cancer cachexia. Among tumor-host interactions, the white adipose tissue (WAT) is an important contributor to inflammation, as it undergoes morphological reorganization and lipolysis, releasing free fatty acids (FA), bioactive lipid mediators (LM) and pro-inflammatory cytokines, which accentuate the activation of pro-inflammatory signaling pathways and the recruitment of immune cells to the tissue. This project aimed to investigate which inflammatory factors are involved in local adipose tissue inflammation and what influence such factors have on enzymes involved in FA or LM metabolism in healthy individuals (Control), weight-stable gastro-intestinal cancer patients (WSC) and cachectic cancer patients (CC). The results demonstrated that the inflammatory signature of systemic inflammation differs from that of local adipose tissue inflammation. The systemic inflammation of the cachectic cancer patients was characterized by higher levels of circulating saturated fatty acids (SFA), tumor necrosis factor-α (TNF-α), the interleukins IL-6 and IL-8, and CRP, while levels of polyunsaturated fatty acids (PUFAs), especially n3-PUFAs, were lower in CC than in the other groups. In vitro and in adipose tissue explants, pro-inflammatory cytokines and SFAs were shown to increase the chemokines IL-8 and CXCL10, which were found to be augmented in adipose tissue inflammation in CC, more profoundly in the visceral adipose tissue (VAT) than in the subcutaneous adipose tissue (SAT). Systemic inflammation was negatively associated with the expression of PUFA-synthesizing enzymes, though gene and protein expression hardly differed between groups. The effects of inflammatory factors on enzymes in the whole tissue could have been masked by differentiated modulation of the diverse cell types in the same tissue.
In vitro experiments showed that the expression of FA-modifying enzymes such as desaturases and elongases in adipocytes and macrophages was regulated in opposing directions by TNF-α, IL-6, LPS or palmitate. The higher plasma concentration of the pro-resolving LM resolvin D1 in CC cannot compensate for the overall inflammatory status, and the results indicate that inflammatory cytokines interfere with the synthesis pathways of pro-resolving LM. In summary, the data revealed a complex inter-tissue and inter-cellular crosstalk mediated by pro-inflammatory cytokines and lipid compounds that enhances inflammation in cancer cachexia through feed-forward mechanisms.
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners' problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
The aim of this review was to describe and summarize the scientific literature on programming parameters related to jump or plyometric training in male and female soccer players of different ages and fitness levels. A literature search was conducted in the electronic databases PubMed, Web of Science and Scopus using keywords related to the main topic of this study (e.g., "ballistic" and "plyometric"). According to the PICOS framework, the population for the review was restricted to soccer players involved in jump or plyometric training. Among 7556 identified studies, 90 were eligible for inclusion. Only 12 studies were found for females. Most studies (n = 52) were conducted with youth male players. Moreover, only 35 studies determined the effectiveness of a given jump training programming factor. Based on the limited available research, it seems that a dose of 7 weeks (1–2 sessions per week), with ~80 jumps (specific or combined types) per session, using near-maximal or maximal intensity, with adequate recovery between repetitions (<15 s), sets (≥30 s) and sessions (≥24–48 h), using progressive overload and taper strategies, using appropriate surfaces (e.g., grass), and applied in a well-rested state, when combined with other training methods, would increase the outcome of effective and safe plyometric-jump training interventions aimed at improving soccer players' physical fitness. In conclusion, jump training is an effective and easy-to-administer training approach for youth and adult, male and female soccer players. However, optimal programming for plyometric-jump training in soccer is yet to be determined in future research.
Background
A growing body of literature is available regarding the effects of plyometric jump training (PJT) on measures of physical fitness (PF) and sport-specific performance (SSP) in water sports athletes (WSA, i.e. those competing in sports that are practiced on [e.g. rowing] or in [e.g. swimming; water polo] water). However, inconsistent findings have been observed across individual studies, making it difficult to provide the scientific community and coaches with consistent evidence. As such, a comprehensive systematic literature search should be conducted to clarify the existing evidence, identify the major gaps in the literature, and offer recommendations for future studies.
Aim
To examine the effects of PJT compared with active/specific-active controls on the PF (one-repetition maximum back squat strength, squat jump height, countermovement jump height, horizontal jump distance, body mass, fat mass, thigh girth) and SSP (in-water vertical jump, in-water agility, time trial) outcomes in WSA, through a systematic review with meta-analysis of randomized and non-randomized controlled studies.
Methods
The electronic databases PubMed, Scopus, and Web of Science were searched up to January 2022. According to the PICOS approach, the eligibility criteria were: (population) healthy WSA; (intervention) PJT interventions involving unilateral and/or bilateral jumps, and a minimal duration of ≥ 3 weeks; (comparator) active (i.e. standard sports training) or specific-active (i.e. alternative training intervention) control group(s); (outcome) at least one measure of PF (e.g. jump height) and/or SSP (e.g. time trial) before and after training; and (study design) multi-groups randomized and non-randomized controlled trials. The Physiotherapy Evidence Database (PEDro) scale was used to assess the methodological quality of the included studies. The DerSimonian and Laird random-effects model was used to compute the meta-analyses, reporting effect sizes (ES, i.e. Hedges’ g) with 95% confidence intervals (95% CIs). Statistical significance was set at p ≤ 0.05. Certainty or confidence in the body of evidence for each outcome was assessed using Grading of Recommendations Assessment, Development, and Evaluation (GRADE), considering its five dimensions: risk of bias in studies, indirectness, inconsistency, imprecision, and risk of publication bias.
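For readers unfamiliar with the statistics named above, the following is a minimal illustrative sketch of the bias-corrected standardized mean difference (Hedges' g) and the DerSimonian and Laird random-effects pooling step, using made-up summary data; the review itself used dedicated meta-analysis software, and the formulas below are the standard textbook versions rather than the authors' code.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference (Hedges' g) and its variance."""
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)  # pooled SD
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * df - 1)                   # small-sample correction factor
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approximate variance
    return g, var_g

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under the DerSimonian-Laird random-effects model."""
    w = [1 / v for v in variances]             # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)         # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical example: three studies reporting jump-height effects
g, var_g = hedges_g(m1=25, s1=5, n1=20, m2=20, s2=5, n2=20)
pooled, se, tau2 = dersimonian_laird([0.5, 0.7, 0.3], [0.04, 0.05, 0.06])
```

A 95% CI for the pooled effect then follows as `pooled ± 1.96 * se`, which is the form in which the results below are reported.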
Results
A total of 11,028 studies were identified with 26 considered eligible for inclusion. The median PEDro score across the included studies was 5.5 (moderate-to-high methodological quality). The included studies involved a total of 618 WSA of both sexes (330 participants in the intervention groups [31 groups] and 288 participants in the control groups [26 groups]), aged between 10 and 26 years, and from different sports disciplines such as swimming, triathlon, rowing, artistic swimming, and water polo. The duration of the training programmes in the intervention and control groups ranged from 4 to 36 weeks. The results of the meta-analysis indicated no effects of PJT compared to control conditions (including specific-active controls) for in-water vertical jump or agility (ES = − 0.15 to 0.03; p = 0.477 to 0.899), or for body mass, fat mass, and thigh girth (ES = 0.06 to 0.15; p = 0.452 to 0.841). In terms of measures of PF, moderate-to-large effects were noted in favour of the PJT groups compared to the control groups (including specific-active control groups) for one-repetition maximum back squat strength, horizontal jump distance, squat jump height, and countermovement jump height (ES = 0.67 to 1.47; p = 0.041 to < 0.001), in addition to a small effect noted in favour of the PJT for SSP time-trial speed (ES = 0.42; p = 0.005). Certainty of evidence across the included studies varied from very low-to-moderate.
Conclusions
PJT is more effective than control conditions, involving traditional sport-specific training as well as alternative training interventions (e.g. resistance training), at improving measures of PF and SSP in WSA. It is worth noting that the present findings are derived from 26 studies of moderate-to-high methodological quality, low-to-moderate impact of heterogeneity, and very low-to-moderate certainty of evidence based on GRADE.
Trial registration The protocol for this systematic review with meta-analysis was published in the Open Science platform (OSF) on January 23, 2022, under the registration doi https://doi.org/10.17605/OSF.IO/NWHS3 (internet archive link: https://archive.org/details/osf-registrations-nwhs3-v1).
Dermal Delivery of the High-Molecular-Weight Drug Tacrolimus by Means of Polyglycerol-Based Nanogels
(2019)
Polyglycerol-based thermoresponsive nanogels (tNGs) have been shown to have excellent skin hydration properties and to be valuable delivery systems for sustained release of drugs into skin. In this study, we compared the skin penetration of tacrolimus formulated in tNGs with a commercial 0.1% tacrolimus ointment. The penetration of the drug was investigated in ex vivo abdominal and breast skin, while different methods for skin barrier disruption were investigated to improve skin permeability or simulate inflammatory conditions with compromised skin barrier. The amount of penetrated tacrolimus was measured in skin extracts by liquid chromatography tandem-mass spectrometry (LC-MS/MS), whereas the inflammatory markers IL-6 and IL-8 were detected by enzyme-linked immunosorbent assay (ELISA). Higher amounts of tacrolimus penetrated in breast as compared to abdominal skin or in barrier-disrupted as compared to intact skin, confirming that the stratum corneum is the main barrier for tacrolimus skin penetration. The anti-proliferative effect of the penetrated drug was measured in skin tissue/Jurkat cells co-cultures. Interestingly, tNGs exhibited similar anti-proliferative effects as the 0.1% tacrolimus ointment. We conclude that polyglycerol-based nanogels represent an interesting alternative to paraffin-based formulations for the treatment of inflammatory skin conditions.
Wild bee species are important pollinators in agricultural landscapes. However, population declines have been reported over the last decades and are still ongoing. While agricultural intensification is a major driver of the rapid loss of pollinating species, transition zones between arable fields and forest or grassland patches, i.e., agricultural buffer zones, are frequently mentioned as suitable mitigation measures to support wild bee populations and other pollinator species. Despite the reported general positive effect, it remains unclear what amount of buffer zones is needed to ensure a sustained, long-term impact on bee diversity and abundance. To address this question at a pollinator community level, we implemented a process-based, spatially explicit simulation model of functional bee diversity dynamics in an agricultural landscape. More specifically, we introduced a variable amount of agricultural buffer zones (ABZs) at the transition of arable to grassland, or arable to forest patches to analyze the impact on bee functional diversity and functional richness. We focused our study on solitary bees in a typical agricultural area in the Northeast of Germany. Our results showed positive effects with at least 25% of virtually implemented agricultural buffer zones. However, higher amounts of ABZs of at least 75% should be considered to ensure a sufficient increase in Shannon diversity and decrease in quasi-extinction risks. These high amounts of ABZs represent effective conservation measures to safeguard the stability of pollination services provided by solitary bee species. As the model structure can be easily adapted to other mobile species in agricultural landscapes, our community approach offers the chance to compare the effectiveness of conservation measures for other pollinator communities in the future.
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grained epistemic modules for learning Python programming. We define the traces that need to be collected in order to study the navigation trajectories that pupils draw across this hypermedia. It may be browsed by pupils either as a course support or as extra help to solve the list of exercises (mainly for discovering algorithmics). By leaving the locus of control to the learners, we want to observe the different trajectories they draw through our system. These trajectories may be abstracted, interpreted as strategies, and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method and the expected results.
The comprehension of figurative language : electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERP), brain activity during the comprehension of ironic utterances was measured and analyzed in relation to equivalent non-ironic utterances. Moreover, the impact of various language-accompanying cues, e.g., prosody or the use of punctuation marks, as well as non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e., the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.