In this dissertation, an electric field-assisted method was developed and applied to achieve immobilization and alignment of biomolecules on metal electrodes in a simple one-step experiment. Neither modifications of the biomolecule nor of the electrodes were needed. The two major electrokinetic effects that lead to molecule motion in the chosen electrode configurations were identified as dielectrophoresis and AC electroosmotic flow. To minimize AC electroosmotic flow, a new 3D electrode configuration was designed. Thus, the influence of experimental parameters on the dielectrophoretic force and the associated molecule movement could be studied. Permanent immobilization of proteins was examined and quantified in absolute terms using an atomic force microscope. By measuring the volumes of the immobilized protein deposits, the maximal number of proteins contained therein was calculated. This was possible since the proteins adhered to the tungsten electrodes even after the electric field was switched off. The permanent immobilization of functional proteins on surfaces or electrodes is one crucial prerequisite for the fabrication of biosensors.
Furthermore, the biofunctionality of the proteins must be retained after immobilization. Because immobilization causes chemical or physical modifications to the proteins, their biofunctionality is sometimes hampered. Here, however, the activity of dielectrophoretically immobilized proteins was proven for an enzyme for the first time. The enzyme horseradish peroxidase served as an example, and its activity was demonstrated through the oxidation of dihydrorhodamine 123, a non-fluorescent precursor of the fluorescent dye rhodamine 123.
Molecular alignment and immobilization, both reversible and permanent, were achieved under the influence of inhomogeneous AC electric fields. For orientational investigations, a fluorescence microscope setup, a reliable experimental procedure and an evaluation protocol were developed and validated using custom-made control samples of aligned acridine orange molecules in a liquid crystal.
Lambda-DNA strands were stretched and aligned temporarily between adjacent interdigitated electrodes, and the orientation of PicoGreen molecules, which intercalate into the DNA strands, was determined. Similarly, the aligned immobilization of enhanced Green Fluorescent Protein was demonstrated exploiting the protein's fluorescence and structural properties. For this protein, the angle of the chromophore with respect to the protein's geometrical axis was determined in good agreement with X-ray crystallographic data. Permanent immobilization with simultaneous alignment of the proteins was achieved along the edges, tips and on the surface of interdigitated electrodes. This was the first demonstration of aligned immobilization of proteins by electric fields.
Thus, the presented electric field-assisted immobilization method is promising with regard to enhanced antibody binding capacities and enzymatic activities, which are requirements for industrial biosensor production, as well as for general interaction studies of proteins.
We elicited the production of various types of relative clauses in a group of German-speaking children with specific language impairment (SLI) and typically developing controls in order to test the movement optionality account of grammatical difficulty in SLI. The results show that German-speaking children with SLI are impaired in relative clause production compared to typically developing children. The alternative structures that they produce consist of simple main clauses, as well as nominal and prepositional phrases produced in isolation, sometimes contextually appropriate, and sometimes not. Crucially for evaluating the movement optionality account, children with SLI produce very few instances of embedded clauses where the relative clause head noun is pronounced in situ; in fact, such responses are more common among the typically developing child controls. These results underscore the difficulty German-speaking children with SLI have with structures involving movement, but provide no specific support for the movement optionality account.
En route towards advanced catalyst materials for the electrocatalytic water splitting reaction
(2016)
The present thesis deals with the development of new types of catalysts based on pristine metals and ceramic materials and with their application in the electrocatalytic water splitting reaction. For this technology to become viable, cost-efficient, stable and efficient catalysts are urgently needed. To this end, the preparation of Mn-, N-, S-, P-, and C-containing nickel materials was investigated, together with the theoretical and electrochemical elucidation of their activity towards the hydrogen (and oxygen) evolution reaction. The Sabatier principle served as the principal guideline for the successful tuning of catalytic sites. Two pathways were pursued to improve electrocatalytic performance: first, the direct improvement of intrinsic properties through appropriate material selection, and second, the enlargement of the surface area of the catalytic material and thereby of the number of active sites. By bringing materials with optimized hydrogen adsorption free energy onto high-surface-area supports, catalytic performances approaching the gold standard of the noble metals became feasible. Despite the variety of synthesis strategies applied (wet chemistry in organic solvents, ionothermal reaction, gas phase reaction), one goal was systematically pursued: to understand the driving mechanism of the growth. Moreover, a deeper understanding of the inherent properties and kinetic parameters of the catalytic materials was gained.
This article explores a recent performance of excerpts from T.S. Eliot’s Four Quartets (1935/36–1942) entitled Engaging Eliot: Four Quartets in Word, Color, and Sound as an example of live poetry. In this context, Eliot’s poem can be analysed as an auditory artefact that interacts strongly with other oral performances (welcome addresses and artists’ conversations), as well as with the musical performance of Christopher Theofanidis’s quintet “At the Still Point” at the end of the opening of Engaging Eliot. The event served as an introduction to a 13-day art exhibition and engaged in a re-evaluation of Eliot’s poem after 9/11: while its first part emphasises the connection between Eliot’s poem and Christian doctrine, its second part – especially the combination of poetry reading and musical performance – highlights the philosophical and spiritual dimensions of Four Quartets.
Arctic coastal infrastructure and cultural and archeological sites are increasingly vulnerable to erosion and flooding due to amplified warming of the Arctic, sea level rise, lengthening of open water periods, and a predicted increase in frequency of major storms. Mitigating these hazards necessitates decision-making tools at an appropriate scale. The objectives of this paper are to provide such a tool by assessing potential erosion and flood hazards at Herschel Island, a UNESCO World Heritage candidate site. This study focused on Simpson Point and the adjacent coastal sections because of their archeological, historical, and cultural significance. Shoreline movement was analyzed using the Digital Shoreline Analysis System (DSAS) after digitizing shorelines from 1952, 1970, 2000, and 2011. For purposes of this analysis, the coast was divided into seven coastal reaches (CRs) reflecting different morphologies and/or exposures. Using linear regression rates obtained from these data, projections of shoreline position were made for 20 and 50 years into the future. Flood hazard was assessed using a least cost path analysis based on a high-resolution light detection and ranging (LiDAR) dataset and current Intergovernmental Panel on Climate Change sea level estimates. Widespread erosion characterizes the study area. The rate of shoreline movement in different periods of the study ranges from −5.5 to 2.7 m·a⁻¹ (mean −0.6 m·a⁻¹). Mean coastal retreat decreased from −0.6 m·a⁻¹ to −0.5 m·a⁻¹ for 1952–1970 and 1970–2000, respectively, and increased to −1.3 m·a⁻¹ in the period 2000–2011. Ice-rich coastal sections most exposed to wave attack exhibited the highest rates of coastal retreat. The geohazard map combines shoreline projections and flood hazard analyses to show that most of the spit area has extreme or very high flood hazard potential, and some buildings are vulnerable to coastal erosion.
This study demonstrates that transgressive forcing may provide ample sediment for the expansion of depositional landforms, even as those landforms grow more susceptible to overwash and flooding.
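The rate-based shoreline projection described above can be sketched in a few lines: fit an ordinary least-squares regression to the digitized shoreline positions of one transect and extrapolate. The positions below are hypothetical illustration values, not data from the study.

```python
# Hypothetical cross-shore positions (m) of a single transect, one per survey
# year; negative values indicate landward retreat. Not data from the study.
years = [1952.0, 1970.0, 2000.0, 2011.0]
positions = [0.0, -10.8, -25.7, -40.0]

def linreg_rate(xs, ys):
    """Ordinary least-squares slope (the shoreline rate, m/a) and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = linreg_rate(years, positions)

def project(year):
    """Projected shoreline position (m) at a given year."""
    return slope * year + intercept

# Projected additional retreat over the next 20 years (m)
retreat_20 = project(2011 + 20) - project(2011)
```

With these illustration values the regression rate comes out near −0.6 m·a⁻¹, the same order as the mean rate reported in the abstract, and the 20-year projection is simply that rate times 20.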
Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.
Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
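The retrospective scheme of the paper is specific to diffusions, but its underlying primitive is ordinary rejection sampling. As a minimal generic sketch (not the authors' algorithm), here is rejection sampling of a density known only up to a normalizing constant:

```python
import math
import random

def rejection_sample(target, proposal_sample, proposal_pdf, M, n):
    """Draw n samples from the density proportional to target(x),
    assuming the envelope condition target(x) <= M * proposal_pdf(x)."""
    out = []
    while len(out) < n:
        x = proposal_sample()
        u = random.random()
        # Accept x with probability target(x) / (M * proposal_pdf(x))
        if u * M * proposal_pdf(x) <= target(x):
            out.append(x)
    return out

# Example: sample a standard normal (unnormalized density exp(-x^2/2))
# using a uniform proposal on [-5, 5], whose density is 0.1.
random.seed(42)
target = lambda x: math.exp(-0.5 * x * x)
samples = rejection_sample(target,
                           lambda: random.uniform(-5.0, 5.0),
                           lambda x: 0.1,
                           M=10.0,
                           n=2000)
```

The envelope holds because exp(-x²/2) ≤ 1 = M · 0.1; the acceptance rate here is about 25%. The "retrospective" refinement in the paper avoids evaluating the target density along an entire Brownian path at once.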
Much research on language control in bilinguals has relied on the interpretation of the costs of switching between two languages. Of the two types of costs that are linked to language control, switching costs are assumed to be transient in nature and modulated by trial-specific manipulations (e.g., by preparation time), while mixing costs are supposed to be more stable and less affected by trial-specific manipulations. The present study investigated the effect of preparation time on switching and mixing costs, revealing that both types of costs can be influenced by trial-specific manipulations.
Exhaustivity
(2016)
The dissertation proposes an answer to the question of how to model exhaustive inferences and what the meaning of the linguistic material that triggers these inferences is. In particular, it deals with the semantics of exclusive particles, clefts, and progressive aspect in Ga, an under-researched language spoken in Ghana. Based on new data coming from the author’s original fieldwork in Accra, the thesis points to a previously unattested variation in the semantics of exclusives in a cross-linguistic perspective, analyzes the connections between exhaustive interpretation triggered by clefts and the aspectual interpretation of the sentence, and identifies a cross-categorial definite determiner. By that it sheds new light on several exhaustivity-related phenomena in both the nominal and the verbal domain and shows that both domains are closely connected.
The aim of this thesis was the elucidation of different ionization methods (resonance-enhanced multiphoton ionization – REMPI, electrospray ionization – ESI, atmospheric pressure chemical ionization – APCI) in ion mobility (IM) spectrometry. In order to gain a better understanding of the ionization processes, several spectroscopic, mass spectrometric and theoretical methods were also used. Another focus was the development of experimental techniques, including a high resolution spectrograph and various combinations of IM and mass spectrometry.
The novel high resolution 2D spectrograph facilitates spectroscopic resolutions in the range of commercial echelle spectrographs. The lowest full width at half maximum of a peak achieved was 25 pm. The 2D spectrograph is based on the wavelength separation of light by the combination of a prism and a grating in one dimension, and an etalon in the second dimension. This instrument was successfully employed for the acquisition of Raman and laser-induced breakdown spectra.
Different spectroscopic methods (light scattering and fluorescence spectroscopy) permitting spatial as well as spectral resolution were used to investigate the release of ions in the electrospray. The investigation is based on the 50 nm shift of the fluorescence band of rhodamine 6G ions during their transfer from the electrospray droplets to the gas phase.
A newly developed ionization chamber operating at reduced pressure (0.5 mbar) was coupled to a time-of-flight mass spectrometer. After REMPI of H2S, an ionization chemistry analogous to that of H2O was observed with this instrument. Besides H2S+ and its fragments, H3S+ and protonated analyte ions could be observed as a result of proton-transfer reactions.
For the elucidation of the peaks in IM spectra, a combination of IM spectrometer and linear quadrupole ion trap mass spectrometer was developed. The instrument can be equipped with various ionization sources (ESI, REMPI, APCI) and was used for the characterization of the peptide bradykinin and the neuroleptic promazine.
The ionization of explosive compounds in an APCI source based on soft X-radiation was investigated in a newly developed ionization chamber attached to the ion trap mass spectrometer. The major primary and secondary reactions could be characterized, and explosive compound ions could be identified and assigned to the peaks in IM spectra. The assignment is based on the comparison of experimentally determined and calculated ion mobilities. The calculation methods currently available exhibit large deviations, especially in the case of anions. Therefore, on the basis of an assessment of the available methods, a novel hybrid method was developed and characterized.
We present new experimental data of the low-temperature metastable region of liquid water derived from high-density synthetic fluid inclusions (996–916 kg m−3) in quartz. Microthermometric measurements include: (i) prograde (upon heating) and retrograde (upon cooling) liquid–vapour homogenisation. We used single ultrashort laser pulses to stimulate vapour bubble nucleation in initially monophase liquid inclusions. Water densities were calculated based on prograde homogenisation temperatures using the IAPWS-95 formulation. We found retrograde liquid–vapour homogenisation temperatures in excellent agreement with IAPWS-95. (ii) Retrograde ice nucleation. Raman spectroscopy was used to determine the nucleation of ice in the absence of the vapour bubble. Our ice nucleation data in the doubly metastable region are inconsistent with the low-temperature trend of the spinodal predicted by IAPWS-95, as liquid water with a density of 921 kg m−3 remains in a homogeneous state during cooling down to a temperature of −30.5 °C, where it is transformed into ice whose density corresponds to zero pressure. (iii) Ice melting. Ice melting temperatures of up to 6.8 °C were measured in the absence of the vapour bubble, i.e. in the negative pressure region. (iv) Spontaneous retrograde and, for the first time, prograde vapour bubble nucleation. Prograde bubble nucleation occurred upon heating at temperatures above ice melting. The occurrence of prograde and retrograde vapour bubble nucleation in the same inclusions indicates a maximum of the bubble nucleation curve in the ϱ–T plane at around 40 °C. The new experimental data represent valuable benchmarks to evaluate and further improve theoretical models describing the p–V–T properties of metastable water in the low-temperature region.
Complexity in software systems is a major factor driving development and maintenance costs. To master this complexity, software is divided into modules that can be developed and tested separately. In order to support this separation of modules, each module should provide a clean and concise public interface. Therefore, the ability to selectively hide functionality using access control is an important feature in a programming language intended for complex software systems.
Software systems are increasingly distributed, adding not only to their inherent complexity, but also presenting security challenges. The object-capability approach addresses these challenges by defining language properties providing only minimal capabilities to objects. One programming language that is based on the object-capability approach is Newspeak, a dynamic programming language designed for modularity and security. The Newspeak specification describes access control as one of Newspeak’s properties, because it is a requirement for the object-capability approach. However, access control, as defined in the Newspeak specification, is currently not enforced in its implementation.
This work introduces an access control implementation for Newspeak, enabling the security of object-capabilities and enhancing modularity. We describe our implementation of access control for Newspeak. We adapted the runtime environment, the reflective system, the compiler toolchain, and the virtual machine. Finally, we describe a migration strategy for the existing Newspeak code base, so that our access control implementation can be integrated with minimal effort.
The Gradient Symbolic Computation (GSC) model presented in the keynote article (Goldrick, Putnam & Schwarz) constitutes a significant theoretical development, not only as a model of bilingual code-mixing, but also as a general framework that brings together symbolic grammars and graded representations. The authors are to be commended for successfully integrating a theory of grammatical knowledge with the voluminous research on lexical co-activation in bilinguals. It is, however, unfortunate that a certain conception of bilingualism was inherited from this latter research tradition, one in which the contrast between native and non-native language takes a back seat.
Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing is the roles of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection respectively). Yet, rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene perception research has mainly been focused on the factors determining fixation locations. The present thesis aimed at increasing the knowledge on how central and peripheral vision contribute to spatial and, in particular, to temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model on how foveal and peripheral processing might be coordinated for the control of fixation duration. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adapted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration. 
Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias of avoiding spatial-frequency filtered scene regions as saccade targets. We demonstrate that the observed saccade amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision for gaze control during scene viewing, particularly for the control of fixation duration. Our results entail new implications for computational models and for experimental research on scene perception.
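The computational model itself is not specified in this abstract; as a purely illustrative toy (all distributions and parameters hypothetical), the idea of a stimulus-independent timer running in parallel with foveal and peripheral processing, where unfinished peripheral target selection prolongs the fixation, can be sketched like this:

```python
import random

def simulate_fixation(rng, foveal_difficulty=1.0, peripheral_difficulty=1.0):
    """Toy fixation-duration model (ms); parameters are hypothetical."""
    # Stimulus-independent random timer
    timer = rng.gauss(250, 30)
    # Foveal and peripheral processing run in parallel and independently;
    # harder input takes longer on average
    foveal_done = rng.gauss(120 * foveal_difficulty, 20)
    peripheral_done = rng.gauss(180 * peripheral_difficulty, 30)
    # The saccade is executed when the timer expires, but ongoing
    # processing can inhibit it and thereby prolong the fixation
    return max(timer, foveal_done, peripheral_done)

rng = random.Random(1)
easy = [simulate_fixation(rng, 1.0, 1.0) for _ in range(2000)]
hard = [simulate_fixation(rng, 1.0, 2.0) for _ in range(2000)]
mean_easy = sum(easy) / len(easy)
mean_hard = sum(hard) / len(hard)
```

In this toy, doubling peripheral difficulty lengthens mean fixation duration, while mild difficulty changes leave the timer-dominated durations nearly unchanged, mirroring the finding that durations are not always prolonged by harder input.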
Optical biosensors based on porous silicon were fabricated by metal-assisted chemical etching. Thereby, double-layered porous silicon structures were obtained, consisting of porous pillars with large pores on top of a porous silicon layer with smaller pores. These structures showed a sensing performance similar to that of electrochemically produced porous silicon interferometric sensors.
Injection of fluids into deep saline aquifers causes a pore pressure increase in the storage formation, and thus displacement of resident brine. Via hydraulically conductive faults, brine may migrate upwards into shallower aquifers and lead to unwanted salinisation of potable groundwater resources. In the present study, we investigated different scenarios for a potential storage site in the Northeast German Basin using a three-dimensional (3-D) regional-scale model that includes four major fault zones. The focus was on assessing the impact of fault length and the effect of a secondary reservoir above the storage formation, as well as model boundary conditions and initial salinity distribution, on the potential salinisation of shallow groundwater resources. We employed numerical simulations of the injection of brine as a representative fluid.
Our simulation results demonstrate that the lateral model boundary settings and the effective fault damage zone volume have the greatest influence on pressure build-up and development within the reservoir, and thus on the intensity and duration of fluid flow through the faults. Higher vertical pressure gradients for short fault segments or a small effective fault damage zone volume result in the highest salinisation potential, due to a larger vertical fault height affected by fluid displacement. Consequently, whether a salinity gradient exists, or whether the saltwater-freshwater interface lies below the fluid displacement depth in the faults, has a strong impact on the degree of shallow aquifer salinisation. A small effective fault damage zone volume or low fault permeability further extends the duration of fluid flow, which can persist for several tens to hundreds of years if the reservoir is laterally confined. Laterally open reservoir boundaries, large effective fault damage zone volumes and intermediate reservoirs significantly reduce vertical brine migration and the potential for freshwater salinisation, because the depth of origin of the displaced brine then lies at most a few tens of metres below the shallow aquifer.
The present study demonstrates that the existence of hydraulically conductive faults is not necessarily an exclusion criterion for potential injection sites, because salinisation of shallower aquifers strongly depends on initial salinity distribution, location of hydraulically conductive faults and their effective damage zone volumes as well as geological boundary conditions.
We present results on ultrafast gas electron diffraction (UGED) experiments with femtosecond resolution using the MeV electron gun at SLAC National Accelerator Laboratory. UGED is a promising method to investigate molecular dynamics in the gas phase because electron pulses can probe the structure with a high spatial resolution. Until recently, however, it was not possible for UGED to reach the relevant timescale for the motion of the nuclei during a molecular reaction. Using MeV electron pulses has allowed us to overcome the main challenges in reaching femtosecond resolution, namely delivering short electron pulses on a gas target, overcoming the effect of velocity mismatch between pump laser pulses and the probe electron pulses, and maintaining a low timing jitter. At electron kinetic energies above 3 MeV, the velocity mismatch between laser and electron pulses becomes negligible. The relativistic electrons are also less susceptible to temporal broadening due to the Coulomb force. One of the challenges of diffraction with relativistic electrons is that the small de Broglie wavelength results in very small diffraction angles. In this paper we describe the new setup and its characterization, including capturing static diffraction patterns of molecules in the gas phase, finding time-zero with sub-picosecond accuracy and first time-resolved diffraction experiments. The new device can achieve a temporal resolution of 100 fs root-mean-square, and sub-angstrom spatial resolution. The collimation of the beam is sufficient to measure the diffraction pattern, and the transverse coherence is on the order of 2 nm. Currently, the temporal resolution is limited both by the pulse duration of the electron pulse on target and by the timing jitter, while the spatial resolution is limited by the average electron beam current and the signal-to-noise ratio of the detection system. We also discuss plans for improving both the temporal resolution and the spatial resolution.
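The very small diffraction angles mentioned above follow directly from the relativistic de Broglie wavelength. A quick back-of-the-envelope check using only standard formulas and CODATA constants:

```python
import math

# CODATA constants
h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
me_c2_MeV = 0.51099895    # electron rest energy, MeV
MeV_to_J = 1.602176634e-13

def de_broglie_wavelength(kinetic_MeV):
    """Relativistic de Broglie wavelength (m) of an electron with the
    given kinetic energy, via (pc)^2 = E_total^2 - (m c^2)^2."""
    E_total = kinetic_MeV + me_c2_MeV
    pc_MeV = math.sqrt(E_total**2 - me_c2_MeV**2)
    p = pc_MeV * MeV_to_J / c   # momentum in kg*m/s
    return h / p

lam = de_broglie_wavelength(3.0)  # ~3.6e-13 m for a 3 MeV electron
```

For 3 MeV electrons the wavelength is roughly 0.36 pm, about an order of magnitude shorter than for the ~100 keV electrons of conventional gas electron diffraction, which is why the Bragg condition puts the scattered intensity at correspondingly smaller angles.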
Filling the Silence
(2016)
In a self-paced reading experiment, we investigated the processing of sluicing constructions (“sluices”) whose antecedent contained a known garden-path structure in German. Results showed decreased processing times for sluices with garden-path antecedents as well as a disadvantage for antecedents with non-canonical word order downstream from the ellipsis site. A post-hoc analysis showed the garden-path advantage also to be present in the region right before the ellipsis site. While no existing account of ellipsis processing explicitly predicted the results, we argue that they are best captured by combining a local antecedent mismatch effect with memory trace reactivation through reanalysis.
Many previous studies have shown that the turbulent mixing layer under periodic forcing tends to adopt a lock-on state, where the major portion of the fluctuations in the flow is synchronized at the forcing frequency. The goal of this experimental study is to apply closed-loop control in order to provoke the lock-on state, using information from the flow itself. We aim to determine the range of frequencies for which the closed-loop control can establish lock-on, and which mechanisms contribute to the selection of a feedback frequency. In order to expand the solution space for optimal closed-loop control laws, we use the genetic programming control (GPC) framework. The best closed-loop control laws obtained by GPC are analysed along with the associated physical mechanisms in the mixing layer flow. The resulting closed-loop control significantly outperforms open-loop forcing in terms of robustness to changes in the free-stream velocities. In addition, the selection of feedback frequencies is not locked to the most amplified local mode, but rather spans a range of frequencies around it.
Fruits exhibit a vast array of different 3D shapes, from simple spheres and cylinders to more complex curved forms; however, the mechanism by which growth is oriented and coordinated to generate this diversity of forms is unclear. Here, we compare the growth patterns and orientations for two very different fruit shapes in the Brassicaceae: the heart-shaped Capsella rubella silicle and the near-cylindrical Arabidopsis thaliana silique. We show, through a combination of clonal and morphological analyses, that the different shapes involve different patterns of anisotropic growth during three phases. These experimental data can be accounted for by a tissue level model in which specified growth rates vary in space and time and are oriented by a proximodistal polarity field. The resulting tissue conflicts lead to deformation of the tissue as it grows. The model allows us to identify tissue-specific and temporally specific activities required to obtain the individual shapes. One such activity may be provided by the valve-identity gene FRUITFULL, which we show through comparative mutant analysis to modulate fruit shape during post-fertilisation growth of both species. Simple modulations of the model presented here can also broadly account for the variety of shapes in other Brassicaceae species, thus providing a simplified framework for fruit development and shape diversity.
Since the 1960s, Germany has been host to a large Turkish immigrant community. While migrant communities often shift to the majority language over time, Turkish is a very vital minority language in Germany, and bilingualism in this community is an obvious fact that has been the subject of several studies. The main focus is usually on German, the second language (L2) of these speakers (e.g. Hinnenkamp 2000, Keim 2001, Auer 2003, Cindark & Aslan 2004, Kern & Selting 2006, Selting 2009, Kern 2013). Research on the Turkish spoken by Turkish bilinguals has also attracted attention, although to a lesser extent, mainly in the framework of so-called heritage language research (cf. Polinsky 2011). Bilingual Turkish has been investigated from the perspective of code-switching and code-mixing (e.g. Kallmeyer & Keim 2003, Keim 2003, 2004, Keim & Cindark 2003, Hinnenkamp 2003, 2005, 2008, Dirim & Auer 2004), and with respect to changes in the morphological, syntactic and orthographic systems (e.g. Rehbein & Karakoç 2004, Schroeder 2007). Attention to changes in the prosodic system of bilingual Turkish, on the other hand, has so far been exceptional (Queen 2001, 2006).
With the present dissertation, I provide a study of contact-induced linguistic changes at the prosodic level in the Turkish heritage language of adult early German-Turkish bilinguals. It describes structural changes in the L1 Turkish intonation of yes/no questions in a representative sample of bilingual Turkish speakers. All speakers share a similar sociolinguistic background: all acquired Turkish as their first language from their families, and the majority language German as an early L2, at the latest in kindergarten by the age of 3.
A study of changes in bilingual varieties requires a prior cross-linguistic comparison of the two languages involved in the contact situation, in order to distinguish contact-induced language change from language-internal development.
While German is one of the best-investigated languages with respect to its prosodic system, research on Turkish intonational phonology is less advanced. For this reason, the analysis of bilingual Turkish elicited for the present dissertation is preceded by an experimental study on monolingual Turkish: an additional experiment with 11 monolingual university students of non-linguistic subjects was conducted at Ege University in Izmir in 2013. On these grounds, the present dissertation also contributes new insights into Turkish intonational phonology and typology. The results of the contrastive analysis of German and Turkish show that the prosodic systems of the two languages differ with respect to the use of prosodic cues in the marking of information structure (IS) and sentence type. Whereas German distinguishes explicit prosodic categories for focus and givenness, Turkish uses only one prosodic cue to mark IS. Furthermore, it is shown that Turkish, in contrast to German, does not use a prosodic correlate to mark yes/no questions but a morphological question marker.
In a further step, the methodology of Xu (1999) for eliciting in-situ focus on different constituents was adapted in the experimental study to elicit Turkish yes/no questions differing in their information structure in a bilingual context. A data set of 400 Turkish yes/no questions from 20 bilingual Turkish speakers was compiled at the Zentrum für Allgemeine Sprachwissenschaft (ZAS) in Berlin and at the University of Potsdam in 2013. The prosodic structure of the yes/no questions was analyzed phonologically and phonetically with respect to changes in the f0 contour according to IS modifications and the use of prosodic cues to indicate sentence type.
The results of the analyses contribute surprising observations to the research on bilingual prosody. Studies on bilingual language change and language acquisition have repeatedly shown that prosodic features considered marked, by virtue of their lower and implicational use across and within languages, cause difficulties in language contact and second language acquisition. In particular, they are not expected to pass from one language to another through language contact. However, this structurally determined expectation about language development is refuted by the results of the present study. Functionally relevant prosody, such as the cues indicating IS, is transferred from the German L2 to the Turkish L1 of German-Turkish bilingual speakers. This astonishing observation provides the basis for an approach to language change centered on functional motivation. Based on Matras' (2007, 2010) assumption of functionality in language change, Paradis' (1993, 2004, 2008) approach of Language Activation and the Subsystem Theory, and the Theory of Language as a Dynamic System (Herdina & Jessner 2002), it is shown that prosodic features which are absent in one of the languages of a bilingual speech community are transferred from the one language to the other when they contribute to the contextualization of a pragmatic concept that is not expressed by other linguistic means in the target language. In this respect, language interaction rests on language activation and inhibition mechanisms dealing with differences in implicit pragmatic knowledge between bilinguals and monolinguals. The motivator for this process of language change is the contextualization of the message itself, not the surface structure of the respective feature. It is shown that structural considerations may influence language change, but that bilingual language change neither depends on structural restrictions nor is caused by structure alone.
The conclusions drawn on the basis of these empirical findings can contribute in particular to a better understanding of the processes of bilingual language development, since the study combines methodologies and theoretical aspects from different linguistic subfields.
Lake Towuti is a tectonic basin surrounded by ultramafic rocks. Lateritic soils form through weathering and deliver abundant iron (oxy)hydroxides but very little sulfate to the lake and its sediment. To characterize the sediment biogeochemistry, we collected cores at three sites with increasing water depth and decreasing bottom-water oxygen concentrations. Microbial cell densities were highest at the shallow site, a feature we attribute to the availability of labile organic matter (OM) and the higher abundance of electron acceptors due to oxic bottom-water conditions. At the two other sites, OM degradation and reduction processes below the oxycline led to partial electron acceptor depletion. Genetic information preserved in the sediment as extracellular DNA (eDNA) provided information on aerobic and anaerobic heterotrophs related to Nitrospirae, Chloroflexi, and Thermoplasmatales. These taxa apparently played a significant role in the degradation of sinking OM. However, eDNA concentrations rapidly decreased with core depth. Despite very low sulfate concentrations, sulfate-reducing bacteria were present and viable in sediments at all three sites, as confirmed by measurement of potential sulfate reduction rates. Microbial community fingerprinting supported the presence of taxa related to Deltaproteobacteria and Firmicutes with demonstrated capacity for iron and sulfate reduction. Concomitantly, sequences of Ruminococcaceae, Clostridiales, and Methanomicrobiales indicated potential for fermentative hydrogen and methane production. These first insights into ferruginous sediments showed that microbial populations perform successive metabolisms related to sulfur, iron, and methane. In theory, iron reduction could reoxidize reduced sulfur compounds and desorb OM from iron minerals to allow remineralization to methane.
Overall, we found that biogeochemical processes in the sediments can be linked to redox differences in the bottom waters of the three sites, such as oxidant concentrations and the supply of labile OM. At the scale of the lacustrine record, our geomicrobiological study should provide a means to link the extant subsurface biosphere to past environments.
We analyzed the population genetic pattern of 12 fragmented Geropogon hybridus populations at the species' ecological range edge in Israel along a steep precipitation gradient. In the investigation area (45 × 20 km²), annual mean precipitation decreases rapidly from 450 mm in the north (Mediterranean-influenced climate zone) to 300 mm in the south (semiarid climate zone) without significant temperature changes. Our analysis (91 individuals, 12 populations, 123 polymorphic loci) revealed strongly structured populations (AMOVA Φ_ST = 0.35; P < 0.001); however, differentiation did not change gradually toward the range edge. Isolation by distance (IBD) was significant (Mantel test r = 0.81; P = 0.001) and derived from sharply divided groups between the northernmost populations and the others further south, due to dispersal or environmental limitations. This was corroborated by the PCA and STRUCTURE analyses. IBD and isolation by environment (IBE) were significant despite the micro-geographic scale of the study area, which indicates that reduced precipitation toward the range edge leads to population genetic divergence. However, this pattern diminished when the hypothesized gene flow barrier was taken into account. Applying the spatial analysis method revealed 11 outlier loci that were correlated with annual precipitation and, moreover, were indicative of putative precipitation-related adaptation (BAYESCAN, MCHEZA). The results suggest that even on micro-geographic scales, environmental factors play prominent roles in population divergence, genetic drift, and directional selection. The pattern is typical of strong environmental gradients, e.g., at species range edges and ecological limits, and where gene flow barriers and the mosaic-like structure of fragmented habitats hamper dispersal.
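The Mantel statistic reported above correlates two distance matrices and assesses significance by permuting population labels. A minimal, self-contained Python sketch of this idea is given below; it is a generic permutation Mantel test, not the authors' actual pipeline, and the matrix contents and permutation count are illustrative:

```python
import random
from math import sqrt

def pearson(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def mantel(D1, D2, permutations=999, seed=0):
    """Permutation Mantel test between two symmetric distance matrices.

    Correlates the upper triangles, then permutes the row/column order of
    D2 to build a null distribution for the correlation r.
    """
    n = len(D1)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    u = [D1[i][j] for i, j in idx]
    v = [D2[i][j] for i, j in idx]
    r_obs = pearson(u, v)
    rng = random.Random(seed)
    hits = 0
    order = list(range(n))
    for _ in range(permutations):
        rng.shuffle(order)
        v_perm = [D2[order[i]][order[j]] for i, j in idx]
        if pearson(u, v_perm) >= r_obs:
            hits += 1
    # add-one correction so p is never exactly zero
    p = (hits + 1) / (permutations + 1)
    return r_obs, p
```

Permuting whole rows and columns together (rather than individual entries) is what preserves the dependence structure within each matrix and makes the test valid for distance data.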
The gravitational field of a laser pulse of finite lifetime is investigated in the framework of linearized gravity. Although the effects are very small, they may be of fundamental physical interest. It is shown that the gravitational field of a linearly polarized light pulse is modulated as the norm of the corresponding electric field strength, while no modulations arise for circular polarization. In general, the gravitational field is independent of the polarization direction. It is shown that all physical effects are confined to spherical shells expanding with the speed of light, and that these shells are imprints of the spacetime events representing emission and absorption of the pulse. Nearby test particles at rest are attracted towards the pulse trajectory by the gravitational field due to the emission of the pulse, and they are repelled from the pulse trajectory by the gravitational field due to its absorption. Examples are given for the size of the attractive effect. It is recovered that massless test particles do not experience any physical effect if they are co-propagating with the pulse, and that the acceleration of massless test particles counter-propagating with respect to the pulse is four times stronger than for massive particles at rest. The similarities between the gravitational effect of a laser pulse and Newtonian gravity in two dimensions are pointed out. The spacetime curvature close to the pulse is compared to that induced by gravitational waves from astronomical sources.
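For orientation, the linearized-gravity setup referred to here is the standard textbook one (not specific to this paper): the metric is split into the Minkowski background plus a small perturbation, and in harmonic gauge the trace-reversed perturbation obeys a wave equation sourced by the pulse's energy-momentum tensor:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h,
\qquad \Box \bar{h}_{\mu\nu} = -16\pi G\, T_{\mu\nu} \quad (c = 1),
\qquad
\bar{h}_{\mu\nu}(t,\mathbf{x}) = 4G \int
\frac{T_{\mu\nu}\!\left(t - |\mathbf{x}-\mathbf{x}'|,\ \mathbf{x}'\right)}
     {|\mathbf{x}-\mathbf{x}'|}\,\mathrm{d}^3x' .
```

The retarded integral makes explicit why all physical effects are confined to shells expanding at the speed of light from the emission and absorption events of the pulse.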
We perform magnetohydrodynamic (MHD) simulations of local box models of the turbulent interstellar medium (ISM) and analyse the amplification and saturation of mean magnetic fields with methods of mean-field dynamo theory. It is shown that the saturation of mean fields can be partially described by prolonged diffusion time scales in the presence of dynamically significant magnetic fields. However, the outward wind also plays an essential role in saturation in the case of higher supernova (SN) rates. Algebraic expressions for the back-reaction of the magnetic field onto the turbulent transport coefficients are derived, which allow a complete description of the nonlinear dynamo. We also present the effects of dynamically significant mean fields on the ISM configuration and pressure distribution. We further add a cosmic ray component to the simulations and investigate the kinematic growth of mean fields from a dynamo perspective.
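The mean-field framework invoked here is conventionally summarized by the averaged induction equation (standard form; the specific closure coefficients are what simulations of this kind constrain):

```latex
\frac{\partial \overline{\mathbf{B}}}{\partial t}
= \nabla \times \left( \overline{\mathbf{u}} \times \overline{\mathbf{B}}
+ \alpha\, \overline{\mathbf{B}}
- \eta_t\, \nabla \times \overline{\mathbf{B}} \right),
```

where α captures the helical part of the turbulent electromotive force and η_t the turbulent diffusivity; the quenching of α and η_t by a dynamically significant mean field is what the algebraic back-reaction expressions describe.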
Background: Initial metabolomics studies have indicated that metabolic fingerprints from accessible tissues might be useful to better understand the etiological links between metabolism and cancer. However, there is still a lack of prospective metabolomics studies on pre-diagnostic metabolic alterations and cancer risk.
Methods: Associations between pre-diagnostic levels of 120 circulating metabolites (acylcarnitines, amino acids, biogenic amines, phosphatidylcholines, sphingolipids, and hexoses) and the risks of breast, prostate, and colorectal cancer were evaluated by Cox regression analyses using data from a prospective case-cohort study including 835 incident cancer cases.
Results: The median follow-up duration was 8.3 years among non-cases and 6.5 years among incident cases of cancer. Higher levels of lysophosphatidylcholines (lysoPCs), and especially lysoPC a C18:0, were consistently related to lower risks of breast, prostate, and colorectal cancer, independent of background factors. In contrast, higher levels of the phosphatidylcholine PC ae C30:0 were associated with increased cancer risk. There was no heterogeneity in the observed associations by lag time between blood draw and cancer diagnosis.
Conclusion: Changes in blood lipid composition precede the diagnosis of common malignancies by several years. Considering the consistency of the present results across three cancer types, the observed alterations point to a global metabolic shift in phosphatidylcholine metabolism that may drive tumorigenesis.
Understanding the role of natural climate variability under the pressure of human-induced changes to climate and landscapes is crucial to improve future projections and adaptation strategies. This doctoral thesis aims to reconstruct Holocene climate and environmental changes in NE Germany based on annually laminated lake sediments. The work contributes to the ICLEA project (Integrated CLimate and Landscape Evolution Analyses). ICLEA intends to compare multiple high-resolution proxy records with independent chronologies from the N central European lowlands in order to disentangle the impact of climate change and human land use on landscape development during the Lateglacial and Holocene. In this respect, two study sites in NE Germany are investigated in this doctoral project, Lake Tiefer See and palaeolake Wukenfurche. While both sediment records are studied with a combination of high-resolution sediment microfacies and geochemical analyses (e.g. µ-XRF, carbon geochemistry and stable isotopes), detailed proxy understanding mainly focuses on the continuous 7.7 m long sediment core from Lake Tiefer See covering the last ~6000 years. Three main objectives are pursued at Lake Tiefer See: (1) to establish a reliable and independent chronology, (2) to establish microfacies and geochemical proxies as indicators for climate and environmental changes, and (3) to trace the effects of climate variability and human activity on sediment deposition.
Addressing the first aim, a reliable chronology of Lake Tiefer See was compiled using a multiple-dating concept. Varve counting and tephra findings form the chronological framework for the last ~6000 years. The good agreement with independent radiocarbon dates of terrestrial plant remains verifies the robustness of the age model. The resulting reliable and independent chronology of Lake Tiefer See and, additionally, the identification of nine tephras provide a valuable basis for detailed comparison and synchronization of the Lake Tiefer See data set with other climate records. The sediment profile of Lake Tiefer See exhibits striking alternations between well-varved and non-varved sediment intervals. The combination of microfacies, geochemical and microfossil (i.e. Cladocera and diatom) analyses indicates that these changes in varve preservation are caused by variations of lake circulation in Lake Tiefer See. An exception is the well-varved sediment deposited since AD 1924, which is mainly influenced by human-induced lake eutrophication. Well-varved intervals before the 20th century are considered to reflect phases of reduced lake circulation and, consequently, stronger anoxic conditions. In contrast, non-varved intervals indicate increased lake circulation in Lake Tiefer See, leading to more oxygenated conditions at the lake bottom. Furthermore, lake circulation influences not only sediment deposition but also geochemical processes in the lake. For example, the proxy meaning of δ13C_OM varies over time in response to changes in the oxygen regime of the lake hypolimnion. During reduced lake circulation and stronger anoxic conditions, δ13C_OM is influenced by microbial carbon cycling. In contrast, organic matter degradation controls δ13C_OM during phases of intensified lake circulation and more oxygenated conditions. The varve preservation indicates an increasing trend of lake circulation at Lake Tiefer See after ~4000 cal a BP.
This trend is superimposed by decadal- to centennial-scale variability of lake circulation intensity. Comparison with other records in Central Europe suggests that the long-term trend is probably related to gradual changes in Northern Hemisphere orbital forcing, which induced colder and windier conditions in Central Europe and, therefore, reinforced lake circulation. Decadal- to centennial-scale periods of increased lake circulation coincide with settlement phases at Lake Tiefer See, as inferred from pollen data of the same sediment record. Deforestation reduced the wind shelter of the lake, which probably increased the sensitivity of lake circulation to wind stress. However, results of this thesis also suggest that several of these phases of increased lake circulation were additionally reinforced by climate changes. A first indication is provided by the comparison with the Baltic Sea record, which shows striking correspondence between major non-varved intervals at Lake Tiefer See and bioturbated sediments in the Baltic Sea. Furthermore, a preliminary comparison with the ICLEA study site Lake Czechowskie (N central Poland) shows a coincidence of at least three phases of increased lake circulation in both lakes, which concur with periods of known climate changes (the 2.8 ka event, the 'Migration Period' and the 'Little Ice Age'). These results suggest an additional over-regional climate forcing also on short-term increases of lake circulation in Lake Tiefer See.
In summary, the results of this thesis suggest that lake circulation at Lake Tiefer See is driven by a combination of long-term and short-term climate changes as well as anthropogenic deforestation phases. Furthermore, lake circulation drives geochemical cycles in the lake, affecting the meaning of proxy data. Therefore, the work presented here expands the knowledge of climate and environmental variability in NE Germany. Moreover, the integration of the Lake Tiefer See multi-proxy record in a regional comparison with another ICLEA site, Lake Czechowskie, made it possible to better decipher climate changes and human impact on the lake system. These first results suggest great potential for further detailed regional comparisons to better understand palaeoclimate dynamics in N central Europe.
The extent of gene flow during the range expansion of non-native species influences the amount of genetic diversity retained in expanding populations. Here, we analyse the population genetic structure of the raccoon dog (Nyctereutes procyonoides) in north-eastern and central Europe. This invasive species is of management concern because it is highly susceptible to fox rabies and an important secondary host of the virus. We hypothesized that the large number of introduced animals and the species' dispersal capabilities led to high population connectivity and maintenance of genetic diversity throughout the invaded range. We genotyped 332 tissue samples from seven European countries using 16 microsatellite loci. Different algorithms identified three genetic clusters corresponding to Finland, Denmark and a large 'central' population that reached from introduction areas in western Russia to northern Germany. Cluster assignments provided evidence of long-distance dispersal. The results of an Approximate Bayesian Computation analysis supported a scenario of equal effective population sizes among different pre-defined populations in the large central cluster. Our results are in line with strong gene flow and secondary admixture between neighbouring demes leading to reduced genetic structuring, probably a result of its fairly rapid population expansion after introduction. The results presented here are remarkable in the sense that we identified a homogenous genetic cluster inhabiting an area stretching over more than 1500 km. They are also relevant for disease management, as in the event of a significant rabies outbreak, there is a great risk of a rapid virus spread among raccoon dog populations.
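The Approximate Bayesian Computation step mentioned above can be illustrated schematically with a rejection sampler: draw parameters from a prior, simulate data, and keep draws whose summary statistic falls close to the observed one. The following Python sketch is generic; the toy model recovers a Gaussian mean, not the demographic scenarios actually compared, and all names are illustrative:

```python
import random
from statistics import mean

def abc_rejection(observed_stat, simulate, prior_sample, eps, n_draws=10000, seed=0):
    """ABC rejection: accept prior draws whose simulated summary statistic
    lands within eps of the observed statistic."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy usage: infer the mean of a Gaussian from a sample-mean summary statistic.
prior = lambda rng: rng.uniform(0.0, 10.0)
model = lambda theta, rng: mean(rng.gauss(theta, 1.0) for _ in range(50))
posterior = abc_rejection(4.0, model, prior, eps=0.3)
```

The accepted draws approximate the posterior; shrinking `eps` trades acceptance rate for approximation quality, which is why real analyses compare competing scenarios with far richer summary statistics.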
The aim of the present study was to test the functional relevance of the spatial concepts UP and DOWN for words that use these concepts either literally (space) or metaphorically (time, valence). Functional relevance would imply a symmetrical relationship between the spatial concepts and words related to them: processing words activates the related spatial concepts on the one hand, and activating the concepts eases the retrieval of a related word on the other. For the latter, participants' body position was manipulated, either upright or tilted head-down, to activate the related spatial concept. Afterwards, in a within-subject design, participants produced previously memorized words of the concepts space, time and valence according to the pace of a metronome. All words were related either to the spatial concept UP or DOWN. The results, including Bayesian analyses, show (1) a significant interaction between body position and words using the concepts UP and DOWN literally, (2) a marginally significant interaction between body position and temporal words and (3) no effect between body position and valence words. However, post-hoc analyses suggest no difference between experiments. The authors therefore conclude that integrating sensorimotor experiences is indeed of functional relevance for all three concepts of space, time and valence; however, the strength of this functional relevance depends on how closely words are linked to mental concepts representing vertical space.
Processes involved in late bilinguals' production of morphologically complex words were studied using an event-related brain potentials (ERP) paradigm in which EEGs were recorded during participants' silent productions of English past- and present-tense forms. Twenty-three advanced second language speakers of English (first language [L1] German) were compared to a control group of 19 L1 English speakers from an earlier study. We found a frontocentral negativity for regular relative to irregular past-tense forms (e.g., asked vs. held) during (silent) production, and no difference for the present-tense condition (e.g., asks vs. holds), replicating the ERP effect obtained for the L1 group. This ERP effect suggests that combinatorial processing is involved in producing regular past-tense forms, in both late bilinguals and L1 speakers. We also suggest that this paradigm is a useful tool for future studies of online language production.
In this paper, we show experimentally that inside a microfluidic device, where the reactants are segregated, the reaction rate of an autocatalytic clock reaction is accelerated in comparison to the case where all the reactants are well mixed. We also find that, when mixing is enhanced inside the microfluidic device by introducing obstacles into the flow, the clock reaction becomes slower in comparison to the device where mixing is less efficient. Based on numerical simulations, we show that this effect can be explained by the interplay of nonlinear reaction kinetics (cubic autocatalysis) and differential diffusion, where the autocatalytic species diffuses slower than the substrate.
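The interplay described above can be sketched with a one-dimensional finite-difference model of cubic autocatalysis (A + 2B → 3B) in which the autocatalyst B diffuses more slowly than the substrate A. This is a schematic toy model, not the paper's simulation setup; the grid size, rate constant, and diffusivities are arbitrary:

```python
def simulate_front(n=200, steps=4000, dt=0.02, dx=1.0, Da=1.0, Db=0.1, k=1.0):
    """Explicit Euler sketch of cubic autocatalysis with differential diffusion.

    A substrate field a (fast diffuser) is consumed by an autocatalyst field b
    (slow diffuser) at rate k*a*b**2; no-flux boundaries on both ends.
    """
    a = [1.0] * n
    b = [0.0] * n
    for i in range(5):          # seed the autocatalyst at the left boundary
        b[i] = 1.0
    for _ in range(steps):
        an, bn = a[:], b[:]
        for i in range(n):
            il, ir = max(i - 1, 0), min(i + 1, n - 1)  # reflective boundaries
            lap_a = a[il] - 2 * a[i] + a[ir]
            lap_b = b[il] - 2 * b[i] + b[ir]
            r = k * a[i] * b[i] * b[i]                 # cubic reaction term
            an[i] = a[i] + dt * (Da * lap_a / dx**2 - r)
            bn[i] = b[i] + dt * (Db * lap_b / dx**2 + r)
        a, b = an, bn
    return a, b
```

A reaction front forms and invades the substrate; because the reaction term is cubic in the local concentrations, how fast the two species interdiffuse at the front controls the overall conversion rate, which is the mechanism the paper's simulations probe.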
The current study investigates to what extent masked morphological priming is modulated by language-particular properties, specifically by a language's writing system. We present results from two masked priming experiments investigating the processing of complex Japanese words written in less common (moraic) scripts. In Experiment 1, participants performed lexical decisions on target verbs; these were preceded by primes which were either (i) a past-tense form of the same verb, (ii) a stem-related form with the epenthetic vowel -i, (iii) a semantically related form, or (iv) a phonologically related form. Significant priming effects were obtained for prime types (i), (ii), and (iii), but not for (iv). This pattern of results differs from previous findings on languages with alphabetic scripts, which found reliable masked priming effects for morphologically related prime/target pairs of type (i), but not for non-affixal and semantically related primes of types (ii) and (iii). In Experiment 2, we measured priming effects for prime/target pairs which are neither morphologically, semantically, phonologically, nor (as presented in their moraic scripts) orthographically related, but which, in their commonly written form, share the same kanji, logograms adopted from Chinese. The results showed a significant priming effect, with faster lexical-decision times for kanji-related prime/target pairs relative to unrelated ones. We conclude that affix-stripping is insufficient to account for masked morphological priming effects across languages, and that language-particular properties (in the case of Japanese, the writing system) affect the processing of (morphologically) complex words.
In this contribution, we study using first principles the co-adsorption and catalytic behaviors of CO and O2 on a single gold atom deposited at defective magnesium oxide surfaces. Using cluster models and point charge embedding within a density functional theory framework, we simulate the CO oxidation reaction for Au1 on differently charged oxygen vacancies of MgO(001) to rationalize its experimentally observed lack of catalytic activity. Our results show that: (1) co-adsorption is weakly supported at F0 and F2+ defects but not at F1+ sites, (2) electron redistribution from the F0 vacancy via the Au1 cluster to the adsorbed molecular oxygen weakens the O2 bond, as required for a sustainable catalytic cycle, (3) a metastable carbonate intermediate can form on defects of the F0 type, (4) only a small activation barrier exists for the highly favorable dissociation of CO2 from F0, and (5) the moderate adsorption energy of the gold atom on the F0 defect cannot prevent insertion of molecular oxygen inside the defect. Due to the lack of protection of the color centers, the surface becomes invariably repaired by the surrounding oxygen and the catalytic cycle is irreversibly broken in the first oxidation step.
Skipping a grade, one specific form of acceleration, is an intervention used for gifted students. Quantitative research has shown acceleration to be a highly successful intervention with regard to academic achievement, but less is known about the social-emotional outcomes of grade-skipping. In the present study, the authors used the grounded theory approach to examine the experiences of seven gifted students aged 8 to 16 years who had skipped a grade. The interviewees perceived their feeling of being in the wrong place before the grade-skipping as strongly influenced by their teachers, who generally did not respond adequately to their needs. We observed a close interrelationship between the gifted students' intellectual fit and their social situation in class. Findings showed that grade-skipping improved the situation in school intellectually as well as socially in most cases, but further interventions, such as a specialized and demanding class or subject-specific acceleration, soon had to be added to provide sufficiently challenging learning opportunities.
HPI Future SOC Lab
(2016)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents results of research projects executed in 2016. Selected projects presented their results on April 5th and November 3rd, 2016 at the Future SOC Lab Day events.
The goal of the presented work is to explore the interaction between gold nanorods (GNRs) and hyper-sound waves. For the generation of the hyper-sound, I used azobenzene-containing polymer transducers. Multilayer polymer structures with well-defined thicknesses and smooth interfaces were built via layer-by-layer deposition. For the transducer films, anionic polyelectrolytes with azobenzene side groups (PAzo) were alternated with the cationic polymer PAH. PSS/PAH multilayers, which do not absorb in the visible light range, were built as spacer layers. The properties of the PAzo/PAH film as a transducer are carefully characterized by static and transient optical spectroscopy. The optical and mechanical properties of the transducer are studied on the picosecond time scale. In particular, the relative change of the refractive index of the photo-excited and expanded PAH/PAzo is Δn/n = −2.6·10⁻⁴. The generated strain is calibrated by ultrafast X-ray diffraction on a mica substrate into which the hyper-sound is transduced. By simulating the X-ray data with a linear-chain model, the strain in the transducer under excitation is derived to be Δd/d ≈ 5·10⁻⁴.
In addition to investigating the properties of the transducer itself, I performed a series of experiments to study the penetration of the generated strain into various adjacent materials. By depositing the PAzo/PAH film onto a PAH/PSS structure with gold nanorods incorporated in it, I showed that nanoscale impurities can be detected via the scattering of hyper-sound.
Prior to the investigation of complex structures containing GNRs and the transducer, I performed several sets of experiments on GNRs deposited on a thin buffer of PSS/PAH. The static and transient response of the GNRs was investigated for different fluences of the pump beam and for different dielectric environments (GNRs covered by PSS/PAH).
A systematic analysis of sample architectures was performed in order to construct a sample with the desired effect of GNRs responding to the hyper-sound strain wave. The observed shift of a feature related to the longitudinal plasmon resonance in the transient reflection spectra is interpreted as the GNRs sensing the strain wave. We argue that the shift of the longitudinal plasmon resonance is caused by the viscoelastic deformation of the polymer around the nanoparticle. The deformation is induced by the out-of-plane difference in strain between the area directly under a particle and the area next to it. Simulations based on the linear-chain model support this assumption. Experimentally, it is confirmed by investigating the same structure with GNRs embedded in a PSS/PAH polymer layer.
The response of GNRs to the hyper-sound wave is also observed for the sample structure with GNRs embedded in PAzo/PAH films. In this case, the response of the GNRs is explained as being driven by the change of the refractive index of PAzo during strain propagation.
The lakes in the Kenyan Rift Valley offer the unique opportunity to study a wide range of hydrochemical environmental conditions, ranging from freshwater to highly saline and alkaline lakes. Because little is known about the hydro- and biogeochemical conditions in the underlying lake sediments, it was the aim of this study to extend the already existing data sets with data from porewater and biomarker analyses. Additionally, reduced sulphur compounds and sulphate reduction rates in the sediment were determined. The new data were used to examine the anthropogenic and microbial influence on the lakes' sediments as well as the influence of the water chemistry on the degradation and preservation of organic matter in the sediment column. The lakes discussed in this study are: Logipi, Eight (a small crater lake in the region of Kangirinyang), Baringo, Bogoria, Naivasha, Oloiden, and Sonachi.
The biomarker compositions were similar in all studied lake sediments; nevertheless, there were some differences between the saline and freshwater lakes. One of these differences is the occurrence of a molecule related to β-carotene, which was only found in the saline lakes. This molecule most likely originates from cyanobacteria, single-celled organisms that are commonly found in saline lakes. In the two freshwater lakes, stigmasterol, a sterol characteristic of freshwater algae, was found. In this study, it was shown that Lakes Bogoria and Sonachi can be used for environmental reconstructions with biomarkers, because the absence of oxygen at the lake bottoms slowed the degradation process. Other lakes, such as Lake Naivasha, cannot be used for such reconstructions because of the large anthropogenic influence. However, the biomarkers proved to be a useful tool to study those anthropogenic influences. Additionally, it was observed that horizons with a high concentration of elemental sulphur can be used as temporal markers. These horizons were deposited during times when the lake levels were very low. The sulphur was deposited by microorganisms capable of anoxygenic photosynthesis or sulphide oxidation.
The new sediment record from the deep Dead Sea basin (ICDP core 5017-1) provides a unique archive for hydroclimatic variability in the Levant. Here, we present high-resolution sediment facies analysis and elemental composition by micro-X-ray fluorescence (μXRF) scanning of core 5017-1 to trace lake levels and responses of the regional hydroclimatology during the time interval from ca. 117 to 75 ka, i.e. the transition between the last interglacial and the onset of the last glaciation. We distinguished six major micro-facies types and interpreted these and their alterations in the core in terms of relative lake level changes. The two end-member facies for highest and lowest lake levels are (a) up to several metres thick, greenish sediments of alternating aragonite and detrital marl laminae (aad) and (b) thick halite facies, respectively. Intermediate lake levels are characterised by detrital marls with varying amounts of aragonite, gypsum or halite, reflecting lower-amplitude, shorter-term variability. Two intervals of pronounced lake level drops occurred at ~110–108 ± 5 and ~93–87 ± 7 ka. They likely coincide with stadial conditions in the central Mediterranean (Melisey I and II pollen zones in Monticchio) and low global sea levels during Marine Isotope Stage (MIS) 5d and 5b. However, our data do not support the current hypothesis of an almost complete desiccation of the Dead Sea during the earlier of these lake level low stands, which is based on a recovered gravel layer. Based on new petrographic analyses, we propose that, although it was a low stand, this well-sorted gravel layer may be a vestige of a thick turbidite that was washed out during drilling rather than an in situ beach deposit. Two intervals of higher lake stands at ~108–93 ± 6 and ~87–75 ± 7 ka correspond to interstadial conditions in the central Mediterranean, i.e. pollen zones St. Germain I and II in Monticchio, and Greenland interstadials (GI) 24+23 and 21 in Greenland, as well as to sapropels S4 and S3 in the Mediterranean Sea. These apparent correlations suggest a close link of the climate in the Levant to North Atlantic and Mediterranean climates during the build-up of Northern Hemisphere ice sheets in the early last glacial period.
Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico) with different predominant electron acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product, or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies depending on the predominant terminal electron acceptor. The lowest per-cell potential rates characterize the zone of nitrate reduction and the highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to the predominant electron acceptor include (i) the increasing importance of fermentation in successively deeper biogeochemical zones and (ii) the adaptation of hydrogenases to successively higher concentrations of H2 in successively deeper zones.
This publications-based thesis summarizes my contribution to the scientific field of ultrafast structural dynamics. It consists of 16 publications about the generation, detection and coupling of coherent gigahertz longitudinal acoustic phonons, also called hypersonic waves. To generate such high-frequency phonons, femtosecond near-infrared laser pulses were used to heat nanostructures composed of perovskite oxides on an ultrashort timescale. As a consequence, the heated regions of such a nanostructure expand and a high-frequency acoustic phonon pulse is generated. To detect such coherent acoustic sound pulses I used ultrafast variants of optical Brillouin and x-ray scattering. Here, an incident optical or x-ray photon is scattered by the excited sound wave in the sample. The scattered light intensity measures the occupation of the phonon modes.
The central part of this work is the investigation of coherent high-amplitude phonon wave packets, which can behave nonlinearly, quite similar to shallow-water waves, which show a steepening of wave fronts or form solitons, well known as tsunamis. Due to the high amplitude of the acoustic wave packets in the solid, the acoustic properties can change significantly in the vicinity of the sound pulse. This may lead to a shape change of the pulse. By time-resolved Brillouin scattering, I observed that a single-cycle hypersound pulse shows a wavefront steepening. I excited hypersound pulses with strain amplitudes of up to 1%, which I calibrated by ultrafast x-ray diffraction (UXRD).
On the basis of this first experiment we developed the idea of the nonlinear mixing of narrowband phonon wave packets, which we call "nonlinear phononics" in analogy with nonlinear optics, a field that comprises a kaleidoscope of surprising optical phenomena showing up at very high electric fields, for instance second harmonic generation, four-wave mixing or solitons. In the case of excited coherent phonons, however, the wave packets usually have very broad spectra, which makes it nearly impossible to look at elementary scattering processes between phonons of a certain momentum and energy.
For that purpose I tested different techniques to excite narrowband phonon wave packets which mainly consist of phonons with a certain momentum and frequency. To this end, epitaxially grown metal films on a dielectric substrate were excited with a train of laser pulses. These excitation pulses drive the metal film to oscillate at the frequency given by the inverse of their temporal separation and send a hypersonic wave of this frequency into the substrate. The monochromaticity of these wave packets was proven by ultrafast optical Brillouin and x-ray scattering.
Using the excitation of such narrowband phonon wave packets I was able to observe the Second Harmonic Generation (SHG) of coherent phonons as a first example of nonlinear wave mixing of nanometric phonon wave packets.
Gonorrhoea, caused by Neisseria gonorrhoeae, is one of the most prevalent sexually transmitted diseases worldwide, with more than 100 million new infections per year. A lack of intense research over the last decades and increasing resistance to the recommended antibiotics call for a better understanding of gonococcal infection, fast diagnostics and therapeutic measures against N. gonorrhoeae. Therefore, the aim of this work was to identify novel immunogenic proteins as a first step towards addressing these unresolved problems. For the identification of immunogenic proteins, pHORF oligopeptide phage display libraries of the entire N. gonorrhoeae genome were constructed. Several immunogenic oligopeptides were identified using polyclonal rabbit antibodies against N. gonorrhoeae. The corresponding full-length proteins of the identified oligopeptides were expressed and their immunogenic character was verified by ELISA. The immunogenic character of six proteins was identified for the first time; an additional 13 proteins were verified as immunogenic proteins of N. gonorrhoeae.
Background
Dietary calcium (Ca) concentrations might affect regulatory pathways within Ca and vitamin D metabolism and consequently excretory mechanisms. Despite large variations in the Ca concentrations of feline diets, their physiological impact on Ca homeostasis has not been evaluated to date. In the present study, diets with increasing concentrations of dicalcium phosphate were offered to ten healthy adult cats (Ca/phosphorus (P): 6.23/6.02, 7.77/7.56, 15.0/12.7, 19.0/17.3, 22.2/19.9, 24.3/21.6 g/kg dry matter). Each feeding period was divided into a 10-day adaptation and an 8-day sampling period in order to collect urine and faeces. On the last day of each feeding period, blood samples were taken.
Results
Urinary Ca concentrations remained unaffected, but faecal Ca concentrations increased (P < 0.001) with increasing dietary Ca levels. No effect on whole and intact parathyroid hormone, fibroblast growth factor 23 or calcitriol concentrations in the blood of the cats was observed. However, the calcitriol precursors 25(OH)D2 and 25(OH)D3, which are considered the most useful indicators of the vitamin D status, decreased with higher dietary Ca levels (P = 0.013 and P = 0.033). Increasing dietary levels of dicalcium phosphate had an acidifying effect on urinary fasting pH (6.02) and postprandial pH (6.01) (P < 0.001), possibly mediated by an increase of urinary P concentrations (P < 0.001).
Conclusions
In conclusion, calcitriol precursors were linearly affected by increasing dietary Ca concentrations. The increase in faecal Ca excretion indicates that Ca homeostasis of cats is mainly regulated in the intestine and not by the kidneys. Long-term studies should investigate the physiological relevance of the acidifying effect observed when feeding diets high in Ca and P.
Ongoing climate change is known to cause an increase in the frequency and amplitude of local temperature and precipitation extremes in many regions of the Earth. While gradual changes in the climatological conditions have already been shown to strongly influence plant flowering dates, the question arises if and how extremes specifically impact the timing of this important phenological phase. Studying this question calls for the application of statistical methods that are tailored to the specific properties of event time series. Here, we employ event coincidence analysis, a novel statistical tool that allows assessing whether or not two types of events exhibit similar sequences of occurrences in order to systematically quantify simultaneities between meteorological extremes and the timing of the flowering of four shrub species across Germany. Our study confirms previous findings of experimental studies by highlighting the impact of early spring temperatures on the flowering of the investigated plants. However, previous studies solely based on correlation analysis do not allow deriving explicit estimates of the strength of such interdependencies without further assumptions, a gap that is closed by our analysis. In addition to direct impacts of extremely warm and cold spring temperatures, our analysis reveals statistically significant indications of an influence of temperature extremes in the autumn preceding the flowering.
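The core idea of event coincidence analysis can be sketched as a precursor coincidence rate: the fraction of events in one series (e.g. late-flowering years) that are preceded by an event in another series (e.g. spring temperature extremes) within a given time window. The sketch below is a simplified illustration with invented data; the study's actual definitions, window parameters and significance test are more involved:

```python
import numpy as np

def event_coincidence_rate(events_a, events_b, delta_t=3, tau=0):
    """Fraction of events in series B that are preceded by an event in
    series A within the time window [tau, tau + delta_t].
    events_a, events_b: arrays of event times (e.g. years)."""
    events_a = np.asarray(events_a)
    hits = 0
    for tb in events_b:
        lags = tb - events_a  # positive lag: A happened before B
        if np.any((lags >= tau) & (lags <= tau + delta_t)):
            hits += 1
    return hits / len(events_b)

# Illustrative, made-up data: years with a cold-spring extreme and years
# in which flowering of some species was unusually late.
cold_springs = [1995, 1998, 2003, 2010]
late_flowering = [1995, 1999, 2003, 2010, 2012]
print(event_coincidence_rate(cold_springs, late_flowering, delta_t=0))  # 0.6
```

In practice the observed rate is compared against the distribution expected for independent event series (e.g. via an analytical null model or surrogate data) to decide whether the coincidences are statistically significant.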
This dissertation examines the impact of the type of referring expression on the acquisition of word order variation in German-speaking preschoolers. A puzzle in the area of language acquisition concerns the production-comprehension asymmetry for non-canonical sentences like "Den Affen fängt die Kuh." (“The monkey, the cow catches.”): preschoolers usually have difficulties in accurately understanding non-canonical sentences until approximately age six (e.g., Dittmar et al., 2008), although they produce non-canonical sentences already around age three (e.g., Poeppel & Wexler, 1993; Weissenborn, 1990). This dissertation investigated the production and comprehension of non-canonical sentences to address this issue.
Three corpus analyses were conducted to investigate the impact of givenness, topic status and the type of referring expression on word order in the spontaneous speech of two- to four-year-olds and the child-directed speech produced by their mothers. The positioning of the direct object in ditransitive sentences was examined; in particular, sentences in which the direct object occurred before or after the indirect object in the sentence-medial positions and sentences in which it occurred in the sentence-initial position. The results reveal similar ordering patterns for children and adults. Word order variation was to a large extent predictable from the type of referring expression, especially with respect to the word order involving the sentence-medial positions. Information structure (e.g., topic status) had an additional impact only on word order variation that involved the sentence-initial position.
Two comprehension experiments were conducted to investigate whether the type of referring expression and topic status influence the comprehension of non-canonical transitive sentences in four- and five-year-olds. In the first experiment, the topic status of one of the sentential arguments was established via a preceding context sentence, and in the second experiment, the type of referring expression for the sentential arguments was additionally manipulated by using either a full lexical noun phrase (NP) or a personal pronoun. The results demonstrate that children’s comprehension of non-canonical sentences improved when the topic argument was realized as a personal pronoun, and this improvement was independent of the grammatical role of the arguments. However, children’s comprehension did not improve when the topic argument was realized as a lexical NP.
In sum, the results of both production and comprehension studies support the view that referring expressions may be seen as a sentence-level cue to word order and to the information status of the sentential arguments. The results highlight the important role of the type of referring expression on the acquisition of word order variation and indicate that the production-comprehension asymmetry is reduced when the type of referring expression is considered.
Aim: We aimed to identify patient characteristics and comorbidities that correlate with the initial exercise capacity of cardiac rehabilitation (CR) patients and to study the significance of patient characteristics, comorbidities and training methods for training achievements and final fitness of CR patients.
Methods: We studied 557 consecutive patients (51.7 ± 6.9 years; 87.9% men) admitted to a three-week in-patient CR. Cardiopulmonary exercise testing (CPX) was performed at discharge. Exercise capacity (watts) at entry, gain in training volume and final physical fitness (assessed by peak O2 utilization (VO2peak)) were analysed using analysis of covariance (ANCOVA) models.
Results: Mean training intensity was 90.7 ± 9.7% of maximum heart rate (81% continuous/19% interval training, 64% additional strength training). A total of 12.2 ± 2.6 bicycle exercise training sessions were performed. An increase of training volume by an average of more than 100% was achieved (difference end/beginning of CR: 784 ± 623 watts·min). In the multivariate model the gain in training volume was significantly associated with smoking, age and exercise capacity at entry of CR. The physical fitness level achieved at discharge from CR as assessed by VO2peak was mainly dependent on age, but also on various factors related to training, namely exercise capacity at entry, increase of training volume and training method.
Conclusion: CR patients were trained in line with current guidelines with moderate-to-high intensity and reached a considerable increase of their training volume. The physical fitness level achieved at discharge from CR depended on various factors associated with training, which supports the recommendation that CR should be offered to all cardiac patients.
Prevalence of Achilles tendinopathy increases with age, leading to a weaker tendon with a predisposition to rupture. Previous studies investigating Achilles tendon (AT) properties are restricted to standardized isometric conditions. Knowledge regarding the influence of age and pathology on the AT response under functional tasks remains limited. Therefore, the aim of this thesis was to investigate the influence of age and pathology on AT properties during a single-leg vertical jump.
Healthy children, asymptomatic adults and patients with Achilles tendinopathy participated. Ultrasonography was used to assess AT length, AT cross-sectional area and AT elongation. The reliability of the methodology used was evaluated both intra- and inter-rater at rest and at maximal isometric plantar-flexion contraction, and the methodology was then used to investigate tendon properties during a functional task. During the functional task, a single-leg vertical jump on a force plate was performed while AT elongation and vertical ground reaction forces were recorded simultaneously. AT compliance [mm/N] (elongation/force) and AT strain [%] (elongation/length) were calculated. Differences between groups were evaluated with respect to age (children vs. adults) and pathology (asymptomatic adults vs. patients).
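The two outcome measures defined above are simple ratios of the measured quantities. A minimal sketch with made-up numbers (the measurement values are hypothetical, not taken from the study):

```python
# Sketch of the two tendon quantities defined above, with made-up values.
def at_compliance(elongation_mm, force_n):
    """AT compliance [mm/N] = elongation / force."""
    return elongation_mm / force_n

def at_strain(elongation_mm, resting_length_mm):
    """AT strain [%] = elongation / resting length * 100."""
    return elongation_mm / resting_length_mm * 100

# Hypothetical single-jump measurement:
elongation = 10.0       # mm, from ultrasonography
peak_force = 2000.0     # N, from the force plate
tendon_length = 200.0   # mm, AT length at rest

print(at_compliance(elongation, peak_force))  # 0.005 mm/N
print(at_strain(elongation, tendon_length))   # 5.0 %
```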
Good to excellent reliability with low levels of variability was achieved in the assessment of AT properties. During the jumps, AT elongation was found to be statistically significantly higher in children. However, no statistically significant difference was found for force among the groups. AT compliance and strain were found to be statistically significantly higher only in children. No significant differences were found between asymptomatic adults and patients with tendinopathy.
The methodology used to assess AT properties is reliable, allowing its implementation in further investigations. The higher AT compliance in children might be considered a protective factor against load-related injuries. During functional tasks, when higher forces act on the AT, tendinopathy does not result in a weaker tendon.
Background
Overweight and obesity are increasing health problems that are not restricted to adults. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on foot function across maturation is lacking. Decreased foot function with disproportional loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait in normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 ± 2.9 yr; 1.23 ± 0.19 m; 26.6 ± 10.6 kg; BMI: 17.1 ± 2.4 kg/m²) were included in the (complete case) data analysis. Children were categorized as normal-weight (≥3rd and <90th percentile; n = 6458), overweight (≥90th and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system, which is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data was analyzed descriptively (mean ± SD) followed by ANOVA/Welch-test (according to homogeneity of variances: yes/no) for group differences according to BMI categorization (normal-weight, overweight, obesity) and for each age group 1 to 12 yrs (post-hoc Tukey-Kramer/Dunnett's C; α = 0.05).
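The percentile-based grouping described above can be sketched as a simple threshold function. The cut-offs (3rd/90th/97th percentile) follow the abstract; the concrete percentile values for a given age/sex stratum come from the German reference system and are invented here for illustration:

```python
# Sketch of the BMI-percentile grouping used in the study. The percentile
# values p3/p90/p97 are age- and sex-specific reference values; the numbers
# used below are hypothetical.
def categorize(bmi, p3, p90, p97):
    """Return the weight group for a BMI given age/sex-specific percentiles."""
    if bmi < p3:
        return "underweight"      # below the 3rd percentile, outside the groups
    if bmi < p90:
        return "normal-weight"    # >= 3rd and < 90th percentile
    if bmi < p97:
        return "overweight"       # >= 90th and < 97th percentile
    return "obese"                # above the 97th percentile

# Hypothetical percentile values for one age/sex stratum:
print(categorize(16.5, p3=13.5, p90=19.0, p97=21.0))  # normal-weight
print(categorize(22.3, p3=13.5, p90=19.0, p97=21.0))  # obese
```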
Results
Mean walking velocity was 0.95 ± 0.25 m/s with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force time integral in overweight and obese children (p < 0.001). Obese children showed 1.48-fold (1-year-olds) to 3.49-fold (10-year-olds) higher midfoot loading (FTI) compared to normal-weight children.
Conclusion
Additional body mass leads to a higher overall load, with a disproportional impact on the midfoot area and longitudinal foot arch, producing characteristic foot loading patterns. The feet of one- and two-year-old children are already significantly affected. Childhood overweight and obesity are not compensated by the musculoskeletal system. To avoid excessive foot loading with a potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
Since 1998, elite athletes’ sport injuries have been monitored in single-sport events, which led to the development of the first comprehensive injury surveillance system for the multi-sport Olympic Games in 2008. However, injuries and illnesses occurring during training phases have not been systematically studied because of their multi-faceted, potentially interacting risk factors. The present thesis aims to address the feasibility of establishing a validated measure of injury/illness, training environment and psychosocial risk factors by creating an evaluation tool, the risk of injury questionnaire (Risk-IQ), for elite athletes, based on the content of the preparticipation evaluation (PPE) and periodic health exam (PHE) recommended in the IOC consensus statement 2009.
A total of 335 top-level athletes and 88 medical care providers (MCPs) from Germany and Taiwan participated in two “cross-sectional plus longitudinal” surveys, the Risk-IQ and the MCPQ, respectively. Four categories of injury/illness-related risk factor questions were asked in the Risk-IQ for athletes, while injury risk and psychology-related questions were asked in the MCPQ for the MCP cohorts. Answers were quantified scale-wise/subscale-wise before being analyzed together with other factors/scales. In addition, adapted variables such as sport format were introduced for different tasks of the analysis.
Validated by two-way translation and test-retest reliabilities, the Risk-IQ proved to be of a good standard, which was further confirmed by the analyzed results from the official surveys in both Germany and Taiwan. The results of the Risk-IQ revealed that elite athletes’ accumulated total injuries were, in general, multi-factor dependent; influencing factors included, but were not limited to, background experience, medical history, PHE and PPE medical resources, as well as stress from life events. Injuries of different body parts were sport-format and location specific. Additionally, the medical support of PPE and PHE showed significant differences between Germany and Taiwan.
The results of the present thesis confirmed that it is feasible to construct a comprehensive evaluation instrument for risk factor analysis of injuries/illnesses occurring in heterogeneous elite athlete cohorts during their non-competition periods. On average, and with many moderators involved, German elite athletes had superior medical care support yet suffered more severe injuries than their Taiwanese counterparts. Opinions on injury-related psychological issues differed across the various MCP groups irrespective of nationality. In general, influencing factors and interactions existed among the relevant factors in both studies, which implies that further investigation with multiple regression analysis is needed for a better understanding.
Background:
Environmental stress puts organisms at risk and requires specific stress-tailored responses to maximize survival. Long-term exposure to stress necessitates a global reprogramming of the cellular activities at different levels of gene expression.
Results:
Here, we use ribosome profiling and RNA sequencing to globally profile the adaptive response of Arabidopsis thaliana to prolonged heat stress. To adapt to long heat exposure, the expression of many genes is modulated in a coordinated manner at the transcriptional and translational level. However, a significant group of genes opposes this trend and shows mainly translational regulation. Different secondary structure elements are likely candidates to play a role in regulating translation of those genes.
Conclusions:
Our data also uncover how the subunit stoichiometry of multimeric protein complexes in plastids is maintained upon heat exposure.
Comparative literature on institutional reforms in multi-level systems proceeds from a global trend towards the decentralization of state functions. However, there is only scarce knowledge about the impact that decentralization has had, in particular, upon the sub-central governments involved. How does it affect regional and local governments? Do these reforms also have unintended outcomes on the sub-central level, and how can this be explained? This article aims to develop a conceptual framework to assess the impacts of decentralization on the sub-central level from a comparative and policy-oriented perspective. This framework is intended to outline the major patterns and models of decentralization and the theoretical assumptions regarding de-/re-centralization impacts, as well as pertinent cross-country approaches meant to evaluate and compare institutional reforms. It will also serve as an analytical guideline and a structural basis for all the country-related articles in this Special Issue.
The ever-increasing fat content of the Western diet, combined with decreased levels of physical activity, greatly enhances the incidence of metabolism-related diseases. Cancer cachexia (CC) and the metabolic syndrome (MetS) are both multifactorial, highly complex metabolism-related syndromes whose etiology is not fully understood, as the mechanisms underlying their development have not been completely unveiled. Nevertheless, despite being considered “opposite sides”, MetS and CC share several common features such as insulin resistance and low-grade inflammation. In these scenarios, tissue macrophages act as key players, due to their capacity to produce and release inflammatory mediators. One of the main features of MetS is hyperinsulinemia, which is generally associated with an attempt of the β-cell to compensate for diminished insulin sensitivity (insulin resistance). There is growing evidence that hyperinsulinemia per se may contribute to the development of insulin resistance through the establishment of low-grade inflammation in insulin-responsive tissues, especially in the liver (as insulin is secreted by the pancreas into the portal circulation). The hypothesis of the present study was that insulin may itself provoke an inflammatory response culminating in diminished hepatic insulin sensitivity. To address this premise, macrophages differentiated from the human cell line U937 were first exposed to insulin, LPS and PGE2. In these cells, insulin significantly augmented the gene expression of the pro-inflammatory mediators IL-1β, IL-8, CCL2, oncostatin M (OSM) and microsomal prostaglandin E2 synthase (mPGES1), and of the anti-inflammatory mediator IL-10. Moreover, the synergism between insulin and LPS enhanced the induction provoked by LPS of IL-1β, IL-8, IL-6, CCL2 and TNF-α gene expression.
When combined with PGE2, insulin enhanced the induction provoked by PGE2 of IL-1β, mPGES1 and COX2 expression, and attenuated the inhibition induced by PGE2 of CCL2 and TNF-α gene expression, contributing to an enhanced inflammatory response through both mechanisms. Supernatants of insulin-treated U937 macrophages reduced the insulin-dependent induction of glucokinase in hepatocytes by 50%. Cytokines contained in the supernatant of insulin-treated U937 macrophages also activated hepatocyte ERK1/2, resulting in an inhibitory serine phosphorylation of the insulin receptor substrate. Additionally, the transcription factor STAT3 was activated by phosphorylation, resulting in the induction of SOCS3, which is capable of interrupting the insulin receptor signal chain. MicroRNAs, non-coding RNAs involved in the regulation of protein expression and nowadays recognized as active players in the generation of several inflammatory disorders such as cancer and type II diabetes, are also of interest. Considering that cancer cachexia patients are highly affected by insulin resistance and inflammation, control, non-cachectic and cachectic cancer patients were selected, and their circulating levels of pro-inflammatory mediators and of microRNA-21-5p, a posttranscriptional regulator of STAT3 expression, were assessed and correlated. Circulating IL-6 and IL-8 levels of cachectic patients were significantly higher than those of non-cachectic patients and controls, and the expression of microRNA-21-5p was significantly lower. Additionally, the reduced microRNA-21-5p expression correlated negatively with IL-6 plasma levels. These results indicate that hyperinsulinemia per se might contribute to the low-grade inflammation prevailing in MetS patients and thereby promote the development
of insulin resistance, particularly in the liver. Diminished microRNA-21-5p expression may enhance inflammation and STAT3 expression in cachectic patients, contributing to the development of insulin resistance.
Personal fabrication tools, such as 3D printers, are on the way of enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which tends to take overnight due to the slow 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users as they lack the experience to reason about the consequences of their design decisions.
In this dissertation, we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: Computing started with machines that executed a program in one go before returning the result to the user. By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program receiving feedback about every action in real-time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
We start with fabricating an object in one go and investigate how to tighten the feedback cycle on the object level: We contribute a method called low-fidelity fabrication, which saves up to 90% fabrication time by creating objects as fast low-fidelity previews, which are sufficient to evaluate key design aspects. Depending on what is currently being tested, we propose different conversions that enable users to focus on different parts: faBrickator allows for a modular design in the early stages of prototyping; when users move on, WirePrint allows them to quickly test an object's shape, while Platener allows testing an object's technical function. We present an interactive editor for each technique and explain the underlying conversion algorithms.
By interacting on smaller units, such as a single element of an object, we explore what it means to transition from systems that fabricate objects in one go to turn-taking systems. We start with a 2D system called constructable: Users draw with a laser pointer onto the workpiece inside a laser cutter. The drawing is captured with an overhead camera. As soon as the user finishes drawing an element, such as a line, the constructable system beautifies the path and cuts it, resulting in physical output after every editing step. We extend constructable towards 3D editing by developing a novel laser-cutting technique for 3D objects called LaserOrigami that works by heating up the workpiece with the defocused laser until the material becomes compliant and bends down under gravity. While constructable and LaserOrigami allow for fast physical feedback, the interaction is still best described as turn-taking since it consists of two discrete steps: users first create an input and afterwards the system provides physical output.
By decreasing the interaction unit even further to a single feature, we can achieve real-time physical feedback: Input by the user and output by the fabrication device are so tightly coupled that no visible lag exists. This allows us to explore what it means to transition from turn-taking interfaces, which only allow exploring one option at a time, to direct manipulation interfaces with real-time physical feedback, which allow users to explore the entire space of options continuously with a single interaction. We present a system called FormFab, which allows for such direct control. FormFab is based on the same principle as LaserOrigami: It uses a workpiece that when warmed up becomes compliant and can be reshaped. However, FormFab achieves the reshaping not based on gravity, but through a pneumatic system that users can control interactively. As users interact, they see the shape change in real-time.
We conclude this dissertation by extrapolating the current evolution into a future in which large numbers of people use the new technology to create objects. We see two additional challenges on the horizon: sustainability and intellectual property. We investigate sustainability by demonstrating how to print less and instead patch physical objects. We explore questions around intellectual property with a system called Scotty that transfers objects without creating duplicates, thereby preserving the designer's copyright.
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
Precision horticulture encompasses site- or tree-specific management in fruit plantations. Of decisive importance is spatially resolved data (this means data from each tree) from the production site, since it may enable customized and, therefore, resource-efficient production measures.
The present thesis involves an examination of the apparent electrical conductivity of the soil (ECa), the plant water status spatially measured by means of the crop water stress index (CWSI), and the fruit quality (e.g. fruit size) for Prunus domestica L. (plums) and Citrus x aurantium, Syn. Citrus paradisi (grapefruit). The goals of the present work were i) characterization of the 3D distribution of the apparent electrical conductivity of the soil and variability of the plant’s water status; ii) investigation of the interaction between ECa, CWSI, and fruit quality; and iii) an approach for delineating management zones with respect to managing trees individually.
To that end, the main investigations took place in the plum orchard. The plantation lies on a 3° slope on Pleistocene and post-Pleistocene substrates in a semi-humid climate (Potsdam, Germany) and encloses an area of 0.37 ha with 156 trees of the cultivar 'Tophit Plus' on a Wavit rootstock. It was planted in 2009 with annual and biannual trees spaced 4 m apart along the irrigation line and 5 m between the rows. The trees were watered three times a week with a drip irrigation system positioned 50 cm above ground level providing 1.6 l per tree per event. With the help of geoelectric measurements, the apparent electrical conductivity of the upper soil (0.25 m) was measured for each tree with an electrode spacing of 0.5 m (4-point light hp). In this manner, the plantation was spatially charted with respect to the soil's ECa. Additionally, tomography measurements were performed for 3D mapping of the soil ECa, and spot checks of drilled cores with a profile of up to 1 m were taken. The vegetative, generative, and fruit quality data were collected for each tree. The instantaneous plant water status was comprehensively determined in spot checks with the established Scholander method for water potential analysis (Scholander pressure bomb) as well as with thermal imaging. An infrared camera (ThermaCam SC 500) was used for the thermal imaging, mounted on a tractor 3.3 m above ground level. The thermal images (320 x 240 px) of the canopy surface were taken with an aperture of 45° and a geometric resolution of 8.54 x 6.41 mm. With the aid of the canopy temperature readings from the thermal images, cross-checked with manual temperature measurements of a dry and a wet reference leaf, the crop water stress index (CWSI) was calculated. Adjustments to the CWSI for measurements in a semi-humid climate were developed, whereby the reference temperatures were extracted automatically from the thermal images.
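The CWSI derived from the dry and wet reference temperatures follows the standard normalisation; a minimal sketch with invented temperatures (the actual field readings are not reproduced here):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index: 0 = unstressed (canopy as cool as the
    wet reference leaf), 1 = maximal stress (as warm as the dry one)."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Hypothetical temperatures in °C from a thermal image and reference leaves.
print(cwsi(t_canopy=27.0, t_wet=24.0, t_dry=32.0))  # -> 0.375
```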
The tree scoring (Bonitur) data were transformed into a normal distribution with the help of a variance stabilization procedure. The statistical analyses as well as the automatic evaluation routine were performed with several scripts in MATLAB® (R2010b and R2016a) and a free program (spatialtoolbox). The hot spot analysis served to check whether an observed pattern is statistically significant. The method was evaluated against an established k-means analysis. To test the hot-spot analysis by comparison, data from a grapefruit plantation (Adana, Turkey) were collected, including soil ECa, trunk circumference, and yield data. The plantation had 179 trees on a soil of type Xerofluvent with clay and clay-loamy texture. The interaction between the critical values from the soil and plant water status information and the vegetative and generative plant growth variables was examined using ANOVA.
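The hot-spot analysis referred to above is commonly implemented as the Getis-Ord Gi* statistic; a minimal sketch follows, assuming a simple binary neighbourhood on a 1-D transect of hypothetical tree values (not the thesis data):

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-score per location; w is an n x n spatial
    weights matrix that includes the focal location itself."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    wi = w.sum(axis=1)             # sum of weights per location
    w2i = (w ** 2).sum(axis=1)
    num = w @ x - xbar * wi
    den = s * np.sqrt((n * w2i - wi ** 2) / (n - 1))
    return num / den

# Toy transect: a cluster of large trees in the middle.
vals = np.array([1, 1, 1, 8, 9, 8, 1, 1, 1], dtype=float)
n = vals.size
# Binary weights: self plus immediate neighbours.
w = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:
            w[i, j] = 1.0

gi = getis_ord_gi_star(vals, w)
print(gi.argmax())  # the cluster centre scores highest
```

Locations with strongly positive Gi* form hot spots, strongly negative ones cold spots, and the remainder the random zone.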
The study indicates that the variability of the soil and plant information in fruit production is high, even in small orchards. It was further shown that the spatial patterns found in the soil ECa stayed constant through the years (r = 0.88 in 2011-2012 and r = 0.71 in 2012-2013). It was also demonstrated that CWSI determination is feasible in a semi-humid climate. A correlation (r = -0.65, p < 0.0001) with the established method of leaf water potential analysis was found. The interaction between the ECa from various depths and the plant variables produced a highly significant connection for the topsoil, in which the irrigation system was located. A correlation between yield and ECatopsoil of r = 0.52 was determined. By using the hot-spot analysis, extreme values in the spatial data could be determined. These extremes served to delineate zones (cold-spot, random, hot-spot). The random zone showed the highest correlation with the plant variables.
In summary, the cumulative water use efficiency (WUEc) was enhanced with high crop load. While the CWSI had no effect on fruit quality, the interaction of CWSI and WUEc even outweighed the impact of soil ECa on fruit quality in the production system with irrigation. In the plum orchard, irrigation was relevant for obtaining high-quality produce even in the semi-humid climate.
Unlike for other retroviruses, only a few host cell factors that aid the replication of foamy viruses (FVs) via interaction with viral structural components are known. Using a yeast two-hybrid (Y2H) screen with prototype FV (PFV) Gag protein as bait, we identified human polo-like kinase 2 (hPLK2), a member of the cell cycle regulatory kinases, as a new interactor of PFV capsids. Further Y2H studies confirmed interaction of PFV Gag with several PLKs of both human and rat origin. A consensus Ser-Thr/Ser-Pro (S-T/S-P) motif in Gag, which is conserved among primate FVs and phosphorylated in PFV virions, was essential for recognition by PLKs. In the case of rat PLK2, functional kinase and polo-box domains were required for interaction with PFV Gag. Fluorescently-tagged PFV Gag, through its chromatin tethering function, selectively relocalized ectopically expressed eGFP-tagged PLK proteins to mitotic chromosomes in a Gag STP motif-dependent manner, confirming a specific and dominant nature of the Gag-PLK interaction in mammalian cells. The functional relevance of the Gag-PLK interaction was examined in the context of replication-competent FVs and single-round PFV vectors. Although STP motif mutated viruses displayed wild-type (wt) particle release, RNA packaging and intra-particle reverse transcription, their replication capacity was decreased 3-fold in single-cycle infections, and up to 20-fold in spreading infections over an extended time period. Strikingly similar defects were observed when cells infected with single-round wt Gag PFV vectors were treated with a pan-PLK inhibitor. Analysis of the entry kinetics of the mutant viruses indicated a post-fusion defect resulting in delayed and reduced integration, which was accompanied by an enhanced preference to integrate into heterochromatin. We conclude that the interaction between PFV Gag and cellular PLK proteins is important for early replication steps of PFV within host cells.
What are the physical laws of the mutual interactions of objects bound to cell membranes, such as various membrane proteins or elongated virus particles? To rationalise this, we here investigate by extensive computer simulations mutual interactions of rod-like particles adsorbed on the surface of responsive elastic two-dimensional sheets. Specifically, we quantify sheet deformations as a response to adhesion of such filamentous particles. We demonstrate that tip-to-tip contacts of rods are favoured for relatively soft sheets, while side-by-side contacts are preferred for stiffer elastic substrates. These attractive orientation-dependent substrate-mediated interactions between the rod-like particles on responsive sheets can drive their aggregation and self-assembly. The optimal orientation of the membrane-bound rods is established via responding to the elastic energy profiles created around the particles. We unveil the phase diagramme of attractive–repulsive rod–rod interactions in the plane of their separation and mutual orientation. Applications of our results to other systems featuring membrane-associated particles are also discussed.
Intermontane valley fills
(2016)
Sedimentary valley fills are a widespread characteristic of mountain belts around the world. They transiently store material over time spans ranging from thousands to millions of years and therefore play an important role in modulating the sediment flux from the orogen to the foreland and to oceanic depocenters. In most cases, their formation can be attributed to specific fluvial conditions, which are closely related to climatic and tectonic processes. Hence, valley-fill deposits constitute valuable archives that offer fundamental insight into landscape evolution, and their study may help to assess the impact of future climate change on sediment dynamics.
In this thesis I analyzed intermontane valley-fill deposits to constrain different aspects of the climatic and tectonic history of mountain belts over multiple timescales. First, I developed a method to estimate the thickness distribution of valley fills using artificial neural networks (ANNs). Based on the assumption of geometrical similarity between exposed and buried parts of the landscape, this novel and highly automated technique allows reconstructing fill thickness and bedrock topography on the scale of catchments to entire mountain belts.
Second, I used the new method for estimating the spatial distribution of post-glacial sediments that are stored in the entire European Alps. A comparison with data from exploratory drillings and from geophysical surveys revealed that the model reproduces the measurements with a root mean squared error (RMSE) of 70m and a coefficient of determination (R2) of 0.81. I used the derived sediment thickness estimates in combination with a model of the Last Glacial Maximum (LGM) icecap to infer the lithospheric response to deglaciation, erosion and deposition, and deduce their relative contribution to the present-day rock-uplift rate. For a range of different lithospheric and upper mantle-material properties, the results suggest that the long-wavelength uplift signal can be explained by glacial isostatic adjustment with a small erosional contribution and a substantial but localized tectonic component exceeding 50% in parts of the Eastern Alps and in the Swiss Rhône Valley. Furthermore, this study reveals the particular importance of deconvolving the potential components of rock uplift when interpreting recent movements along active orogens and how this can be used to constrain physical properties of the Earth’s interior.
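The two goodness-of-fit measures quoted above (RMSE and R2) can be sketched in a few lines; the measured and modelled thickness values below are hypothetical placeholders, not the Alpine drilling data:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((obs - pred) ** 2))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical drilling thicknesses (m) vs. modelled thicknesses (m).
observed = [120.0, 300.0, 80.0, 450.0, 210.0]
modelled = [100.0, 340.0, 60.0, 430.0, 250.0]
print(rmse(observed, modelled), r_squared(observed, modelled))
```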
In a third study, I used the ANN approach to estimate the sediment thickness of alluviated reaches of the Yarlung Tsangpo River, upstream of the rapidly uplifting Namche Barwa massif. This allowed my colleagues and me to reconstruct the ancient river profile of the Yarlung Tsangpo, and to show that in the past, the river had already been deeply incised into the eastern margin of the Tibetan Plateau. Dating of basal sediments from drill cores that reached the paleo-river bed to 2–2.5 Ma is consistent with mineral cooling ages from the Namche Barwa massif, which indicate initiation of rapid uplift at ~4 Ma. Hence, formation of the Tsangpo gorge and aggradation of the voluminous valley fill was most probably a consequence of rapid uplift of the Namche Barwa massif and thus tectonic activity.
The fourth and last study focuses on the interaction of fluvial and glacial processes at the southeastern edge of the Karakoram. Paleo-ice-extent indicators and remnants of a more than 400-m-thick fluvio-lacustrine valley fill point to blockage of the Shyok River, a main tributary of the upper Indus, by the Siachen Glacier, which is the largest glacier in the Karakoram Range. Field observations and 10Be exposure dating attest to a period of recurring lake formation and outburst flooding during the penultimate glaciation prior to ~110 ka. The interaction of rivers and glaciers all along the Karakoram is considered a key factor in landscape evolution and presumably promoted headward erosion of the Indus-Shyok drainage system into the western margin of the Tibetan Plateau.
The results of this thesis highlight the strong influence of glaciation and tectonics on valley-fill formation and how this has affected the evolution of different mountain belts. In the Alps valley-fill deposition influenced the magnitude and pattern of rock uplift since ice retreat approximately 17,000 years ago. Conversely, the analyzed valley fills in the Himalaya are much older and reflect environmental conditions that prevailed at ~110 ka and ~2.5 Ma, respectively. Thus, the newly developed method has proven useful for inferring the role of sedimentary valley-fill deposits in landscape evolution on timescales ranging from 1,000 to 10,000,000 years.
Interplay of coupling and common noise at the transition to synchrony in oscillator populations
(2016)
There are two ways to synchronize oscillators: by coupling and by common forcing, which can be pure noise. By virtue of the Ott-Antonsen ansatz for sine-coupled phase oscillators, we obtain analytically tractable equations for the case where both coupling and common noise are present. While noise always tends to synchronize the phase oscillators, repulsive coupling can act against synchrony, and we focus on this nontrivial situation. For identical oscillators, the fully synchronous state remains stable for small repulsive coupling; moreover, it is an absorbing state which always wins over the asynchronous regime. For oscillators with a distribution of natural frequencies, we report on a counter-intuitive effect of dispersion (instead of the usual convergence) of the oscillators' frequencies at synchrony; the latter effect disappears if the noise vanishes.
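As an illustration of synchronization by common noise, identical phase oscillators driven by one shared multiplicative noise term can be simulated with a simple Euler-Maruyama scheme; the toy model below (noise entering through sin θ, no coupling at all) is only a sketch of the effect, not the paper's Ott-Antonsen equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, omega, sigma = 50, 1.0, 1.0   # oscillators, natural frequency, noise strength
dt, steps = 0.01, 5000

theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases

def order_parameter(th):
    """Kuramoto order parameter R = |<exp(i*theta)>|; R -> 1 at synchrony."""
    return np.abs(np.exp(1j * th).mean())

r_start = order_parameter(theta)
for _ in range(steps):
    dw = np.sqrt(dt) * rng.standard_normal()          # ONE Wiener increment,
    theta += omega * dt + sigma * np.sin(theta) * dw  # shared by all oscillators

print(r_start, order_parameter(theta))  # R grows toward 1 under common noise
```

Because the noise realization is identical for every oscillator, trajectories contract onto each other (negative transverse Lyapunov exponent) even though the oscillators are uncoupled.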
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated; particularly those with an egoistic focus, such as victim justice, rejection, and provocation sensitivity, hostile attributions and trait anger as well as those with an altruistic focus, such as observer justice, perpetrator justice, and moral disgust sensitivity. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
In this thesis, sentence processing was investigated using a psychophysiological measure known as pupillometry as well as event-related potentials (ERP). The scope of the thesis was broad, investigating the processing of several different movement constructions with native speakers of English and second language learners of English, as well as word order and case marking in German-speaking adults and children. Pupillometry and ERP allowed us to test competing linguistic theories and use novel methodologies to investigate the processing of word order. In doing so, we also aimed to establish pupillometry as an effective way to investigate the processing of word order, thus broadening the methodological spectrum.
Background: All living cells display a rapid molecular response to adverse environmental conditions, and the heat shock protein family reflects one such example. Hence, failing to activate heat shock proteins can impair the cellular response. In the present study, we evaluated whether the loss of different isoforms of heat shock protein (hsp) genes in Caenorhabditis elegans would affect their vulnerability to Manganese (Mn) toxicity.
Methods: We exposed wild type and selected hsp mutant worms to Mn (30 min) and next evaluated further the most susceptible strains. We analyzed survival, protein carbonylation (as a marker of oxidative stress) and Parkinson's disease related gene expression immediately after Mn exposure. Lastly, we observed dopaminergic neurons in wild type worms and in hsp-70 mutants following Mn treatment. Analysis of the data was performed by one-way or two-way ANOVA, depending on the case, followed by a post-hoc Bonferroni test if the overall p value was less than 0.05.
Results: We verified that the loss of hsp-70, hsp-3 and chn-1 increased the vulnerability to Mn, as exposed mutant worms showed a lower survival rate and increased protein oxidation. The importance of hsp-70 against Mn toxicity was then corroborated in dopaminergic neurons, where Mn neurotoxicity was aggravated. The lack of hsp-70 also blocked the transcriptional upregulation of pink1, a gene that has been linked to Parkinson's disease.
Conclusions: Taken together, our data suggest that Mn exposure modulates heat shock protein expression, particularly HSP-70, in C. elegans. Furthermore, loss of hsp-70 increases protein oxidation and dopaminergic neuronal degeneration following manganese exposure, which is associated with the inhibition of increased pink1 expression, thus potentially exacerbating the vulnerability to this metal.
Ionothermal carbon materials
(2016)
Alternative concepts for energy storage and conversion have to be developed, optimized and employed to fulfill the dream of a fossil-independent energy economy. Porous carbon materials play a major role in many energy-related devices. Among different characteristics, distinct porosity features, e.g., specific surface area (SSA), total pore volume (TPV), and the pore size distribution (PSD), are important to maximize the performance in the final device. In order to approach the aim to synthesize carbon materials with tailor-made porosity in a sustainable fashion, the present thesis focused on biomass-derived precursors employing and developing the ionothermal carbonization.
During the ionothermal carbonization, a salt melt simultaneously serves as solvent and porogen. Typically, eutectic mixtures containing zinc chloride are employed as salt phase. The first topic of the present thesis addressed the possibility to precisely tailor the porosity of ionothermal carbon materials by an experimentally simple variation of the molar composition of the binary salt mixture. The developed pore tuning tool allowed the synthesis of glucose derived carbon materials with predictable SSAs in the range of ~ 900 to ~ 2100 m2 g-1. Moreover, the nucleobase adenine was employed as precursor introducing nitrogen functionalities in the final material. Thereby, the chemical properties of the carbon materials are varied leading to new application fields. Nitrogen doped carbons (NDCs) are able to catalyze the oxygen reduction reaction (ORR) which takes place on the cathodic site of a fuel cell. The herein developed porosity tailoring allowed the synthesis of adenine derived NDCs with outstanding SSAs of up to 2900 m2 g-1 and very large TPV of 5.19 cm3 g-1. Furthermore, the influence of the porosity on the ORR could be directly investigated enabling the precise optimization of the porosity characteristics of NDCs for this application. The second topic addressed the development of a new method to investigate the not-yet unraveled mechanism of the oxygen reduction reaction using a rotating disc electrode setup. The focus was put on noble-metal free catalysts. The results showed that the reaction pathway of the investigated catalysts is pH-dependent indicating different active species at different pH-values. The third topic addressed the expansion of the used salts for the ionothermal approach towards hydrated calcium and magnesium chloride. It was shown that hydrated salt phases allowed the introduction of a secondary templating effect which was connected to the coexistence of liquid and solid salt phases. 
The method enabled the synthesis of fibrous NDCs with SSAs of up to 2780 m2 g-1 and very large TPV of 3.86 cm3 g-1. Moreover, the concept of active site implementation by a facile low-temperature metalation employing the obtained NDCs as solid ligands could be shown for the first time in the context of ORR.
Overall, the thesis may pave the way towards highly porous carbon materials with tailor-made porosity, prepared by an inexpensive and sustainable pathway, which can be applied in energy-related fields, thereby supporting the needed expansion of the renewable energy sector.
Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But, is it advisable to teach how to write and to assess handwriting in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general and in particular those of non-automatized beginning writers. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: the handwriting quality with a visual score and the handwriting dynamics using online handwriting measures [e.g., writing duration, writing velocity, strokes and number of inversions in velocity (NIV)]. In particular, NIVs which describe the number of velocity peaks during handwriting are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper compared to the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet which indicates that all participants—even the experienced writers—were influenced by the lower friction of the tablet surface. 
Our results for the group-comparison show advancing levels in handwriting automaticity from preschoolers to second graders to adults, which confirms that our method depicts handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting.
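The NIV measure used above counts velocity peaks in the handwriting trace; a minimal sketch on a synthetic speed profile (the sampling step and signal are invented for illustration):

```python
import numpy as np

def count_velocity_inversions(speed):
    """Count inversions in velocity (NIV): points where the speed
    profile switches from increasing to decreasing, i.e. local maxima."""
    dv = np.diff(np.asarray(speed, float))
    sign = np.sign(dv)
    # a +1 -> -1 transition in the slope marks one velocity peak
    return int(np.sum((sign[:-1] > 0) & (sign[1:] < 0)))

# Synthetic speed profile with three strokes (three velocity peaks).
t = np.arange(0, 3 * np.pi, 0.01)
speed = np.abs(np.sin(t))
print(count_velocity_inversions(speed))  # -> 3
```

A lower NIV per stroke indicates smoother, more automatized handwriting movements.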
Venomous snakes often display extensive variation in venom composition both between and within species. However, the mechanisms underlying the distribution of different toxins and venom types among populations and taxa remain insufficiently known. Rattlesnakes (Crotalus, Sistrurus) display extreme inter- and intraspecific variation in venom composition, centered particularly on the presence or absence of presynaptically neurotoxic phospholipases A2 such as Mojave toxin (MTX). Interspecific hybridization has been invoked as a mechanism to explain the distribution of these toxins across rattlesnakes, with the implicit assumption that they are adaptively advantageous. Here, we test the potential of adaptive hybridization as a mechanism for venom evolution by assessing the distribution of genes encoding the acidic and basic subunits of Mojave toxin across a hybrid zone between MTX-positive Crotalus scutulatus and MTX-negative C. viridis in southwestern New Mexico, USA. Analyses of morphology, mitochondrial and single-copy nuclear genes document extensive admixture within a narrow hybrid zone. The genes encoding the two MTX subunits are strictly linked, and found in most hybrids and backcrossed individuals, but not in C. viridis away from the hybrid zone. Presence of the genes is invariably associated with presence of the corresponding toxin in the venom. We conclude that introgression of highly lethal neurotoxins through hybridization is not necessarily favored by natural selection in rattlesnakes, and that even extensive hybridization may not lead to introgression of these genes into another species.
The concept of three journeys as a way to denote spiritual development was introduced by Dhu al-Nun, one of the founding fathers of Islamic mysticism. The use of this concept was later refined by combining it with the Sufi technique of adding different prepositions to a certain term in order to differentiate between spiritual stages. By using the words journey (Safar) and God (Allah) and inserting a preposition before the word God, Sufi writers could map the different roads to God or the stations (Maqamat) on this road. Ibn al-'Arabi, in the beginning of the thirteenth century, speaks of three different ways: from God, toward God and in God. Tanchum ha-Yerushalmi, the Judeo-Arabic biblical commentator from the end of this century, speaks of the three journeys as three stations of one continuous way. A nearly identical description can be found in the writings of the Muslim scholar Ibn Qayyim al-Jawziyya, a generation later. Later in the fourteenth century, in the writings of the Sufi writer al-Qashani, the three journeys become four, although the scheme of three prepositions is preserved. Near the end of the fourteenth century, in the writings of R. David ha-Nagid, we find only two journeys: to God and in God. All this tells us that Judeo-Arabic literature can help us map with greater precision the historical development of Sufi ideas.
Judging the animacy of words
(2016)
The age at which members of a semantic category are learned (age of acquisition), the typicality they demonstrate within their corresponding category, and the semantic domain to which they belong (living, non-living) are known to influence the speed and accuracy of lexical/semantic processing. So far, only a few studies have looked at the origin of age of acquisition and its interdependence with typicality and semantic domain within the same experimental design. Twenty adult participants performed an animacy decision task in which nouns were classified according to their semantic domain as being living or non-living. Response times were influenced by the independent main effects of each parameter: typicality, age of acquisition, semantic domain, and frequency. However, there were no interactions. The results are discussed with respect to recent models concerning the origin of age of acquisition effects.
Relatedness strongly influences social behaviors in a wide variety of species. For most species, the highest typical degree of relatedness is between full siblings, which share 50% of their genes. However, kin recognition is poorly understood in species with unusually high relatedness between individuals: clonal organisms. Although there has been some investigation into clonal invertebrates and yeast, nothing is known about kin selection in clonal vertebrates. We show that a clonal fish, the Amazon molly (Poecilia formosa), can distinguish between different clonal lineages using multiple sensory modalities, preferentially associating with genetically identical sister clones. The fish also scale their aggressive behaviors according to their relatedness to other females: they are more aggressive toward non-related clones. Our results demonstrate that kin recognition can be adaptive even in species with very small genetic differences between individuals. Their discriminatory abilities and regulation of costly behaviors provide a powerful example of natural selection in species with limited genetic diversity.
This paper focuses on the temperature-dependent synthesis of gold nanotriangles in a vesicular template phase containing phosphatidylcholine and AOT, by adding the strongly alternating polyampholyte PalPhBisCarb.
UV-vis absorption spectra in combination with TEM micrographs show that flat gold nanoplatelets are formed predominantly in the presence of the polyampholyte at 45 °C. The formation of triangular and hexagonal nanoplatelets can be directly influenced by the kinetic approach, i.e., by varying the polyampholyte dosage rate at 45 °C. Corresponding zeta potential measurements indicate that a temperature-dependent adsorption of the polyampholyte on the {111} faces induces the symmetry-breaking effect, which is responsible for the kinetically controlled, hindered vertical and preferred lateral growth of the nanoplatelets.
Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
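The expected annual damage used in such analyses is, in essence, the integral of damage over annual exceedance probability. A minimal sketch of that calculation, using purely hypothetical damage figures rather than the RAIL model's estimates:

```python
# Expected annual damage (EAD) as the integral of damage over annual
# exceedance probability, approximated with the trapezoidal rule over
# the scenario floods. The damage figures below are hypothetical, not
# the RAIL model's estimates.

def expected_annual_damage(return_periods, damages):
    """return_periods in years; damages in matching order (e.g. EUR million)."""
    pairs = sorted(zip(return_periods, damages))
    probs = [1.0 / t for t, _ in pairs]  # annual exceedance probabilities
    dmgs = [d for _, d in pairs]
    ead = 0.0
    for i in range(len(pairs) - 1):
        # trapezoid between consecutive scenarios
        ead += 0.5 * (dmgs[i] + dmgs[i + 1]) * (probs[i] - probs[i + 1])
    return ead

# Hypothetical repair costs for 30-, 100- and 300-year floods:
print(expected_annual_damage([30, 100, 300], [10.0, 25.0, 60.0]))
```

More scenario floods refine the approximation; with only three return periods, the integral below the 30-year and above the 300-year event is truncated, which is one source of the sensitivity to model assumptions mentioned above.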
We tested the influence of two light intensities [40 and 300 μmol PAR/(m² s)] on the fatty acid composition of three distinct lipid classes in four freshwater phytoplankton species. We chose species from different taxonomic classes in order to detect potentially similar reaction characteristics that might also be present in natural phytoplankton communities. From samples of the bacillariophyte Asterionella formosa, the chrysophyte Chromulina sp., the cryptophyte Cryptomonas ovata and the zygnematophyte Cosmarium botrytis we first separated glycolipids (monogalactosyldiacylglycerol, digalactosyldiacylglycerol, and sulfoquinovosyldiacylglycerol), phospholipids (phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylserine) as well as non-polar lipids (triacylglycerols), before analyzing the fatty acid composition of each lipid class. High variation in fatty acid composition existed among the species. Individual fatty acid compositions differed in their response to changing light intensities in the four species. Although no generalizations could be made for species across taxonomic classes, individual species showed clear but small responses in their ecologically relevant omega-3 and omega-6 polyunsaturated fatty acids (PUFA), in terms of both proportions and per-tissue-carbon quotas. Knowledge of how lipids such as fatty acids change with environmental or culture conditions is of great interest in ecological food web studies, aquaculture, and biotechnology, since algal lipids are the most important sources of omega-3 long-chain PUFA for aquatic and terrestrial consumers, including humans.
Light-triggered release of bioactive compounds from HA/PLL multilayer films for stimulation of cells
(2016)
The concept of targeting cells and tissues by controlled delivery of molecules is essential in the field of biomedicine. The layer-by-layer (LbL) technology for the fabrication of polymer multilayer films is widely implemented as a powerful tool to assemble tailor-made materials for controlled drug delivery. The LbL films can as well be engineered to act as mimics of the natural cellular microenvironment. Thus, due to the myriad possibilities such as controlled cellular adhesion and drug delivery offered by LbL films, it becomes easily achievable to direct the fate of cells by growing them on the films.
The aim of this work was to develop an approach for non-invasive and precise control of the presentation of bioactive molecules to cells. The strategy is based on the employment of LbL films, which function as a support for cells and, at the same time, as reservoirs for bioactive molecules to be released in a controlled manner. UV light is used to trigger the release of the stored ATP with high spatio-temporal resolution. Both physico-chemical (competitive intermolecular interactions in the film) and biological aspects (cellular response and viability) are addressed in this study.
The biopolymers hyaluronic acid (HA) and poly-L-lysine (PLL) were chosen as the building blocks for the LbL film assembly. Native HA/PLL films showed poor cellular adhesion and were significantly degraded by cells within a few days. However, coating the films with gold nanoparticles not only improved cellular adhesion and protected the films from degradation, but also formed a size-exclusion barrier with an adjustable cut-off in the size range of a few tens of kDa.
The films were shown to have high reservoir capacity for small charged molecules (reaching mM levels in the film). Furthermore, they were able to release the stored molecules in a sustained manner. The loading and release are explained by a mechanism based on interactions between charges of the stored molecules and uncompensated charges of the biopolymers in the film. Charge balance and polymer dynamics in the film play the pivotal role.
Finally, the concept of light-triggered release from the films has been proven using caged ATP loaded into the films, from which ATP was released on demand. ATP induces a fast cellular response, i.e. an increase in intracellular [Ca2+], which was monitored in real time. Limitations of cellular stimulation by the proposed approach are highlighted by studying the stimulation as a function of irradiation parameters (time, distance, light power). Moreover, caging molecules bind to the film more strongly than ATP does, which opens new perspectives for the use of the most diverse chemical compounds as caging molecules.
Employment of HA/PLL films as a novel support for cellular growth and hosting of bioactive molecules, along with the possibility to stimulate individual cells using focused light, renders this approach highly efficient and unique in terms of precision and spatio-temporal resolution among those previously described. With its high potential, the concept presented herein provides the foundation for the design of new intelligent materials for single-cell studies, with a focus on tissue engineering, diagnostics, and other cell-based applications.
Turbidity measurements are frequently implemented for the monitoring of heterogeneous chemical, physical, or biotechnological processes. However, for quantitative measurements, turbidity probes need calibration, as required by ISO 7027:1999. Accordingly, a formazine suspension has to be produced. Despite this regulatory demand, no scientific publication on the stability and reproducibility of this polymerization process is available. In addition, no characterization of the optical properties of this calibration material with other optical methods has been reported so far. Thus, in this contribution, process conditions such as temperature and concentration have been systematically investigated by turbidity probe measurements and Photon Density Wave (PDW) spectroscopy, revealing an influence on the temporal onset of formazine formation. In contrast, different reaction temperatures do not lead to different scattering properties of the final formazine suspensions, but give access to the activation energy of this condensation reaction. Based on PDW spectroscopy data, the synthesis of formazine is reproducible. However, very strong influences of the ambient conditions on the measurements of the turbidity probe have been observed, limiting its applicability. The restrictions of the turbidity probe with respect to scatterer concentration are examined on the basis of formazine and polystyrene suspensions. Compared to PDW spectroscopy data, signal saturation is observed already at low reduced scattering coefficients.
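An activation energy of the kind mentioned above is conventionally obtained from an Arrhenius plot of temperature-dependent rate constants. A minimal sketch of such a fit, using hypothetical rate constants rather than the measured formazine data:

```python
import math

# Arrhenius analysis: ln k = ln A - Ea/(R*T), so the slope of ln k
# versus 1/T gives -Ea/R. The rate constants below are hypothetical,
# not the measured formazine data.

R_GAS = 8.314  # J/(mol*K)

def activation_energy(temps_K, rate_constants):
    """Least-squares slope of ln k vs 1/T; returns Ea in J/mol."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(k) for k in rate_constants]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R_GAS

# Hypothetical rates roughly doubling per 10 K step:
print(activation_energy([298.0, 308.0, 318.0], [1.0e-3, 2.1e-3, 4.2e-3]))
```

A rate roughly doubling per 10 K near room temperature corresponds to an activation energy on the order of 50–60 kJ/mol, which is what the fit above recovers.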
Linked linear mixed models
(2016)
The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as dependent variable or covariate for the other in linear mixed models (LMMs) that also feature psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words), although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: The IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, and the OVP effect (at least the left part of it) is uncovered with the predicted fixation-location covariate, capturing the indirect effects of psycholinguistic covariates. We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research.
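The two-stage linking described above can be illustrated on simulated data. In this self-contained sketch, plain least-squares regressions stand in for the full LMMs, and "word length" is a hypothetical stand-in covariate:

```python
import numpy as np

# Sketch of the "linked" two-stage idea on simulated data. Plain least
# squares stands in for full linear mixed models to keep this
# self-contained; "word length" is a hypothetical covariate.

rng = np.random.default_rng(0)
n = 500
word_len = rng.integers(2, 12, size=n).astype(float)

# Assumed data-generating process, for illustration only:
location = 0.5 * word_len + rng.normal(0.0, 1.0, n)          # fixation location
duration = 200.0 + 8.0 * location + rng.normal(0.0, 5.0, n)  # fixation duration

# Stage 1: regress fixation location on the psycholinguistic covariate.
X1 = np.column_stack([np.ones(n), word_len])
b1, *_ = np.linalg.lstsq(X1, location, rcond=None)
pred_loc = X1 @ b1               # covariate-driven (indirect) component
resid_loc = location - pred_loc  # residual component (e.g. saccadic error)

# Stage 2: enter predicted and residual location as separate covariates
# in the fixation-duration model; their two slopes separate indirect
# covariate effects from residual (oculomotor) effects.
X2 = np.column_stack([np.ones(n), pred_loc, resid_loc])
b2, *_ = np.linalg.lstsq(X2, duration, rcond=None)
print(b2)
```

Because the residual is orthogonal to the stage-1 predictions by construction, the two stage-2 slopes can be estimated without collinearity, which is what makes the decomposition into direct and indirect effects interpretable.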
SOPARSE predicts so-called local coherence effects: locally plausible but globally impossible parses of substrings can exert a distracting influence during sentence processing. Additionally, it predicts digging-in effects: the longer the parser stays committed to a particular analysis, the harder it becomes to inhibit that analysis. We investigated the interaction of these two predictions using German sentences. Results from a self-paced reading study show that the processing difficulty caused by a local coherence can be reduced by first allowing the globally correct parse to become entrenched, which supports SOPARSE’s assumptions.
It has long been agreed by formal and functional researchers (primarily based on English data) that contrastive topic marking (namely, marking a constituent as a contrastive topic via the B-accent, the rising intonation contour) requires the co-occurrence of focus marking via the A-accent, the falling intonation contour (see Sturgeon 2006, and references therein). However, this consensus has recently been disputed by new findings indicating the occurrence of utterances with only a B-accent, dubbed lone contrastive topic (Büring 2003, Constant 2014). In this paper, I argue, based on data from Vietnamese, that the presence of lone contrastive topic is only apparent, and that the focus that co-occurs with the seemingly lone contrastive topic is a verum focus.
This two-wave longitudinal study examined how developmental changes in students’ mastery goal orientation, academic effort, and intrinsic motivation were predicted by student-perceived motivational support (support for autonomy, competence, and relatedness) in secondary classrooms. The study extends previous knowledge, which showed that motivational support in class is related to students’ intrinsic motivation, by focusing on the developmental changes of a set of different motivational variables and the relations of these changes to student-perceived motivational support in class. Thus, differential classroom effects on students’ motivational development were investigated. A sample of 1088 German students was assessed in the beginning of the school year when students were in grade 8 (mean age = 13.70, SD = 0.53, 54% girls) and again at the end of the next school year when students were in grade 9. Results of latent change models showed a tendency toward decline in mastery goal orientation and a significant decrease in academic effort from grade 8 to 9. Intrinsic motivation did not decrease significantly across time. Student-perceived support of competence in class predicted the level of and change in students’ academic effort. The findings emphasize that it is beneficial to create classroom learning environments that enhance students’ perceptions of competence in class when aiming to enhance students’ academic effort in secondary school classrooms.
Loss to follow-up in a randomized controlled trial study for pediatric weight management (EPOC)
(2016)
Background
Attrition is a serious problem in intervention studies. The current study analyzed the attrition rate during follow-up in a randomized controlled pediatric weight management program (EPOC study) within a tertiary care setting.
Methods
Five hundred twenty-three parents and their 7–13-year-old children with obesity participated in the randomized controlled intervention trial. Follow-up data were assessed 6 and 12 months after the end of treatment. Attrition was defined as providing no objective weight data. Demographic and psychological baseline characteristics were used to predict attrition at 6- and 12-month follow-up using multivariate logistic regression analyses.
Results
Objective weight data were available for 49.6% of the children at the 6-month and 67.0% at the 12-month follow-up after the end of treatment. Completers and non-completers at the 6- and 12-month follow-up differed in the amount of weight loss during their inpatient stay, their initial BMI-SDS, the educational level of their parents, and the child’s quality of life and well-being. Additionally, completers supported their child more than non-completers, and at the 12-month follow-up, families with a more structured eating environment were less likely to drop out. On a multivariate level, only educational background and structure of the eating environment remained significant.
Conclusions
The minor differences between the completers and the non-completers suggest that our retention strategies were successful. Further research should focus on prevention of attrition in families with a lower educational background.
Luhmann in da Contact Zone
(2016)
Our aim in this contribution is to productively engage with the abstractions and complexities of Luhmann’s conceptions of society from a postcolonial perspective, with a particular focus on the explanatory powers of his sociological systems theory when it leaves the realms of Europe and ventures to describe regions of the global South. In view of its more recent global reception beyond Europe, our aim is to thus – following the lead of Dipesh Chakrabarty – provincialize Luhmann’s systems theory, especially with regard to its underlying assumptions about a global “world society”. For these purposes, we intend to revisit Luhmann in the post/colonial contact zone: We wish to reread Luhmann in the context of spaces of transcultural encounter where “global designs and local histories” (Mignolo), where inclusion into and exclusion from “world society” (Luhmann) clash and interact in intricate ways. The title of our contribution, ‘Luhmann in da Contact Zone’, is deliberately ambiguous: On the one hand, we of course use ‘Luhmann’ metonymically, as representative of a highly complex theoretical design. We shall cursorily outline this design with a special focus on the notion of a singular, modern “world society”, only to confront it with the epistemic challenges of the contact zone. On the other hand, this critique will also involve the close observation of Niklas Luhmann as a human observer (a category which within the logic of systems theory actually does not exist) who increasingly transpires in his late writings on exclusion in the global South. By following this dual strategy, we wish to trace an increasing fracture between one Luhmann and the other, between abstract theoretical design and personalized testimony. It is by exploring and measuring this fracture that we hope to eventually be able to map out the potential of a possibly more productive encounter between systems theory and specific strands of postcolonial theory for a pluritopic reading of global modernity.
Magic screens
(2016)
Garcilaso de la Vega el Inca, for several centuries doubtlessly the most discussed and most eminent writer of Andean America in the 16th and 17th centuries, throughout his life set the utmost value on the fact that he descended matrilineally from Atahualpa Yupanqui and from the last Inca emperor, Huayna Capac. Thus, both in his person and in his creative work he combined different cultural worlds in a polylogical way. (1) Two painters boasted that very same Inca descent - they were the last two great masters of the Cuzco school of painting, which over several generations of artists had been an institution of excellent renown and prestige, and whose economic downfall and artistic marginalization was vividly described by the French traveller Paul Mancoy in 1837.(2) While, during the 18th century, Cuzco school paintings were still much cherished and sought after, by the beginning of the following century the elite of Lima regarded them as behind the times and provincial, committed to an 'indigenous' painting style. The artists from up-country - such was the reproach - could not keep up with the modern forms of seeing and creating, as exemplified by European paragons. Yet, just how 'provincial', truly, was this art?
In order to evade detection by network-traffic analysis, a growing proportion of malware uses the encrypted HTTPS protocol. We explore the problem of detecting malware on client computers based on HTTPS traffic analysis. In this setting, malware has to be detected based on the host IP address, ports, timestamp, and data volume information of TCP/IP packets that are sent and received by all the applications on the client. We develop a scalable protocol that allows us to collect network flows of known malicious and benign applications as training data, and derive a malware-detection method based on neural networks and sequence classification. We study the method's ability to detect known and new, unknown malware in a large-scale empirical study.
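As a rough illustration of flow-based detection, the sketch below builds synthetic flow records (duration, byte volume) per application and trains a single logistic unit on aggregate features. This is a deliberately simplified stand-in for the neural sequence classifier of the study, and the "beaconing" pattern assumed for malware is hypothetical:

```python
import math
import random

# Synthetic flows: (duration_seconds, byte_volume) per application.
# Assumed pattern (hypothetical): malware beacons with many short,
# small, regular flows; benign traffic has larger, irregular flows.

random.seed(1)

def make_flows(malicious, n=30):
    if malicious:
        return [(random.gauss(1.0, 0.1), random.gauss(200.0, 20.0))
                for _ in range(n)]
    return [(random.expovariate(0.2), random.gauss(5000.0, 1500.0))
            for _ in range(n)]

def features(flows):
    durs = [d for d, _ in flows]
    vols = [v for _, v in flows]
    return [1.0,                           # bias term
            sum(vols) / len(vols) / 1e4,   # mean volume (scaled)
            sum(durs) / len(durs) / 10.0]  # mean duration (scaled)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

data = [(features(make_flows(m)), 1.0 if m else 0.0)
        for m in (True, False) for _ in range(100)]

# Plain stochastic gradient ascent on the logistic log-likelihood.
w = [0.0, 0.0, 0.0]
for _ in range(500):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        w = [wi + 0.1 * (y - p) * xi for wi, xi in zip(w, x)]

acc = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1.0)
          for x, y in data) / len(data)
print(acc)
```

A real neural sequence model would consume the ordered flow sequence itself rather than hand-crafted aggregates, but the pipeline shape — collect labeled flows, featurize, train a classifier, evaluate — is the same.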
The strong adhesion of sub-micron sized particles to surfaces is a nuisance, both for removing contaminating colloids from surfaces and for conscious manipulation of particles to create and test novel micro/nano-scale assemblies. The obvious idea of using detergents to ease these processes suffers from a lack of control: the action of any conventional surface-modifying agent is immediate and global. With photosensitive azobenzene containing surfactants we overcome these limitations. Such photo-soaps contain optical switches (azobenzene molecules), which upon illumination with light of appropriate wavelength undergo reversible trans-cis photo-isomerization resulting in a subsequent change of the physico-chemical molecular properties. In this work we show that when a spatial gradient in the composition of trans- and cis- isomers is created near a solid-liquid interface, a substantial hydrodynamic flow can be initiated, the spatial extent of which can be set, e.g., by the shape of a laser spot. We propose the concept of light induced diffusioosmosis driving the flow, which can remove, gather or pattern a particle assembly at a solid-liquid interface. In other words, in addition to providing a soap we implement selectivity: particles are mobilized and moved at the time of illumination, and only across the illuminated area.
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due
to the remarkable evolutionary adaptation that the virus exhibits through the accumulation of drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly against the background of previous treatment failure with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a certain drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent of their application to drug-specific clinical mutation data for assessing potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effects mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling
approach enables us to study the dynamics of different mutant genotypes and assess mutational abundance at virological failure. As an application of our model, we estimate in vivo
fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action including molecules that target host factors.
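Work of this kind typically builds on the standard target-cell-limited model of in vivo viral dynamics, on top of which the thesis layers its drug-specific mutation schemes. A sketch of that baseline model (the parameter values below are illustrative, not estimates from the thesis):

```python
# Standard target-cell-limited model of viral dynamics (forward-Euler
# integration; parameters are illustrative only):
#   dT/dt = lam - d*T - (1-eps)*beta*T*V   (uninfected target cells)
#   dI/dt = (1-eps)*beta*T*V - delta*I     (productively infected cells)
#   dV/dt = p*I - c*V                      (free virus)
# eps in [0, 1] is the efficacy of an antiretroviral drug reducing
# new infections.

def simulate(eps, lam=1e4, d=0.01, beta=5e-7, delta=1.0, p=100.0, c=23.0,
             T=1e6, I=0.0, V=1e-3, days=200.0, dt=0.01):
    for _ in range(int(days / dt)):
        dT = lam - d * T - (1.0 - eps) * beta * T * V
        dI = (1.0 - eps) * beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V

# With these parameters the basic reproductive number
# R0 = beta*p*lam/(d*delta*c) > 1, so the untreated virus persists; a
# sufficiently effective drug pushes the effective R0 below 1.
_, _, v_untreated = simulate(eps=0.0)
_, _, v_treated = simulate(eps=0.9)
print(v_untreated, v_treated)
```

Extending such a system to multiple mutant genotypes means adding one infected-cell/virus pair per genotype, with mutation terms coupling them and genotype-specific efficacies eps — which is the kind of structure the integrated model described above formalizes.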
Additionally, we address another important aspect in the clinical management of HIV disease, namely drug pharmacokinetics. It is clear that time-dependent changes in in vivo
drug concentration could have an impact on the antiviral effect, and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding
of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
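For a drug with first-order elimination, the accumulation ratio follows directly from the half-life and the dosing interval: R = 1/(1 − e^(−kτ)) with k = ln 2 / t½. A minimal sketch with illustrative values (not drug-specific):

```python
import math

# Steady-state accumulation ratio for repeated dosing with first-order
# elimination: R = 1 / (1 - exp(-k * tau)), where k = ln(2) / t_half.
# The half-life and dosing intervals below are illustrative only.

def accumulation_ratio(t_half, tau):
    """t_half and tau in the same time unit (e.g. hours)."""
    k = math.log(2.0) / t_half
    return 1.0 / (1.0 - math.exp(-k * tau))

# Dosing once every half-life accumulates to about twice the
# single-dose exposure; longer intervals accumulate less.
print(accumulation_ratio(t_half=12.0, tau=12.0))
print(accumulation_ratio(t_half=12.0, tau=24.0))
```

This is why the dosing interval and the half-life jointly determine trough concentrations, and hence how far viral suppression can be maintained between doses.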
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
Background:
Plant phenotypic data shrouds a wealth of information which, when accurately analysed and linked to other data types, brings to light knowledge about the mechanisms of life. As phenotyping is a field of research comprising manifold, diverse and time-consuming experiments, the findings can be fostered by reusing and combining existing datasets. Their correct interpretation, and thus replicability, comparability and interoperability, is possible provided that the collected observations are equipped with an adequate set of metadata. So far there have been no common standards governing phenotypic data description, which has hampered data exchange and reuse.
Results:
In this paper we propose guidelines for the proper handling of information about plant phenotyping experiments, in terms of both the recommended content of the description and its formatting. We provide a document called “Minimum Information About a Plant Phenotyping Experiment”, which specifies what information about each experiment should be given, and a Phenotyping Configuration for the ISA-Tab format, which allows this information to be organised practically within a dataset. We provide examples of ISA-Tab-formatted phenotypic data, and a general description of a few systems where the recommendations have been implemented.
Conclusions:
Acceptance of the rules described in this paper by the plant phenotyping community will help to achieve findable, accessible, interoperable and reusable data.
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
The cell interior is a highly packed environment in which biological macromolecules evolve and function. This crowded media has effects in many biological processes such as protein-protein binding, gene regulation, and protein folding. Thus, biochemical reactions that take place in such crowded conditions differ from diluted test tube conditions, and a considerable effort has been invested in order to understand such differences.
In this work, we combine different computational tools to disentangle the effects of molecular crowding on biochemical processes. First, we propose a lattice model to study the implications of molecular crowding for enzymatic reactions. We provide a detailed picture of how crowding affects binding and unbinding events and how the separate effects of crowding on binding equilibria act together. Then, we implement a lattice model to study the effects of molecular crowding on facilitated diffusion. We find that obstacles on the DNA impair facilitated diffusion. However, the extent of this effect depends on how dynamic the obstacles on the DNA are. For the scenario in which crowders are present only in the bulk solution, we find that under some conditions the presence of crowding agents can enhance specific DNA binding. Finally, we make use of structure-based techniques to look at the impact of the presence of crowders on the folding of a protein. We find that polymeric crowders have stronger effects on protein stability than spherical crowders, and the strength of this effect increases as the polymeric crowders become longer. The methods we propose here are general and can also be applied to more complicated systems.
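A toy version of such a lattice picture: a tracer performs a random walk on a periodic 2D lattice, and moves into obstacle-occupied sites are rejected. This is an illustrative sketch of excluded-volume crowding, not the thesis model itself; it shows crowding reducing the tracer's mean-squared displacement:

```python
import random

# Tracer random walk on a periodic 2D lattice with immobile obstacles;
# moves into occupied sites are rejected. An illustration of
# excluded-volume crowding, not the thesis model itself.

random.seed(2)

def mean_squared_displacement(phi, L=50, steps=2000, walkers=100):
    """phi: fraction of lattice sites blocked by obstacles."""
    sites = [(x, y) for x in range(L) for y in range(L)]
    obstacles = set(random.sample(sites, int(phi * L * L)))
    total = 0.0
    for _ in range(walkers):
        x = y = 0
        while (x % L, y % L) in obstacles:  # start on a free site
            x, y = random.randrange(L), random.randrange(L)
        x0, y0 = x, y
        for _ in range(steps):
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            if ((x + dx) % L, (y + dy) % L) not in obstacles:
                x, y = x + dx, y + dy  # track unwrapped coordinates
        total += (x - x0) ** 2 + (y - y0) ** 2
    return total / walkers

free = mean_squared_displacement(0.0)
crowded = mean_squared_displacement(0.3)
print(free, crowded)  # crowding markedly reduces the MSD
```

Without obstacles the MSD grows as one lattice unit squared per step; with 30% of sites blocked, the effective diffusion coefficient drops well below the free value, the qualitative effect such lattice models are built to quantify.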
In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, temporal evolution leads through repeated epochs of intensive global activity separated by intervals with a low activity level. This behavior mimics "up" and "down" states, experimentally observed in cortical tissues in the absence of external stimuli. We interpret global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the crucial role, both in the interruption and in the resumption of global activity, is played by the distributions of the membrane recovery variable within the network. We also demonstrate that the behavior of neurons is more influenced by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
Hantaviruses are zoonotic viruses transmitted to humans by persistently infected rodents, giving rise to serious outbreaks of hemorrhagic fever with renal syndrome (HFRS) or of hantavirus pulmonary syndrome (HPS), depending on the virus, which are associated with high case fatality rates. There is only limited knowledge about the organization of the viral particles and, in particular, about the hantavirus membrane fusion glycoprotein Gc, the function of which is essential for virus entry. We describe here the X-ray structures of Gc from Hantaan virus, the hantavirus type species and a cause of HFRS, both in its neutral-pH, monomeric pre-fusion conformation and in its acidic-pH, trimeric post-fusion form. The structures confirm the prediction that Gc is a class II fusion protein, containing the characteristic beta-sheet-rich domains termed I, II and III as initially identified in the fusion proteins of arboviruses such as alpha- and flaviviruses. The structures also show a number of features of Gc that are distinct from arbovirus class II proteins. In particular, hantavirus Gc inserts residues from three different loops into the target membrane to drive fusion, as confirmed functionally by structure-guided mutagenesis on the HPS-inducing Andes virus, instead of having a single "fusion loop". We further show that the membrane-interacting region of Gc becomes structured only at acidic pH via a set of polar and electrostatic interactions. Furthermore, the structure reveals that hantavirus Gc has an additional N-terminal "tail" that is crucial in stabilizing the post-fusion trimer, accompanying the swapping of domain III in the quaternary arrangement of the trimer as compared to the standard class II fusion proteins. The mechanistic understandings derived from these data are likely to provide a unique handle for devising treatments against these human pathogens.
Background:
Arising from the relevance of sensorimotor training in the therapy of nonspecific low back pain patients and from the value of individualized therapy, the present trial aims to test the feasibility and efficacy of individualized sensorimotor training interventions in patients suffering from nonspecific low back pain.
Methods and study design:
A multicentre, single-blind two-armed randomized controlled trial to evaluate the
effects of a 12-week (3 weeks supervised centre-based and 9 weeks home-based) individualized sensorimotor exercise program is performed. The control group stays inactive during this period. Outcomes are pain and pain-associated function as well as motor function in adults with nonspecific low back pain. Each participant is scheduled for five measurement dates: baseline (M1), following centre-based training (M2), following home-based training (M3) and at two follow-up time points 6 months (M4) and 12 months (M5) after M1. All investigations and the assessment of the primary and secondary outcomes are performed in a standardized order: questionnaires – clinical examination – biomechanics (motor function). Statistical procedures are executed after examination of the underlying assumptions for parametric or non-parametric testing.
Discussion:
The results of the study will be of clinical and practical relevance not only for researchers and policy makers but also for the general population suffering from nonspecific low back pain.
Trial registration:
German Clinical Trials Register, identification number DRKS00010129; registered on 3 March 2016.
Dietary approaches contribute to the prevention and treatment of type 2 diabetes. High protein diets were shown to exert beneficial as well as adverse effects on metabolism. However, it is unclear whether the protein origin plays a role in these effects. The LeguAN study investigated in detail the effects of two high protein diets, either from plant or animal origin, in type 2 diabetic patients. Both diets contained 30 EN% protein, 40 EN% carbohydrates, and 30 EN% fat. Fiber content, glycemic index, and composition of dietary fats were similar in both diets. In comparison to previous dietary habits, the fat content was exchanged for protein, while the carbohydrate intake was not modified. Overall, both high protein diets led to improvements of glycemic control, insulin sensitivity, liver fat, and cardiovascular risk markers without remarkable differences between the protein types.
Fasting glucose and indices of insulin resistance were ameliorated by both interventions to varying extents, but without significant differences between protein types. The decline of HbA1c was more pronounced in the plant protein group, whereas the improvement of insulin sensitivity was more pronounced in the animal protein group. The high protein intake had only a slight influence on postprandial metabolism, as seen for free fatty acids and indices of insulin secretion, sensitivity and degradation. Except for GIP release, ingestion of animal and plant meals did not provoke differential metabolic and hormonal responses despite diverse circulating amino acid levels.
The animal protein diet led to a selective increase of fat-free mass and decrease of total fat mass, which was not significantly different from the plant protein diet. Moreover, the high protein diets potently decreased liver fat content by 42% on average, which was linked to significantly diminished lipogenesis, free fatty acid flux and lipolysis in adipose tissue. A moderate decline of circulating liver enzymes was induced by both interventions. The liver fat reduction was associated with improved glucose homeostasis and insulin sensitivity, which underlines the protective effect of the diets.
Blood lipid profiles improved in all subjects, probably related to the lower fat intake. Reductions in uric acid and markers of inflammation further argued for metabolic benefits of both high protein diets. Systolic and diastolic blood pressure declined only in the plant protein (PP) group, pointing to a possible role of arginine.
Kidney function was not altered by high protein consumption over 6 weeks. The rapid decrease of serum creatinine in the PP group was noteworthy and should be further investigated. Protein type did not seem to play a role, but long-term studies are warranted to fully elucidate the safety of high protein regimens.
Varying the source of dietary proteins did not affect the mTOR pathway in adipose tissue and blood cells in either acute or chronic settings. The enhancement of whole-body insulin sensitivity also suggested no alteration of mTOR signalling and no impairment of insulin sensitivity in skeletal muscle.
A remarkable outcome was the extensive reduction of FGF21, a critical regulator of metabolic processes, by approximately 50% independently of protein type. Whether hepatic ER stress, ammonia flux or macronutrient preferences lie behind this paradoxical finding remains to be investigated in detail.
Contrary to initial expectations and previous reports, the plant protein based diet had no clear advantage over animal proteins. The pronounced beneficial effect of animal protein on insulin homeostasis despite high BCAA and methionine intake was certainly unexpected, suggesting that more complex metabolic adaptations occur upon prolonged consumption. In addition, the reduced fat intake may also have contributed to the overall improvements in both groups.
Taking the above study results into account, a short-term diet containing 30 EN% protein (either from plant or animal origin), 40 EN% carbohydrates, and 30 EN% fat with a lower SFA amount leads to metabolic improvements in diabetic patients, regardless of protein source.
The spider mite Tetranychus urticae Koch and the aphid Myzus persicae (Sulzer) both infest a number of economically significant crops, including tomato (Solanum lycopersicum). Although used for decades to control pests, the impact of green lacewing larvae Chrysoperla carnea (Stephens) on plant biochemistry had not been investigated. Here, we used profiling methods and targeted analyses to explore the impact of the predator and of herbivore(s)-predator interactions on tomato biochemistry. Each pest and pest-predator combination induced a characteristic metabolite signature in the leaf and the fruit; thus, the plant exhibited a systemic response. The treatments had a stronger impact on non-volatile metabolites, including abscisic acid and amino acids, in the leaves than in the fruits. In contrast, the various biotic factors had a greater impact on the carotenoids in the fruits. We identified volatiles such as myrcene and alpha-terpinene which were induced by pest-predator interactions but not by single species, and we demonstrated the involvement of the phytohormone abscisic acid in tritrophic interactions for the first time. More importantly, C. carnea larvae alone impacted the plant metabolome, but the predator did not appear to elicit particular defense pathways on its own. Since the presence of both C. carnea larvae and pest individuals elicited volatiles which were shown to contribute to plant defense, C. carnea larvae could therefore contribute to the reduction of pest infestation not only by their preying activity but also by priming responses to generalist herbivores such as T. urticae and M. persicae. On the other hand, the use of C. carnea larvae alone did not impact carotenoids and thus was not prejudicial to fruit quality.
This research highlights for the first time the specific impact of predator and tritrophic interactions among green lacewing larvae, spider mites, and aphids on different components of tomato primary and secondary metabolism, and provides cues for further in-depth studies aiming to integrate entomological approaches and plant biochemistry.
Macrocycles based on L-cystine were synthesized by ring-closing metathesis (RCM) and subsequently polymerized by entropy-driven ring-opening metathesis polymerization (ED-ROMP). Monomer conversion reached ∼80% in equilibrium and the produced poly(ester-amine-disulfide-alkene)s exhibited apparent molar masses (Mappw) of up to 80 kDa and dispersities (Đ) of ∼2. The polymers can be further functionalized with acid anhydrides and degraded by reductive cleavage of the main-chain disulfide.
Observed recent and expected future increases in the frequency and intensity of climatic extremes in central Europe may pose critical challenges for domestic tree species. Continuous dendrometer recordings provide a valuable source of information on tree stem radius variations, offering the possibility to study a tree's response to environmental influences at high temporal resolution. In this study, we analyze stem radius variations (SRV) of three domestic tree species (beech, oak, and pine) from 2012 to 2014. We use the novel statistical approach of event coincidence analysis (ECA) to investigate the simultaneous occurrence of extreme daily weather conditions and extreme SRVs, where extremes are defined with respect to the common values at a given phase of the annual growth period. Besides defining extreme events based on individual meteorological variables, we additionally introduce conditional and joint ECA as new multivariate extensions of the original methodology and apply them to test 105 different combinations of variables regarding their impact on SRV extremes. Our results reveal a strong susceptibility of all three species to extremes of several meteorological variables. Yet, the inter-species differences in their response to meteorological extremes are comparatively low. The obtained results provide a thorough extension of previous correlation-based studies by focusing on the timings of climatic extremes. We suggest that the employed methodological approach should be further promoted in forest research for investigating tree responses to changing environmental conditions.
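The core of event coincidence analysis is counting how often events in one series are accompanied by events in another within a tolerance window. A simplified sketch of this counting step, with an assumed windowing convention and invented toy event series (not the study's data):

```python
import numpy as np

def eca_rate(events_a, events_b, delta_t=1):
    """Precursor coincidence rate: the fraction of events in binary series A
    that are preceded (or accompanied) by an event in series B within delta_t
    time steps. A simplified sketch of ECA, not the full methodology."""
    times_a = np.flatnonzero(events_a)
    times_b = np.flatnonzero(events_b)
    if times_a.size == 0:
        return 0.0
    hits = sum(bool(np.any((times_b >= t - delta_t) & (times_b <= t)))
               for t in times_a)
    return hits / times_a.size

# Toy daily series: extreme stem-radius variations vs. extreme heat days
srv_extreme  = np.array([0, 0, 1, 0, 0, 0, 1, 0, 1, 0])
heat_extreme = np.array([0, 1, 0, 0, 0, 1, 0, 0, 1, 0])
print(eca_rate(srv_extreme, heat_extreme, delta_t=1))   # → 1.0
```

The conditional and joint extensions mentioned in the abstract amount to requiring events in more than one meteorological series within the window; the same counting logic applies.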
The increase in atmospheric methane concentration, which is determined by an imbalance between its sources and sinks, has led to investigations of the methane cycle in various environments. Aquatic environments are of exceptional interest due to their active involvement in methane cycling worldwide, in particular in areas sensitive to climate change. Furthermore, being connected with each other, aquatic environments form networks that can span vast areas involving marine, freshwater and terrestrial ecosystems. Thus, aquatic systems have a high potential to translate local or regional environmental and, subsequently, ecosystem changes to a larger scale. Many studies neglect this connectivity and focus on individual aquatic or terrestrial ecosystems.
The current study focuses on the environmental controls of the distribution and aerobic oxidation of methane, using the example of two aquatic ecosystems: Arctic freshwater bodies and the Elbe estuary, which represent interfaces between freshwater and terrestrial environments and between freshwater and marine environments, respectively.
Arctic water bodies are significant atmospheric sources of methane. At the same time, the methane cycle in Arctic water bodies is strongly affected by the surrounding permafrost environment, which is characterized by high amounts of organic carbon. The results of this thesis indicate that methane concentrations vary substantially among Arctic lakes and streams, being regulated by local landscape features (e.g. floodplain area) and the morphology of the water bodies (lakes, streams and river). The highest methane concentrations were detected in the lake outlets and in a floodplain lake complex. In contrast, the methane concentrations measured at different sites of the Lena River did not vary substantially. The lake complexes, in comparison to the Lena River, thus appear as more individual and heterogeneous systems with a pronounced imprint of the surrounding soil environment. Furthermore, being connected with each other, Arctic aquatic environments have a large potential to transport methane from methane-rich water bodies such as streams and floodplain lakes to aquatic environments relatively poor in methane such as the Lena River.
Estuaries represent hot spots of oceanic methane emissions. They are also intermediate zones between methane-rich river water and methane-depleted marine water. As this thesis substantiates using the example of the Elbe estuary, however, the methane distribution in estuaries cannot be entirely described by the conservative mixing model, i.e. a gradual decrease from the freshwater end-member to the marine end-member. In addition to the methane-rich water from the Elbe River mouth, substantial methane input occurs from tidal flats, areas of significant interaction between aquatic and terrestrial environments. Thus, this study demonstrates the complex interactions and their consequences for the methane distribution within estuaries. It also reveals how important it is to investigate estuaries at larger spatial scales.
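The conservative mixing model predicts concentrations as a linear function of salinity between the two end-members; an observed value above the mixing line then points to an additional source such as tidal-flat input. A minimal sketch with invented end-member values (not measured Elbe data):

```python
def conservative_mixing(salinity, s_river, c_river, s_marine, c_marine):
    """Methane concentration expected from purely conservative
    (linear-in-salinity) mixing of the freshwater and marine end-members."""
    frac_marine = (salinity - s_river) / (s_marine - s_river)
    return c_river + frac_marine * (c_marine - c_river)

# Invented end-members: river water at salinity 0 with 100 nM CH4,
# marine water at salinity 30 with 5 nM CH4.
expected = conservative_mixing(salinity=15, s_river=0.0, c_river=100.0,
                               s_marine=30.0, c_marine=5.0)
observed = 80.0   # hypothetical mid-estuary measurement
excess = observed - expected
print(expected, excess)   # → 52.5 27.5
```

In this toy case the positive excess over the mixing line is the signature of a non-conservative methane source within the estuary.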
Methane oxidation (MOX) rates are commonly correlated with methane concentrations. This was shown in previous research and is substantiated by the present thesis. In detail, the highest MOX rates in the Arctic water bodies were detected in methane-rich streams and in the floodplain area, while in the Elbe estuary the highest MOX rates were observed at the coastal stations. However, in these bordering environments, MOX rates are not regulated by methane concentrations alone. The MOX rates in the Arctic lakes were shown to depend also on the abundance and community composition of methane-oxidising bacteria (MOB), which in turn are controlled by local landscape features (regardless of the methane concentrations) and by the transport of MOB between neighbouring environments. In the Elbe estuary, MOX rates are, in addition to methane concentrations, largely affected by salinity, which is in turn regulated by the mixing of fresh and marine waters. The magnitude of the salinity impact on MOX rates thereby depends on the MOB community composition and on the rate of the salinity change.
This study extends our knowledge of the environmental controls of methane distribution and aerobic methane oxidation in aquatic environments. It also illustrates how important it is to investigate connected ecosystem complexes rather than individual ecosystems to better understand the functioning of whole biomes.
It is the intention of this study to contribute to further rethinking and innovation in the Microcredit business, which stands at a turning point: after around 40 years of practice it is in danger of failing as a tool for economic development and becoming a doubtful finance product with a random scope instead. So far, a positive impact of Microfinance on the improvement of the lives of the poor could not be confirmed. Over-indebtedness of borrowers due to the predominance of consumption Microcredits has become a widespread problem. Furthermore, a rising number of abusive and commercially excessive practices have been reported.
In fact, the Microfinance sector appears to suffer from a major underlying deficit: there is no coherent and transparent understanding of its meaning and objectives, so Microfinance providers worldwide follow their own approaches to Microfinance, which tend to differ considerably from each other.
In this sense the study aims at consolidating the multi-faceted and often confusingly different Microcredit profiles that exist today. Subsequently, the Microfinance spectrum will be narrowed to one clear-cut objective: away from the mere monetary business transactions to poor people that it has gradually been reduced to, and back towards a tool for economic development as originally envisaged by its pioneers.
Hence, the fundamental research question of this study is whether, and under which conditions, Microfinance may attain a positive economic impact leading to an improvement in the lives of the poor.
The study is structured in five parts: the three main parts (II.–IV.) are framed by an introduction (I.) and a conclusion (V.). In part II., the Microfinance sector is analysed critically, aiming to identify the persisting challenges as well as their root causes. In the third part, a change to the macroeconomic perspective is undertaken in order to learn about the potential and requirements of small-scale finance to enhance economic development, particularly within the economic context of less developed countries. In part IV., consolidating the insights gained, the elements of a new concept of Microfinance with the objective of achieving economic development for its borrowers are elaborated.
Microfinance is a rather sensitive business whose great fundamental idea is easily corruptible and whose recipients are predestined victims of abuse due to their limited knowledge of finance. It therefore needs to be practiced responsibly, and according to clear-cut definitions of its meaning and objectives to which all institutions active in the sector should be committed. This is especially relevant as the demand for Microfinance services is expected to rise further in the coming years. For example, the recent refugee migration movement towards Europe entails a vast potential for Microfinance to enable these people to make a new start in economic life. This goes to show that Microfinance may no longer be mainly associated with a less developed economic context, but that it will gain importance as a financial instrument in developed economies, too.
We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data, which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005–0.1 or +/- 50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols.
On the basis of a few exemplary simulations with synthetic optical data, we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used to test how well microphysical parameters can be derived if the real part of the complex refractive index is known to within 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint on the real part is not applied.
The optical data used in our study cover a range of Angstrom exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g. cases in which the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms with respect to their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics.
We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7–10 μm, where the Potsdam algorithm is limited to the lower value. We assumed optical-data errors of 15% in the simulation studies. We target 50% uncertainty as a reasonable threshold for our data products, though we will attempt to obtain data products with less uncertainty in future work.
Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Earth's radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used properties of this kind, linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). While this model offers a reasonable treatment for spherically approximated particles, it does not provide a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is classified as a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is computationally so costly that it would limit a retrieval analysis to an impractical degree. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations, which are additionally time-consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software (SphInX: Spheroidal Inversion eXperiments) especially developed for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals are performed, simulating various atmospheric scenarios, in order to test the efficiency of different regularization methods. The gap in the contemporary literature in providing full sets of uncertainties for a wide variety of numerical instances is of major concern here. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior regarding accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases, giving further insight for future algorithm improvements.
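The stabilizing effect of regularization on an ill-posed inversion can be illustrated with the simplest (Tikhonov) case. The actual retrieval uses hybrid regularization and a far more complex spheroidal forward model, so the sketch below, with an invented ill-conditioned forward matrix, is only indicative:

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Tikhonov-regularized least squares:
    argmin_x ||A x - y||^2 + lam * ||x||^2, solved via normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Invented ill-conditioned forward model (nearly collinear Vandermonde columns)
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 20), 8)
x_true = rng.normal(size=8)
y = A @ x_true + rng.normal(scale=1e-3, size=20)   # noisy synthetic data

x_reg = tikhonov_solve(A, y, lam=1e-6)
print("reconstruction error:", np.linalg.norm(x_reg - x_true))
```

The damping term `lam * np.eye(n)` suppresses the highly oscillatory solution components the abstract describes; choosing `lam` (or combining penalties, as in hybrid regularization) trades fidelity to the data against stability of the retrieved distribution.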