The Great Hungarian Plain was a crossroads of cultural transformations that have shaped European prehistory. Here we analyse a 5,000-year transect of human genomes, sampled from petrous bones giving consistently excellent endogenous DNA yields, from 13 Hungarian Neolithic, Copper, Bronze and Iron Age burials, including two sequenced to high (~22×) and seven to ~1× coverage, to investigate the impact of these transformations on Europe's genetic landscape. These data suggest genomic shifts with the advent of the Neolithic, Bronze and Iron Ages, with interleaved periods of genome stability. The earliest Neolithic context genome shows a European hunter-gatherer genetic signature and a restricted ancestral population size, suggesting direct contact between cultures after the arrival of the first farmers into Europe. The latest, Iron Age, sample reveals an eastern genomic influence concordant with introduced Steppe burial rites. We observe a transition towards lighter pigmentation and, surprisingly, no Neolithic presence of lactase persistence.
Background: Development of eukaryotic organisms is controlled by transcription factors that trigger specific and global changes in gene expression programs. In plants, MADS-domain transcription factors act as master regulators of developmental switches and organ specification. However, the mechanisms by which these factors dynamically regulate the expression of their target genes at different developmental stages are still poorly understood.
Results: We characterized the relationship of chromatin accessibility, gene expression, and DNA binding of two MADS-domain proteins at different stages of Arabidopsis flower development. Dynamic changes in APETALA1 and SEPALLATA3 DNA binding correlated with changes in gene expression, and many of the target genes could be associated with the developmental stage in which they are transcriptionally controlled. We also observed dynamic changes in chromatin accessibility during flower development. Remarkably, DNA binding of APETALA1 and SEPALLATA3 is largely independent of the accessibility status of their binding regions and can precede increases in DNA accessibility. These results suggest that APETALA1 and SEPALLATA3 may modulate chromatin accessibility, thereby facilitating access of other transcriptional regulators to their target genes.
Conclusions: Our findings indicate that different homeotic factors regulate partly overlapping, yet also distinctive sets of target genes in a partly stage-specific fashion. By combining the information from DNA-binding and gene expression data, we are able to propose models of stage-specific regulatory interactions, thereby addressing dynamics of regulatory networks throughout flower development. Furthermore, MADS-domain TFs may regulate gene expression by alternative strategies, one of which is modulation of chromatin accessibility.
DNA origami nanostructures allow for the arrangement of different functionalities such as proteins, specific DNA structures, nanoparticles, and various chemical modifications with unprecedented precision. The arranged functional entities can be visualized by atomic force microscopy (AFM) which enables the study of molecular processes at a single-molecular level. Examples comprise the investigation of chemical reactions, electron-induced bond breaking, enzymatic binding and cleavage events, and conformational transitions in DNA. In this paper, we provide an overview of the advances achieved in the field of single-molecule investigations by applying atomic force microscopy to functionalized DNA origami substrates.
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
This paper reports a problematic case of unequivocally evidencing participant orientation to the projective force of some turn-initial demonstrative wh-clefts (DCs) within the framework of Conversation Analysis (CA) and Interactional Linguistics (IL). Conducting rhythmic analyses appears helpful in this regard, in that they disclose rhythmic regularities which suggest a speaker's orientation towards a projected turn continuation. In this particular case, rhythmic analyses can therefore be shown to meaningfully complement sequential analyses and analyses of turn-design, so as to gather additional evidence for participant orientations. In conclusion, I will point to possibly more extensive relations between rhythmicity and projection and proffer a tentative outlook for the usability of rhythmic analyses as an analytic tool in CA and IL.
Background
The forelimb-specific gene tbx5 is highly conserved and essential for the development of forelimbs in zebrafish, mice, and humans. Amongst birds, a single order, Dinornithiformes, comprising the extinct wingless moa of New Zealand, is unique in having no skeletal evidence of forelimb-like structures.
Results
To determine the sequence of tbx5 in moa, we used a range of PCR-based techniques on ancient DNA to retrieve all nine tbx5 exons and splice sites from the giant moa, Dinornis. Moa Tbx5 is identical to chicken Tbx5 in being able to activate the downstream promoters of fgf10 and ANF. In addition, we show that misexpression of moa tbx5 in the hindlimb of chicken embryos results in the formation of forelimb features, suggesting that Tbx5 was fully functional in wingless moa. An alternatively spliced exon 1 of tbx5 that is expressed specifically in the forelimb region was shown to be almost identical between moa and ostrich, suggesting that, as well as being fully functional, tbx5 is likely to have been expressed normally in moa since their divergence from flighted ancestors, approximately 60 Mya.
Conclusions
These results suggest that, as in mice, moa tbx5 is necessary for the induction of forelimbs but is not sufficient for their outgrowth. Moa Tbx5 may have played an important role in the development of the moa's remnant forelimb girdle, and may be required for the formation of this structure. Our results further show that genetic changes affecting genes other than tbx5 must be responsible for the complete loss of forelimbs in moa.
Biosensors for the detection of benzaldehyde and γ-aminobutyric acid (GABA) are reported, using the aldehyde oxidoreductase PaoABC from Escherichia coli immobilized in a polymer containing bound low-potential osmium redox complexes. The electrically connected enzyme electrooxidizes benzaldehyde already at potentials below −0.15 V (vs. Ag|AgCl, 1 M KCl). The pH dependence of benzaldehyde oxidation is strongly influenced by the ionic strength. The effect is similar with the soluble osmium redox complex and therefore indicates a clear electrostatic effect on the bioelectrocatalytic efficiency of PaoABC in the osmium-containing redox polymer. At low ionic strength the pH optimum is high and can be shifted to low pH values at high ionic strength, which allows biosensing at both high and low pH. A "reagentless" biosensor has been constructed with the enzyme wired onto a screen-printed electrode in a flow-cell device. The response time to the addition of benzaldehyde is 30 s, the measuring range is 10–150 µM, and the detection limit is 5 µM (signal-to-noise ratio 3:1). The relative standard deviation in a series (n = 13) for 200 µM benzaldehyde is 1.9%. The biosensor also responds to succinic semialdehyde. Based on this response and the ability to work at high pH, a biosensor for GABA is proposed by co-immobilizing GABA aminotransferase (GABA-T) and PaoABC in the osmium-containing redox polymer.
We present an electrochemical MIP sensor for tamoxifen (TAM), a nonsteroidal anti-estrogen, which is based on the electropolymerisation of an o-phenylenediamine–resorcinol mixture directly on the electrode surface in the presence of the template molecule. Up to now, only bulk MIPs for TAM have been described in the literature, which are applied for separation in chromatography columns. Electropolymerisation of the monomers in the presence of TAM generated a film which completely suppressed the reduction of ferricyanide. Removal of the template gave a markedly increased ferricyanide signal, which was again suppressed after rebinding, as expected for filling of the cavities by target binding. The decrease of the ferricyanide peak of the MIP electrode depended linearly on the TAM concentration between 1 and 100 nM. The TAM-imprinted electrode showed a 2.3-times higher recognition of the template molecule itself as compared to its metabolite 4-hydroxytamoxifen, and no cross-reactivity with the anticancer drug doxorubicin was found. Measurements at +1.1 V caused fouling of the electrode surface, whilst pretreatment of TAM with peroxide in the presence of HRP generated an oxidation product that was reducible at 0 mV, thus circumventing the polymer formation and electrochemical interferences.
The structures and synthesis of polyzwitterions ("polybetaines") are reviewed, emphasizing the literature of the past decade. Particular attention is given to the general challenges faced, and to successful strategies to obtain polymers with a true balance of permanent cationic and anionic groups, thus resulting in an overall zero charge. Also, the progress due to applying new methodologies from general polymer synthesis, such as controlled polymerization methods or the use of "click" chemical reactions is presented. Furthermore, the emerging topic of responsive ("smart") polyzwitterions is addressed. The considerations and critical discussions are illustrated by typical examples.
Dimensional psychiatry
(2014)
A dimensional approach in psychiatry aims to identify core mechanisms of mental disorders across nosological boundaries.
We compared anticipation of reward between major psychiatric disorders, and investigated whether reward anticipation is impaired in several mental disorders and whether there is a common psychopathological correlate (negative mood) of such an impairment.
We used functional magnetic resonance imaging (fMRI) and a monetary incentive delay (MID) task to study the functional correlates of reward anticipation across major psychiatric disorders in 184 subjects, with the diagnoses of alcohol dependence (n = 26), schizophrenia (n = 44), major depressive disorder (MDD, n = 24), bipolar disorder (acute manic episode, n = 13), attention deficit/hyperactivity disorder (ADHD, n = 23), and healthy controls (n = 54). Subjects' individual Beck Depression Inventory and State-Trait Anxiety Inventory scores were correlated with clusters showing significant activation during reward anticipation.
During reward anticipation, we observed significant group differences in ventral striatal (VS) activation: patients with schizophrenia, alcohol dependence, and major depression showed significantly less ventral striatal activation compared to healthy controls. Depressive symptoms correlated with dysfunction in reward anticipation regardless of diagnostic entity. There was no significant correlation between anxiety symptoms and VS functional activation.
Our findings demonstrate a neurobiological dysfunction related to reward prediction that transcended disorder categories and was related to measures of depressed mood. The findings underline the potential of a dimensional approach in psychiatry and strengthen the hypothesis that neurobiological research in psychiatric disorders can be targeted at core mechanisms that are likely to be implicated in a range of clinical entities.
Background
Nucleic acid amplification is the most sensitive and specific method to detect Plasmodium falciparum. However, the polymerase chain reaction remains laboratory-based and has to be conducted by trained personnel. Furthermore, the power required for the thermocycling process and the costly equipment necessary for the read-out are difficult to provide in resource-limited settings. This study aims to develop and evaluate a combination of isothermal nucleic acid amplification and simple lateral flow dipstick detection of the malaria parasite for point-of-care testing.
Methods
A specific fragment of the 18S rRNA gene of P. falciparum was amplified in 10 min at a constant 38°C using the isothermal recombinase polymerase amplification (RPA) method. With a unique probe system added to the reaction solution, the amplification product can be visualized on a simple lateral flow strip without further labelling. The combination of these methods was tested for sensitivity and specificity with various Plasmodium and other protozoa/bacterial strains, as well as with human DNA. Additional investigations were conducted to analyse the temperature optimum, reaction speed and robustness of this assay.
Results
The lateral flow RPA (LF-RPA) assay exhibited high sensitivity and specificity. Experiments confirmed a detection limit as low as 100 fg of genomic P. falciparum DNA, corresponding to approximately four parasites per reaction. All investigated P. falciparum strains (n = 77) tested positive, while all 11 non-Plasmodium samples showed a negative test result. The enzymatic reaction can be conducted under a broad range of conditions, from 30–45°C, and in the presence of high concentrations of known PCR inhibitors. A time to result of 15 min from the start of the reaction to read-out was determined.
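The stated correspondence between 100 fg of genomic DNA and roughly four parasites can be checked with a quick back-of-envelope calculation (our sketch, using the textbook approximations of a ~23.3 Mb haploid genome and ~650 g/mol per base pair, not figures from the paper):

```python
# Back-of-envelope: mass of one haploid P. falciparum genome.
AVOGADRO = 6.022e23      # base pairs per mole
GENOME_BP = 23.3e6       # approximate haploid genome size in base pairs
BP_MASS_G_PER_MOL = 650  # average mass of one base pair

grams_per_genome = GENOME_BP * BP_MASS_G_PER_MOL / AVOGADRO
femtograms_per_genome = grams_per_genome * 1e15
parasites_per_100_fg = 100 / femtograms_per_genome

print(round(femtograms_per_genome, 1))  # ~25.1 fg per genome
print(round(parasites_per_100_fg))      # ~4 parasites
```

One genome weighs about 25 fg, so 100 fg of template DNA indeed corresponds to roughly four parasite genomes per reaction.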
Conclusions
Combining isothermal RPA with lateral flow detection is an approach to improve molecular diagnostics for P. falciparum in resource-limited settings. The system requires little or no instrumentation for the nucleic acid amplification reaction, and the read-out is possible with the naked eye. Showing the same sensitivity and specificity as comparable diagnostic methods while increasing reaction speed and dramatically reducing assay requirements, the method has the potential to become a true point-of-care test for the malaria parasite.
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain quantifying flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. Therefore, it is recommended to routinely consider land use scenarios and economic developments when assessing future flood risks. Further, an adapted and sustainable risk management is necessary to counter rising flood losses, in which non-structural measures are becoming more and more important. The case study demonstrates that adaptation by non-structural measures such as stricter land use regulations or enhancement of private precaution is capable of reducing flood risk by around 30 %. Ignoring flood risks, in contrast, always leads to further increasing losses – by 17 % under our assumptions. These findings underline that private precaution and land use regulation could be taken into account as low-cost adaptation strategies to global climate change in many flood-prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method to inverse problems arising in atmospheric remote sensing, particularly the retrieval of spheroidal particle distributions. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5–6 MPa, which is close to the stress drop of ~7 MPa that was independently determined from force balance in this region (Luttrell et al., J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
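The core of such a mechanically constrained inversion can be sketched as a maximum a posteriori estimate for a linearized problem: a least-squares fit to geodetic data combined with a Gaussian prior on the stress drop induced by the slip. The kernels and numbers below are illustrative toys, not the authors' implementation:

```python
import numpy as np

# Toy setup: data d = G @ s + noise, stress drop = K @ s (slip-to-stress kernel).
rng = np.random.default_rng(0)
n_data, n_patches = 40, 10
G = rng.normal(size=(n_data, n_patches))     # Green's functions (made up)
K = rng.normal(size=(n_patches, n_patches))  # stress kernel (made up)
s_true = rng.uniform(0.0, 2.0, n_patches)
d = G @ s_true + 0.01 * rng.normal(size=n_data)

# Gaussian prior on stress drop with mean mu and std sd_prior; Gaussian data
# errors with std sd_data. The MAP slip minimizes
#   ||G s - d||^2 / sd_data^2 + ||K s - mu||^2 / sd_prior^2,
# a quadratic form solved by one linear system.
mu = K @ s_true                 # prior mean (set to the truth for this demo)
sd_data, sd_prior = 0.01, 1.0
A = G.T @ G / sd_data**2 + K.T @ K / sd_prior**2
b = G.T @ d / sd_data**2 + K.T @ mu / sd_prior**2
s_map = np.linalg.solve(A, b)

print(np.linalg.norm(s_map - s_true))  # recovery error is small
```

The stress-drop prior plays the role of the smoothing condition in conventional inversions, but with a mechanically interpretable scale (sd_prior) instead of a subjective smoothing weight.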
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
This reference paper describes the sampling and contents of the IZA Evaluation Dataset Survey and outlines its vast potential for research in labor economics. The data have been part of a unique IZA project to connect administrative data from the German Federal Employment Agency with innovative survey data to study individuals' transitions out of unemployment into work. This study makes the survey available to the research community as a Scientific Use File by explaining the development, structure, and access to the data. It also summarizes previous findings obtained with the survey data.
Fluid intelligence (fluid IQ), defined as the capacity for rapid problem solving and behavioral adaptation, is known to be modulated by learning and experience. Both stressful life events (SLEs) and neural correlates of learning [specifically, a key mediator of adaptive learning in the brain, namely the ventral striatal representation of prediction errors (PE)] have been shown to be associated with individual differences in fluid IQ. Here, we examine the interaction between adaptive learning signals (using a well-characterized probabilistic reversal learning task in combination with fMRI) and SLEs on fluid IQ measures. We find that the correlation between ventral striatal BOLD PE and fluid IQ, which we have previously reported, is quantitatively modulated by the amount of reported SLEs. Thus, after experiencing adversity, basic neuronal learning signatures appear to align more closely with a general measure of flexible learning (fluid IQ), a finding complementing studies on the effects of acute stress on learning. The results suggest that an understanding of the neurobiological correlates of trait variables like fluid IQ needs to take socioemotional influences such as chronic stress into account.
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases to families was possible which extended functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding for protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Background
In the past, plyometric training (PT) has been predominantly performed on stable surfaces. The purpose of this pilot study was to examine effects of a 7-week lower body PT on stable vs. unstable surfaces. This type of exercise condition may be denoted as metastable equilibrium.
Methods
Thirty-three physically active male sport science students (age: 24.1 ± 3.8 years) were randomly assigned to a PT group exercising on stable surfaces (STAB, n = 13) or a PT group exercising on unstable surfaces (INST, n = 20). Both groups trained countermovement jumps and drop jumps and practiced a hurdle jump course. In addition, high-bar squats were performed. Physical fitness tests on stable surfaces (hexagonal obstacle test, countermovement jump, hurdle drop jump, left-right hop, dynamic and static balance tests, and leg extension strength) were used to examine the training effects.
Results
Significant main effects of time (ANOVA) were found for the countermovement jump, hurdle drop jump, hexagonal test, dynamic balance, and leg extension strength. A significant interaction of time and training mode was detected for the countermovement jump in favor of the INST group. No significant improvements were evident for either group in the left-right hop and in the static balance test.
Conclusions
These results show that lower body PT on unstable surfaces is a safe and efficient way to improve physical performance on stable surfaces.
The B fields in OB stars (BOB) survey is an ESO large programme collecting spectropolarimetric observations for a large number of early-type stars in order to study the occurrence rate, properties, and ultimately the origin of magnetic fields in massive stars. As of July 2014, a total of 98 objects were observed over 20 nights with FORS2 and HARPSpol. Our preliminary results indicate that the fraction of magnetic OB stars with an organised, detectable field is low. This conclusion, now independently reached by two different surveys, has profound implications for any theoretical model attempting to explain the field formation in these objects. We discuss in this contribution some important issues addressed by our observations (e.g., the lower bound of the field strength) and the discovery of some remarkable objects.
Nested application conditions generalise the well-known negative application conditions and are important for several application domains. In this paper, we present Local Church-Rosser, Parallelism, Concurrency and Amalgamation Theorems for rules with nested application conditions in the framework of M-adhesive categories, where M-adhesive categories are slightly more general than weak adhesive high-level replacement categories. Most of the proofs are based on the corresponding statements for rules without application conditions and two shift lemmas stating that nested application conditions can be shifted over morphisms and rules.
Recently, interest has grown in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction, an area generally referred to as learning analytics. Higher education leaders are recognising the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence still need to be generated within the fast-evolving field of learning analytics. In this paper, we introduce a holistic learning analytics framework. Based on this framework, student, learning, and curriculum profiles have been developed which include the relevant static and dynamic parameters for facilitating the learning analytics framework. Based on the theoretical model, an empirical study was conducted to validate the parameters included in the student profile. The paper concludes with practical implications and issues for future research.
Portal Wissen = Time
(2014)
“What then is time?”, Augustine of Hippo sighs melancholically in Book XI of “Confessions” and continues, “If no one asks me, I know; if I want to explain it to a questioner, I don’t know.” Even today, 1584 years after Augustine, time still appears mysterious. Treatises about the essence of time fill whole libraries – and this magazine.
However, questions of essence are alien to the modern sciences. Time is – at least in physics – unproblematic: "Time is defined so that motion looks simple." This brief and prosaic phrase waves goodbye both to Augustine's riddle and to the Newtonian concept of absolute time, whose mathematical flow can only ever be approximately recorded with earthly instruments anyway.
In our everyday language, and even in science, we still speak of the flow of time, but time has long ceased to be a natural given. It is rather a conventional order parameter for change and movement: processes are arranged by using one class of processes as a counting system in order to compare other processes with it and to organize them with the help of the temporal categories "before", "during", and "after".
During Galileo's time one's own pulse served as the time standard for the flight of cannon balls. More sophisticated examination methods later made this seem impractical: the distance-time diagrams of free-flying cannon balls turned out to be rather imprecise, difficult to replicate, and in no way "simple". Nowadays we use caesium atoms. A process is said to take one second when a caesium-133 atom completes 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of its ground state. A meter is the length of the path travelled by light in a vacuum in exactly 1/299,792,458 of a second. Fortunately, these data are hard-coded into the Global Positioning System (GPS), so users do not have to re-enter them each time they want to know where they are. In the future, however, they might have to download an app, because the time standard may by then have been replaced by even more sophisticated transitions in ytterbium.
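Taken literally, the two definitions invite a small back-of-envelope calculation (the ~300 m figure is our illustration of why GPS depends on them, not a number from the text):

```python
C = 299_792_458                        # speed of light in m/s (exact by definition)
CS_PERIODS_PER_SECOND = 9_192_631_770  # caesium-133 oscillations per second

# Light travels about 3.3 centimetres during a single caesium period ...
print(round(C / CS_PERIODS_PER_SECOND, 4))  # 0.0326 (metres)
# ... so a clock error of just one microsecond already shifts a
# GPS position fix by roughly 300 metres.
print(round(C * 1e-6, 1))                   # 299.8 (metres)
```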
The conventional character of the time concept should not tempt us to believe that everything is somehow relative and, as a result, arbitrary. The relation of one's own pulse to an atomic clock is absolute and as real as the relation of an hourglass to the path of the sun. The exact sciences are relational sciences. They are not about the thing-in-itself, as Newton and Kant dreamt, but rather about relations, as Leibniz and, later, Mach pointed out.
It is not surprising that the physical time standard turned out to be rather impractical for other scientists. The psychology of time perception tells us – and you will all agree – that perceived age is quite different from physical age. The older we get, the shorter the years seem. If we simply assume that perceived duration is inversely related to physical age, and that a 20-year-old also perceives a physical year as a psychological one, we come to the surprising discovery that at 90 years we are 90 years old. With an assumed life expectancy of 90 years, 67% (or 82%) of your felt lifetime is behind you at the age of 20 (or 40) physical years.
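The 67% and 82% figures follow from a logarithmic model of felt time: if each year feels inversely proportional to one's age, the felt time accumulated by a given age grows like its logarithm. A small sketch, assuming (our choice, not stated in the text) that the clock starts at age one:

```python
import math

def felt_fraction(age, life_expectancy=90, start=1):
    # Felt duration of a year at age t is proportional to 1/t, so felt time
    # from `start` to `age` is the integral of 1/t dt = ln(age / start).
    return math.log(age / start) / math.log(life_expectancy / start)

print(round(felt_fraction(20) * 100))  # 67 – at 20 physical years
print(round(felt_fraction(40) * 100))  # 82 – at 40 physical years
```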
Before we start to wallow in melancholy in the face of the "relativity of time", let me again quote Augustine: "But at any rate this much I dare affirm I know: that if nothing passed there would be no past time; if nothing were approaching, there would be no future time; if nothing were, there would be no present time." Well – or, as Bob Dylan sings, "The times they are a-changin'".
I wish you an exciting time reading this issue.
Prof. Martin Wilkens
Professor of Quantum Optics
Portal Wissen = Believe
(2014)
People want to know what is real. Children enjoy listening to a story, but when my children were about four years old they started asking whether the story really happened or was just invented. Likewise, only on a higher level, our academic curiosity is fuelled by our interest in knowing what is real. When we analyze poetic texts or dreams, we are trying to distinguish between the facts (e.g. neurological ones or linguistic structures) and merely assumed influences. Ideally we can present results that can be logically understood by others and that we can repeat empirically. But in most cases this is not possible. We cannot read every book and cannot look through every microscope, not even within our own discipline. In the world we live in we depend on trusting the information of others, like how to get to the train station or what the weather is like in Ulaanbaatar. This is why we are used to believing others, our friends or the news anchors. This is not childish behavior but a necessity. Of course, it is risky, because they could all be lying to us, as in a Truman Show situation. The only way we are able to know that we are in reality is to transcend our self-consciousness and accept two propositions: first, that we are not only objects but also subjects in the consciousness of others, and second, that our dialogic relations are in turn observed by a third party that is not part of this intersubjective world.
For religious people this is “belief” - belief as the assumption that all human relations only become real, serious and beyond any doubt if they are known to be under the eyes of God. Only before Him is something in itself and not only “for me” or “among us”. That is why biblical language distinguishes between three forms of belief: the relationship with the world of things (“to believe that”), the relationship to the world of subjects (“to believe somebody”) and the assumption of a subjective supernatural reality (“to believe in” or “faith”). From an academic point of view belief is a holistic hypothesis. Belief is not the opposite of knowledge but it is the attempt to save reality from doubt by comprehending the fragile empirical world as an expression of a stable transcendent world.
When I talk to students they often ask not only about what I know but what I believe. As a professor for Religious Studies and a believing Catholic I am caught in the middle. On the one hand, it is my duty as a professor to doubt everything, i.e. to attribute each religious text to its historical context and sociological functions. On the other hand, I, as a Christian, consider certain religious documents, in my case the Bible, an interpretable but nevertheless irreversible, revealed text about the origin of reality. On weekdays the New Testament is a collection of ancient writings among many others, on Sundays it is the revelation. You can make a clear distinction between these two perspectives but it is difficult to decide whether doubt or belief is more real.
This issue of “Portal Wissen” explores this dual relationship of belief. What is the attitude of science towards belief – is it a religious one? Where does science bring things to light that we can hardly believe or that make us believe (again)? What happens if research clears up erroneous assumptions or myths? Is science able to investigate things that are convincing but inexplicable? How can it maintain its credibility and develop even so?
These questions appear again and again in the contributions of this “Portal Wissen”. They form a manifold, exciting and surprising picture of the research projects and academics at the University of Potsdam. Believe me, it will be an enjoyable read.
Prof. Johann Hafner
Professor of Religious Studies with Focus on Christianity
Dean of the Faculty of Arts
We report on the development of an on-chip RPA (recombinase polymerase amplification) with simultaneous multiplex isothermal amplification and detection on a solid surface. The isothermal RPA was applied to amplify specific target sequences from the pathogens Neisseria gonorrhoeae, Salmonella enterica and methicillin-resistant Staphylococcus aureus (MRSA) using genomic DNA. Additionally, a positive plasmid control was established as an internal control. The four targets were amplified simultaneously in a quadruplex reaction. The amplicon is labeled during on-chip RPA by reverse oligonucleotide primers coupled to a fluorophore. Both amplification and spatially resolved signal generation take place on immobilized forward primers bound to epoxy-silanized glass surfaces in a pump-driven hybridization chamber. The combination of microarray technology and sensitive isothermal nucleic acid amplification at 38 °C allows for a multiparameter analysis on a rather small area. The on-chip RPA was characterized in terms of reaction time, sensitivity and inhibitory conditions. A successful enzymatic reaction is completed in <20 min and results in detection limits of 10 colony-forming units for methicillin-resistant Staphylococcus aureus and Salmonella enterica and 100 colony-forming units for Neisseria gonorrhoeae. The results show this method to be useful with respect to point-of-care testing and to enable simplified and miniaturized nucleic acid-based diagnostics.
The Babylonian Talmud (BT) attributes the idea of committing a transgression for the sake of God to R. Nahman b. Isaac (RNBI). RNBI's statement appears in two parallel sugyot in the BT (Nazir 23a; Horayot 10a). Each sugya has four textual witnesses. By comparing these textual witnesses, this paper will attempt to reconstruct the sugya's earlier (or, what some might term, original) dialectical form, from which the two familiar versions of the text in Nazir and Horayot evolved. This article reveals the specific ways in which value-laden conceptualizations have a major impact on the Talmud's formulation as we know it today.
Ultraschall Berlin
(2014)
We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337-355) and transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space in complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e. g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
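The two-step strategy described above — cluster the snapshots into representative states, then model the transitions between clusters as a Markov process — can be illustrated with a minimal sketch. The plain k-means routine, the toy three-region "flow" and all parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def crom(snapshots, n_clusters, n_iter=50):
    """Minimal cluster-based reduced-order model (CROM) sketch:
    (1) cluster the snapshots into representative states (centroids)
        with plain k-means, partitioning state space into Voronoi cells;
    (2) estimate a row-stochastic Markov transition matrix between clusters."""
    init = rng.choice(len(snapshots), n_clusters, replace=False)
    centroids = snapshots[init].copy()
    for _ in range(n_iter):
        # assign each snapshot to its nearest centroid
        d = np.linalg.norm(snapshots[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = snapshots[labels == k].mean(axis=0)
    # count observed cluster-to-cluster transitions, then normalize rows
    P = np.zeros((n_clusters, n_clusters))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
    return centroids, labels, P

# toy snapshot sequence cycling through three well-separated regions of state space
means = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
visits = np.tile([0, 1, 2], 400)
snapshots = means[visits] + 0.3 * rng.standard_normal((1200, 2))
centroids, labels, P = crom(snapshots, n_clusters=3)
```

Analysing the resulting matrix `P` (eigenvalues, entropy, finite-time behaviour) is where the physical interpretation in the paper happens; this sketch covers only the model construction.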
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
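Per-instance algorithm selection of the general kind claspfolio 2 integrates can be sketched as follows. The 1-nearest-neighbour rule, the class name and the toy data are hypothetical and far simpler than the framework's actual feature generators and selection approaches.

```python
import numpy as np

class PortfolioSelector:
    """Toy per-instance algorithm selector in the spirit of portfolio
    solvers: given instance features and known runtimes of several
    solvers on training instances, pick the solver that performed best
    on the most similar training instance (1-NN selection)."""

    def fit(self, features, runtimes):
        self.features = np.asarray(features, float)   # instances x features
        self.runtimes = np.asarray(runtimes, float)   # instances x solvers
        return self

    def select(self, feature_vec):
        d = np.linalg.norm(self.features - np.asarray(feature_vec, float), axis=1)
        nearest = d.argmin()
        return int(self.runtimes[nearest].argmin())   # index of chosen solver

# hypothetical training data: 2 instance features, runtimes of 3 solvers
features = [[0.1, 0.9], [0.8, 0.2], [0.9, 0.1]]
runtimes = [[1.0, 5.0, 9.0], [7.0, 2.0, 8.0], [6.0, 9.0, 1.5]]
sel = PortfolioSelector().fit(features, runtimes)
print(sel.select([0.15, 0.85]))  # -> 0 (solver 0 was best on the nearest instance)
```

Real systems add regression or pairwise-classification selectors, pre-solving schedules and feature costs on top of this basic idea.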
Research in rodents has shown that dietary vitamin A reduces body fat by enhancing fat mobilisation and energy utilisation; however, its effects in growing dogs remain unclear. In the present study, we evaluated the development of body weight and body composition and compared observed energy intake with predicted energy intake in forty-nine puppies from two breeds (twenty-four Labrador Retriever (LAB) and twenty-five Miniature Schnauzer (MS)). A total of four different diets with increasing vitamin A content between 5.24 and 104.80 µmol retinol (5000-100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy were fed from the age of 8 weeks up to 52 (MS) and 78 weeks (LAB). The daily energy intake was recorded throughout the experimental period. The body condition score was evaluated weekly using a seven-category system, and food allowances were adjusted to maintain optimal body condition. Body composition was assessed at the age of 26 and 52 weeks for both breeds and at the age of 78 weeks for the LAB breed only using dual-energy X-ray absorptiometry. The growth curves of the dogs followed a breed-specific pattern. However, data on energy intake showed considerable variability between the two breeds as well as when compared with predicted energy intake. In conclusion, the data show that energy intakes of puppies particularly during early growth are highly variable; however, the growth pattern and body composition of the LAB and MS breeds are not affected by the intake of vitamin A at levels up to 104.80 µmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal).
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach of tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated by a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS and accelerometer. The experimental results show that the baseline shifts of the accelerometer are automatically corrected, and high precision coseismic information of strong ground motion can be obtained in real-time. Additionally, the convergence and precision of the PPP is improved by the combined solution.
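A deliberately simplified, one-dimensional analogue of this strategy might look as follows: the filter state is augmented with a baseline-shift parameter propagated as a random walk, accelerometer readings drive the prediction, and position observations provide the update. All noise levels, the simulated motion and the loose position-level formulation are illustrative assumptions; the paper's tight PPP integration works on carrier-phase observations, not positions.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 5000
t = np.arange(n) * dt

# simulated truth: sinusoidal ground motion with a constant accelerometer baseline shift
true_acc = np.sin(2 * np.pi * 0.5 * t)
bias_true = 0.5
vel = np.cumsum(true_acc) * dt
pos = np.cumsum(vel) * dt
acc_meas = true_acc + bias_true + 0.01 * rng.standard_normal(n)   # accelerometer
gps = pos + 0.005 * rng.standard_normal(n)                        # GPS-derived positions

# Kalman filter, state = [position, velocity, baseline shift];
# the baseline shift is an unknown parameter propagated as a random walk
x = np.zeros(3)
P = np.eye(3)
F = np.array([[1.0, dt, -0.5 * dt**2], [0.0, 1.0, -dt], [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt**2, dt, 0.0])
Q = np.diag([1e-8, 1e-6, 1e-6])          # process noise (random-walk baseline)
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.005**2]])

for k in range(n):
    x = F @ x + B * acc_meas[k]          # predict using the accelerometer input
    P = F @ P @ F.T + Q
    y = gps[k] - H @ x                   # position update from GPS
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(3) - K @ H) @ P

print(x[2])  # estimated baseline shift; should settle near the simulated 0.5
```

The point of the sketch is only that the baseline shift becomes observable (and is corrected automatically) once the two sensor types are combined in one estimator.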
Object and action naming in Russian- and German-speaking monolingual and bilingual children*
(2014)
The present study investigates the influence of word category on naming performance in two populations: bilingual and monolingual children. The question is whether and, if so, to what extent monolingual and bilingual children differ with respect to noun and verb naming and whether a noun bias exists in the lexical abilities of bilingual children. Picture naming of objects and actions by Russian-German bilingual children (aged 4-7 years) was compared to age-matched monolingual children. The results clearly demonstrate a naming deficit of bilingual children in comparison to monolingual children that increases with age. Noun learning is more fragile in bilingual contexts than is verb learning. In bilingual language acquisition, nouns do not predominate over verbs as much as is seen in monolingual German and Russian children. The results are discussed with respect to semantic-conceptual aspects and language-specific features of nouns and verbs, and the impact of input on the acquisition of these word categories.
Bats are important components in tropical mammal assemblages. Unravelling the mechanisms allowing multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to indirectly examine whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4‰ in δ15N and was structured into two trophic levels, with phytophagous Pteropodidae as primary consumers (c. 3‰ enriched over plants) and different insectivorous bats as secondary consumers (c. 4‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of δ13C-δ15N bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
Regular and irregular inflection in children's production has been examined in many previous studies. Yet, little is known about the processes involved in children's recognition of inflected words. To gain insight into how children process inflected words, the current study examines regular -t and irregular -n participles of German using the cross-modal priming technique, testing 108 monolingual German-speaking children in two age groups (group I, mean age: 8;4; group II, mean age: 9;9) and a control group of … adults. Although both age groups of children showed the same full priming effect as adults for -t forms, only children of age group II showed an adult-like (partial) priming effect for -n participles. We argue that children (within the age range tested) employ the same mechanisms for regular inflection as adults but that the lexical retrieval processes required for irregular forms become more efficient as children get older.
This study investigates whether number dissimilarities on subject and object DPs facilitate the comprehension of subject- and object-extracted centre-embedded relative clauses in children with Grammatical Specific Language Impairment (G-SLI). We compared the performance of a group of English-speaking children with G-SLI (mean age: 12;11) with that of two groups of younger typically developing (TD) children, matched on grammar and receptive vocabulary, respectively. All groups were more accurate on subject-extracted relative clauses than object-extracted ones and, crucially, they all showed greater accuracy for sentences with dissimilar number features (i.e., one singular, one plural) on the head noun and the embedded DP. These findings are interpreted in the light of current psycholinguistic models of sentence comprehension in TD children and provide further insight into the linguistic nature of G-SLI.
Two experiments tested how faithfully German children aged 4;5 to 5;6 reproduce ditransitive sentences that are unmarked or marked with respect to word order and focus (Exp1) or definiteness (Exp2). Adopting an optimality theory (OT) approach, it is assumed that in the German adult grammar word order is ranked lower than focus and definiteness. Faithfulness of children's reproductions decreased as markedness of inputs increased; unmarked structures were reproduced most faithfully, and unfaithful outputs most often had an unmarked form. Consistent with the OT proposal, children were more tolerant of inputs marked for word order than for focus; in conflict with the proposal, children were less tolerant of inputs marked for word order than for definiteness. Our results suggest that the linearization of objects in German double object constructions is affected by focus and definiteness, but that prosodic principles may have an impact on the position of a focused constituent.
Masked priming research with late (non-native) bilinguals has reported facilitation effects following morphologically derived prime words (scanner - scan). However, unlike for native speakers, there are suggestions that purely orthographic prime-target overlap (scandal - scan) also produces priming in non-native visual word recognition. Our study directly compares orthographically related and derived prime-target pairs. While native readers showed morphological but not formal overlap priming, the two prime types yielded the same magnitudes of facilitation for non-natives. We argue that early word recognition processes in a non-native language are more influenced by surface-form properties than in one's native language.
In the aftermath of the severe flooding in Central Europe in August 2002, a number of changes in flood policies were launched in Germany and other European countries, aiming at improved risk management. The question arises as to whether these changes have already had an impact on the residents' ability to cope with floods, and whether flood-affected private households are now better prepared than they were in 2002. Therefore, computer-aided telephone interviews with private households in Germany that suffered from property damage due to flooding in 2005, 2006, 2010 or 2011 were performed and analysed with respect to flood awareness, precaution, preparedness and recovery. The data were compared to a similar investigation conducted after the flood in 2002.
After the flood in 2002, the level of private precautions taken increased considerably. One contributing factor is that, in general, a larger proportion of people knew that they were at risk of flooding. The best level of precaution was found before the flood events in 2006 and 2011. The main reason for this might be that residents had more experience with flooding than residents affected in 2005 or 2010. Yet, overall, flood experience and knowledge did not necessarily result in building retrofitting or flood-proofing measures, which are considered to mitigate damage most effectively. Hence, investments still need to be stimulated in order to reduce future damage more efficiently.
Early warning and emergency responses were substantially influenced by flood characteristics. In contrast to flood-affected people in 2006 or 2011, people affected by flooding in 2005 or 2010 had to deal with shorter lead times and therefore had less time to take emergency measures. Yet, the lower level of emergency measures taken also resulted from the people's lack of flood experience and insufficient knowledge of how to protect themselves. Overall, it was noticeable that these residents suffered from higher losses. Therefore, it is important to further improve early warning systems and communication channels, particularly in hilly areas with rapid-onset flooding.
Extreme weather events are likely to occur more often under climate change and the resulting effects on ecosystems could lead to a further acceleration of climate change. But not all extreme weather events lead to extreme ecosystem response. Here, we focus on hazardous ecosystem behaviour and identify coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Given the risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the Standardised Precipitation-Evapotranspiration Index (SPEI) to describe the climate condition. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. In northern Germany, on the contrary, the site is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is calculated in the Mediterranean and temperate region, whereas Scandinavian ecosystems are vulnerable under conditions without water shortages. These first model-based applications indicate the conceptual advantages of the proposed method by focusing on the identification of critical weather conditions for which we observe hazardous ecosystem behaviour in the analysed data set. Application of the method to empirical time series and to future climate would be important next steps to test the approach.
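The core of this risk assessment — comparing the expected climate condition under hazardous versus non-hazardous ecosystem behaviour — can be sketched on synthetic data. The function name and the toy series below are assumptions; the real analysis uses LPJmL net biome productivity and ERA-Interim-based SPEI values.

```python
import numpy as np

rng = np.random.default_rng(2)

def drought_vulnerability(nbp, spei):
    """Risk-assessment sketch in the spirit of the text: the hazard is a
    negative net biome productivity (NBP) over a 12-month period, and
    vulnerability to drought is indicated when the expected SPEI is
    lower (drier) under hazardous than under non-hazardous behaviour."""
    nbp = np.asarray(nbp, float)
    spei = np.asarray(spei, float)
    hazard = nbp < 0.0
    e_hazard = spei[hazard].mean()        # expected SPEI given hazard
    e_no_hazard = spei[~hazard].mean()    # expected SPEI given no hazard
    return e_hazard, e_no_hazard, bool(e_hazard < e_no_hazard)

# synthetic 30-year monthly series in which low SPEI drags NBP down
spei = rng.standard_normal(360)
nbp = 0.2 + 0.5 * spei + 0.3 * rng.standard_normal(360)
e_h, e_nh, vulnerable = drought_vulnerability(nbp, spei)
print(vulnerable)  # -> True: drier conditions coincide with the hazard here
```

A site like the German example above would instead yield `e_h > e_nh`, i.e. no drought vulnerability.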
This study investigates the spatial and temporal distributions of 14 key arboreal taxa and their driving forces during the last 22,000 calendar years before AD 1950 (kyr BP) using a taxonomically harmonized and temporally standardized fossil pollen dataset with a 500-year resolution from the eastern part of continental Asia. Logistic regression was used to estimate pollen abundance thresholds for vegetation occurrence (presence or dominance), based on modern pollen data and present ranges of 14 taxa in China. Our investigation reveals marked changes in the spatial and temporal distributions of the major arboreal taxa. The thermophilous (Castanea, Castanopsis, Cyclobalanopsis, Fagus, Pterocarya) and eurythermal (Juglans, Quercus, Tilia, Ulmus) broadleaved tree taxa were restricted to the current tropical or subtropical areas of China during the Last Glacial Maximum (LGM) and spread northward since c. 14.5 kyr BP. Betula and conifer taxa (Abies, Picea, Pinus), in contrast, retained a wider distribution during the LGM and showed no distinct expansion direction during the Late Glacial. Since the late mid-Holocene, the abundance but not the spatial extent of most trees decreased. The changes in spatial and temporal distributions of the 14 taxa reflect climate changes, in particular monsoonal moisture, and, in the late Holocene, human impact. The post-LGM expansion patterns in eastern continental China seem to differ from those reported for Europe and North America, for example the westward spread of eurythermal broadleaved taxa.
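The threshold-estimation step can be illustrated with a small self-contained sketch: fit a one-dimensional logistic regression of presence on pollen abundance and read off the abundance at which the predicted probability crosses 0.5. The calibration data, learning rate and function names below are hypothetical.

```python
import numpy as np

def fit_logistic_1d(x, y, lr=0.5, n_iter=2000):
    """Plain 1-D logistic regression by gradient descent (no external
    dependencies); x = pollen abundance, y = 1 where the taxon is
    present in the modern vegetation, 0 where it is absent."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # predicted presence probability
        w -= lr * np.mean((p - y) * x)           # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

def threshold(w, b):
    """Pollen abundance at which the predicted presence probability is 0.5."""
    return -b / w

# hypothetical calibration data: abundance (%) vs observed presence
abundance = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
present = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w, b = fit_logistic_1d(abundance, present)
print(round(threshold(w, b), 2))  # decision boundary falls between the two classes
```

Applying such a threshold to a fossil pollen record converts abundances into presence/dominance maps through time, which is the role it plays in the study.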
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation of the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Two of a kind?
(2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutes of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutes of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately - a difficult task, because knowledge about leaking is sparse. The present study, therefore, examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment after the information session and six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several aspects (e.g. reported more perceived courses of action or improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
Although politicization is a perennial research topic in public administration to investigate relationships between ministers and civil servants, the concept still lacks clarification. This article contributes to this literature by systematically identifying different conceptualizations of politicization and suggests a typology including three politicization mechanisms to strengthen the political responsiveness of the ministerial bureaucracy: formal, functional and administrative politicization. The typology is empirically validated through a comparative case analysis of politicization mechanisms in Germany, Belgium, the UK and Denmark. The empirical analysis further refines the general idea of Western democracies becoming ‘simply’ more politicized, by illustrating how some politicization mechanisms do not continue to increase, but stabilize – at least for the time being.
Background Transcatheter aortic-valve implantation (TAVI) is an established alternative therapy in patients with severe aortic stenosis and a high surgical risk. Despite a rapid growth in its use, very few data exist about the efficacy of cardiac rehabilitation (CR) in these patients. We assessed the hypothesis that patients after TAVI benefit from CR, compared to patients after surgical aortic-valve replacement (sAVR).
Methods From September 2009 to August 2011, 442 consecutive patients after TAVI (n=76) or sAVR (n=366) were referred to a 3-week CR. Data regarding patient characteristics as well as changes in functional status (6-min walk test (6-MWT), bicycle exercise test) and emotional status (Hospital Anxiety and Depression Scale) were retrospectively evaluated and compared between groups after propensity score adjustment.
Results Patients after TAVI were significantly older (p<0.001), more often female (p<0.001), and more often had coronary artery disease (p=0.027), renal failure (p=0.012) and a pacemaker (p=0.032). During CR, distance in the 6-MWT (both groups p<0.001) and exercise capacity (sAVR p<0.001, TAVI p<0.05) significantly increased in both groups. Only patients after sAVR demonstrated a significant reduction in anxiety and depression (p<0.001). After propensity score adjustment, changes were not significantly different between sAVR and TAVI, with the exception of the 6-MWT (p=0.004).
Conclusions Patients after TAVI benefit from cardiac rehabilitation despite their older age and comorbidities. CR is a helpful tool to maintain independence in daily life activities and participation in socio-cultural life.
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as an estimated GFR <60 ml/min/1.73 m².
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD (59 vs. 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD compared with non-CKD less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (CI95 = -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. In the urine samples, evidence of administration of substance combinations, complementary administration of substances to treat side effects, and use of stimulants to promote loss of body fat was common.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
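The group difference above is reported as Hedges's g, a pooled-SD standardized mean difference with a small-sample bias correction. A minimal sketch, with purely illustrative D-score values (not the study's data):

```python
from statistics import mean, variance

def hedges_g(a, b):
    """Hedges' g: Cohen's d with the usual small-sample correction factor."""
    n1, n2 = len(a), len(b)
    # Pooled standard deviation from the two sample variances
    s_pooled = (((n1 - 1) * variance(a) + (n2 - 1) * variance(b))
                / (n1 + n2 - 2)) ** 0.5
    d = (mean(a) - mean(b)) / s_pooled   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction
    return d * j

# Hypothetical BIAT D-scores; negative values = more lenient attitude
dopers = [-0.9, -0.6, -0.8, -0.5]
non_dopers = [-0.2, -0.4, -0.1, -0.3]
print(round(hedges_g(dopers, non_dopers), 2))
```

A negative g, as in the study, indicates that the first group (dopers) scored lower, i.e. evaluated doping less negatively.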
Background
The Amazon molly, Poecilia formosa (Teleostei: Poeciliinae), is a unisexual, all-female species. It evolved through the hybridisation of two closely related sexual species and exhibits clonal reproduction by sperm-dependent parthenogenesis (gynogenesis), in which the sperm of a parental species is used only to activate embryogenesis of the apomictic, diploid eggs but does not contribute genetic material to the offspring.
Here we provide and describe the first de novo assembled transcriptome of the Amazon molly in comparison with its maternal ancestor, the Atlantic molly Poecilia mexicana. The transcriptome data were produced through sequencing of single end libraries (100 bp) with the Illumina sequencing technique.
Results
83,504,382 reads for the Amazon molly and 81,625,840 reads for the Atlantic molly were assembled into 127,283 and 78,961 contigs, respectively. Of these contigs, 63% and 57%, respectively, could be annotated with gene ontology terms after sequence similarity comparisons. Furthermore, we identified genes normally involved in reproduction, and in meiosis in particular, in the transcriptome dataset of the apomictically reproducing Amazon molly.
Conclusions
We assembled and annotated the transcriptome of a non-model organism, the Amazon molly, without a reference genome (de novo). The obtained dataset is a fundamental resource for future research in functional and expression analysis. Also, the presence of 30 meiosis-specific genes within a species where no meiosis is known to take place is remarkable and raises new questions for future research.
Background: Cross-sectional studies detected associations between physical fitness, living area, and sports participation in children. Yet, their scientific value is limited because the identification of cause-and-effect relationships is not possible. In a longitudinal approach, we examined the effects of living area and sports club participation on physical fitness development in primary school children from classes 3 to 6.
Methods: One-hundred and seventy-two children (age: 9-12 years; sex: 69 girls, 103 boys) were tested for their physical fitness (i.e., endurance [9-min run], speed [50-m sprint], lower- [triple hop] and upper-extremity muscle strength [1-kg ball push], flexibility [stand-and-reach], and coordination [star coordination run]). Living area (i.e., urban or rural) and sports club participation were assessed using parent questionnaire.
Results: Over the 4-year study period, urban children showed significantly better performance development than rural children for upper- (p = 0.009, ES = 0.16) and lower-extremity strength (p < 0.001, ES = 0.22). Further, significantly better performance development was found for endurance (p = 0.08, ES = 0.19) and lower-extremity strength (p = 0.024, ES = 0.23) in children continuously participating in sports clubs compared with their non-participating peers.
Conclusions: Our findings suggest that sports club programs with appealing arrangements are a good means of promoting physical fitness in children living in rural areas.
Background: Doping attitude is a key variable in predicting athletes' intention to use forbidden performance enhancing drugs. Indirect reaction-time based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study serves the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It shall provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (r†† = .66) for the picture-based doping-BIAT.
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals for tongue height for /ɪ/-/e/ that were also reflected in acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.
Previous studies suggest that there are special timing relations in syllable onsets. The consonants are assumed to be timed, on the one hand, with the vocalic nucleus and, on the other hand, with each other. These competing timing relations result in the C-center effect. However, the C-center effect has not consistently been found in languages with complex onsets. Moreover, it has occasionally been found in languages disallowing complex onsets. The present study investigates onset timing in German while discussing alternative explanations (not related to bonding) for the timing patterns observed. Six German speakers were recorded via Electromagnetic Articulography. The corpus contained items with four clusters (/sk/, /kv/, /gl/, and /pl/). The clusters occur in word-initial position, word-medial position, and across a word boundary preceding different vowels. The results suggest that segmental properties (i.e., oral-laryngeal coordination, coarticulatory resistance) determine the observed timing patterns, and specifically the absence or presence of the C-center effect.
Reviewed work:
George, Rosemary Marangoly, Indian English and the Fiction of National Literature - Cambridge: Cambridge University Press, 2013. - Hb. viii, 285 pp. - (Zeitschrift für Anglistik und Amerikanistik ; 62(4)) ISBN 978-1-107-04000-7.
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with in-built fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma cells (PC12). Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism as well as the effect of mechanical stimulation on dopamine release were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
Hemolysis, the rupturing of red blood cells, can result from numerous medical conditions (in vivo) or occur after collecting blood specimens or extracting plasma and serum from whole blood (in vitro). In clinical laboratory practice, hemolysis can be a serious problem due to its potential to bias detection of various analytes or biomarkers. Here we present the first "mix-and-measure" method to assess the degree of hemolysis in biosamples using luminescence spectroscopy. Luminescent terbium complexes (LTC) were studied in the presence of free hemoglobin (Hb) as indicators for hemolysis in TRIS-buffer, and in fresh human plasma with absorption, excitation and emission measurements. Our findings indicate dynamic as well as Förster resonance energy transfer (FRET) between the LTC and the porphyrin ligand of hemoglobin. This transfer leads to a decrease in luminescence intensity and decay time even at nanomolar hemoglobin concentrations, either in buffer or plasma. Luminescent terbium complexes are very sensitive to free hemoglobin in buffer and blood plasma. Due to the instant change in luminescence properties of the LTC in the presence of Hb, it is possible to assess the concentration of hemoglobin via spectroscopic methods without incubation time or further sample treatment, thus enabling a rapid and sensitive detection of hemolysis in clinical diagnostics.
Surface modification with thermoresponsive polymer brushes for a switchable electrochemical sensor
(2014)
Elaboration of switchable surfaces represents an interesting way for the development of a new generation of electrochemical sensors. In this paper, a method for growing thermoresponsive polymer brushes from a gold surface pre-modified with polyethyleneimine (PEI), subsequent layer-by-layer polyelectrolyte assembly and adsorption of a charged macroinitiator is described. We propose an easy method for monitoring the coil-to-globule phase transition of the polymer brush using an electrochemical quartz crystal microbalance with dissipation (E-QCM-D). The surface of these polymer modified electrodes shows reversible switching from the swollen to the collapsed state with temperature. As demonstrated from E-QCM-D measurements using an original signal processing method, the switch is operating in three reversible steps related to different interfacial viscosities. Moreover, it is shown that the one electron oxidation of ferrocene carboxylic acid is dramatically affected by the change from the swollen to the collapsed state of the polymer brush, showing a spectacular 86% decrease of the charge transfer resistance between the two states.
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
Hemocompatible materials are needed for internal and extracorporeal biomedical applications, which should be realizable by reducing protein and thrombocyte adhesion to such materials. Polyethers have been demonstrated to be highly efficient in this respect on smooth surfaces. Here, we investigate the grafting of oligo- and polyglycerols to rough poly(ether imide) membranes as a polymer relevant to biomedical applications and show the reduction of protein and thrombocyte adhesion as well as thrombocyte activation. It could be demonstrated that, by performing surface grafting with oligo- and polyglycerols of relatively high polydispersity (>1.5) and several reactive groups for surface anchoring, full surface shielding can be reached, which leads to reduced protein adsorption of albumin and fibrinogen. In addition, adherent thrombocytes were not activated. This could be clearly shown by immunostaining adherent proteins and analyzing the thrombocyte covered area. The presented work provides an important strategy for the development of application relevant hemocompatible 3D structured materials.
Polyglycolide (PGA) is a biodegradable polymer with multiple applications in the medical sector. Here the synthesis of high molecular weight polyglycolide by ring-opening polymerization of diglycolide is reported. For the first time, stabilizer-free supercritical carbon dioxide (scCO2) was used as a reaction medium. scCO2 allowed for a reduction in reaction temperature compared with conventional processes. Together with the lower monomer concentration, and the consequently reduced heat generation compared with bulk reactions, this strongly reduces the thermal decomposition of the product that otherwise already occurs during polymerization. The reaction temperatures and pressures were varied between 120 and 150 °C and 145 to 1400 bar. Tin(II) ethyl hexanoate and 1-dodecanol were used as catalyst and initiator, respectively. The highest number average molecular weight of 31 200 g mol−1 was obtained in 5 hours from polymerization at 120 °C and 530 bar. In all cases the products were obtained as a dry white powder. Remarkably, independent of molecular weight, the melting temperature was always (219 ± 2) °C.
The large-scale green synthesis of graphene-type two-dimensional materials is still challenging. Herein, we describe the ionothermal synthesis of carbon-based composites from fructose in the iron-containing ionic liquid 1-butyl-3-methylimidazolium tetrachloridoferrate(III), [Bmim][FeCl4], serving as solvent, catalyst, and template for product formation. The resulting composites consist of oligo-layer graphite nanoflakes and iron carbide particles. The mesoporosity, strong magnetic moment, and high specific surface area of the composites make them attractive for water purification with facile magnetic separation. Moreover, Fe3C-free graphite can be obtained via acid etching, providing access to fairly large amounts of graphite material. The current approach is versatile and scalable, and thus opens the door to the larger-scale ionothermal synthesis of materials that, although not made via a sustainable process, are useful for water treatment, such as the removal of organic molecules.
Catalytic bio–chemo and bio–bio tandem oxidation reactions for amide and carboxylic acid synthesis
(2014)
A catalytic toolbox for three different water-based one-pot cascades to convert aryl alcohols to amides and acids and cyclic amines to lactams, involving combination of oxidative enzymes (monoamine oxidase, xanthine dehydrogenase, galactose oxidase and laccase) and chemical oxidants (TBHP or CuI(cat)/H2O2) at mild temperatures, is presented. Mutually compatible conditions were found to afford products in good to excellent yields.
Zinc deficiency has a fundamental influence on the immune defense, with multiple effects on different immune cells, resulting in a major impairment of human health. Monocytes and macrophages are among the immune cells that are most fundamentally affected by zinc, but the impact of zinc on these cells is still far from being completely understood. Therefore, this study investigates the influence of zinc deficiency on monocytes of healthy human donors. Peripheral blood mononuclear cells, which include monocytes, were cultured under zinc deficient conditions for 3 days. This was achieved by two different methods: by application of the membrane permeable chelator N,N,N′,N′-tetrakis(2-pyridylmethyl)ethylenediamine (TPEN) or by removal of zinc from the culture medium using a CHELEX 100 resin. Subsequently, monocyte functions were analyzed in response to Escherichia coli, Staphylococcus aureus, and Streptococcus pneumoniae. Zinc depletion had differential effects. On the one hand, elimination of bacterial pathogens by phagocytosis and oxidative burst was elevated. On the other hand, the production of the inflammatory cytokines tumor necrosis factor (TNF)-α and interleukin (IL)-6 was reduced. This suggests that monocytes shift from intercellular communication to basic innate defensive functions in response to zinc deficiency. These results were obtained regardless of the method by which zinc deficiency was achieved. However, CHELEX-treated medium strongly augmented cytokine production, independently from its capability for zinc removal. This side-effect severely limits the use of CHELEX for investigating the effects of zinc deficiency on innate immunity.
The synthesis of two novel types of π-expanded coumarins has been developed. Modified Knoevenagel bis-condensation afforded 3,9-dioxa-perylene-2,8-diones. Subsequent oxidative aromatic coupling or light driven electrocyclization reaction led to dibenzo-1,7-dioxacoronene-2,8-dione. Unparalleled synthetic simplicity, straightforward purification and superb optical properties have the potential to bring these perylene and coronene analogs towards various applications.
Photoinduced excitation energy transfer and accompanying charge separation are elucidated for a supramolecular system of a single fullerene covalently linked to six pyropheophorbide-a dye molecules. Molecular dynamics simulations are performed to gain an atomistic picture of the architecture and the surrounding solvent. Excitation energy transfer among the dye molecules and electron transfer from the excited dyes to the fullerene are described by a mixed quantum–classical version of the Förster rate and the semiclassical Marcus rate, respectively. The mean characteristic time of energy redistribution lies in the range of 10 ps, while electron transfer proceeds within 150 ps. In between, on a 20 to 50 ps time-scale, conformational changes take place in the system. This temporal hierarchy of processes guarantees efficient charge separation if the structure is exposed to a solvent. The fast energy transfer can adapt the dye excitation to the actual conformation. In this sense, the probability to achieve charge separation is large enough, since any dominance of unfavorable conformations that exhibit a large dye–fullerene distance is circumvented. The slow electron transfer, in turn, may average over different conformations. To confirm the reliability of our computations, ensemble measurements on the charge separation dynamics are simulated and a very good agreement with the experimental data is obtained.
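For orientation, the two rate expressions referenced above have standard textbook forms; as a sketch (the symbols below are the conventional ones, not notation taken from the paper), the Förster rate for energy transfer between a donor with lifetime τ_D and an acceptor at distance r, and the semiclassical Marcus rate for electron transfer with coupling V_DA, reorganization energy λ and driving force ΔG°, read:

```latex
k_{\mathrm{EET}} = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6}
\qquad
k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,\lvert V_{DA}\rvert^{2}\,
  \frac{1}{\sqrt{4\pi\lambda k_B T}}\,
  \exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_B T}\right]
```

The strong r⁻⁶ dependence of the Förster rate is what makes the conformational fluctuations of the dye–fullerene distances, discussed above, so consequential for the overall charge-separation yield.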
Based on extensive Monte Carlo simulations and analytical considerations we study the electrostatically driven adsorption of flexible polyelectrolyte chains onto charged Janus nanospheres. These net-neutral colloids are composed of two equally but oppositely charged hemispheres. The critical binding conditions for polyelectrolyte chains are analysed as function of the radius of the Janus particle and its surface charge density, as well as the salt concentration in the ambient solution. Specifically for the adsorption of finite-length polyelectrolyte chains onto Janus nanoparticles, we demonstrate that the critical adsorption conditions drastically differ when the size of the Janus particle or the screening length of the electrolyte are varied. We compare the scaling laws obtained for the adsorption–desorption threshold to the known results for uniformly charged spherical particles, observing significant disparities. We also contrast the changes to the polyelectrolyte chain conformations close to the surface of the Janus nanoparticles as compared to those for simple spherical particles. Finally, we discuss experimentally relevant physico-chemical systems for which our simulations results may become important. In particular, we observe similar trends with polyelectrolyte complexation with oppositely but heterogeneously charged proteins.
Formation of a Eu(III) borate solid species from a weak Eu(III) borate complex in aqueous solution
(2014)
In the presence of polyborates (detected by 11B-NMR), the formation of a weak Eu(III) borate complex (lg β11 ∼ 2, estimated) was observed by time-resolved laser-induced fluorescence spectroscopy (TRLFS). This complex is a precursor for the formation of a solid Eu(III) borate species. The formation of this solid in solution was investigated by TRLFS as a function of the total boron concentration: the lower the total boron concentration, the slower the solid formation. The solid Eu(III) borate was characterized by IR spectroscopy, powder XRD and solid-state TRLFS. The determined europium-to-boron ratio points to the existence of pentaborate units in the amorphous solid.
A feasible approach to construct multilayer films of sulfonated polyanilines – PMSA1 and PABMSA1 – containing different ratios of aniline, 2-methoxyaniline-5-sulfonic acid (MAS) and 3-aminobenzoic acid (AB), with the entrapped redox enzyme pyrroloquinoline quinone-dependent glucose dehydrogenase (PQQ-GDH) on Au and ITO electrode surfaces, is described. The formation of layers has been followed and confirmed by electrochemical impedance spectroscopy (EIS), which demonstrates that the multilayer assembly can be achieved in a progressive and uniform manner. The gold and ITO electrodes subsequently modified with PMSA1:PQQ-GDH and PABMSA1 films are studied by cyclic voltammetry (CV) and UV-Vis spectroscopy, which show a significant direct bioelectrocatalytic response to the oxidation of the substrate glucose without any additional mediator. This response correlates linearly with the number of deposited layers. Furthermore, the constructed polymer/enzyme multilayer system exhibits rather good long-term stability, since the catalytic current response is maintained at more than 60% of the initial value even after two weeks of storage. This verifies that a productive interaction of the enzyme embedded in the film of substituted polyaniline can be used as a basis for the construction of bioelectronic units, which are useful as indicators for processes liberating glucose and allowing optical and electrochemical transduction.
As an engineering material derived from renewable resources, wood possesses excellent mechanical properties in view of its light weight but also has some disadvantages such as low dimensional stability upon moisture changes and low durability against biological attack. Polymerization of hydrophobic monomers in the cell wall is one of the potential approaches to improve the dimensional stability of wood. A major challenge is to insert hydrophobic monomers into the hydrophilic environment of the cell walls, without increasing the bulk density of the material due to lumen filling. Here, we report on an innovative and simple method to insert styrene monomers into tosylated cell walls (i.e. –OH groups from natural wood polymers are reacted with tosyl chloride) and carry out free radical polymerization under relatively mild conditions, generating low wood weight gains. In-depth SEM and confocal Raman microscopy analysis are applied to reveal the distribution of the polystyrene in the cell walls and the lumen. The embedding of polystyrene in wood results in reduced water uptake by the wood cell walls, a significant increase in dimensional stability, as well as slightly improved mechanical properties measured by nanoindentation.
Herein, we report the chain-growth tin-free room temperature polymerization method to synthesize n-type perylene diimide-dithiophene-based conjugated polymers (PPDIT2s) suitable for solar cell and transistor applications. The palladium/electron-rich tri-tert-butylphosphine catalyst is effective to enable the chain-growth polymerization of anion-radical monomer Br-TPDIT-Br/Zn to PPDIT2 with a molecular weight up to Mw ≈ 50 kg mol−1 and moderate polydispersity. This is the second example of the polymerization of unusual anion-radical aromatic complexes formed in a reaction of active Zn and electron-deficient diimide-based aryl halides. As such, the discovered polymerization method is not a specific reactivity feature of the naphthalene-diimide derivatives but is rather a general polymerization tool. This is an important finding, given the significantly higher maximum external quantum efficiency that can be reached with PDI-based copolymers (32–45%) in all-polymer solar cells compared to NDI-based materials (15–30%). Our studies revealed that PPDIT2 synthesized by the new method and the previously published polymer prepared by step-growth Stille polycondensation show similar electron mobility and all-polymer solar cell performance. At the same time, the polymerization reported herein has several technological advantages as it proceeds relatively fast at room temperature and does not involve toxic tin-based compounds. Because several chain-growth polymerization reactions are well-suited for the preparation of well-defined multi-functional polymer architectures, the next target is to explore the utility of the discovered polymerization in the synthesis of end-functionalized polymers and block copolymers. Such materials would be helpful to improve the nanoscale morphology of polymer blends in all-polymer solar cells.
Materials derived from renewable resources are highly desirable in view of more sustainable manufacturing. Among the available natural materials, wood is one of the key candidates, because of its excellent mechanical properties. However, wood and wood-based materials in engineering applications suffer from various restraints, such as dimensional instability upon humidity changes. Several wood modification treatments increase water repellence, but the insertion of hydrophobic polymers can result in a composite material which cannot be considered as renewable anymore. In this study, we report on the grafting of the fully biodegradable poly(ε-caprolactone) (PCL) inside the wood cell walls by Sn(Oct)2 catalysed ring-opening polymerization (ROP). The presence of polyester chains within the wood cell wall structure is monitored by confocal Raman imaging and spectroscopy as well as scanning electron microscopy. Physical tests reveal that the modified wood is more hydrophobic due to the bulking of the cell wall structure with the polyester chains, which results in a novel fully biodegradable wood material with improved dimensional stability.
A new functional luminescent lanthanide complex (LLC) has been synthesized with terbium as a central lanthanide ion and biotin as a functional moiety. Unlike in typical lanthanide complexes assembled via carboxylic moieties, in the presented complex, four phosphate groups are chelating the central lanthanide ion. This special chemical assembly enhances the complex stability in phosphate buffers conventionally used in biochemistry. The complex synthesis strategy and photophysical properties are described as well as the performance in time-resolved Förster Resonance Energy Transfer (FRET) assays. In those assays, this biotin-LLC transferred energy either to acceptor organic dyes (Cy5 or AF680) labelled on streptavidin or to quantum dots (QD655 or QD705) surface-functionalised with streptavidins. The permanent spatial donor–acceptor proximity is assured through strong and stable biotin–streptavidin binding. The energy transfer is evidenced from the quenching observed in donor emission and from a decrease in donor luminescence decay, both associated with simultaneous increase in acceptor intensity and in the decay time. The dye-based assays are realised in TRIS and in PBS, whereas QD-based systems are studied in borate buffer. The delayed emission analysis allows for quantifying the recognition process and for auto-fluorescence-free detection, which is particularly relevant for application in bioanalysis. In accordance with Förster theory, Förster-radii (R0) were found to be around 60 Å for organic dyes and around 105 Å for QDs. The FRET efficiency (η) reached 80% and 25% for dye and QD acceptors, respectively. Physical donor–acceptor distances (r) have been determined in the range 45–60 Å for organic dye acceptors, while for acceptor QDs between 120 Å and 145 Å. 
This newly synthesised biotin-LLC extends the class of highly sensitive analytical tools to be applied in the bioanalytical methods such as time-resolved fluoroimmunoassays (TR-FIA), luminescent imaging and biosensing.
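The Förster radii, efficiencies and distances reported above are related by the standard FRET efficiency expression; as a minimal sketch (function names are illustrative), using the abstract's dye-acceptor numbers (R₀ ≈ 60 Å, r in the 45–60 Å range, η up to 80%):

```python
def fret_efficiency(r, r0):
    """FRET efficiency for donor-acceptor distance r and Förster radius r0 (same units)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(eff, r0):
    """Invert the efficiency relation to recover the donor-acceptor distance."""
    return r0 * (1.0 / eff - 1.0) ** (1.0 / 6.0)

# Dye acceptor: R0 ~ 60 A, close-contact distance r = 45 A
print(round(fret_efficiency(45, 60), 2))
# Distance corresponding to the reported 80% efficiency
print(round(fret_distance(0.80, 60), 1))
```

Both directions are consistent with the abstract: at r = 45 Å the predicted efficiency is about 85%, and an 80% efficiency implies a distance just under 48 Å, inside the reported 45–60 Å range.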
This essay approaches T. S. Eliot’s Four Quartets (1935–1942) from the perspectives of Eve Kosofsky Sedgwick’s critical practice of reparative reading and of Paul Ricoeur’s poststructuralist hermeneutics. It demonstrates that Sedgwick’s and Ricoeur’s approaches can be productively combined to investigate hermeneutic processes in which the textual energy of a dissemination of meaning is redirected by a reparative or integrative impulse. In Four Quartets, this impetus induces the creation of semantic innovation through a violation of semantic pertinence, that is, through novel, tensional and provisional connections between formerly separate textual elements and semantic units.
HPI Future SOC Lab
(2014)
The “HPI Future SOC Lab” is a cooperation of the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects carried out in 2014. Selected projects presented their results on April 9 and September 29, 2014 at the Future SOC Lab Day events.
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary. In other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser's actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with the experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu3+ and Tb3+ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning the chromaticity to the emission of white light of the quality of a three-color emitter. Organic-based fluorescence processes of the MOF backbone as well as metal-based luminescence of the dopants are combined into one homogeneous single-source emitter while retaining the MOF's porosity. The lanthanide ions Eu3+ and Tb3+ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework, without a structure-directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT), due to the reduction of the organic emission at higher temperatures. The study further illustrates the dependence of the porosity and sorption properties of the MOF on the amount of luminescent ions, and proves the intercalation of luminescence centers into the pore system by low-temperature site-selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the limit of homogeneous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from homogeneous co-doping to a two-phase composite system can be used beneficially to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) obtained by doping with just one lanthanide, Eu3+ or Tb3+.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is on the uncertainties in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need of a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
Polyadenylation is a decisive 3’ end processing step during the maturation of pre-mRNAs. The length of the poly(A) tail has an impact on mRNA stability, localization and translatability. Accordingly, many eukaryotic organisms encode several copies of canonical poly(A) polymerases (cPAPs). The disruption of cPAPs in mammals results in lethality. In plants, reduced cPAP activity is non-lethal. Arabidopsis encodes three nuclear cPAPs, PAPS1, PAPS2 and PAPS4, which are constitutively expressed throughout the plant. Recently, the detailed analysis of Arabidopsis paps1 mutants revealed a subset of genes that is preferentially polyadenylated by the cPAP isoform PAPS1 (Vi et al. 2013). Thus, the specialization of cPAPs might allow the regulation of different sets of genes in order to optimally face developmental or environmental challenges.
To gain insights into cPAP-based gene regulation in plants, the phenotypes of Arabidopsis cPAP mutants under different conditions are characterized in detail in this work. An involvement of all three cPAPs in flowering time regulation and stress response regulation is shown. While paps1 knockdown mutants flower early, paps4 and paps2 paps4 knockout mutants exhibit a moderate late-flowering phenotype. PAPS1 promotes the expression of the major flowering inhibitor FLC, presumably by specific polyadenylation of an FLC activator. PAPS2 and PAPS4 exhibit partially overlapping functions and ensure timely flowering by repressing FLC and at least one other, as yet unidentified, flowering inhibitor. The latter two cPAPs act in a novel regulatory pathway downstream of the autonomous pathway component FCA and act independently of the polyadenylation factors and flowering time regulators CstF64 and FY. Moreover, PAPS1 and PAPS2/PAPS4 are implicated in different stress response pathways in Arabidopsis. Reduced activity of the poly(A) polymerase PAPS1 results in enhanced resistance to osmotic and oxidative stress. At the same time, paps1 mutants are cold-sensitive. In contrast, PAPS2/PAPS4 are not involved in the regulation of osmotic or cold stress responses, but paps2 paps4 loss-of-function mutants exhibit enhanced sensitivity to oxidative stress provoked in the chloroplast. Thus, both PAPS1 and PAPS2/PAPS4 are required to maintain a balanced redox state in plants. PAPS1 seems to fulfil this function in concert with CPSF30, a polyadenylation factor that regulates alternative polyadenylation and tolerance to oxidative stress.
The individual paps mutant phenotypes and the cPAP-specific genetic interactions support the model of cPAP-dependent polyadenylation of selected mRNAs. The high similarity of the polyadenylation machineries in yeast, mammals and plants suggests that similar regulatory mechanisms might be present in other organism groups. The cPAP-dependent developmental and physiological pathways identified in this work allow the design of targeted experiments to better understand the ecological and molecular context underlying cPAP-specialization.
An important contribution of the geosciences to the renewable energy production portfolio is the exploration and utilization of geothermal resources. For the development of a geothermal project at great depth, a detailed geological and geophysical exploration program is required in the first phase. With the help of active seismic methods, high-resolution images of the geothermal reservoir can be delivered. This allows potential transport routes for fluids to be identified, as well as regions with a high potential for heat extraction to be mapped, indicating favorable conditions for geothermal exploitation. The presented work investigates the extent to which an improved characterization of geothermal reservoirs can be achieved with new methods of seismic data processing. The summation of traces (stacking) is a crucial step in the processing of seismic reflection data. The common-reflection-surface (CRS) stacking method can be applied as an alternative to the conventional normal moveout (NMO) or dip moveout (DMO) stack. The advantages of the CRS stack, besides an automatic determination of the stacking operator parameters, include an adequate imaging of arbitrarily curved geological boundaries and a significant increase in the signal-to-noise (S/N) ratio by stacking far more traces than used in a conventional stack. A major innovation shown in this work is that the quality of the signal attributes that characterize the seismic images can be significantly improved by this modified type of stacking. Improved attribute analysis facilitates the interpretation of seismic images and plays a significant role in the characterization of reservoirs. Variations of lithological and petrophysical properties are reflected by fluctuations of specific signal attributes (e.g. frequency or amplitude characteristics).
Further interpretation of these attributes can provide a quality assessment of the geothermal reservoir with respect to the capacity of fluids within a hydrological system that can be extracted and utilized. The proposed methodological approach is demonstrated on the basis of two case studies. In the first example, I analyzed a series of 2D seismic profile sections through the Alberta sedimentary basin on the eastern edge of the Canadian Rocky Mountains. In the second application, a 3D seismic volume is characterized in the surroundings of a geothermal borehole located in the central part of the Polish basin. Both sites were investigated with the modified stacking and the improved attribute analyses. The results provide recommendations for the planning of future geothermal plants in both study areas.
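The conventional NMO stack that CRS generalises rests on the hyperbolic traveltime relation t(x) = sqrt(t0^2 + (x/v)^2). A minimal sketch of that baseline relation, with purely hypothetical values (not from the case studies above):

```python
import math

def nmo_traveltime(t0, offset, v_nmo):
    """Hyperbolic normal-moveout traveltime for a flat reflector:
    t(x) = sqrt(t0^2 + (x / v)^2).  Traces are shifted by t(x) - t0
    to align the reflection before summation (stacking)."""
    return math.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)

# Hypothetical example: zero-offset time 1.0 s, offset 1500 m,
# stacking velocity 3000 m/s -> t = sqrt(1 + 0.25) ~ 1.118 s,
# i.e. a moveout correction of ~0.118 s on that trace.
```

CRS replaces the single-parameter hyperbola with a multi-parameter stacking surface, which is what allows far more traces to contribute to each stacked sample and raises the S/N ratio of the resulting attributes.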
It is generally agreed that stars typically form in open clusters and stellar associations, but little is known about the structure of the open cluster system. Do open clusters and stellar associations form in isolation, or do they prefer to form in groups and complexes? Open cluster groups and complexes would indicate that star-forming regions are larger than expected, which would explain the chemical homogeneity over large areas of the Galactic disk. They would also define an additional level in the hierarchy of star formation and could be used as tracers for the scales of fragmentation in giant molecular clouds. Furthermore, open cluster groups and complexes could affect Galactic dynamics and should be considered in investigations and simulations of dynamical processes, such as radial migration, disc heating, differential rotation, kinematic resonances, and spiral structure.
In the past decade there were a few studies on open cluster pairs (de La Fuente Marcos & de La Fuente Marcos 2009a,b,c) and on open cluster groups and complexes (Piskunov et al. 2006). The former only considered spatial proximity for the identification of the pairs, while the latter also required tangential velocities to be similar for the members. In this work I used the full set of 6D phase-space information to draw a more detailed picture on these structures. For this purpose I utilised the most homogeneous cluster catalogue available, namely the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005a,b), which contains parameters for 650 open clusters and compact associations, as well as for their uniformly selected members. Additional radial velocity (RV) and metallicity ([M/H]) information on the members were obtained from the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006; Kordopatis et al. 2013) for 110 and 81 clusters, respectively. The RAVE sample was cleaned considering quality parameters and flags provided by RAVE (Matijevič et al. 2012; Kordopatis et al. 2013). To ensure that only real members were included for the mean values, also the cluster membership, as provided by Kharchenko et al. (2005a,b), was considered for the stars cross-matched in RAVE.
6D phase-space information could be derived for 432 of the 650 COCD objects, and I used an adaptation of the Friends-of-Friends algorithm, as used in cosmology, to identify potential groupings. The vast majority of the 19 identified groupings were pairs, but I also found four groups of 4–5 members and one complex with 15 members. For the verification of the identified structures, I compared the results to a randomly selected subsample of the catalogue of the Milky Way global survey of Star Clusters (MWSC; Kharchenko et al. 2013), which became available recently and was used as a reference sample. Furthermore, I implemented Monte-Carlo simulations with randomised samples created from two distinct input distributions for the spatial and velocity parameters: on the one hand, a uniform distribution in the Galactic disc, and on the other hand, the COCD data distributions, taken to be representative of the whole open cluster population.
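The chaining logic of a Friends-of-Friends search can be sketched compactly. The toy below links points by a single spatial distance only; the thesis works in 6D phase space with separate spatial and velocity criteria, and the linking length here is hypothetical:

```python
import math

def friends_of_friends(points, link_length):
    """Group points whose pairwise separation is below the linking length,
    chaining friends-of-friends into one group via union-find."""
    parent = list(range(len(points)))

    def find(i):
        # Walk to the root representative, compressing the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= link_length:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

With four clusters at (0,0), (1,0), (5,0), (5.5,0) and a linking length of 1.5, the search returns two pairs: indirect links matter, since a chain of close neighbours ends up in one grouping even when its endpoints are far apart.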
The results suggested that the majority of the identified pairs are rather chance alignments, but the groups and the complex seemed to be genuine. A comparison of my results to the pairs, groups and complexes proposed in the literature yielded a partial overlap, most likely because of selection effects and the different parameters considered. This provides further verification of the existence of such structures.
The characteristics of the found groupings favour a scenario in which the members of an open cluster grouping originate from a common giant molecular cloud and formed in a single, but possibly sequential, star formation event. Moreover, the fact that the young open cluster population showed smaller spatial separations between nearest neighbours than the old cluster population indicates that the lifetime of open cluster groupings is most likely comparable to that of the Galactic open cluster population itself. Still, even among the old open clusters I could identify groupings, which suggests that the detected structures could in some cases be longer lived than one might think.
In this thesis I could only present a pilot study on structures in the Galactic open cluster population, since the data sample used was highly incomplete. For further investigations a far more complete sample would be required. One step in this direction would be to use data from large current surveys, like SDSS, RAVE, Gaia-ESO and VVV, as well as including results from studies on individual clusters. Later the sample can be completed by data from upcoming missions, like Gaia and 4MOST. Future studies using this more complete open cluster sample will reveal the effect of open cluster groupings on star formation theory and their significance for the kinematics, dynamics and evolution of the Milky Way, and thereby of spiral galaxies.
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. Data integration projects in particular suffer from poor initial data quality and as a consequence consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with a DBMS and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or to integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer can reorder operators even across domains to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. Evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be efficiently processed by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which results in better maintainability of the scripts.
Following the principles of green chemistry, a simple and efficient synthesis of functionalised imidazolium zwitterionic compounds from renewable resources was developed based on a modified one-pot Debus-Radziszewski reaction. The combination of different carbohydrate-derived 1,2-dicarbonyl compounds and amino acids is a simple way to modulate the properties and introduce different functionalities. A representative compound was assessed as an acid catalyst, and converted into acidic ionic liquids by reaction with several strong acids. The reactivity of the double carboxylic functionality was explored by esterification with long and short chain alcohols, as well as functionalised amines, which led to the straightforward formation of surfactant-like molecules or bifunctional esters and amides. One of these di-esters is currently being investigated for the synthesis of poly(ionic liquids). The functionalisation of cellulose with one of the bifunctional esters was investigated and preliminary tests employing it for the functionalisation of filter papers were carried out successfully. The imidazolium zwitterions were converted into ionic liquids via hydrothermal decarboxylation in flow, a benign and scalable technique. This method provides access to imidazolium ionic liquids via a simple and sustainable methodology, whilst completely avoiding contamination with halide salts. Different ionic liquids can be generated depending on the functionality contained in the ImZw precursor. Two alanine-derived ionic liquids were assessed for their physicochemical properties and applications as solvents for the dissolution of cellulose and the Heck coupling.
Monoclonal antibodies (mAbs) are engineered immunoglobulins G (IgG) used for more than 20 years as targeted therapy in oncology, infectious diseases and (auto-)immune disorders. Their protein nature greatly influences their pharmacokinetics (PK), presenting typical linear and non-linear behaviors.
While it is common to use empirical modeling to analyze clinical PK data of mAbs, there is neither clear consensus nor guidance on how to select the structure of classical compartment models on the one hand, and how to mechanistically interpret PK parameters on the other. The mechanistic knowledge present in physiologically-based PK (PBPK) models is likely to support rational classical model selection, and thus a methodology to link empirical and PBPK models is desirable. However, published PBPK models for mAbs are quite diverse with respect to the physiology of distribution spaces and the parameterization of the non-specific elimination involving the neonatal Fc receptor (FcRn) and endogenous IgG (IgGendo). The remarkable discrepancy between the simplicity of biodistribution data and the complexity of published PBPK models translates into parameter identifiability issues.
In this thesis, we address this problem with a simplified PBPK model, derived from a hierarchy of more detailed PBPK models and based on simplifications of the tissue distribution model. With the novel tissue model, we break new ground in the mechanistic modeling of mAb disposition: we demonstrate that binding to FcRn is indeed linear and that it is not possible to infer which tissues are involved in the unspecific elimination of wild-type mAbs. We also provide a new approach to predict tissue partition coefficients based on mechanistic insights: we directly link tissue partition coefficients (Ktis) to data-driven and species-independent published antibody biodistribution coefficients (ABCtis), thus ensuring extrapolation from pre-clinical species to human with the simplified PBPK model. We further extend the simplified PBPK model to account for a target, which is relevant to characterize the non-linear clearance due to mAb–target interaction.
With model reduction techniques, we reduce the dimensionality of the simplified PBPK model to design 2-compartment models, thus guiding classical model development with a physiological and mechanistic interpretation of the PK parameters. We finally derive a new scaling approach for the anatomical and physiological parameters in PBPK models that translates inter-individual variability into the design of mechanistic covariate models with a direct link to classical compartment models, which is especially useful for population PK analysis during clinical development.
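The classical 2-compartment structure referred to here can be written as a pair of linear ODEs. The sketch below is a minimal Euler integration of that generic structure, not the thesis's parameterization; the rate constants are hypothetical and the state is tracked in amounts:

```python
def two_compartment_step(a1, a2, dt, k10, k12, k21):
    """One Euler step of linear 2-compartment disposition (in amounts):
        central:    da1/dt = -(k10 + k12) * a1 + k21 * a2
        peripheral: da2/dt =   k12 * a1 - k21 * a2
    k10 is elimination from the central compartment; k12/k21 are the
    inter-compartmental transfer rate constants."""
    da1 = (-(k10 + k12) * a1 + k21 * a2) * dt
    da2 = (k12 * a1 - k21 * a2) * dt
    return a1 + da1, a2 + da2

# Sanity property: with k10 = 0 (no elimination) the total amount
# a1 + a2 is conserved at every step; with k10 > 0 it strictly decreases.
```

A reduction of a PBPK model to this form amounts to mapping the many physiological volumes and flows onto these few lumped constants, which is what gives the 2-compartment parameters their mechanistic interpretation.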
Protein-metal coordination complexes are well known as active centers in enzymatic catalysis, and to contribute to signal transduction, gas transport, and to hormone function. Additionally, they are now known to contribute as load-bearing cross-links to the mechanical properties of several biological materials, including the jaws of Nereis worms and the byssal threads of marine mussels. The primary aim of this thesis work is to better understand the role of protein-metal cross-links in the mechanical properties of biological materials, using the mussel byssus as a model system. Specifically, the focus is on histidine-metal cross-links as sacrificial bonds in the fibrous core of the byssal thread (Chapter 4) and L-3,4-dihydroxyphenylalanine (DOPA)-metal bonds in the protective thread cuticle (Chapter 5).
Byssal threads are protein fibers, which mussels use to attach to various substrates at the seashore. These relatively stiff fibers have the ability to extend up to about 100 % strain, dissipating large amounts of mechanical energy from crashing waves, for example. Remarkably, following damage from cyclic loading, initial mechanical properties are subsequently recovered by a material-intrinsic self-healing capability. Histidine residues coordinated to transition metal ions in the proteins comprising the fibrous thread core have been suggested as reversible sacrificial bonds that contribute to self-healing; however, this remains to be substantiated in situ. In the first part of this thesis, the role of metal coordination bonds in the thread core was investigated using several spectroscopic methods. In particular, X-ray absorption spectroscopy (XAS) was applied to probe the coordination environment of zinc in Mytilus californianus threads at various stages during stretching and subsequent healing. Analysis of the extended X-ray absorption fine structure (EXAFS) suggests that tensile deformation of threads is correlated with the rupture of Zn-coordination bonds and that self-healing is connected with the reorganization of Zn-coordination bond topologies rather than the mere reformation of Zn-coordination bonds. These findings have interesting implications for the design of self-healing metallopolymers.
The byssus cuticle is a protective coating surrounding the fibrous thread core that is both as hard as an epoxy and extensible up to 100 % strain before cracking. It was shown previously that cuticle stiffness and hardness largely depend on the presence of Fe-DOPA coordination bonds. However, the byssus is known to concentrate a large variety of metals from seawater, some of which are also capable of binding DOPA (e.g. V). Therefore, the question arises whether natural variation in metal composition can affect the mechanical performance of the byssal thread cuticle. To investigate this question, nanoindentation and confocal Raman spectroscopy were applied to the cuticle of native threads, threads with metals removed (EDTA treated), and threads in which the metal ions in the native tissue were replaced by either Fe or V. Interestingly, replacement of the metal ions with either Fe or V leads to the full recovery of the native mechanical properties, with no statistical difference from each other or from the native properties. This likely indicates that a fixed number of metal coordination sites is maintained within the byssal thread cuticle – possibly established during thread formation – which may provide an evolutionarily relevant mechanism for maintaining reliable mechanics in an unpredictable environment.
While the dynamic exchange of bonds plays a vital role in the mechanical behavior and self-healing of the thread core, by allowing histidine-metal bonds to act as reversible sacrificial bonds, the compatibility of DOPA with other metals allows an inherent adaptability of the thread cuticle to changing circumstances. The requirements on both of these materials can be met by the dynamic nature of the protein-metal cross-links, whereas covalent cross-linking would fail to provide the adaptability of the cuticle and the self-healing of the core. In summary, these studies of the thread core and the thread cuticle serve to underline the important and dynamic roles of protein-metal coordination in the mechanical function of load-bearing protein fibers, such as the mussel byssus.
The quantitative description of the state of stress in the Earth's crust, and of spatial-temporal stress changes, is of great importance for scientific questions as well as applied geotechnical issues. Human activities underground (boreholes, tunnels, caverns, reservoir management, etc.) have a large impact on the stress state. It is important to assess whether these activities may lead to (unpredictable) hazards, such as induced seismicity. Equally important is the understanding of the in situ stress state in the Earth's crust, as it allows the determination of safe well paths already during well planning. The same applies to the optimal configuration of injection and production wells where stimulation of artificial fluid pathways is necessary.
The cumulative dissertation presented here consists of four separate manuscripts, which have been published, have been submitted, or will be submitted for peer review within the next weeks. The main focus is on the investigation of the possible usage of geothermal energy in the province of Alberta (Canada). A 3-D geomechanical–numerical model was designed to quantify the contemporary 3-D stress tensor in the upper crust. For the calibration of the regional model, 321 stress orientation data and 2714 stress magnitude data were collected; the size and diversity of this database are unique. A calibration scheme was developed in which the model is calibrated against the in situ stress data stepwise for each data type and gradually optimized using statistical test methods. The optimum displacement on the model boundaries can be determined by bivariate linear regression, based on only three model runs with varying deformation ratios. The best-fit model is able to predict most of the in situ stress data quite well. Thus, the model can provide the full stress tensor along any chosen virtual well path. This can be used to optimize the orientation of horizontal wells, which, for example, can be used for reservoir stimulation. The model confirms regional deviations from the average stress orientation trend, such as in the region of the Peace River Arch and the Bow Island Arch.
In the context of the data compilation for the Alberta stress model, the Canadian database of the World Stress Map (WSM) could be expanded by 514 new data records. This publication of an update of the Canadian stress map after ~20 years, with a specific focus on Alberta, shows that the maximum horizontal stress (SHmax) is oriented southwest to northeast over large areas of North America. The SHmax orientation in Alberta is very homogeneous, with an average of about 47°. In order to calculate the average SHmax orientation on a regular grid, as well as to estimate the wavelength of the stress orientation, an existing algorithm was improved and applied to the Canadian data. The newly introduced quasi interquartile range on the circle (QIROC) improves the variance estimation of periodic data, as it is less susceptible to outliers.
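The grid averaging underlying such maps must respect that SHmax orientations are axial, i.e. 180°-periodic (10° and 170° are only 20° apart). A minimal sketch of the standard axial circular mean, which is the averaging step such methods build on (this is not an implementation of the QIROC statistic itself):

```python
import math

def mean_axial_orientation(deg_angles):
    """Mean of axial (180-degree periodic) orientations such as SHmax:
    double the angles to map axes onto the full circle, take the
    circular (vector) mean, then halve the result."""
    s = sum(math.sin(math.radians(2 * a)) for a in deg_angles)
    c = sum(math.cos(math.radians(2 * a)) for a in deg_angles)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0
```

A naive arithmetic mean of 10° and 170° gives 90°, an axis nearly perpendicular to both measurements; the doubled-angle mean correctly returns an orientation at the 0°/180° axis.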
Another geomechanical–numerical model was built to estimate the 3-D stress tensor in the target area ”Nördlich Lägern” in Northern Switzerland. This location, with Opalinus Clay as the host rock, is a potential repository site for high-level radioactive waste. The modelling investigates the sensitivity of the stress tensor to tectonic shortening, topography, faults and variable rock properties within the Mesozoic sedimentary stack, with regard to the stability required of a suitable radioactive waste disposal site. The majority of the tectonic stresses caused by the far-field shortening from the south are accommodated by the competent rock units in the footwall and hanging wall of the argillaceous target horizon, the Upper Malm and Upper Muschelkalk. Thus, the differential stress within the host rock remains relatively low. East–west striking faults release stresses driven by tectonic shortening. The purely gravitational influence of the topography is low; higher SHmax magnitudes below topographic depressions and lower values below hills are mainly observed near the surface. A complete calibration of the model is not yet possible, as no stress magnitude data are available for calibration. The collection of these data will begin in 2015; subsequently, they will be used to recalibrate the geomechanical–numerical model.
The third geomechanical–numerical model investigates the stress variation in an ultra-deep gold mine in South Africa. This reservoir model is spatially one order of magnitude smaller than the local model from Northern Switzerland. Here, the primary focus is to test the hypothesis that the Mw 1.9 earthquake of 27 December 2007 was induced by stress changes due to the mining process. The Coulomb failure stress change (ΔCFS) was used to analyse these stress changes and confirmed that the seismic event was induced by static stress transfer due to the mining progress. The rock was brought closer to failure on the derived rupture plane by stress changes of up to 1.5–15 MPa, depending on the type of ΔCFS analysis. Forward modelling of a generic excavation scheme reveals that the ΔCFS values increase significantly with decreasing distance to the dyke. Hence, even small changes in the mining progress can have a significant impact on the seismic hazard, i.e. the probability of inducing a seismic event of economic concern.
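The Coulomb failure stress change referred to here is conventionally written ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change in the slip direction, Δσn the normal stress change (positive for unclamping) and μ′ the effective friction coefficient. A minimal sketch of this criterion (the friction value is a common assumption, not a parameter taken from the mine study):

```python
def delta_cfs(d_shear, d_normal, mu_eff=0.6):
    """Coulomb failure stress change on a receiver fault plane.

    d_shear  -- shear stress change in the slip direction (MPa);
                positive values promote slip.
    d_normal -- normal stress change (MPa); positive here means
                unclamping of the fault.
    mu_eff   -- effective friction coefficient (0.6 is a common
                assumption, not a value from the study).
    """
    return d_shear + mu_eff * d_normal

# A positive DeltaCFS brings the fault closer to failure:
print(delta_cfs(1.0, 0.5))   # 1.0 + 0.6 * 0.5 = 1.3 MPa
```

A negative value (e.g. increased clamping or reduced shear loading) indicates that the plane has been moved away from failure, which is how a stress-transfer analysis separates triggering from stabilising excavation steps.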
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies permeates into the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it substantially changes the ionization and thermal state of the IGM and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by our limited knowledge of high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that is sensitive to all of these galactic properties and to the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code to compute the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations that take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamics (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII) and the temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and that high-density regions near sources are ionized earlier than low-density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the influence of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and the dust in the interstellar medium (ISM) on the visibility of LAEs. Comparison of our model results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act as tracers not only of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not break even when we compare the simulated with the observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and that its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by using LAE observations exclusively. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition, we use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model, galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. Finding a duty cycle of LBGs with Lya emission that increases with the UV luminosity or stellar mass of the galaxy, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observed trend of Lya EW with UV magnitude. However, the Lya EWs of the UV-brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya-brightest LAEs do not necessarily coincide with the UV-brightest galaxies. Analysing the dependence of the LAE observables on the properties of the galactic and intergalactic medium and on the LAE–LBG connection enhances our understanding of the nature of LAEs.
We study the diffusion of a tracer particle moving in continuum space through a lattice of immobile, non-inert obstacles of excluded volume. In particular, we analyse how the strength of the tracer–obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of the partitioning of the tracer diffusion between trapping states, when bound to obstacles, and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer–obstacle adsorption and binding trigger transient anomalous diffusion. From the very narrow spread of recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer–crowder attraction the motion is found to be fully ergodic, while at stronger attraction a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
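The time averaged mean squared displacement used in such single-trajectory analyses can be sketched with the standard estimator (a generic sketch, not the study's analysis code): for a discretely sampled trajectory, the squared displacement over a fixed lag is averaged over all starting points along the trajectory.

```python
def time_averaged_msd(traj, lag):
    """Time averaged MSD at integer lag k:
    delta2(k) = <[x(i + k) - x(i)]^2>, averaged over all start points i
    along a single discretely sampled trajectory."""
    disp2 = [(traj[i + lag] - traj[i]) ** 2 for i in range(len(traj) - lag)]
    return sum(disp2) / len(disp2)

# Sanity check on a ballistic toy trajectory x_i = i, where the
# estimator must return lag**2 exactly:
print(time_averaged_msd(list(range(100)), 3))  # → 9.0
```

Comparing this quantity across individual trajectories, and against the ensemble average, is what allows the ergodicity and amplitude-scatter statements in the abstract to be made quantitative.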
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets and thus allows a discussion of the source inversion problem at different scales. The first application, dealing with mining-induced seismicity, addresses the determination of source parameters at a local scale, with source–sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely the target of automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered here as a weak seismicity case, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial.
For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, the moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights characteristic geometrical features of the fault planes, which are generally consistent with the orientation of the slab. The additional inversion for source duration allowed us to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far been observed only for larger events.
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase “Target of Rapamycin” (TOR) has emerged as a major signaling node that integrates the sensing of numerous growth signals into the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, in contrast to observations in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. Therefore, a growth system that allowed synchronous growth under largely unperturbed cultivation in a fermenter was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number, size distribution and starch content. Furthermore, we applied mass spectrometric profiling of the primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% reduction in cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong changes in carbon and nitrogen partitioning towards rapid conversion into carbon and nitrogen stores, through an accumulation of starch, triacylglycerol (TAG) and arginine. Interestingly, the conversion of carbon into TAG appears to occur faster than into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in that of starch.
This study shows, for the first time, a complex picture of dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.
ANG-2 for quantitative Na+ determination in living cells by time-resolved fluorescence microscopy
(2014)
Sodium ions (Na+) play an important role in a plethora of cellular processes, which are complex and partly still unexplored. For the investigation of these processes and the quantification of intracellular Na+ concentrations ([Na+]i), two-photon coupled fluorescence lifetime imaging microscopy (2P-FLIM) was performed in the salivary glands of the cockroach Periplaneta americana. For this purpose, the novel Na+-sensitive fluorescent dye Asante NaTRIUM Green-2 (ANG-2) was evaluated, both in vitro and in situ. In this context, absorption coefficients, fluorescence quantum yields and 2P action cross-sections were determined for the first time. ANG-2 was 2P-excitable over a broad spectral range and displayed fluorescence in the visible spectral range. Although the fluorescence decay behaviour of ANG-2 was triexponential in vitro, its analysis indicates a Na+-sensitivity appropriate for recordings in living cells. The Na+-sensitivity was reduced in situ, but the biexponential fluorescence decay behaviour could be successfully analysed in terms of quantitative [Na+]i recordings. Thus, physiological 2P-FLIM measurements revealed a dopamine-induced [Na+]i rise in cockroach salivary gland cells, which was dependent on Na+-K+-2Cl− cotransporter (NKCC) activity. It was concluded that ANG-2 is a promising new sodium indicator applicable to diverse biological systems.
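The lifetime-based read-out behind such quantitative FLIM recordings can be sketched as follows (all numbers are illustrative placeholders, not ANG-2 calibration values, and the function names are hypothetical): the amplitude-weighted mean lifetime of the fitted multi-exponential decay is converted into [Na+] through a one-site titration-style calibration curve.

```python
def mean_lifetime(amps, taus):
    """Amplitude-weighted mean lifetime of a multi-exponential decay."""
    return sum(a * t for a, t in zip(amps, taus)) / sum(amps)

def na_concentration(tau, tau_min, tau_max, k_app):
    """[Na+] from the mean decay time via a one-site titration curve:
    [Na+] = k_app * (tau - tau_min) / (tau_max - tau),
    where tau_min/tau_max are the lifetimes at zero and saturating Na+
    and k_app is an apparent dissociation constant (all placeholders)."""
    return k_app * (tau - tau_min) / (tau_max - tau)

# Illustrative biexponential decay: 60% amplitude at 0.5 ns, 40% at 2.0 ns.
tau = mean_lifetime([0.6, 0.4], [0.5, 2.0])   # 1.1 ns
print(round(na_concentration(tau, tau_min=0.8, tau_max=1.6, k_app=20.0), 1))  # → 12.0
```

The point of the lifetime read-out, as opposed to intensity-based sensing, is that the decay time is independent of dye concentration and excitation power, which is what makes the in situ calibration transferable between cells.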
Arsenic-containing hydrocarbons (AsHC) constitute one group of arsenolipids that have been identified in seafood. In this first in vivo toxicity study of AsHCs, we show that they exert toxic effects in Drosophila melanogaster in a concentration range similar to that of arsenite. In contrast to arsenite, however, AsHCs cause developmental toxicity in the late developmental stages of Drosophila melanogaster. This work illustrates the need for a full characterisation of the toxicity of AsHCs in experimental animals in order to assess the risk to human health posed by the presence of arsenolipids in seafood.
We report a 1,2,3-triazole fluoroionophore for detecting Na+ that shows an in vitro enhancement of the Na+-induced fluorescence intensity and decay time. The Na+-selective molecule 1 was incorporated into a hydrogel as part of a fiber-optic sensor. This sensor allows the direct determination of Na+ in the range of 1–10 mM by measuring reversible changes in the fluorescence decay time.
Molecular motors pulling cargos in the viscoelastic cytosol: how power strokes beat subdiffusion
(2014)
The discovery of anomalous diffusion of larger biopolymers and submicron tracers such as endogenous granules, organelles, or virus capsids in living cells, attributed to the viscoelastic nature of the cytoplasm, provokes the question whether this complex environment equally impacts the active intracellular transport of submicron cargos by molecular motors such as kinesins: does the passive anomalous diffusion of free cargo always imply anomalously slow active transport by motors, with the mean transport distance along the microtubule growing sublinearly rather than linearly in time? Here we analyze this question within the widely used two-state Brownian ratchet model of kinesin motors, based on continuous-state diffusion along the microtubule driven by a flashing binding potential, where the cargo particle is elastically attached to the motor. Depending on the cargo size, the loading force, the amplitude of the binding potential, the turnover frequency of the molecular motor enzyme, and the linker stiffness, we demonstrate that the motor transport may turn out either normal or anomalous, as indeed measured experimentally. We show how highly efficient normal active transport mediated by motors may emerge despite the passive anomalous diffusion of the cargo, and study the intricate effects of the elastic linker. Under different, well specified conditions the microtubule-based motor transport becomes anomalously slow and thus significantly less efficient.
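How a flashing binding potential rectifies diffusion into directed transport can be illustrated in the adiabatic limit of a ratchet (a deliberately stripped-down sketch, not the paper's model: normal free diffusion while the potential is off, instant relaxation to the nearest potential minimum while it is on; the elastic motor–cargo linker and the viscoelastic medium are omitted, and all parameter values are arbitrary):

```python
import math
import random

def flashing_ratchet(n_cycles, sigma, a, rng):
    """Adiabatic flashing-ratchet sketch on a sawtooth potential of unit
    period: minima at the integers, maxima at n - a with 0 < a < 0.5,
    so the barrier is closer on one side of each minimum.

    Off phase: a free Gaussian step of width sigma.
    On phase:  the particle snaps to the minimum of its current well.
    The spatial asymmetry turns unbiased diffusion into net drift."""
    x = 0.0
    for _ in range(n_cycles):
        x += rng.gauss(0.0, sigma)   # potential off: free diffusion
        x = math.floor(x + a)        # potential on: relax to nearest minimum
    return x

rng = random.Random(42)
# Average displacement of many independent cargos after 200 on/off cycles;
# with the barrier only 0.2 away on one side and 0.8 on the other, the
# drift points towards the closer barrier (negative here).
runs = [flashing_ratchet(200, 0.3, 0.2, rng) for _ in range(100)]
mean_disp = sum(runs) / len(runs)
print(mean_disp < 0.0)
```

The full two-state model replaces the instantaneous snap by continuous overdamped dynamics in the binding potential and couples the cargo through an elastic linker, which is precisely where the normal-versus-anomalous distinction studied in the paper arises.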
Anomalous diffusion is frequently described by scaled Brownian motion (SBM), a Gaussian process with a power-law time dependent diffusion coefficient. Its mean squared displacement is ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1) for 0 < α < 2. SBM may provide a seemingly adequate description in the case of unbounded diffusion, for which its probability density function coincides with that of fractional Brownian motion. Here we show that free SBM is weakly non-ergodic but does not exhibit a significant amplitude scatter of the time averaged mean squared displacement. More severely, we demonstrate that under confinement the dynamics encoded by SBM is fundamentally different from both fractional Brownian motion and continuous time random walks. SBM is highly non-stationary and cannot provide a physical description for particles in a thermalised stationary system. Our findings have direct impact on the modelling of single particle tracking experiments, in particular under confinement inside cellular compartments or when optical tweezers tracking methods are used.
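The free-SBM scaling ⟨x²(t)⟩ = 2Kαt^α can be checked with a minimal simulation (parameter values are illustrative): SBM is built from independent Gaussian increments whose variance follows the time-dependent diffusivity, and the anomalous exponent is recovered from the ensemble MSD.

```python
import math
import random

def simulate_sbm(alpha, k_alpha, n_steps, dt, rng):
    """One scaled-Brownian-motion trajectory: independent Gaussian
    increments whose variance follows the time-dependent diffusivity
    K(t) = alpha * k_alpha * t**(alpha - 1), so that the ensemble MSD
    obeys <x^2(t)> = 2 * k_alpha * t**alpha."""
    x, t = 0.0, 0.0
    traj = [0.0]
    for i in range(1, n_steps + 1):
        t_next = i * dt
        # Exact step variance: the integral of 2 K(s) ds over the step.
        var = 2.0 * k_alpha * (t_next ** alpha - t ** alpha)
        x += rng.gauss(0.0, math.sqrt(var))
        traj.append(x)
        t = t_next
    return traj

rng = random.Random(1)
alpha, k_alpha, n_steps, dt = 0.5, 1.0, 100, 0.01
ensemble = [simulate_sbm(alpha, k_alpha, n_steps, dt, rng) for _ in range(3000)]
msd50 = sum(tr[50] ** 2 for tr in ensemble) / len(ensemble)
msd100 = sum(tr[100] ** 2 for tr in ensemble) / len(ensemble)
# The exponent recovered from the ensemble MSD should be close to alpha.
est_alpha = math.log(msd100 / msd50) / math.log(2.0)
print(round(est_alpha, 2))
```

Note that this ensemble behaviour is exactly what makes free SBM look adequate; the non-ergodicity and the pathologies under confinement discussed in the abstract only appear when time averages of single trajectories, or confined dynamics, are examined instead.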