Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with built-in fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation, and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma (PC12) cells. Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion, as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism, as well as the effect of mechanical stimulation on dopamine release, were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases to families was possible, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat and an external electromagnetic field. To this end physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not (as usual) restricted to the description of the deformation of three-dimensional bodies in a three-dimensional space, but one can also describe the deformation of membranes and deformations in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy as a scalar law can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy is invariant under the action of arbitrary diffeomorphisms on the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow the presence of electromagnetic fields. Usually, in works on electroelasticity the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of constitutive relations describing the material. Unfortunately, the opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present.
Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. Thus, we show that one can replace the use of the entropy inequality by the demand that for a given process balance of energy is invariant under the action of arbitrary diffeomorphisms on the surrounding space and under linear rescalings of the temperature. On the one hand, this demand also yields the desired restrictions to the form of the constitutive relations. On the other hand, it needs much weaker assumptions than the arguments in physics literature that are employing the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes. This time, our result is, like theirs, only valid for bodies that have the same dimension as the surrounding space.
Picosecond X-ray absorption spectroscopy (XAS) is used to investigate the electronic and structural dynamics initiated by plasmon excitation of 1.8 nm diameter Au nanoparticles (NPs) functionalised with 1-hexanethiol. We show that 100 ps after photoexcitation the transient XAS spectrum is consistent with an 8% expansion of the Au–Au bond length and a large increase in disorder associated with melting of the NPs. Recovery of the ground state occurs with a time constant of ∼1.8 ns, arising from thermalisation with the environment. Simulations reveal that the transient spectrum exhibits no signature of charge separation at 100 ps and allow us to estimate an upper limit for the quantum yield (QY) of this process to be <0.1.
Synchronization is a fundamental phenomenon in nature. It can be considered as a general property of self-sustained oscillators to adjust their rhythm in the presence of an interaction.
In this work we investigate complex regimes of synchronization phenomena by means of theoretical analysis, numerical modeling, as well as practical analysis of experimental data.
As a subject of our investigation we consider the chimera state, in which spontaneous symmetry-breaking splits an initially homogeneous lattice of oscillators into two parts with different dynamics. The chimera state as a new synchronization phenomenon was first found in systems of non-locally coupled oscillators and has attracted a lot of attention in the last decade. However, recent studies indicate that this state is also possible in globally coupled systems. In the first part of this work, we show under which conditions the chimera-like state appears in a system of globally coupled identical oscillators with intrinsic delayed feedback. The results of the research explain how initially monostable oscillators become effectively bistable in the presence of the coupling and create a mean field that sustains the coexistence of synchronized and desynchronized states. We also discuss other examples in which a chimera-like state appears due to the frequency dependence of the phase shift in the bistable system.
In the second part, we investigate this topic further by modeling the influence of an external periodic force on an oscillator with intrinsic delayed feedback. We performed a stability analysis of the synchronized state and constructed Arnold tongues. The results explain the formation of the chimera-like state and the hysteretic behavior of the synchronization area. We also consider two sets of parameters of the oscillator, with symmetric and asymmetric Arnold tongues, that correspond to mono- and bistable regimes of the oscillator.
In the third part, we present the results of work done in collaboration with our colleagues from the Psychology Department of the University of Potsdam. The project aimed to study the effect of the cardiac rhythm on human time perception using synchronization analysis. For our part, we performed a statistical analysis of the data obtained from an experiment on a free time-interval reproduction task. We examined how one's heartbeat influences time perception and searched for possible phase synchronization between heartbeat cycles and time reproduction responses. The findings support the prediction that cardiac cycles can serve as input signals that are used for the reproduction of time intervals in the range of several seconds.
We present an electrochemical MIP sensor for tamoxifen (TAM), a nonsteroidal anti-estrogen, which is based on the electropolymerisation of an o-phenylenediamine/resorcinol mixture directly on the electrode surface in the presence of the template molecule. Up to now, only bulk MIPs for TAM have been described in the literature, which are applied for separation in chromatography columns. Electropolymerisation of the monomers in the presence of TAM generated a film which completely suppressed the reduction of ferricyanide. Removal of the template gave a markedly increased ferricyanide signal, which was again suppressed after rebinding, as expected for filling of the cavities by target binding. The decrease of the ferricyanide peak of the MIP electrode depended linearly on the TAM concentration between 1 and 100 nM. The TAM-imprinted electrode showed a 2.3 times higher recognition of the template molecule itself as compared to its metabolite 4-hydroxytamoxifen, and no cross-reactivity with the anticancer drug doxorubicin was found. Measurements at +1.1 V caused fouling of the electrode surface, whilst pretreatment of TAM with peroxide in the presence of HRP generated an oxidation product which was reducible at 0 mV, thus circumventing polymer formation and electrochemical interferences.
The large-scale green synthesis of graphene-type two-dimensional materials is still challenging. Herein, we describe the ionothermal synthesis of carbon-based composites from fructose in the iron-containing ionic liquid 1-butyl-3-methylimidazolium tetrachloridoferrate(III), [Bmim][FeCl4], serving as solvent, catalyst, and template for product formation. The resulting composites consist of oligo-layer graphite nanoflakes and iron carbide particles. The mesoporosity, strong magnetic moment, and high specific surface area of the composites make them attractive for water purification with facile magnetic separation. Moreover, Fe3C-free graphite can be obtained via acid etching, providing access to fairly large amounts of graphite material. The current approach is versatile and scalable, and thus opens the door to ionothermal routes for the larger-scale synthesis of materials that are not only made via a sustainable process but also useful for water treatment, such as the removal of organic molecules.
The synthesis of two novel types of π-expanded coumarins has been developed. Modified Knoevenagel bis-condensation afforded 3,9-dioxa-perylene-2,8-diones. Subsequent oxidative aromatic coupling or light driven electrocyclization reaction led to dibenzo-1,7-dioxacoronene-2,8-dione. Unparalleled synthetic simplicity, straightforward purification and superb optical properties have the potential to bring these perylene and coronene analogs towards various applications.
Graphitic carbon nitride, g-C₃N₄, is a promising organic photocatalyst for a variety of redox reactions. In order to improve its efficiency in a systematic manner, however, a fundamental understanding of the microscopic interaction between catalyst, reactants and products is crucial. Here we present a systematic study of water adsorption on g-C₃N₄ by means of density functional theory and the density functional based tight-binding method as a prerequisite for understanding photocatalytic water splitting. We then analyze this prototypical redox reaction on the basis of a thermodynamic model providing an estimate of the overpotential for both water oxidation and H⁺ reduction. While the latter is found to occur readily upon irradiation with visible light, we derive a prohibitive overpotential of 1.56 eV for the water oxidation half reaction, comparing well with the experimental finding that, in contrast to H₂ production, O₂ evolution is only possible in the presence of oxidation cocatalysts.
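The overpotential estimate in such a thermodynamic model follows standard computational-hydrogen-electrode bookkeeping: the overpotential is the largest free-energy step of the four proton-coupled electron transfers minus the ideal 1.23 V. A minimal sketch of that arithmetic (the four ΔG values below are invented placeholders chosen only to reproduce the quoted 1.56 V figure; they are not the paper's computed energies):

```python
# Computational-hydrogen-electrode estimate of the water-oxidation overpotential.
# The DeltaG values are illustrative placeholders, not the paper's DFT data; only
# the resulting overpotential (1.56 V) matches the value quoted in the abstract.
E_IDEAL = 1.23  # V, equilibrium potential of water oxidation

def oer_overpotential(dg_steps_eV):
    """Overpotential = largest (potential-determining) step minus the ideal 1.23 V."""
    # The four steps must sum to the total reaction free energy, 4 * 1.23 eV.
    assert abs(sum(dg_steps_eV) - 4 * E_IDEAL) < 1e-6
    return max(dg_steps_eV) - E_IDEAL

dg = [2.79, 1.00, 0.80, 0.33]  # hypothetical proton-coupled electron-transfer steps, eV
print(round(oer_overpotential(dg), 2))  # → 1.56
```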
Portal Wissen = Time
(2014)
“What then is time?”, Augustine of Hippo sighs melancholically in Book XI of “Confessions” and continues, “If no one asks me, I know; if I want to explain it to a questioner, I don’t know.” Even today, 1584 years after Augustine, time still appears mysterious. Treatises about the essence of time fill whole libraries – and this magazine.
However, questions of essence are alien to modern sciences. Time is – at least in physics – unproblematic: “Time is defined so that motion looks simple”, briefly and prosaically phrased, waves goodbye to Augustine’s riddle and to the Newtonian concept of absolute time, whose mathematical flow can only be approximately recorded with earthly instruments anyway.
In our everyday language and even in science we still speak of the flow of time but time has not been a natural condition for quite a while now. It is rather a conventional order parameter for change and movement. Processes are arranged by using a class of processes as a counting system in order to compare other processes and to organize them with the help of the temporal categories "before", "during", and "after".
During Galileo’s time one’s own pulse was seen as the time standard for the flight of cannon balls. More sophisticated examination methods later made this seem too impractical. The distance-time diagrams of free-flying cannon balls turned out to be rather imprecise, difficult to replicate, and in no way “simple”. Nowadays, we use caesium atoms. A process is said to take one second when a caesium-133 atom completes 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state. A meter is the length of the path travelled by light in a vacuum in exactly 1/299,792,458 of a second. Fortunately, these data are hard-coded in the Global Positioning System GPS so users do not have to reenter them each time they want to know where they are. In the future, however, they might have to download an app because the time standard may be replaced by more sophisticated transitions in ytterbium.
The conventional character of the time concept should not tempt us to believe that everything is somehow relative and, as a result, arbitrary. The relation of one’s own pulse to an atomic clock is absolute and as real as the relation of an hourglass to the path of the sun. The exact sciences are relational sciences. They are not about the thing-in-itself as Newton and Kant dreamt, but rather about relations as Leibniz and, later, Mach pointed out.
It is not surprising that the physical time standard turned out to be rather impractical for other scientists. The psychology of time perception tells us – and you will all agree – that perceived age is quite different from physical age. The older we get, the shorter the years seem. If we simply assume that perceived duration is inversely related to physical age and that a 20-year-old also perceives a physical year as a psychological one, we come to the surprising discovery that at 90 years we are 90 years old. With an assumed life expectancy of 90 years, 67% (or 82%) of your felt lifetime is behind you at the age of 20 (or 40) physical years.
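The editorial's arithmetic can be checked in a few lines. If perceived duration is inversely proportional to physical age, felt time accumulates as the integral of 1/age, so the psychological clock runs logarithmically. Taking a lower integration limit of one year (an assumption of this sketch; the text does not state where felt time starts), the quoted 67% and 82% follow:

```python
import math

# Felt-lifetime fraction under the assumption that perceived duration is
# inversely proportional to physical age: integrating 1/t gives a logarithmic
# "psychological clock". The lower limit of 1 year is an assumption of this
# sketch, not a statement of the editorial.
def felt_fraction(age, life_expectancy=90, start_age=1):
    return math.log(age / start_age) / math.log(life_expectancy / start_age)

print(round(felt_fraction(20) * 100))  # → 67
print(round(felt_fraction(40) * 100))  # → 82
```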
Before we start to wallow in melancholy in the face of the “relativity of time”, let me again quote Augustine. “But at any rate this much I dare affirm I know: that if nothing passed there would be no past time; if nothing were approaching, there would be no future time; if nothing were, there would be no present time.” Well – or, as Bob Dylan sings, “The times they are a-changin’”.
I wish you an exciting time reading this issue.
Prof. Martin Wilkens
Professor of Quantum Optics
Reviewed work:
George, Rosemary Marangoly, Indian English and the Fiction of National Literature - Cambridge: Cambridge University Press, 2013. - Hb. viii, 285 pp. - (Zeitschrift für Anglistik und Amerikanistik ; 62(4)) ISBN 978-1-107-04000-7.
Magnetite is an iron oxide which is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides because it contains both divalent and trivalent iron. Consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Furthermore, magnetic properties are used in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is exploited in biosensing and hyperthermia, or is used in storage media.
Magnetic properties are strongly size-dependent and achieving size control under preferably mild synthesis conditions is of interest in order to obtain particles with required properties. By using a custom-made setup, it was possible to synthesize stable single domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, mechanistic approaches about the details are incomplete.
Formation of magnetite from solution was shown to occur from nanoparticulate matter rather than solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic or thermodynamic data. Synthesis of magnetite nanoparticles at different temperatures was performed and the Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol⁻¹, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
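The Arrhenius analysis mentioned above can be sketched as follows. The rate constants are synthetic values generated for Ea = 28.4 kJ mol⁻¹, not the experimental data of the thesis; the point is only to show how the activation energy is recovered from the slope of ln k versus 1/T:

```python
import math

# Extracting an activation energy from an Arrhenius plot (ln k versus 1/T).
# The rate constants below are synthetic, generated for Ea = 28.4 kJ/mol with an
# arbitrary pre-exponential factor; they are placeholders, not measured data.
R = 8.314          # gas constant, J mol^-1 K^-1
EA_TRUE = 28_400.0 # J/mol
A = 1.0e6          # arbitrary pre-exponential factor

temps = [298.0, 313.0, 328.0, 343.0]  # K
ks = [A * math.exp(-EA_TRUE / (R * T)) for T in temps]

# Least-squares slope of ln k against 1/T; Ea = -R * slope.
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
ea_fit = -R * slope
print(round(ea_fit / 1000, 1))  # → 28.4 (kJ/mol)
```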
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also depend on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process have not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones and the lower the storage temperature, the lower the measured oxidation. In addition, the magnetic properties of the altered particles are not decreased dramatically, thus suggesting that this alteration will not impact the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology due to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferably bind to (111)-magnetite faces. The aim was to control the shape of magnetite nanoparticles during the formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions. Magnetite and further trivalent oxides were found.
This work helps in understanding basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that the alteration of magnetite nanoparticles is mainly based on oxidation to maghemite and does not significantly influence the magnetic properties. Finally, the biomimetic experiments help in understanding the role of MamP within the bacteria and, furthermore, a first step was taken towards morphology control in magnetite formation via co-precipitation.
New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we used acetalisation reactions extensively to build up various molecular rods. Based on this approach, we conducted investigations towards porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. By scrambling experiments with 1D acetals we could prove that exchange reactions occur between different building blocks (evidenced by MALDI-TOF mass spectrometry). Based on these results, we synthesized porous 3D polyacetals under the same mild conditions.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in the traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ∼7 MPa that was independently determined from force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
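The structure of such a Gaussian-prior inversion can be sketched with a toy linear model: the posterior mean combines the data misfit and the prior in closed form. The operator, data, and covariances below are invented for illustration and bear no relation to the actual Maule fault discretization:

```python
import numpy as np

# Minimal linear Gaussian (Bayesian) inversion with a prior on the model
# parameters - the same structure as constraining fault slip with a Gaussian
# stress-drop prior. G, d, and all covariances are toy values.
rng = np.random.default_rng(0)

m_true = np.array([2.0, -1.0, 0.5])               # "slip" parameters
G = rng.normal(size=(50, 3))                      # forward operator (Green's functions)
d = G @ m_true + rng.normal(scale=0.05, size=50)  # noisy "geodetic" data

sigma_d = 0.05             # data standard deviation
mu_prior = np.zeros(3)     # prior mean
sigma_m = 1.0              # prior standard deviation

# Posterior mean of a linear Gaussian model:
# m_post = (G^T G / s_d^2 + I / s_m^2)^-1 (G^T d / s_d^2 + mu / s_m^2)
A = G.T @ G / sigma_d**2 + np.eye(3) / sigma_m**2
b = G.T @ d / sigma_d**2 + mu_prior / sigma_m**2
m_post = np.linalg.solve(A, b)
print(m_post)  # close to m_true; the prior regularizes without ad-hoc smoothing
```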
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Background Transcatheter aortic-valve implantation (TAVI) is an established alternative therapy in patients with severe aortic stenosis and a high surgical risk. Despite a rapid growth in its use, very few data exist about the efficacy of cardiac rehabilitation (CR) in these patients. We assessed the hypothesis that patients after TAVI benefit from CR, compared to patients after surgical aortic-valve replacement (sAVR).
Methods From September 2009 to August 2011, 442 consecutive patients after TAVI (n=76) or sAVR (n=366) were referred to a 3-week CR. Data regarding patient characteristics as well as changes of functional status (6-min walk test (6-MWT), bicycle exercise test) and emotional status (Hospital Anxiety and Depression Scale) were retrospectively evaluated and compared between groups after propensity score adjustment.
Results Patients after TAVI were significantly older (p<0.001), more often female (p<0.001), and more often had coronary artery disease (p=0.027), renal failure (p=0.012) and a pacemaker (p=0.032). During CR, distance in the 6-MWT (both groups p<0.001) and exercise capacity (sAVR p<0.001, TAVI p<0.05) increased significantly in both groups. Only patients after sAVR demonstrated a significant reduction in anxiety and depression (p<0.001). After propensity score adjustment, changes were not significantly different between sAVR and TAVI, with the exception of the 6-MWT (p=0.004).
Conclusions Patients after TAVI benefit from cardiac rehabilitation despite their older age and comorbidities. CR is a helpful tool for maintaining independence in daily life activities and participation in socio-cultural life.
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as estimated GFR <60 ml/min/1.73 m(2).
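For reference, the Cockcroft-Gault calculation used in the study can be written out directly; the patient values in the example are hypothetical. Note that Cockcroft-Gault returns creatinine clearance in ml/min without body-surface normalization, so comparison with the 60 ml/min/1.73 m² threshold is an approximation:

```python
# Cockcroft-Gault estimate of creatinine clearance, the formula used in the
# study to classify CKD (estimated GFR < 60 ml/min/1.73 m^2). The formula is
# standard; the patient values below are hypothetical.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction factor for women

# A hypothetical 72-year-old, 70 kg female patient with serum creatinine 1.2 mg/dl:
egfr = cockcroft_gault(72, 70, 1.2, female=True)
print(round(egfr, 1), "-> CKD" if egfr < 60 else "-> no CKD")  # prints: 46.8 -> CKD
```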
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a CHD family history. Exercise capacity was much lower in CKD patients (59 versus 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD, compared with non-CKD patients, less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors, and more frequently received angiotensin receptor blockers, insulin and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
The International Association for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies, conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam.
Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video studies in investigating mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies
Hopefully, this dissertation will be a useful reference for pre-service teachers at education universities to understand the nature of teaching and develop their teaching careers.
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present a software providing a complete pipeline for automated reasoning on the response of logical signaling networks.
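As a toy illustration of what "the response of a logical signaling network" means, a Boolean network can be simulated to its fixed point under a given perturbation. The three-node network below is invented for illustration only; the thesis itself reasons about such models declaratively with ASP solvers rather than by explicit simulation:

```python
# Minimal sketch of computing the response of a Boolean logical signaling
# network. The network (toy rules for a receptor/kinase/inhibitor motif) is
# invented; the thesis uses Answer Set Programming, not plain Python.
rules = {
    # each node's next state as a Boolean function of the current state
    "raf": lambda s: s["egf"],                    # receptor signal activates raf
    "erk": lambda s: s["raf"] and not s["inh"],   # inhibitor blocks erk activation
}

def fixed_point(state, rules, max_steps=100):
    """Synchronously update all nodes until the state stops changing."""
    for _ in range(max_steps):
        new = dict(state)
        new.update({node: f(state) for node, f in rules.items()})
        if new == state:
            return state
        state = new
    raise RuntimeError("no fixed point reached")

# Response to the perturbation "EGF on, inhibitor off":
s0 = {"egf": True, "inh": False, "raf": False, "erk": False}
fp = fixed_point(s0, rules)
print(fp)  # raf and erk both end up True
```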
This study aims at a deeper mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. Long-term incubation studies in cultured cells were therefore carried out to reveal chronically acquired changes, which cannot be observed in the generally applied short-term in vitro incubation studies. In particular, the cytotoxic, genotoxic and epigenetic effects of incubating human urothelial (UROtsa) cells for up to 21 days with pico- to nanomolar concentrations of iAsIII and its metabolite thio-DMAV were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAsIII and might partly be due to glutathione depletion and genotoxic effects on the chromosomal level. These results are in strong contrast to cells exposed to thio-DMAV: cells seemed to be able to adapt to this arsenical, as indicated, among others, by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAsIII and thio-DMAV caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects are reported for thio-DMAV; iAsIII-induced epigenetic effects occur at concentrations at least 8000-fold lower than previously reported in vitro. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which certainly has to be investigated further in future studies.
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach for the tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated as a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS receivers and accelerometers. The experimental results show that the baseline shifts of the accelerometer are automatically corrected and that high-precision coseismic information on strong ground motion can be obtained in real time. Additionally, the convergence and precision of the PPP solution are improved by the combined solution.
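The core idea of treating the accelerometer baseline shift as a random-walk state can be sketched with a one-dimensional Kalman filter. This is a simplified, hypothetical stand-in for the PPP integration described above; the synthetic signal, noise levels, and tuning constants are all assumptions for illustration.

```python
import math

# A minimal sketch: track a slowly drifting accelerometer baseline, modeled as
# a random walk, inside a one-state Kalman filter, then subtract it from the record.

def kalman_baseline(obs, q=1e-4, r=1.0):
    """Estimate a slowly varying baseline in a noisy observation series.

    q: random-walk process noise (how fast the baseline may drift)
    r: observation noise variance (the 'shaking' treated as noise here)
    """
    x, p = 0.0, 1.0          # baseline estimate and its variance
    est = []
    for z in obs:
        p += q               # predict: the random walk inflates uncertainty
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update toward the observation
        p *= (1.0 - k)
        est.append(x)
    return est

# Synthetic record: zero-mean shaking plus a baseline step of 0.5 after sample 200.
obs = [0.1 * math.sin(0.3 * i) + (0.5 if i >= 200 else 0.0) for i in range(600)]
baseline = kalman_baseline(obs)
corrected = [z - b for z, b in zip(obs, baseline)]
```

Because the process noise q is small, the filter follows the slow baseline step while largely ignoring the faster oscillatory ground motion.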
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the - preferably automated - means to extract and evaluate information from their artifacts, such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means for exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature.
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
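The icicle-plot concept for trace visualization mentioned above can be illustrated with a small layout computation: nested function calls become stacked rectangles, where width encodes duration and vertical level encodes stack depth. The trace format and layout below are illustrative assumptions, not the thesis' tool.

```python
# A minimal sketch of an icicle-plot layout from a nested call trace.
# A call is (name, duration, [child calls]); children run within their parent.

def icicle(call, x=0.0, depth=0, out=None):
    """Flatten a call tree into rectangles: {name, x, width, depth}."""
    if out is None:
        out = []
    name, duration, children = call
    out.append({"name": name, "x": x, "width": duration, "depth": depth})
    cx = x
    for child in children:
        icicle(child, cx, depth + 1, out)
        cx += child[1]       # next sibling starts where this child's span ends
    return out

# Hypothetical single-threaded trace: main calls parse (which calls lex), then exec.
trace = ("main", 10.0, [
    ("parse", 4.0, [("lex", 2.0, [])]),
    ("exec", 5.0, []),
])
rects = icicle(trace)
```

A multi-threaded variant, as in the thesis, would lay out one such rectangle stack per thread and add markers for synchronization events between them.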
Bolivia is one of the poorest countries in Latin America. This study analyzes whether rural poverty increases the incidence of food insecurity and whether food insecurity perpetuates the condition of poverty among the rural poor in Bolivia. In order to achieve this aim, the risks that households face and the capacity of households to implement coping strategies to mitigate shocks are identified. We suggest that efforts by households to become food secure may be difficult in rural areas because of poverty and the vulnerability associated with a lack of physical assets, low levels of human capital, poor infrastructure, and poor health, as well as a precarious regional environment that aggravates the severity of vulnerability to food insecurity.
Data processing and storage today rely mainly on ferromagnetic (FM) materials. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities, and greater energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often lead to spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), frequently accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristic of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation.
In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should be faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast onset of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft x-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
In the field of disk-based parallel database management systems, there exists a great variety of solutions based on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Modern network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing the main memory of the local server and that of a remote server to a single order of magnitude, or even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage, and the self-designed and developed in-memory AnalyticsDB as relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface which allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion for site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace- vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations, and their overlapping as well as the allowed resource allocation.
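The site-selection trade-off for a single-relation operator (data pull vs. operator push) can be sketched with a back-of-the-envelope cost model in the spirit of the system model described above. The cost constants, rates, and selectivity below are invented for illustration; they are not measurements from the evaluation.

```python
# A minimal sketch of scan-operator site selection: either pull the whole
# relation over the network and filter locally, or push the predicate into the
# storage system and transfer only the matching rows.

def cost_pull(rows, row_bytes, net_bw, scan_rate):
    """Transfer everything, then scan locally (seconds)."""
    return rows * row_bytes / net_bw + rows / scan_rate

def cost_push(rows, row_bytes, selectivity, net_bw, remote_scan_rate):
    """Scan remotely in the storage system, transfer only matches (seconds)."""
    return rows / remote_scan_rate + rows * selectivity * row_bytes / net_bw

rows, row_bytes = 10_000_000, 100
net_bw = 1e9             # bytes/s over the RDMA link (assumed)
scan_rate = 5e7          # rows/s scanned locally (assumed)
remote_scan_rate = 2e7   # rows/s in the storage system, assumed slower

# With a highly selective predicate, pushing wins: little data crosses the wire.
pull = cost_pull(rows, row_bytes, net_bw, scan_rate)
push = cost_push(rows, row_bytes, 0.01, net_bw, remote_scan_rate)
```

With selectivity near 1.0 the comparison flips, which is exactly why the optimal site can vary per operator within one query.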
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast crash recovery feature of RAMCloud can be leveraged to provide high availability; e.g. a server crash during query execution delays the query response by only about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation of the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13C(org) and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain quantifying flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. Therefore, it is recommended to frequently consider land use scenarios and economic developments when assessing future flood risks. Further, an adapted and sustainable risk management is necessary to counter rising flood losses, in which non-structural measures are becoming more and more important. The case study demonstrates that adaptation by non-structural measures such as stricter land use regulations or enhancement of private precaution is capable of reducing flood risk by around 30 %. Ignoring flood risks, in contrast, always leads to further increasing losses; under our assumptions, by 17 %. These findings underline that private precaution and land use regulation could be taken into account as low-cost adaptation strategies to global climate change in many flood-prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: A proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, by which the hosts' reliability can be enhanced by avoiding failures during power state changes. The proposed technique exploits the range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for workloads with high fluctuations in the short term. The range prediction is implemented in two ways: by standard deviation and by median absolute deviation. The range is adjusted based on an adaptive confidence window to cope with workload fluctuations.
A robust VM consolidation for efficient energy and performance management, achieving an equilibrium in the energy-performance trade-off. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also contributes to a reduction in the energy consumption of the network infrastructure; additionally, it reduces SLA violations and the number of power state changes. A generic model of the data center network to simulate communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies; this allows simulating memory delay and its influence on VM performance, as well as memory energy consumption. A communication-aware and energy-efficient consolidation for parallel applications that enables the dynamic discovery of communication patterns and reschedules VMs via migration based on the determined patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization, instead of using information from the hosts' virtual switches or requiring initiation from the VMs. The results show that our proposed approach reduces the network's average utilization, achieves energy savings by reducing the number of active switches, and provides better VM performance compared to CPU-based placement. A memory-aware VM consolidation for independent VMs, which exploits the diversity of the VMs' memory access to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory is combined with the proposed MLB technique to achieve better energy savings.
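The range-based prediction idea described above (a demand range whose spread is measured either by the standard deviation or by the more outlier-robust median absolute deviation) can be sketched as follows. The window size, the width factor k, and the demand series are illustrative assumptions, not the dissertation's algorithm.

```python
import statistics

# A minimal sketch of range-based demand prediction: predict the next demand as
# [center - k*spread, center + k*spread], with a robust (MAD) or non-robust
# (standard deviation) estimate of the spread over a sliding window.

def predict_range(history, window=10, k=2.0, robust=True):
    recent = history[-window:]
    if robust:
        center = statistics.median(recent)
        # median absolute deviation: median of |x - median|
        spread = statistics.median(abs(x - center) for x in recent)
    else:
        center = statistics.mean(recent)
        spread = statistics.pstdev(recent)
    return center - k * spread, center + k * spread

# Demand series with one outlier spike; the robust range ignores it.
history = [10, 11, 9, 10, 12, 10, 11, 10, 100, 10]
lo, hi = predict_range(history)
```

With the spike in the window, the MAD-based range stays narrow around the typical demand, while the standard-deviation variant is blown up by the outlier; an adaptive confidence window, as in the dissertation, would additionally tune k over time.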
The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node.
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and the development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) through surface subsidence due to loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if the stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, which would accelerate warming and induce more permafrost thaw and carbon release. To address this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (concerning degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- What is the susceptibility of the Yedoma region's carbon for future decomposition?
To address these three research questions, an interdisciplinary approach was applied, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling. To provide a panarctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted with a multi-modal grain-size distribution, indicating various (re-) transport processes. This contradicts the popular pure loess deposition hypothesis for the origin of Yedoma permafrost. The absence of large-scale grinding processes via glaciers and ice sheets in northeast Siberian lowlands, processes which are necessary to create loess as material source, suggests the polygenetic origin of Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs, Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt organic carbon, or a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Despite the fact that these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
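The bootstrapping approach to the uncertainty of the carbon-pool estimate mentioned above can be illustrated with a generic sketch: resample site-level values with replacement and read confidence bounds off the distribution of resampled means. The per-site values below are synthetic and purely illustrative; they are not data from the thesis' database.

```python
import random

# A minimal sketch of a bootstrap confidence interval for a mean carbon density.

def bootstrap_ci(samples, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean of `samples`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(samples) for _ in samples) / len(samples)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-site organic carbon densities (kg C per m^3)
sites = [18.2, 22.5, 15.9, 30.1, 25.4, 19.8, 21.0, 27.3, 16.5, 23.9]
lo, hi = bootstrap_ci(sites)
```

Propagating such a per-parameter range through the product of carbon content, bulk density, and deposit volume is one way to arrive at pool-level uncertainty bounds like the ±20 % cited above.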
Once the material is thawed and reintroduced into the active carbon cycle, the quality of the organic matter becomes relevant. Investigations of Yedoma and thermokarst organic matter showed no depth-dependent quality trend, evidence that after freezing the ancient organic matter is preserved in a state of constant quality. The applied alkane- and fatty-acid-based biomarker proxies, including the carbon-preference and the higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different quality of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences indicates that organic matter biodegradability depends on different decomposition trajectories and on the previous decomposition/incorporation history. Finally, the fate of the organic matter has been assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt by the year 2100 and up to ~310 Gt by the year 2300. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100. The model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well preserved and therefore available for decomposition after thaw. The missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When the organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to global warming by the year 2300.
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, numerous open questions remain in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - the electromagnetic, weak and strong interactions - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime with 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both issues complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done at different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g.
diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed-point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid tedious work, the literature recommends avoiding unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. In addition, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparisons with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a perfect remedy to this problem. Producing such simulations is the goal of the projects Cosmic flows and CLUES. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the uniform expansion. These measurements are mainly obtained with the galaxy luminosity - rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs reaching out to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method on these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation. The latter is then reversed to relocate reconstructed three-dimensional constraints to their precursors' positions in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs built from cosmological simulations, a method to minimize observational bias can be derived. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe.
The major attractors and voids are simulated at positions within a few megaparsecs of their observed positions, thus reaching the limit imposed by linear theory.
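The Zel'dovich step mentioned above amounts to solving a Poisson equation for the displacement field in Fourier space. The following is a minimal illustration of that principle on a periodic grid, not the CLUES implementation; the function name and grid setup are hypothetical:

```python
import numpy as np

def zeldovich_displacement(delta, box_size):
    """Zel'dovich displacement field on a periodic 3D grid:
    solve nabla^2 phi = delta with FFTs, then psi = -grad(phi).
    Particles at Lagrangian positions q move to x = q + D(t) * psi(q),
    with D(t) the linear growth factor."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # the mean mode exerts no force; avoid 0/0
    delta_k = np.fft.fftn(delta)
    phi_k = -delta_k / k2          # Poisson equation in Fourier space
    # psi = -grad(phi)  ->  psi_k = -i k phi_k, one component per axis
    psi = [np.real(np.fft.ifftn(-1j * ki * phi_k)) for ki in (kx, ky, kz)]
    return psi[0], psi[1], psi[2]
```

For a single plane-wave perturbation delta = A cos(kx), the recovered displacement is -(A/k) sin(kx), which can serve as a quick sanity check of the sign and normalization conventions.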
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.
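The looping probability of the chain ends referred to above can be illustrated, in the simplest crowder-free limit, by a Monte Carlo estimate for an ideal freely jointed chain. This is a toy sketch under stated assumptions (no excluded volume, no crowders, no dynamics), not the simulation model of the paper; the capture-radius parameter is hypothetical:

```python
import numpy as np

def looping_probability(n_bonds, capture_radius, n_samples=20000, seed=0):
    """Monte Carlo estimate of the equilibrium looping probability of an
    ideal freely jointed chain with unit bond length: the fraction of
    random conformations whose end-to-end distance falls below the
    capture radius of the 'sticky' chain ends."""
    rng = np.random.default_rng(seed)
    # isotropic unit bond vectors for every sampled conformation
    v = rng.normal(size=(n_samples, n_bonds, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)
    r_ee = np.linalg.norm(v.sum(axis=1), axis=1)  # end-to-end distance
    return float(np.mean(r_ee < capture_radius))
```

Even this idealized model reproduces the qualitative chain-length dependence: longer chains loop less often, since the end-to-end distance grows as the square root of the number of bonds.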
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, with the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets, which allows a discussion of the source inversion problem at different scales. In the first application, dealing with mining-induced seismicity, the determination of source parameters is addressed at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely targeted by automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered here as a weak seismicity case, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial.
For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights characteristic geometrical features of the fault planes, which are generally consistent with the orientation of the slab. The additional inversion for source duration made it possible to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.
We report a 1,2,3-triazole fluoroionophore for detecting Na+ that shows an in vitro enhancement of the Na+-induced fluorescence intensity and decay time. The Na+-selective molecule 1 was incorporated into a hydrogel as part of a fiber-optical sensor. This sensor allows the direct determination of Na+ in the range of 1–10 mM by measuring reversible changes in fluorescence decay time.
The mystery of the origin of cosmic rays has been tackled for more than a hundred years and is still not solved. Cosmic rays are detected with energies spanning more than 10 orders of magnitude and reaching energies up to ~10²¹ eV, far higher than any man-made accelerator can reach. Different theories on the astrophysical objects and processes creating such highly energetic particles have been proposed.
A very prominent explanation for a process producing highly energetic particles is shock acceleration. The observation of high-energy gamma rays from supernova remnants, some of them revealing a shell-like structure, is clear evidence that particles are accelerated to ultrarelativistic energies in the shocks of these objects. The environments of supernova remnants are complex and challenge detailed modelling of the processes leading to high-energy gamma-ray emission.
The study of shock acceleration at bow shocks, created by the supersonic movement of individual stars through the interstellar medium, offers a unique possibility to determine the physical properties of shocks in a less complex environment. The shocked medium is heated by the stellar and the shock-excited radiation, leading to thermal infrared emission. 28 bow shocks have been discovered through their infrared emission. Nonthermal radiation at radio and X-ray wavelengths has been detected from two bow shocks, pointing to the existence of relativistic particles in these systems. Theoretical models of the emission processes predict high-energy and very-high-energy emission at a flux level within reach of current instruments. This work presents the search for gamma-ray emission from bow shocks of runaway stars in the energy regime from 100 MeV to ~100 TeV.
The search is performed with the Large Area Telescope (LAT) on board the Fermi satellite and the H.E.S.S. telescopes located in the Khomas Highland of Namibia. The Fermi-LAT was launched in 2008 and has been continuously scanning the sky since then. It detects photons with energies from 20 MeV to over 300 GeV and has an unprecedented sensitivity. The all-sky coverage allows us to study all 28 bow shocks of runaway stars listed in the E-BOSS catalogue of infrared bow shocks. No significant emission was detected from any of the objects, although predicted by several theoretical models describing the non-thermal emission of bow shocks of runaway stars.
The H.E.S.S. experiment is the most sensitive system of imaging atmospheric Cherenkov telescopes. It detects photons from several tens of GeV to ~100 TeV. Seven of the bow shocks have been observed with H.E.S.S., and the data analysis is presented in this thesis. The analyses of the very-high-energy data did not reveal significant emission from any of the sources either.
This work presents the first systematic search for gamma-ray emission from bow shocks of runaway stars. For the first time, Fermi-LAT data were specifically analysed to reveal emission from bow shocks of runaway stars. In the TeV regime no searches for emission from these objects have been published so far; the study presented here is the first in this energy regime. The level of the gamma-ray emission from bow shocks of runaway stars is constrained by the calculated upper limits over six orders of magnitude in energy.
The upper limits calculated for the bow shocks of runaway stars in the course of this work constrain several models. For the best candidate, ζ Ophiuchi, the upper limits in the Fermi-LAT energy range are lower than the predictions by a factor of ~5. This challenges the assumptions made in this model and gives valuable input for further modelling approaches.
The analyses were performed with the software packages provided by the H.E.S.S. and Fermi collaborations. The development of a unified analysis framework for gamma-ray data, namely GammaLib/ctools, is rapidly progressing within the CTA consortium. Recent implementations and cross-checks with current software frameworks are presented in the Appendix.
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Composed of an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, as well as the manifold of structural sites that enable chemical modifications, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm^{2}V^{-1}s^{-1}. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers, in combination with their semi-crystalline morphology, impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototype D-A copolymers were analysed: electron transport in several copolymers and the application of low band gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes were made to the copolymer structure. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers.
Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high-performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that this copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive as the electron-accepting component in organic photovoltaic cells. Solar cells were prepared from two NDI-containing copolymers, blended with the hole-transporting polymer P3HT. I demonstrate that the use of appropriate, high boiling point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit device efficiency. The major loss channel for excited states was determined by transient and steady-state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency. The correlation of the photocurrent with microscopic morphology parameters was used to disclose the factors that limit the charge generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system.
This provides an explanation for the overall low efficiencies that are generally observed in all-polymer solar cells.
Formation of a Eu(III) borate solid species from a weak Eu(III) borate complex in aqueous solution
(2014)
In the presence of polyborates (detected by 11B-NMR) the formation of a weak Eu(III) borate complex (lg β11 ∼ 2, estimated) was observed by time-resolved laser-induced fluorescence spectroscopy (TRLFS). This complex is a precursor for the formation of a solid Eu(III) borate species. The formation of this solid in solution was investigated by TRLFS as a function of the total boron concentration: the lower the total boron concentration, the slower the solid formation. The solid Eu(III) borate was characterized by IR spectroscopy, powder XRD and solid-state TRLFS. The determined europium-to-boron ratio suggests the existence of pentaborate units in the amorphous solid.
The tropical warm pool waters surrounding Indonesia are one of the equatorial heat and moisture sources that are considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia, with profound societal and economic impacts on the population of the world's fourth most populous country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific with global effects in the 21st century, and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models on the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, such as tree rings or varved lake sediments, provides insights into the natural climate variability of the past, and thus helps improve and validate simulations of future climate changes. Centennial tree-ring stable isotope records | Within this doctoral thesis the main goal was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with the tree-ring proxies (ring width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events.
Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques which facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure for tree-ring dissection from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The novel sampling techniques presented herein improve stable isotope analyses for tree-ring studies in terms of precision, efficiency and quality. UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision. II) A guideline for a modified method of cellulose extraction from whole-wood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to intra-annual resolution. III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. High-resolution stable oxygen values, furthermore, allow distinguishing between dry season and rainy season rainfall. IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics. The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) for the Indo-Pacific region.
Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
Protein-metal coordination complexes are well known as active centers in enzymatic catalysis, and to contribute to signal transduction, gas transport, and to hormone function. Additionally, they are now known to contribute as load-bearing cross-links to the mechanical properties of several biological materials, including the jaws of Nereis worms and the byssal threads of marine mussels. The primary aim of this thesis work is to better understand the role of protein-metal cross-links in the mechanical properties of biological materials, using the mussel byssus as a model system. Specifically, the focus is on histidine-metal cross-links as sacrificial bonds in the fibrous core of the byssal thread (Chapter 4) and L-3,4-dihydroxyphenylalanine (DOPA)-metal bonds in the protective thread cuticle (Chapter 5).
Byssal threads are protein fibers, which mussels use to attach to various substrates at the seashore. These relatively stiff fibers have the ability to extend up to about 100 % strain, dissipating large amounts of mechanical energy from crashing waves, for example. Remarkably, following damage from cyclic loading, initial mechanical properties are subsequently recovered by a material-intrinsic self-healing capability. Histidine residues coordinated to transition metal ions in the proteins comprising the fibrous thread core have been suggested as reversible sacrificial bonds that contribute to self-healing; however, this remains to be substantiated in situ. In the first part of this thesis, the role of metal coordination bonds in the thread core was investigated using several spectroscopic methods. In particular, X-ray absorption spectroscopy (XAS) was applied to probe the coordination environment of zinc in Mytilus californianus threads at various stages during stretching and subsequent healing. Analysis of the extended X-ray absorption fine structure (EXAFS) suggests that tensile deformation of threads is correlated with the rupture of Zn-coordination bonds and that self-healing is connected with the reorganization of Zn-coordination bond topologies rather than the mere reformation of Zn-coordination bonds. These findings have interesting implications for the design of self-healing metallopolymers.
The byssus cuticle is a protective coating surrounding the fibrous thread core that is both as hard as an epoxy and extensible up to 100 % strain before cracking. It was shown previously that cuticle stiffness and hardness largely depend on the presence of Fe-DOPA coordination bonds. However, the byssus is known to concentrate a large variety of metals from seawater, some of which are also capable of binding DOPA (e.g. V). Therefore, the question arises whether natural variation of metal composition can affect the mechanical performance of the byssal thread cuticle. To investigate this hypothesis, nanoindentation and confocal Raman spectroscopy were applied to the cuticle of native threads, threads with metals removed (EDTA treated), and threads in which the metal ions in the native tissue were replaced by either Fe or V. Interestingly, replacement of metal ions with either Fe or V leads to the full recovery of native mechanical properties with no statistical difference between each other or the native properties. This likely indicates that a fixed number of metal coordination sites are maintained within the byssal thread cuticle – possibly achieved during thread formation – which may provide an evolutionarily relevant mechanism for maintaining reliable mechanics in an unpredictable environment.
While the dynamic exchange of bonds plays a vital role in the mechanical behavior and self-healing of the thread core, by allowing protein-metal bonds to act as reversible sacrificial bonds, the compatibility of DOPA with other metals gives the thread cuticle an inherent adaptability to changing circumstances. The requirements of both of these materials can be met by the dynamic nature of the protein-metal cross-links, whereas covalent cross-linking would provide neither the adaptability of the cuticle nor the self-healing of the core. In summary, these studies of the thread core and the thread cuticle underline the important and dynamic roles of protein-metal coordination in the mechanical function of load-bearing protein fibers such as the mussel byssus.
Critics argue that there has been a trend among Microfinance Institutions (MFIs) to focus on profitability in order to stay financially sustainable. This has led some institutions to neglect the social mission of microfinancing. In this paper I examine whether empirical evidence supports this so-called mission drift hypothesis as well as other claims in this context. Using the global panel data set of the MIX (Microfinance Information Exchange), which spans 1995 to 2010 and contains up to 1400 institutions with a high variety of organizational forms, I was able to identify a world-wide mission drift effect in the social goal of reaching the poorest part of the population. Furthermore, I find that, on average, the outreach of an MFI has a significant negative influence on its short- and long-term financial performance. Nevertheless, I show that the probability that an MFI worsens its social performance increases substantially if its profitability has decreased in the previous years.
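Panel estimations of this kind must absorb institution-specific heterogeneity; a standard tool is the within (fixed-effects) estimator, which demeans each variable by institution before running OLS. The sketch below illustrates that estimator on synthetic data; it is not the actual MIX analysis, and the function and variable names are hypothetical:

```python
import numpy as np

def fixed_effects_ols(y, x, entity):
    """Within (fixed-effects) estimator: demean y and the regressors x
    by entity before ordinary least squares, so that time-invariant
    institution-specific effects drop out of the estimation."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    yd, xd = y.copy(), x.copy()
    for e in np.unique(entity):
        m = entity == e
        yd[m] -= y[m].mean()          # remove the entity mean of y
        xd[m] -= x[m].mean(axis=0)    # remove the entity means of x
    beta, *_ = np.linalg.lstsq(xd, yd, rcond=None)
    return beta
```

With a true slope of 2 and arbitrary institution intercepts in the synthetic data, the estimator recovers the slope exactly, since the demeaning removes the intercepts.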
The atmosphere over the Arctic Ocean is strongly influenced by the distribution of sea ice and open water. Leads in the sea ice produce strong convective fluxes of sensible and latent heat and release aerosol particles into the atmosphere. They increase the occurrence of clouds and modify the structure and characteristics of the atmospheric boundary layer (ABL) and thereby influence the Arctic climate.
In the course of this study, aircraft measurements were performed over the western Arctic Ocean as part of the campaign PAMARCMIP 2012 of the Alfred Wegener Institute for Polar and Marine Research (AWI). Backscatter from aerosols and clouds within the lower troposphere and the ABL was measured with the nadir-pointing Airborne Mobile Aerosol Lidar (AMALi), and dropsondes were launched to obtain profiles of meteorological variables. Furthermore, in situ measurements of aerosol properties, meteorological variables and turbulence were part of the campaign. The measurements covered a broad range of atmospheric and sea ice conditions.
In this thesis, properties of the ABL over Arctic sea ice, with a focus on the influence of open leads, are studied based on the data from the PAMARCMIP campaign. The height of the ABL is determined by different methods that are applied to dropsonde and AMALi backscatter profiles. ABL heights are compared for different flights representing different conditions of the atmosphere and of sea ice and open water influence. The different criteria for ABL height vary considerably in how well they agree with one another, depending on the characteristics of the ABL and its history. It is shown that ABL height determination from lidar backscatter by methods commonly used under mid-latitude conditions is applicable to the Arctic ABL only under certain conditions. Aerosol or clouds within the ABL are needed as a tracer for ABL height detection from backscatter. Hence an aerosol source close to the surface is necessary, which is typically found under the influence of open water and therefore under convective conditions. However, it is not always possible to distinguish residual layers from the actual ABL. Stable boundary layers are generally difficult to detect.
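One widely used backscatter-based criterion of the kind discussed above is the gradient method: the ABL top is placed where the vertical gradient of the backscatter profile is most negative, i.e. at the sharpest decrease in aerosol load. The sketch below illustrates the idea for an idealized single-layer profile; it is not the thesis code and, as noted in the text, the method fails without an aerosol tracer or in the presence of residual layers:

```python
import numpy as np

def abl_height_gradient(z, backscatter):
    """Gradient method for ABL height from a lidar backscatter profile:
    return the altitude where the vertical backscatter gradient is most
    negative, marking the transition from the aerosol-laden ABL to the
    cleaner free troposphere. Assumes aerosol is present as a tracer."""
    grad = np.gradient(backscatter, z)  # d(backscatter)/dz on the z grid
    return z[np.argmin(grad)]
```

For a synthetic profile with a smooth backscatter drop centered at 400 m, the method recovers that altitude as the ABL top.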
To illustrate the complexity of the Arctic ABL and processes therein, four case studies are analyzed each of which represents a snapshot of the interplay between atmosphere and underlying sea ice or water surface. Influences of leads and open water on the aerosol and clouds within the ABL are identified and discussed. Leads are observed to cause the formation of fog and cloud layers within the ABL by humidity emission. Furthermore they decrease the stability and increase the height of the ABL and consequently facilitate entrainment of air and aerosol layers from the free troposphere.
Polyglycolide (PGA) is a biodegradable polymer with multiple applications in the medical sector. Here the synthesis of high molecular weight polyglycolide by ring-opening polymerization of diglycolide is reported. For the first time, stabilizer-free supercritical carbon dioxide (scCO2) was used as the reaction medium. scCO2 allowed for a reduction in reaction temperature compared to conventional processes. Together with the lower monomer concentration, and the consequently reduced heat generation compared to bulk reactions, this strongly reduces the thermal decomposition of the product that otherwise already occurs during polymerization. The reaction temperatures and pressures were varied between 120 and 150 °C and between 145 and 1400 bar. Tin(II) ethyl hexanoate and 1-dodecanol were used as catalyst and initiator, respectively. The highest number average molecular weight of 31 200 g mol−1 was obtained in 5 hours from polymerization at 120 °C and 530 bar. In all cases the products were obtained as a dry white powder. Remarkably, independent of molecular weight, the melting temperatures were always at (219 ± 2) °C.
Background
The Amazon molly, Poecilia formosa (Teleostei: Poeciliinae), is a unisexual, all-female species. It evolved through the hybridisation of two closely related sexual species and exhibits clonal reproduction by sperm-dependent parthenogenesis (or gynogenesis), in which the sperm of a parental species is only used to activate embryogenesis of the apomictic, diploid eggs but does not contribute genetic material to the offspring.
Here we provide and describe the first de novo assembled transcriptome of the Amazon molly in comparison with its maternal ancestor, the Atlantic molly Poecilia mexicana. The transcriptome data were produced through sequencing of single-end libraries (100 bp) with the Illumina sequencing technique.
Results
83,504,382 reads for the Amazon molly and 81,625,840 for the Atlantic molly were assembled into 127,283 and 78,961 contigs, respectively. 63% and 57% of the contigs, respectively, could be annotated with gene ontology terms after sequence similarity comparisons. Furthermore, we were able to identify genes normally involved in reproduction, and especially in meiosis, also in the transcriptome dataset of the apomictically reproducing Amazon molly.
Conclusions
We assembled and annotated the transcriptome of a non-model organism, the Amazon molly, without a reference genome (de novo). The obtained dataset is a fundamental resource for future research in functional and expression analysis. Also, the presence of 30 meiosis-specific genes within a species where no meiosis is known to take place is remarkable and raises new questions for future research.
The subsurface upper Palaeozoic sedimentary successions of the Loppa High half-graben and the Finnmark platform in the Norwegian Barents Sea (southwest Barents Sea) were investigated using 2D/3D seismic datasets combined with well and core data. These sedimentary successions represent a case of mixed siliciclastic-carbonate depositional systems, which formed during the earliest phase of the Atlantic rifting between Greenland and Norway. During the Carboniferous and Permian the southwest part of the Barents Sea was located along the northern margin of Pangaea, which experienced a northward drift at a speed of ~2–3 mm per year. This gradual shift in paleolatitudinal position is reflected in changes in regional climatic conditions: from warm-humid in the early Carboniferous to warm-arid in the middle to late Carboniferous, and finally to colder conditions in the late Permian. Such changes in paleolatitude and climate have resulted in major changes in the style of sedimentation, including variations in the type of carbonate factories. The upper Palaeozoic sedimentary succession is composed of four major depositional units, comprising chronologically the Billefjorden Group, dominated by siliciclastic deposition in extensional tectonic-controlled wedges; the Gipsdalen Group, dominated by warm-water carbonates, stacked buildups and evaporites; the Bjarmeland Group, characterized by cool-water carbonates as well as by the presence of buildup networks; and the Tempelfjorden Group, characterized by fine-grained sedimentation dominated by biological silica production. In the Loppa High, the integration of a core study with multi-attribute seismic facies classification made it possible to highlight the main sedimentary unconformities and to map the spatial extent of a buried paleokarst terrain. This geological feature is interpreted to have formed during a protracted episode of subaerial exposure between the late Palaeozoic and middle Triassic.
Based on seismic sequence stratigraphy analysis, the palaeogeography in time and space of the Loppa High basin was furthermore reconstructed, and a new and more detailed tectono-sedimentary model for this area was proposed. In the Finnmark platform area, a detailed core analysis of two main exploration wells, combined with key 2D seismic sections located along the main depositional profile, allowed the evaluation of depositional scenarios for the two main lithostratigraphic units: the Ørn Formation (Gipsdalen Group) and the Isbjørn Formation (Bjarmeland Group). During the mid-Sakmarian, two major changes were observed between the two formations: (1) a variation in the type of carbonate factory, which is interpreted to be depth-controlled, and (2) a change in platform morphology, which evolved from a distally steepened ramp to a homoclinal ramp. The results of this study may help support future reservoir characterization of the upper Palaeozoic units in the Barents Sea, particularly in the Loppa High half-graben and the Finnmark platform area.
The monsoon is an important component of the Earth’s climate system. It has played a vital role in the development and sustenance of the largely agro-based economy of India. A better understanding of past variations in the Indian Summer Monsoon (ISM) is necessary to assess its nature under global warming scenarios. However, our knowledge of spatiotemporal patterns of past ISM strength, as inferred from proxy records, is limited by the lack of high-resolution paleo-hydrological records from the core monsoon domain.
In this thesis I aim to improve our understanding of Holocene ISM variability in the core ‘monsoon zone’ (CMZ) of India. To achieve this goal, I first characterized the modern hydrology and then reconstructed the Holocene monsoonal hydrology by studying surface sediments and a high-resolution sedimentary record from the saline-alkaline Lonar crater lake, central India. My approach relies on analyzing stable carbon and hydrogen isotope ratios of sedimentary lipid biomarkers to track past hydrological changes.
In order to evaluate the relationship between the modern ecosystem and the hydrology of the lake, I studied the distribution of lipid biomarkers in the modern ecosystem and compared it to that in lake surface sediments. The major plants of the dry deciduous mixed forest type produced a greater amount of leaf wax n-alkanes and a greater fraction of n-C31 and n-C33 alkanes relative to n-C27 and n-C29. The relatively high average chain length (ACL) values (29.6–32.8) for these plants appear to be common for vegetation from arid and warm climates. Additionally, I found that human influence and the resulting nutrient supply increase lake primary productivity, leading to an unusually high concentration of tetrahymanol, a biomarker for salinity and water column stratification, in the nearshore sediments. Given this inhomogeneous deposition of tetrahymanol in modern sediments, I hypothesize that, in addition to source changes, lake level fluctuations may affect aquatic lipid biomarker distributions in lacustrine sediments.
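The ACL index used here is simply the concentration-weighted mean carbon number of the leaf wax n-alkanes. A minimal sketch of the calculation (the abundance values below are hypothetical, not data from this study):

```python
def acl(abundances):
    """Average chain length (ACL): concentration-weighted mean of the
    n-alkane carbon numbers (keys = chain length, values = abundance)."""
    total = sum(abundances.values())
    return sum(n * c for n, c in abundances.items()) / total

# hypothetical relative abundances of odd-numbered leaf wax n-alkanes
sample = {27: 0.10, 29: 0.20, 31: 0.40, 33: 0.30}
print(round(acl(sample), 2))  # → 30.8, within the 29.6–32.8 range reported above
```

Higher ACL values reflect a larger share of the long-chain homologues (n-C31, n-C33), consistent with the arid, warm-climate vegetation signal described above.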
I reconstructed centennial-scale hydrological variability associated with changes in the intensity of the ISM based on a record of leaf wax and aquatic biomarkers and their stable carbon (δ13C) and hydrogen (δD) isotopic composition from a 10 m long sediment core from the lake. I identified three main periods of distinct hydrology over the Holocene in central India. The period between 10.1 and 6 cal. ka BP was likely the wettest during the Holocene. Lower ACL index values (29.4 to 28.6) of leaf wax n-alkanes and their negative δ13C values (–34.8‰ to –27.8‰) indicated the dominance of woody C3 vegetation in the catchment, and negative δDwax (average for leaf wax n-alkanes) values (–171‰ to –147‰) argue for a wet period due to an intensified monsoon. After 6 cal. ka BP, a gradual shift to less negative δ13C values (particularly for the grass-derived n-C31) and the appearance of the triterpene lipid tetrahymanol, generally considered a marker for salinity and water column stratification, marked the onset of drier conditions. At 5.1 cal. ka BP, an increasing flux of leaf wax n-alkanes along with the highest flux of tetrahymanol indicated that the lakeshore had moved closer to the lake center due to a major lake level decrease. Rapid fluctuations in the abundance of both terrestrial and aquatic biomarkers between 4.8 and 4 cal. ka BP indicated an unstable lake ecosystem, culminating in a transition to arid conditions. A pronounced shift to less negative δ13C values, in particular for n-C31 (–25.2‰ to –22.8‰), over this period indicated a change of the dominant vegetation to C4 grasses. Along with a 40‰ increase in leaf wax n-alkane δD values, which likely resulted from less rainfall and/or higher plant evapotranspiration, I interpret this period to reflect the driest conditions in the region during the last 10.1 ka. This transition led to protracted late Holocene arid conditions and the establishment of a permanently saline lake, as supported by the high abundance of tetrahymanol.
A late Holocene peak of cyanobacterial biomarker input at 1.3 cal. ka BP might represent an event of lake eutrophication, possibly due to human impact and the onset of cattle/livestock farming in the catchment.
The most intriguing feature of the mid-Holocene driest period was the high-amplitude, rapid fluctuation in δDwax values, probably due to a change in the moisture source and/or precipitation seasonality. I hypothesize that the orbitally induced weakening of summer solar insolation and the associated reorganization of the general atmospheric circulation were responsible for an unstable hydroclimate in the mid-Holocene in the CMZ.
My findings shed light on the sequence of changes during mean-state changes of the monsoonal system once an insolation-driven threshold has been passed, and show that small changes in solar insolation can be associated with major environmental changes and large fluctuations in moisture source, a scenario that may be relevant with respect to future changes in the ISM system.
A feasible approach to constructing multilayer films of sulfonated polyanilines – PMSA1 and PABMSA1 – containing different ratios of aniline, 2-methoxyaniline-5-sulfonic acid (MAS) and 3-aminobenzoic acid (AB), with the entrapped redox enzyme pyrroloquinoline quinone-dependent glucose dehydrogenase (PQQ-GDH), on Au and ITO electrode surfaces is described. The formation of the layers has been followed and confirmed by electrochemical impedance spectroscopy (EIS), which demonstrates that the multilayer assembly can be achieved in a progressive and uniform manner. The gold and ITO electrodes subsequently modified with PMSA1:PQQ-GDH and PABMSA1 films were studied by cyclic voltammetry (CV) and UV-Vis spectroscopy, which show a significant direct bioelectrocatalytic response to the oxidation of the substrate glucose without any additional mediator. This response correlates linearly with the number of deposited layers. Furthermore, the constructed polymer/enzyme multilayer system exhibits rather good long-term stability, since the catalytic current response remains above 60% of its initial value even after two weeks of storage. This verifies that the productive interaction of the enzyme embedded in the film of substituted polyaniline can be used as a basis for the construction of bioelectronic units, which are useful as indicators for glucose-liberating processes and allow both optical and electrochemical transduction.
This essay approaches T. S. Eliot’s Four Quartets (1935–1942) from the perspectives of Eve Kosofsky Sedgwick’s critical practice of reparative reading and of Paul Ricoeur’s poststructuralist hermeneutics. It demonstrates that Sedgwick’s and Ricoeur’s approaches can be productively combined to investigate hermeneutic processes in which the textual energy of a dissemination of meaning is redirected by a reparative or integrative impulse. In Four Quartets, this impetus induces the creation of semantic innovation through a violation of semantic pertinence, that is, through novel, tensional and provisional connections between formerly separate textual elements and semantic units.
Extreme weather events are likely to occur more often under climate change, and the resulting effects on ecosystems could lead to a further acceleration of climate change. However, not all extreme weather events lead to an extreme ecosystem response. Here, we focus on hazardous ecosystem behaviour and identify the coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Following risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the SPEI to describe the climate condition. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. In northern Germany, on the contrary, the site is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is found in the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortage. These first model-based applications indicate the conceptual advantages of the proposed method, which focuses on identifying the critical weather conditions under which hazardous ecosystem behaviour is observed in the analysed data set. Applying the method to empirical time series and to future climate scenarios would be important next steps to test the approach.
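The vulnerability measure described above can be read as a difference of conditional expectations of the climate index: E[SPEI | no hazard] − E[SPEI | hazard]. A minimal sketch with hypothetical SPEI values and hazard flags (negative 12-month net biome productivity), not data from the study:

```python
import numpy as np

def drought_vulnerability(spei, hazard):
    """Difference between the expected SPEI under non-hazardous and
    under hazardous ecosystem behaviour; a positive value means that
    hazards coincide with drier-than-usual conditions (vulnerability
    to drought)."""
    spei = np.asarray(spei, dtype=float)
    hazard = np.asarray(hazard, dtype=bool)
    return spei[~hazard].mean() - spei[hazard].mean()

# hypothetical monthly SPEI and hazard flags (negative 12-month NBP)
spei = [0.5, -1.2, 0.3, -0.9, 0.8, -1.5]
hazard = [False, True, False, True, False, True]
print(round(drought_vulnerability(spei, hazard), 2))  # → 1.73
```

For the Spanish site above, the analogous difference is 0.4; at the northern German site it is negative (the hazard case is wetter), indicating no vulnerability to drought.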
ANG-2 for quantitative Na+ determination in living cells by time-resolved fluorescence microscopy
(2014)
Sodium ions (Na+) play an important role in a plethora of cellular processes, which are complex and partly still unexplored. For the investigation of these processes and the quantification of intracellular Na+ concentrations ([Na+]i), two-photon coupled fluorescence lifetime imaging microscopy (2P-FLIM) was performed in the salivary glands of the cockroach Periplaneta americana. For this, the novel Na+-sensitive fluorescent dye Asante NaTRIUM Green-2 (ANG-2) was evaluated, both in vitro and in situ. In this context, absorption coefficients, fluorescence quantum yields and 2P action cross-sections were determined for the first time. ANG-2 was 2P-excitable over a broad spectral range and displayed fluorescence in the visible spectral range. Although the fluorescence decay behaviour of ANG-2 was triexponential in vitro, its analysis indicates a Na+-sensitivity appropriate for recordings in living cells. The Na+-sensitivity was reduced in situ, but the biexponential fluorescence decay behaviour could be successfully analysed in terms of quantitative [Na+]i recordings. Thus, physiological 2P-FLIM measurements revealed a dopamine-induced [Na+]i rise in cockroach salivary gland cells, which was dependent on Na+-K+-2Cl− cotransporter (NKCC) activity. It was concluded that ANG-2 is a promising new sodium indicator applicable to diverse biological systems.
The quantitative description of the state of stress in the Earth’s crust and of its spatio-temporal changes is of great importance for scientific questions as well as for applied geotechnical issues. Human activities in the underground (boreholes, tunnels, caverns, reservoir management, etc.) have a large impact on the stress state. It is important to assess whether these activities may lead to (unpredictable) hazards, such as induced seismicity. Equally important is an understanding of the in situ stress state in the Earth’s crust, as it allows safe well paths to be determined already during well planning. The same applies to the optimal configuration of injection and production wells where stimulation is necessary to create artificial fluid pathways.
The cumulative dissertation presented here consists of four separate manuscripts, which are already published, submitted, or will be submitted for peer review shortly. The main focus is on the investigation of the possible use of geothermal energy in the province of Alberta (Canada). A 3-D geomechanical–numerical model was designed to quantify the contemporary 3-D stress tensor in the upper crust. For the calibration of the regional model, 321 stress orientation data and 2714 stress magnitude data were collected; the size and diversity of this database are unique. A calibration scheme was developed in which the model is calibrated against the in situ stress data stepwise for each data type and gradually optimized using statistical test methods. The optimum displacement on the model boundaries can be determined by bivariate linear regression based on only three model runs with varying deformation ratios. The best-fit model predicts most of the in situ stress data quite well and can thus provide the full stress tensor along any chosen virtual well path. This can be used to optimize the orientation of horizontal wells, which, e.g., can be used for reservoir stimulation. The model confirms regional deviations from the average stress orientation trend, such as in the regions of the Peace River Arch and the Bow Island Arch.
In the course of the data compilation for the Alberta stress model, the Canadian database of the World Stress Map (WSM) was expanded by 514 new data records. This update of the Canadian stress map after ~20 years, with a specific focus on Alberta, shows that the maximum horizontal stress (SHmax) is oriented southwest–northeast over large areas of North America. The SHmax orientation in Alberta is very homogeneous, with an average of about 47°. In order to calculate the average SHmax orientation on a regular grid and to estimate the wavelength of the stress orientation, an existing algorithm was improved and applied to the Canadian data. The newly introduced quasi interquartile range on the circle (QIROC) improves the variance estimation of periodic data, as it is less susceptible to outliers.
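Averaging SHmax orientations on a grid requires circular statistics for axial data (orientations repeat every 180°): the standard approach is to double the angles, average the resulting unit vectors, and halve the mean angle. The QIROC spread estimator introduced in the thesis builds on such periodic statistics; the sketch below shows only the basic axial mean, with hypothetical orientation values:

```python
import math

def mean_shmax(orientations_deg):
    """Mean of axial data with period 180 deg (e.g. SHmax azimuths):
    double the angles, sum the unit vectors, halve the resultant angle."""
    s = sum(math.sin(math.radians(2 * a)) for a in orientations_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in orientations_deg)
    return (0.5 * math.degrees(math.atan2(s, c))) % 180

# hypothetical SHmax azimuths near the Alberta average of ~47 deg
print(round(mean_shmax([40, 45, 50, 55]), 1))  # → 47.5
```

Doubling the angles is what makes azimuths such as 179° and 1°, which describe nearly the same stress axis, average to ~0° rather than 90°.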
Another geomechanical–numerical model was built to estimate the 3D stress tensor in the target area ”Nördlich Lägern” in Northern Switzerland. This location, with the Opalinus Clay as host rock, is a potential repository site for high-level radioactive waste. The modelling investigates the sensitivity of the stress tensor to tectonic shortening, topography, faults and variable rock properties within the Mesozoic sedimentary stack, with regard to the stability required of a suitable radioactive waste disposal site. The majority of the tectonic stresses caused by the far-field shortening from the south are accommodated by the competent rock units in the footwall and hanging wall of the argillaceous target horizon, the Upper Malm and Upper Muschelkalk. Thus, the differential stress within the host rock remains relatively low. East–west striking faults release stresses driven by tectonic shortening. The purely gravitational influence of the topography is low; higher SHmax magnitudes below topographic depressions and lower values below hills are mainly observed near the surface. A complete calibration of the model is not yet possible, as no stress magnitude data are available for calibration. The collection of these data will begin in 2015; subsequently they will be used to readjust the geomechanical–numerical model.
The third geomechanical–numerical model investigates stress variations in an ultra-deep gold mine in South Africa. This reservoir model is spatially one order of magnitude smaller than the previous local model for Northern Switzerland. Here, the primary focus is on testing the hypothesis that the Mw 1.9 earthquake of 27 December 2007 was induced by stress changes due to the mining process. The Coulomb failure stress change (ΔCFS) was used to analyse the stress changes and confirmed that the seismic event was induced by static stress transfer due to the mining progress. Depending on the type of ΔCFS analysis, stress changes of up to 1.5–15 MPa brought the rock closer to failure on the derived rupture plane. Forward modelling of a generic excavation scheme reveals that the ΔCFS values increase significantly with decreasing distance to the dyke. Hence, even small changes in the mining progress can have a significant impact on the seismic hazard, i.e. on the probability of inducing a seismic event of economic concern.
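The ΔCFS criterion combines the change in shear stress on a receiver fault with the change in effective normal stress: ΔCFS = Δτ + μ′Δσn, where positive values move the fault closer to failure. A minimal sketch (the stress values and the friction coefficient below are hypothetical placeholders, not results from the model):

```python
def delta_cfs(d_shear, d_normal, mu_eff=0.6):
    """Coulomb failure stress change in MPa: change in shear stress on
    the receiver plane plus effective friction coefficient times the
    change in normal stress (positive = unclamping). Positive values
    bring the fault closer to failure."""
    return d_shear + mu_eff * d_normal

# hypothetical stress changes resolved onto a rupture plane (MPa)
print(round(delta_cfs(1.2, 0.5), 2))  # → 1.5
```

Tracked over a sequence of excavation steps, such values flag mining geometries that load nearby faults, as done for the dyke in the study.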
An important contribution of the geosciences to the renewable energy production portfolio is the exploration and utilization of geothermal resources. The development of a geothermal project at great depths requires a detailed geological and geophysical exploration program in the first phase. Active seismic methods can deliver high-resolution images of the geothermal reservoir. This allows potential transport routes for fluids to be identified and regions with a high potential for heat extraction to be mapped, which indicates favorable conditions for geothermal exploitation. The presented work investigates the extent to which an improved characterization of geothermal reservoirs can be achieved with new methods of seismic data processing. The summation of traces (stacking) is a crucial step in the processing of seismic reflection data. The common-reflection-surface (CRS) stacking method can be applied as an alternative to the conventional normal moveout (NMO) or dip moveout (DMO) stack. The advantages of the CRS stack, besides an automatic determination of the stacking operator parameters, include an adequate imaging of arbitrarily curved geological boundaries and a significant increase in the signal-to-noise (S/N) ratio, since far more traces are stacked than in a conventional stack. A major result of this work is that this modified type of stacking in particular can significantly improve the quality of the signal attributes that characterize the seismic images. Improved attribute analysis facilitates the interpretation of seismic images and plays a significant role in the characterization of reservoirs. Variations of lithological and petrophysical properties are reflected in fluctuations of specific signal attributes (e.g. frequency or amplitude characteristics).
Their further interpretation can provide a quality assessment of the geothermal reservoir with respect to the capacity of fluids within a hydrological system that can be extracted and utilized. The proposed methodological approach is demonstrated on the basis of two case studies. In the first example, I analyzed a series of 2D seismic profile sections through the Alberta sedimentary basin on the eastern edge of the Canadian Rocky Mountains. In the second application, a 3D seismic volume is characterized in the surroundings of a geothermal borehole located in the central part of the Polish basin. Both sites were investigated with the modified stacking and the improved attribute analyses. The results provide recommendations for the planning of future geothermal plants in both study areas.
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rates for the proposed method can be obtained under a Hölder-type source condition, provided the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
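For orientation, a Hölder-type source condition for a nonlinear operator equation F(x) = y with exact solution x† is commonly written as follows (a typical form from regularization theory, stated here for illustration; the thesis' exact assumptions may differ):

```latex
x^{\dagger} - x_{0} = \left( F'(x^{\dagger})^{*}\, F'(x^{\dagger}) \right)^{p} v,
\qquad 0 < p \le \tfrac{1}{2},
```

where x0 is the initial guess and v a sufficiently small source element. Under such conditions, iterative regularization methods typically achieve convergence rates of order O(δ^(2p/(2p+1))) in the noise level δ.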
Photoinduced excitation energy transfer and the accompanying charge separation are elucidated for a supramolecular system of a single fullerene covalently linked to six pyropheophorbide-a dye molecules. Molecular dynamics simulations are performed to gain an atomistic picture of the architecture and the surrounding solvent. Excitation energy transfer among the dye molecules and electron transfer from the excited dyes to the fullerene are described by a mixed quantum–classical version of the Förster rate and the semiclassical Marcus rate, respectively. The mean characteristic time of energy redistribution lies in the range of 10 ps, while electron transfer proceeds within 150 ps. In between, on a 20 to 50 ps time-scale, conformational changes take place in the system. This temporal hierarchy of processes guarantees efficient charge separation if the structure is exposed to a solvent. The fast energy transfer can adapt the dye excitation to the actual conformation; in this sense, the probability of achieving charge separation is large enough, since any dominance of unfavorable conformations with a large dye–fullerene distance is circumvented. The slow electron transfer, in turn, effectively averages over the different conformations. To confirm the reliability of our computations, ensemble measurements of the charge separation dynamics are simulated, and very good agreement with the experimental data is obtained.
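The semiclassical Marcus rate used for the electron transfer step has the closed form k = (2π/ħ)|V|² (4πλk_BT)^(−1/2) exp(−(ΔG + λ)²/(4λk_BT)). A minimal sketch (the coupling, driving force and reorganization energy below are hypothetical placeholders, not the values of the study):

```python
import math

HBAR = 6.582e-16   # reduced Planck constant, eV*s
KB = 8.617e-5      # Boltzmann constant, eV/K

def marcus_rate(d_g, lam, coupling, temp=300.0):
    """Semiclassical Marcus electron-transfer rate in 1/s.
    d_g: driving force (eV), lam: reorganization energy (eV),
    coupling: electronic coupling |V| (eV)."""
    kbt = KB * temp
    prefactor = (2 * math.pi / HBAR) * coupling**2
    activation = math.exp(-(d_g + lam)**2 / (4 * lam * kbt))
    return prefactor * activation / math.sqrt(4 * math.pi * lam * kbt)

# hypothetical dye-fullerene parameters
print(f"{marcus_rate(-0.3, 0.5, 0.001):.2e}")
```

The rate is maximal at ΔG = −λ (the activationless point) and decreases again for more negative ΔG, the Marcus inverted region.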
The toxicologically most relevant mercury (Hg) species for human exposure is methylmercury (MeHg). Thiomersal is a common preservative used in some vaccine formulations. The aim of this study is to gain further mechanistic insight into the still not fully understood neurotoxic modes of action of organic Hg species. The mercury species investigated include MeHgCl and thiomersal. Additionally, HgCl2 was studied, since mercuric Hg can be formed in the brain by dealkylation of the organic species. Astrocytes were used as the cellular system; in vivo, astrocytes provide the environment necessary for neuronal function. In the present study, the cytotoxic effects of the respective mercurials increased with rising alkylation level and correlated with their cellular bioavailability. Further experiments revealed that, at subcytotoxic concentrations, none of the species induced DNA strand breaks, whereas all species massively increased H2O2-induced DNA strand breaks. This co-genotoxic effect is likely due to a disturbance of the cellular DNA damage response. Thus, at nanomolar, sub-cytotoxic concentrations, all three mercury species strongly disturbed poly(ADP-ribosyl)ation, a signalling reaction induced by DNA strand breaks. Interestingly, the molecular mechanism behind this inhibition seems to differ between the species. Since chronic PARP-1 inhibition is also discussed as compromising neurogenesis and learning abilities, further experiments on neurons and in vivo studies could help to clarify whether the inhibition of poly(ADP-ribosyl)ation contributes to organic Hg induced neurotoxicity.
Probably no other field of statistical physics at the borderline of soft matter and biological physics has caused such a flurry of papers as polymer translocation since the 1994 landmark paper by Bezrukov, Vodyanoy, and Parsegian and the study of Kasianowicz in 1996. Experiments, simulations, and theoretical approaches are still contributing novel insights to date, while no universal consensus on the statistical understanding of polymer translocation has been reached. Here we collect the published results, in particular the famous-infamous debate on the scaling exponents governing the translocation process. We put these results into perspective and discuss where the field is going. In particular, we argue that the phenomenon of polymer translocation is non-universal and highly sensitive to the exact specifications of the models and experiments used in its analysis.
Background: Development of eukaryotic organisms is controlled by transcription factors that trigger specific and global changes in gene expression programs. In plants, MADS-domain transcription factors act as master regulators of developmental switches and organ specification. However, the mechanisms by which these factors dynamically regulate the expression of their target genes at different developmental stages are still poorly understood.
Results: We characterized the relationship of chromatin accessibility, gene expression, and DNA binding of two MADS-domain proteins at different stages of Arabidopsis flower development. Dynamic changes in APETALA1 and SEPALLATA3 DNA binding correlated with changes in gene expression, and many of the target genes could be associated with the developmental stage in which they are transcriptionally controlled. We also observe dynamic changes in chromatin accessibility during flower development. Remarkably, DNA binding of APETALA1 and SEPALLATA3 is largely independent of the accessibility status of their binding regions and it can precede increases in DNA accessibility. These results suggest that APETALA1 and SEPALLATA3 may modulate chromatin accessibility, thereby facilitating access of other transcriptional regulators to their target genes.
Conclusions: Our findings indicate that different homeotic factors regulate partly overlapping, yet also distinctive sets of target genes in a partly stage-specific fashion. By combining the information from DNA-binding and gene expression data, we are able to propose models of stage-specific regulatory interactions, thereby addressing dynamics of regulatory networks throughout flower development. Furthermore, MADS-domain TFs may regulate gene expression by alternative strategies, one of which is modulation of chromatin accessibility.
The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1–F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded from seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/, and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals of tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest that the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.
Ultraschall Berlin
(2014)
Large-scale floodplain sediment dynamics in the Mekong Delta: present state and future prospects
(2014)
The Mekong Delta (MD) sustains the livelihood and food security of millions of people in Vietnam and Cambodia. It is known as the “rice bowl” of South East Asia and has one of the world’s most productive fisheries. Sediment dynamics play a major role in the high productivity of agriculture and fishery in the delta. However, the MD is threatened by climate change, sea level rise and unsustainable development activities in the Mekong Basin. Despite its importance and the expected threats, the understanding of the present and future sediment dynamics in the MD is very limited. This is a consequence of its large extent, the intricate system of rivers, channels and floodplains and the scarcity of observations. This thesis therefore aimed at (1) the quantification of suspended sediment dynamics and the associated sediment-nutrient deposition in floodplains of the MD, and (2) the assessment of the impacts of likely future boundary changes on the sediment dynamics in the MD. The applied methodology combines field experiments and numerical simulation to quantify and predict the sediment dynamics in the entire delta in a spatially explicit manner. The experimental part consists of a comprehensive procedure to monitor the quantity and spatial variability of sediment and associated nutrient deposition for large and complex river floodplains, including an uncertainty analysis. The measurement campaign deployed 450 sediment mat traps in 19 floodplains over the MD for a complete flood season. The data also support the quantification of nutrient deposition in floodplains, based on laboratory analysis of the nutrient fractions of the trapped sediment. The main findings are that the distributions of grain size and nutrient fractions of suspended sediment are homogeneous over the Vietnamese floodplains, but that sediment deposition within and between ring-dike floodplains shows very high spatial variability due to a high level of human interference.
The experimental findings provide the essential data for the setup and calibration of a large-scale sediment transport model for the MD. For the simulation studies, a large-scale hydrodynamic model was developed in order to quantify large-scale floodplain sediment dynamics. The complex river-channel-floodplain system of the MD is described by a quasi-2D model linking a hydrodynamic and a cohesive sediment transport model. The floodplains are described as quasi-2D representations linked to rivers and channels modeled in 1D by using control structures. The model setup, based on the experimental findings, ignored erosion and re-suspension processes due to the very high degree of human interference during the flood season. A two-stage calibration with six objective functions was developed in order to calibrate both the hydrodynamic and the sediment transport module. The objective functions include hydraulic and sediment transport parameters in the main rivers, channels and floodplains. The model results show, for the first time, the spatio-temporal distribution of sediment and associated nutrient deposition rates in the whole MD. The patterns of sediment transport and deposition are quantified for the different sub-systems. The main factors influencing the spatial sediment dynamics are the network of rivers, channels and dike rings, sluice gate operations, the magnitude of the floods and tidal influences. The superposition of these factors leads to high spatial variability of sediment transport and deposition, in particular in the Vietnamese floodplains. Depending on the flood magnitude, the annual sediment load reaching the coast varies from 48% to 60% of the sediment load at Kratie, the upper boundary of the MD. Deposited sediment varies from 19% to 23% of the annual load at Kratie in the Cambodian floodplains, and from 1% to 6% in the compartmented and diked floodplains in Vietnam.
The annually deposited nutrients (N, P, K) associated with the sediment deposition provide, on average, more than 50% of the mineral fertilizers typically applied for rice crops in non-flooded ring-dike compartments in Vietnam. This large-scale quantification provides a basis for estimating the benefits of the annual Mekong floods for agriculture and fishery, for assessing the impacts of future changes on the delta system, and for further studies on coastal deposition and erosion. For the estimation of future prospects, a sensitivity-based approach is applied to assess the response of floodplain hydraulics and sediment dynamics to changes in the delta boundary conditions, including hydropower development, climate change in the Mekong River Basin and effective sea level rise. The developed sediment model is used to simulate the mean sediment transport and sediment deposition in the whole delta system for the baseline (2000-2010) and future (2050-2060) periods. For each driver we derive a plausible range of future changes and discretize it into five levels, resulting in altogether 216 possible factor combinations. Our results thus cover all plausible future pathways of sediment dynamics in the delta based on current knowledge. The uncertainty in the range of the resulting impacts can be decreased once more information on these drivers becomes available. Our results indicate that hydropower development dominates the changes in the sediment dynamics of the Mekong Delta, while sea level rise has the smallest effect. The floodplains of the Vietnamese Mekong Delta are much more sensitive to the changes than the other subsystems of the delta. In terms of the median changes of the three combined drivers, the inundation extent is predicted to increase slightly, but the overall floodplain sedimentation would be reduced by approximately 40%, while the sediment load reaching the sea would diminish to half of the current rates.
These findings provide new and valuable information on the possible impacts of future development on the delta, and indicate the most vulnerable areas. Thus, the presented results are a significant contribution to the ongoing international discussion on the hydropower development in the Mekong basin and its impact on the Mekong delta.
The work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. Modeled from a principal-agent perspective, the managing referee boards can be seen as the principal: they aim at facilitating a fair competition in accordance with the existing rules and regulations, and to this end they assign the referees as impartial agents on the pitch. The coaches take on a non-legitimate, principal-like role, trying to influence the referees even though they do not have the formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and forms of coaches' influencing attempts. The referee questionnaire examined whether referees notice such influencing attempts and how they react to them.
The results were related to official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and on the match result.
A slight effect on referees' decisions is found; however, this effect tends to be disadvantageous for the influencing coach, and there is no evidence of an impact on the match result itself.
Hemocompatible materials are needed for internal and extracorporeal biomedical applications; hemocompatibility should be achievable by reducing protein and thrombocyte adhesion to such materials. Polyethers have been demonstrated to be highly efficient in this respect on smooth surfaces. Here, we investigate the grafting of oligo- and polyglycerols to rough poly(ether imide) membranes, a polymer relevant to biomedical applications, and show the reduction of protein and thrombocyte adhesion as well as of thrombocyte activation. We demonstrate that surface grafting with oligo- and polyglycerols of relatively high polydispersity (>1.5) carrying several reactive groups for surface anchoring achieves full surface shielding, which leads to reduced adsorption of the proteins albumin and fibrinogen. In addition, adherent thrombocytes were not activated. This was clearly shown by immunostaining adherent proteins and analyzing the thrombocyte-covered area. The presented work provides an important strategy for the development of application-relevant hemocompatible 3D-structured materials.
A vector error correction model for the relationship between public debt and inflation in Germany
(2014)
The paper analyses the interaction between public debt and inflation, including mutual impulse responses. The European sovereign debt crisis has once again focused attention on the consequences of public debt, combined with an expansive monetary policy, for the development of consumer prices. Public deficits can lead to inflation if the money supply expands. The high level of national debt, not only in the euro-crisis countries, and the strong increase in the total assets of the European Central Bank as a result of unconventional monetary policy have raised fears of national debt being inflated away. The paper traces the transmission from public debt to inflation through the money supply and the long-term interest rate. Based on these theoretical considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rate are analysed within a vector error correction model estimated by the Johansen approach. In the empirical part of the article, quarterly data for Germany from 1991 to 2010 are examined.
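The error-correction logic behind such a model can be illustrated with a minimal numpy sketch. This is an Engle-Granger-style two-step estimation on synthetic data, a deliberate simplification of the Johansen procedure used in the paper; all series and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two cointegrated series, e.g. public debt
# and the consumer price index sharing one stochastic trend.
n = 500
trend = np.cumsum(rng.normal(size=n))          # common random-walk trend
debt = trend + rng.normal(scale=0.3, size=n)
cpi = 0.5 * trend + rng.normal(scale=0.3, size=n)

# Step 1: estimate the long-run relation cpi_t = b * debt_t + u_t by OLS.
b = np.polyfit(debt, cpi, 1)[0]
ect = cpi - b * debt                            # error-correction term

# Step 2: regress the first difference of cpi on the lagged ECT.
dcpi = np.diff(cpi)
X = np.column_stack([np.ones(n - 1), ect[:-1]])
coef, *_ = np.linalg.lstsq(X, dcpi, rcond=None)
alpha = coef[1]                                 # adjustment speed

print(f"long-run slope b = {b:.2f}, adjustment coefficient alpha = {alpha:.2f}")
```

A negative adjustment coefficient indicates that deviations from the long-run equilibrium are corrected over time, which is the defining feature of an error correction model.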
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth's surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space—from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both a boon and a bane of geomorphic process rates. Processes of landscape downwearing can indeed be traced far back in time, helping us to understand the Earth's evolution. Yet this benefit may turn into a drawback due to scaling issues if rates are compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting off at the local scale of individual hillslopes where considerable amounts of debris are generated from rock instantaneously: Rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deductions of paleoclimatic or tectonic implications. -> Grain-size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy-derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, in order to answer the question whether incision into the western Tibetan Plateau margin is currently active. -> We find a roughly one-order-of-magnitude upstream decay—from 110 to 10 mm kyr^-1—of cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy-mineral analysis of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150–200 km downstream of the topographic plateau margin we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay of rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation of the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors for Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation-rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of these: aridity index, topographic steepness index, and precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our consequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient—being representative on the timescales that our cosmogenic Be-10-derived denudation rates integrate over—generally appear better suited as predictors than climatic and tectonic metrics based on decadal records.
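Quantile regression rests on the pinball (quantile) loss, which is minimized by the corresponding empirical quantile rather than the mean. A small sketch with invented, right-skewed "denudation rate" values (not the study's basin inventory) makes this concrete for the constant-predictor case:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Mean pinball (quantile) loss of predicting the constant q
    for observations y at quantile level tau."""
    d = y - q
    return np.mean(np.where(d >= 0, tau * d, (tau - 1) * d))

rng = np.random.default_rng(1)
# Hypothetical stand-in for basin-wide denudation rates (mm/kyr),
# right-skewed as such data often are.
rates = rng.lognormal(mean=4.0, sigma=0.8, size=2000)

tau = 0.9
candidates = np.linspace(rates.min(), rates.max(), 2001)
losses = [pinball_loss(rates, q, tau) for q in candidates]
best = candidates[int(np.argmin(losses))]

print(best, np.quantile(rates, tau))   # the two values nearly coincide
```

Replacing the constant q with a function of covariates (e.g. the topographic steepness index) and minimizing the same loss yields quantile regression proper.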
Hemolysis, the rupturing of red blood cells, can result from numerous medical conditions (in vivo) or occur after collecting blood specimens or extracting plasma and serum from whole blood (in vitro). In clinical laboratory practice, hemolysis can be a serious problem due to its potential to bias the detection of various analytes or biomarkers. Here we present the first "mix-and-measure" method to assess the degree of hemolysis in biosamples using luminescence spectroscopy. Luminescent terbium complexes (LTC) were studied in the presence of free hemoglobin (Hb) as indicators for hemolysis in TRIS buffer and in fresh human plasma, using absorption, excitation and emission measurements. Our findings indicate dynamic quenching as well as Förster resonance energy transfer (FRET) between the LTC and the porphyrin ligand of hemoglobin. This transfer leads to a decrease in luminescence intensity and decay time even at nanomolar hemoglobin concentrations, in buffer as well as in plasma. Luminescent terbium complexes are thus very sensitive to free hemoglobin in buffer and blood plasma. Due to the instant change in the luminescence properties of the LTC in the presence of Hb, it is possible to assess the concentration of hemoglobin via spectroscopic methods without incubation time or further treatment of the sample, thus enabling rapid and sensitive detection of hemolysis in clinical diagnostics.
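Reading a quencher concentration off an intensity change can be sketched with the classical Stern-Volmer relation for dynamic quenching. The constant below is an assumed illustrative value, not a measured parameter of the LTC/hemoglobin system:

```python
# Stern-Volmer relation for dynamic quenching: I0/I = 1 + Ksv * [Q].
# Illustrative sketch only; ksv is an assumed value, not a measured
# constant for the LTC/hemoglobin system described in the abstract.
ksv = 2.0e7            # assumed Stern-Volmer constant, 1/M
i0 = 1.0               # unquenched luminescence intensity (arb. units)

def hb_concentration(i):
    """Infer quencher (Hb) concentration from the quenched intensity i."""
    return (i0 / i - 1.0) / ksv

# An intensity drop to 80% of I0 then corresponds to:
c = hb_concentration(0.8)
print(f"[Hb] = {c*1e9:.1f} nM")
```

With a sufficiently large quenching constant, even nanomolar hemoglobin levels produce a measurable intensity change, which is the qualitative point of the method.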
The B fields in OB stars (BOB) survey is an ESO large programme collecting spectropolarimetric observations for a large number of early-type stars in order to study the occurrence rate, properties, and ultimately the origin of magnetic fields in massive stars. As of July 2014, a total of 98 objects were observed over 20 nights with FORS2 and HARPSpol. Our preliminary results indicate that the fraction of magnetic OB stars with an organised, detectable field is low. This conclusion, now independently reached by two different surveys, has profound implications for any theoretical model attempting to explain the field formation in these objects. We discuss in this contribution some important issues addressed by our observations (e.g., the lower bound of the field strength) and the discovery of some remarkable objects.
Co-doping of the MOF 3∞[Zn(2-methylimidazolate-4-amide-5-imidate)] (IFP-1 = Imidazolate Framework Potsdam-1) with luminescent Eu3+ and Tb3+ ions presents an approach to utilize the porosity of the MOF for the intercalation of luminescence centers and for tuning the chromaticity to the emission of white light of the quality of a three-color emitter. Organic-based fluorescence processes of the MOF backbone as well as metal-based luminescence of the dopants are combined into one homogeneous single-source emitter while retaining the MOF's porosity. The lanthanide ions Eu3+ and Tb3+ were doped in situ into IFP-1 upon formation of the MOF by intercalation into the micropores of the growing framework, without a structure-directing effect. Furthermore, the color point is temperature sensitive, so that a cold white light with a higher blue content is observed at 77 K and a warmer white light at room temperature (RT), due to the reduction of the organic emission at higher temperatures. The study further illustrates the influence of the amount of luminescent ions on the porosity and sorption properties of the MOF and proves the intercalation of luminescence centers into the pore system by low-temperature site-selective photoluminescence spectroscopy, SEM and EDX. It also covers an investigation of the limit of homogeneous uptake within the MOF pores and the formation of secondary phases of lanthanide formates on the surface of the MOF. Crossing the border from homogeneous co-doping to a two-phase composite system can be used beneficially to adjust the character and warmth of the white light. This study also describes two-color emitters of the formula Ln@IFP-1a–d (Ln: Eu, Tb) obtained by doping with just one lanthanide, Eu3+ or Tb3+.
Arsenic-containing hydrocarbons (AsHC) constitute one group of arsenolipids that have been identified in seafood. In this first in vivo toxicity study for AsHCs, we show that AsHCs exert toxic effects in Drosophila melanogaster in a concentration range similar to that of arsenite. In contrast to arsenite, however, AsHCs cause developmental toxicity in the late developmental stages of Drosophila melanogaster. This work illustrates the need for a full characterisation of the toxicity of AsHCs in experimental animals to finally assess the risk to human health related to the presence of arsenolipids in seafood.
Arsenic-containing hydrocarbons are one group of fat-soluble organic arsenic compounds (arsenolipids) found in marine fish and other seafood. A risk assessment of arsenolipids is urgently needed, but has not been possible because of the total lack of toxicological data. In this study the cellular toxicity of three arsenic-containing hydrocarbons was investigated in cultured human bladder (UROtsa) and liver (HepG2) cells. Cytotoxicity of the arsenic-containing hydrocarbons was comparable to that of arsenite, which was applied as the toxic reference arsenical. A large cellular accumulation of arsenic, as measured by ICP-MS/MS, was observed after incubation of both cell lines with the arsenolipids. Moreover, the toxic mode of action shown by the three arsenic-containing hydrocarbons seemed to differ from that observed for arsenite. Evidence suggests that the high cytotoxic potential of the lipophilic arsenicals results from a decrease in the cellular energy level. This first in vitro based risk assessment cannot exclude a risk to human health related to the presence of arsenolipids in seafood, and indicates the urgent need for further toxicity studies in experimental animals to fully assess this possible risk.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions represented as object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking which currently only considers control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models although, in real-world process repositories, often underspecified models are found. Coping with these issues, we introduce the concept of synchronized object life cycles and we define a mapping of data constraints of a process model to Petri nets extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in or can reach the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
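The intuition behind weak conformance—an activity may access a data object in a particular state only if the object is in, or can still reach, that state—can be sketched as a reachability check over an object life cycle. The toy state graph below is hypothetical and glosses over the paper's Petri-net mapping and the synchronization of multiple life cycles:

```python
from collections import deque

def can_reach(transitions, start, target):
    """BFS over an object life cycle: can state `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical life cycle of an "order" data object.
order_lc = {
    "created":   ["confirmed", "cancelled"],
    "confirmed": ["shipped"],
    "shipped":   ["delivered"],
}

# Simplified weak-conformance check: an activity expecting the object in
# state `target` conforms if `target` is reachable from the arrival state.
print(can_reach(order_lc, "created", "delivered"))   # True
print(can_reach(order_lc, "cancelled", "shipped"))   # False
```

The full verification described in the abstract additionally interleaves this data perspective with control-flow soundness checking, which a plain reachability test does not capture.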
Modern microscopic techniques following the stochastic motion of labelled tracer particles have uncovered significant deviations from the laws of Brownian motion in a variety of animate and inanimate systems. Such anomalous diffusion can have different physical origins, which can be identified from careful data analysis. In particular, single particle tracking provides the entire trajectory of the traced particle, which allows one to evaluate different observables to quantify the dynamics of the system under observation. We here provide an extensive overview of different popular anomalous diffusion models and their properties. We pay special attention to their ergodic properties, highlighting the fact that in several of these models the long-time-averaged mean squared displacement shows a distinct disparity to the regular, ensemble-averaged mean squared displacement. In these cases, data obtained from time averages cannot be interpreted by the standard theoretical results for the ensemble averages. We therefore provide a comparison of the main properties of the time-averaged mean squared displacement and its statistical behaviour in terms of the scatter of the amplitudes between the time averages obtained from different trajectories. We especially demonstrate how anomalous dynamics may be identified for systems which, at first sight, appear to be Brownian. Moreover, we discuss the ergodicity breaking parameters for the different anomalous stochastic processes and showcase the physical origins of the various behaviours. This Perspective is intended as a guidebook for both experimentalists and theorists working on systems that exhibit anomalous diffusion.
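The time-averaged mean squared displacement and the scatter of its amplitudes can be computed directly from trajectories. A minimal numpy sketch for ordinary Brownian motion, where the ergodicity breaking parameter should be close to zero:

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean squared displacement of one trajectory at a lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(2)
T, n_traj = 10_000, 50
# Brownian trajectories: cumulative sums of Gaussian steps.
trajs = np.cumsum(rng.normal(size=(n_traj, T)), axis=1)

lag = 10
xi = np.array([tamsd(x, lag) for x in trajs])
xi /= xi.mean()                       # dimensionless relative amplitude
eb = np.mean(xi**2) - 1               # ergodicity breaking parameter

print(f"EB = {eb:.4f}")               # close to 0 for Brownian motion
```

For ergodic Brownian motion the amplitudes of individual time averages barely scatter, so EB is small; for weakly non-ergodic processes such as continuous-time random walks with diverging waiting times, the same estimator yields a finite EB even for long trajectories.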
Cyanobacteria produce about 40 percent of the world's primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in one of the most common bloom-forming cyanobacteria, Microcystis aeruginosa.
In a first step, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, was a major microcystin interaction partner. High-light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. The underestimation of total microcystin content when neglecting the protein fraction was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating enzyme activities. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of all 501 detected non-redundant metabolites, 85 (17 percent) accumulated to significantly different levels in the two genotypes upon high-light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between species of cyanobacteria. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high-light-treated cells and by the quantification of storage compounds. While Microcystis accumulated mainly glycogen, to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results show that the characterization of species-specific metabolic features deserves more attention with regard to the biotechnological use of cyanobacteria.
The design and implementation of service-oriented architectures impose a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as the collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides the integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, as well as Service-Oriented Geoinformatics.
Pulsar wind nebulae (PWNe) are the most abundant TeV gamma-ray emitters in the Milky Way. The radiative emission of these objects is powered by fast-rotating pulsars, which feed part of their rotational energy into winds of relativistic particles. This thesis presents an in-depth study of the detected population of PWNe at high energies. To outline general trends regarding their evolutionary behaviour, a time-dependent model is introduced and compared to the available data. In particular, this work presents two exceptional PWNe which protrude from the rest of the population, namely the Crab Nebula and N 157B. Both objects are driven by pulsars with extremely high rotational energy loss rates and are accordingly often referred to as energetic twins. Modelling the non-thermal multi-wavelength emission of N 157B gives access to specific properties of this object, like the magnetic field inside the nebula. Comparing the derived parameters to those of the Crab Nebula reveals large intrinsic differences between the two PWNe. Possible origins of these differences are discussed in the context of the resembling pulsars.
Compared to the TeV gamma-ray regime, the number of detected PWNe is much smaller in the MeV-GeV gamma-ray range. In the latter range, the Crab Nebula stands out through the recent detection of gamma-ray flares. In general, the measured flux enhancements on short time scales of days to weeks were not expected in the theoretical understanding of PWNe. In this thesis, the variability of the Crab Nebula is analysed using data from the Fermi Large Area Telescope (Fermi-LAT). For the presented analysis, a new gamma-ray reconstruction method is used, providing a higher sensitivity and a lower energy threshold compared to previous analyses. The derived gamma-ray light curve of the Crab Nebula is investigated for flares and periodicity. The detected flares are analysed regarding their energy spectra, and their variety and commonalities are discussed. In addition, a dedicated analysis of the flare which occurred in March 2013 is performed. The derived short-term variability time scale is roughly 6 h, implying that a small region inside the Crab Nebula is responsible for the enigmatic flares. The most promising theories explaining the origin of the flux eruptions and gamma-ray variability are discussed in detail.
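The link between a ~6 h variability time scale and a small emitting region is the standard light-crossing argument: a source cannot vary coherently faster than light can traverse it. A back-of-the-envelope sketch, ignoring relativistic beaming:

```python
# Light-crossing argument: a flux variation on time scale dt must come
# from a region no larger than c * dt (neglecting Doppler boosting).
c = 2.998e8            # speed of light, m/s
dt = 6 * 3600          # ~6 h variability time scale, s

size_limit = c * dt    # metres
au = 1.496e11          # astronomical unit, m
print(f"emitting region <= {size_limit:.2e} m = {size_limit/au:.0f} au")
```

The resulting scale of a few tens of astronomical units is tiny compared to the parsec-scale nebula, which is why the flares point to a compact acceleration region.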
In the technical part of this work, a new analysis framework is presented. The introduced software, called gammalib/ctools, is currently being developed for the future CTA observatory. The analysis framework is extensively tested using data from the H.E.S.S. experiment. To conduct proper data analysis in the likelihood framework of gammalib/ctools, a model describing the distribution of background events in H.E.S.S. data is presented. The software provides the infrastructure to combine data from several instruments in one analysis. To study the gamma-ray emitting PWN population, data from Fermi-LAT and H.E.S.S. are combined in the likelihood framework of gammalib/ctools. In particular, the spectral peak, which usually lies in the overlap energy regime between these two instruments, is determined with the presented analysis framework. The derived measurements are compared to the predictions from the time-dependent model. The combined analysis supports the conclusion of a diverse population of gamma-ray emitting PWNe.
Zinc deficiency has a fundamental influence on the immune defense, with multiple effects on different immune cells, resulting in a major impairment of human health. Monocytes and macrophages are among the immune cells that are most fundamentally affected by zinc, but the impact of zinc on these cells is still far from being completely understood. Therefore, this study investigates the influence of zinc deficiency on monocytes of healthy human donors. Peripheral blood mononuclear cells, which include monocytes, were cultured under zinc deficient conditions for 3 days. This was achieved by two different methods: by application of the membrane permeable chelator N,N,N′,N′-tetrakis-(2-pyridylmethyl)ethylenediamine (TPEN) or by removal of zinc from the culture medium using a CHELEX 100 resin. Subsequently, monocyte functions were analyzed in response to Escherichia coli, Staphylococcus aureus, and Streptococcus pneumoniae. Zinc depletion had differential effects. On the one hand, elimination of bacterial pathogens by phagocytosis and oxidative burst was elevated. On the other hand, the production of the inflammatory cytokines tumor necrosis factor (TNF)-α and interleukin (IL)-6 was reduced. This suggests that monocytes shift from intercellular communication to basic innate defensive functions in response to zinc deficiency. These results were obtained regardless of the method by which zinc deficiency was achieved. However, CHELEX-treated medium strongly augmented cytokine production, independently from its capability for zinc removal. This side-effect severely limits the use of CHELEX for investigating the effects of zinc deficiency on innate immunity.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses the cell activates several stress-specific molecules such as sigma factors. They reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes binding between sigma factors and core RNAP at equilibrium, transcription, non-specific binding to DNA and the modulation of the availability of the molecular components.
Association of housekeeping sigma factor to RNAP is generally favored by its abundance and higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell modulates the transcriptional efficiency in a reversible manner through several strategies such as anti-sigma factors, 6S RNA and generally any kind of transcriptional regulators (e.g. activators or inhibitors). By shifting the outcome of sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant in vitro affinity measurements are for the situation in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible, but also displays hypersensitivity based on the sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during this global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
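The core idea of sigma factors competing for a limited pool of core RNAP can be sketched by solving the equilibrium mass-balance equations numerically. The concentrations and dissociation constants below are invented for illustration and are not the paper's fitted values:

```python
def holoenzymes(e_tot, sigmas):
    """Equilibrium partitioning of core RNAP among competing sigma factors.
    sigmas: dict name -> (total sigma concentration, dissociation constant K).
    Holoenzyme: [E.sigma_i] = E_free * S_i_tot / (K_i + E_free)."""
    def free_core_excess(e_free):
        bound = sum(e_free * s / (k + e_free) for s, k in sigmas.values())
        return e_free + bound - e_tot        # core conservation residual

    lo, hi = 0.0, e_tot
    for _ in range(100):                     # bisection on the monotonic balance
        mid = 0.5 * (lo + hi)
        if free_core_excess(mid) > 0:
            hi = mid
        else:
            lo = mid
    e_free = 0.5 * (lo + hi)
    return {name: e_free * s / (k + e_free)
            for name, (s, k) in sigmas.items()}

# Illustrative numbers (arbitrary units, not measured values): the
# housekeeping sigma is more abundant and binds core more tightly.
pools = holoenzymes(e_tot=2.0, sigmas={
    "sigma70": (1.0, 0.1),   # housekeeping: abundant, small K
    "sigma38": (0.3, 1.0),   # alternative: scarcer, weaker binder
})
print(pools)
```

Raising `e_tot` (as during stringent response, when core is freed from rRNA transcription) disproportionately increases the holoenzyme pool of the weaker-binding alternative sigma, which is the passive up-regulation effect discussed above.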
The characterization of exoplanets is a young and rapidly expanding field in astronomy.
It includes a method called transmission spectroscopy that searches for planetary spectral
fingerprints in the light received from the host star during the event of a transit. This
techniques allows for conclusions on the atmospheric composition at the terminator region,
the boundary between the day and night side of the planet. Observationally a big
challenge, first attempts in the community have been successful in the detection of several
absorption features in the optical wavelength range. These are for example a Rayleighscattering
slope and absorption by sodium and potassium. However, other objects show
a featureless spectrum indicative for a cloud or haze layer of condensates masking the
probable atmospheric layers.
In this work, we performed transmission spectroscopy by spectrophotometry of three
Hot Jupiter exoplanets. When we began the work on this thesis, optical transmission
spectra have been available for two exoplanets. Our main goal was to advance the current
sample of probed objects to learn by comparative exoplanetology whether certain
absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and
HAT-P-32b, for which the detection of atmospheric signatures is feasible with current
ground-based instrumentation. In addition, we monitored the host stars of all three objects
photometrically to correct for influences of stellar activity if necessary.
The obtained measurements of the three objects all favor featureless spectra. A variety
of atmospheric compositions can explain the lack of a wavelength dependent absorption.
But the broad trend of featureless spectra in planets of a wide range of temperatures,
found in this work and in similar studies recently published in the literature, favors an
explanation based on the presence of condensates even at very low concentrations in the
atmospheres of these close-in gas giants. This result points towards the general conclusion
that the capability of transmission spectroscopy to determine the atmospheric composition
is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HATP-
19b. Our monitoring campaigns allowed for the detection of the stellar rotation period
of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper
limits on their potential variability. The calculated upper limits of systematic effects of
starspots on the derived transmission spectra were found to be negligible for all three
targets.
Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions on how to improve the robustness of results in future work.
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
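The statistical core of the URM's prediction can be illustrated with a short simulation (not part of the thesis; the lognormal durations are an illustrative assumption): the minimum of two random structure-building times is, on average, smaller than a single such time, so two permissible structures yield faster reading times than one.

```python
import random

def build_time(rng):
    """Duration of one structure-building attempt (arbitrary units).
    The lognormal spread stands in for trial-to-trial variability."""
    return rng.lognormvariate(0.0, 0.5)

def mean_reading_time(n_structures, trials=20000, seed=1):
    """Average time until the FIRST of n parallel structures finishes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(build_time(rng) for _ in range(n_structures))
    return total / trials

unambiguous = mean_reading_time(1)  # only one structure is permissible
ambiguous = mean_reading_time(2)    # two structures race in parallel
# The race of two processes finishes sooner on average: this is the
# ambiguity advantage as predicted by the URM.
```

Any distribution of build times with non-zero variance produces this effect; the advantage grows with the variability of the individual processes.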
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser’s actions at choice-points in the sentence are presented and evaluated. First, it is argued that Swets et al.’s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. The experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with the experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Entrepreneurship is known to be a main driver of economic growth. Hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, only aim at mitigating the lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is to subsidize coaching programs for entrepreneurs. However, theoretical and empirical evidence about business coaching and programs subsidizing coaching is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany analyzing the effects of coaching programs on their participants. In the theoretical part of the dissertation, the process of a business start-up is described, and it is discussed how and at which stage of a company’s development coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described. Most programs have either no or small positive effects; however, there is little quantitative evidence in the international literature. In the empirical part of the dissertation, the effectiveness of coaching is assessed by evaluating two German coaching programs, which support entrepreneurs via publicly subsidized coaching sessions. One of the programs aims at entrepreneurs who had been employed before becoming self-employed, whereas the other program is targeted at formerly unemployed entrepreneurs. The analysis is based on the evaluation of a quantitative and a qualitative dataset. The qualitative data were gathered in intensive one-on-one interviews with coaches and entrepreneurs. These data give detailed insight into the coaching topics, duration, process and effectiveness, and into the views of coaches and entrepreneurs.
The quantitative data include information about 2,936 German-based entrepreneurs. Using propensity score matching, the success of participants of the two coaching programs is compared with that of adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects. Participants have a larger survival probability in self-employment and a larger probability of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects. Compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, a lower net income, fewer employees and lower life satisfaction. There are several reasons for these differing results of the two programs. First, formerly unemployed individuals have more basic coaching needs than formerly employed individuals. Coaches can satisfy these basic needs, whereas formerly employed individuals have more complex business problems, which cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are very successful in general. It is easier to increase the success of formerly unemployed individuals, as they start from a lower base level of success. An effect heterogeneity analysis shows that coaching effectiveness differs by region: coaching for previously unemployed entrepreneurs is especially useful in regions with poor labor market conditions. In summary, and in line with previous literature, it is found that coaching has little effect on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur and the regional labor market conditions play a crucial role in the effectiveness of coaching.
In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Therefore, governments should design and provide coaching programs only after due consideration.
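The matching step used in the evaluation above can be sketched in a few lines (hypothetical data for illustration only; the propensity scores are assumed to have been estimated beforehand, e.g. by a logistic regression that includes the personality traits):

```python
def match_nearest(treated, controls):
    """One-to-one nearest-neighbour matching with replacement: pair each
    participant with the non-participant whose propensity score is closest."""
    return [(t, min(controls, key=lambda c: abs(controls[c] - p)))
            for t, p in treated.items()]

def att(outcomes, pairs):
    """Average treatment effect on the treated: mean outcome difference
    across matched pairs."""
    return sum(outcomes[t] - outcomes[c] for t, c in pairs) / len(pairs)

# Hypothetical propensity scores and survival outcomes
# (1 = still self-employed at follow-up):
treated = {"t1": 0.80, "t2": 0.30}
controls = {"c1": 0.75, "c2": 0.35, "c3": 0.10}
outcomes = {"t1": 1, "t2": 1, "c1": 1, "c2": 0}
pairs = match_nearest(treated, controls)
effect = att(outcomes, pairs)
```

Real evaluations add caliper restrictions and balance checks on the matched sample; this sketch only conveys the core idea of comparing participants to their closest-scoring non-participants.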
Herein, we report the chain-growth tin-free room temperature polymerization method to synthesize n-type perylene diimide-dithiophene-based conjugated polymers (PPDIT2s) suitable for solar cell and transistor applications. The palladium/electron-rich tri-tert-butylphosphine catalyst is effective to enable the chain-growth polymerization of anion-radical monomer Br-TPDIT-Br/Zn to PPDIT2 with a molecular weight up to Mw ≈ 50 kg mol−1 and moderate polydispersity. This is the second example of the polymerization of unusual anion-radical aromatic complexes formed in a reaction of active Zn and electron-deficient diimide-based aryl halides. As such, the discovered polymerization method is not a specific reactivity feature of the naphthalene-diimide derivatives but is rather a general polymerization tool. This is an important finding, given the significantly higher maximum external quantum efficiency that can be reached with PDI-based copolymers (32–45%) in all-polymer solar cells compared to NDI-based materials (15–30%). Our studies revealed that PPDIT2 synthesized by the new method and the previously published polymer prepared by step-growth Stille polycondensation show similar electron mobility and all-polymer solar cell performance. At the same time, the polymerization reported herein has several technological advantages as it proceeds relatively fast at room temperature and does not involve toxic tin-based compounds. Because several chain-growth polymerization reactions are well-suited for the preparation of well-defined multi-functional polymer architectures, the next target is to explore the utility of the discovered polymerization in the synthesis of end-functionalized polymers and block copolymers. Such materials would be helpful to improve the nanoscale morphology of polymer blends in all-polymer solar cells.
Boolean constraint solving technology has made tremendous progress over the last decade, leading to industrial-strength solvers, for example, in the areas of answer set programming (ASP), the constraint satisfaction problem (CSP), propositional satisfiability (SAT) and satisfiability of quantified Boolean formulas (QBF). However, in all these areas, there exist multiple solving strategies that work well on different applications; no strategy dominates all other strategies. Therefore, no individual solver shows robust state-of-the-art performance across all kinds of applications. Additionally, the question arises how to choose a well-performing solving strategy for a given application; this is a challenging question even for solver and domain experts. One way to address this issue is the use of portfolio solvers, that is, sets of different solvers or solver configurations. We present three new automatic portfolio methods: (i) automatic construction of parallel portfolio solvers (ACPP) via algorithm configuration, (ii) solving the NP-hard problem of finding effective algorithm schedules with answer set programming (aspeed), and (iii) a flexible algorithm selection framework (claspfolio2) allowing for fair comparison of different selection approaches. All three methods show improved performance and robustness in comparison to individual solvers on heterogeneous instance sets from many different applications. Since parallel solvers are important to effectively solve hard problems on parallel computation systems (e.g., multi-core processors), we extend all three approaches to be effectively applicable in parallel settings. We conducted extensive experimental studies on different instance sets from ASP, CSP, MAXSAT, operations research (OR), SAT and QBF that indicate an improvement of the state of the art in solving heterogeneous instance sets. Last but not least, from our experimental studies we deduce practical advice regarding the question of when to apply which of our methods.
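The schedule-finding problem tackled by aspeed can be illustrated with a toy brute-force search (hypothetical runtimes; the real system encodes this NP-hard optimisation in ASP rather than enumerating budget splits):

```python
from itertools import product

def solved(schedule, runtimes):
    """Number of instances solved when each solver runs for its allotted
    time slice; an instance counts as solved if any solver finishes
    within its slice."""
    return sum(any(inst[s] <= t for s, t in schedule.items())
               for inst in runtimes)

def best_schedule(solvers, runtimes, budget, step=1):
    """Exhaustively search splits of the time budget (at granularity
    `step`) for the split solving the most instances."""
    best, best_n = None, -1
    slots = range(0, budget + 1, step)
    for alloc in product(slots, repeat=len(solvers)):
        if sum(alloc) > budget:
            continue
        sched = dict(zip(solvers, alloc))
        n = solved(sched, runtimes)
        if n > best_n:
            best, best_n = sched, n
    return best, best_n

# Hypothetical per-instance runtimes (seconds) for two solvers: neither
# solver alone solves all three instances within an 8-second budget,
# but a schedule combining both does.
runtimes = [{"A": 3, "B": 10}, {"A": 10, "B": 2}, {"A": 6, "B": 6}]
schedule, n_solved = best_schedule(["A", "B"], runtimes, budget=8)
```

The exhaustive search is exponential in the number of solvers, which is exactly why the thesis delegates the optimisation to an ASP solver.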
The structures and synthesis of polyzwitterions ("polybetaines") are reviewed, emphasizing the literature of the past decade. Particular attention is given to the general challenges faced, and to successful strategies to obtain polymers with a true balance of permanent cationic and anionic groups, thus resulting in an overall zero charge. Also, the progress due to applying new methodologies from general polymer synthesis, such as controlled polymerization methods or the use of "click" chemical reactions is presented. Furthermore, the emerging topic of responsive ("smart") polyzwitterions is addressed. The considerations and critical discussions are illustrated by typical examples.
This paper reports a problematic case of unequivocally evidencing participant orientation to the projective force of some turn-initial demonstrative wh-clefts (DCs) within the framework of Conversation Analysis (CA) and Interactional Linguistics (IL). Conducting rhythmic analyses appears helpful in this regard, in that they disclose rhythmic regularities which suggest a speaker's orientation towards a projected turn continuation. In this particular case, rhythmic analyses can therefore be shown to meaningfully complement sequential analyses and analyses of turn-design, so as to gather additional evidence for participant orientations. In conclusion, I will point to possibly more extensive relations between rhythmicity and projection and proffer a tentative outlook for the usability of rhythmic analyses as an analytic tool in CA and IL.
Object and action naming in Russian- and German-speaking monolingual and bilingual children
(2014)
The present study investigates the influence of word category on naming performance in two populations: bilingual and monolingual children. The question is whether and, if so, to what extent monolingual and bilingual children differ with respect to noun and verb naming and whether a noun bias exists in the lexical abilities of bilingual children. Picture naming of objects and actions by Russian-German bilingual children (aged 4-7 years) was compared to age-matched monolingual children. The results clearly demonstrate a naming deficit of bilingual children in comparison to monolingual children that increases with age. Noun learning is more fragile in bilingual contexts than is verb learning. In bilingual language acquisition, nouns do not predominate over verbs as much as is seen in monolingual German and Russian children. The results are discussed with respect to semantic-conceptual aspects and language-specific features of nouns and verbs, and the impact of input on the acquisition of these word categories.
Following the principles of green chemistry, a simple and efficient synthesis of functionalised imidazolium zwitterionic (ImZw) compounds from renewable resources was developed based on a modified one-pot Debus-Radziszewski reaction. The combination of different carbohydrate-derived 1,2-dicarbonyl compounds and amino acids is a simple way to modulate the properties and introduce different functionalities. A representative compound was assessed as an acid catalyst, and converted into acidic ionic liquids by reaction with several strong acids. The reactivity of the double carboxylic functionality was explored by esterification with long- and short-chain alcohols, as well as functionalised amines, which led to the straightforward formation of surfactant-like molecules or bifunctional esters and amides. One of these di-esters is currently being investigated for the synthesis of poly(ionic liquids). The functionalisation of cellulose with one of the bifunctional esters was investigated, and preliminary tests employing it for the functionalisation of filter papers were carried out successfully. The imidazolium zwitterions were converted into ionic liquids via hydrothermal decarboxylation in flow, a benign and scalable technique. This method provides access to imidazolium ionic liquids via a simple and sustainable methodology, whilst completely avoiding contamination with halide salts. Different ionic liquids can be generated depending on the functionality contained in the ImZw precursor. Two alanine-derived ionic liquids were assessed for their physicochemical properties and applications as solvents for the dissolution of cellulose and the Heck coupling.
In the aftermath of the severe flooding in Central Europe in August 2002, a number of changes in flood policies were launched in Germany and other European countries, aiming at improved risk management. The question arises as to whether these changes have already had an impact on the residents' ability to cope with floods, and whether flood-affected private households are now better prepared than they were in 2002. Therefore, computer-aided telephone interviews with private households in Germany that suffered from property damage due to flooding in 2005, 2006, 2010 or 2011 were performed and analysed with respect to flood awareness, precaution, preparedness and recovery. The data were compared to a similar investigation conducted after the flood in 2002.
After the flood in 2002, the level of private precautions taken increased considerably. One contributing factor is the fact that, in general, a larger proportion of people knew that they were at risk of flooding. The best level of precaution was found before the flood events in 2006 and 2011. The main reason for this might be that residents had more experience with flooding than residents affected in 2005 or 2010. Yet, overall, flood experience and knowledge did not necessarily result in building retrofitting or flood-proofing measures, which are considered to mitigate damage most effectively. Hence, investments still need to be stimulated in order to reduce future damage more efficiently.
Early warning and emergency responses were substantially influenced by flood characteristics. In contrast to flood-affected people in 2006 or 2011, people affected by flooding in 2005 or 2010 had to deal with shorter lead times and therefore had less time to take emergency measures. Yet, the lower level of emergency measures taken also resulted from the people's lack of flood experience and insufficient knowledge of how to protect themselves. Overall, it was noticeable that these residents suffered from higher losses. Therefore, it is important to further improve early warning systems and communication channels, particularly in hilly areas with rapid-onset flooding.
Background
In the past, plyometric training (PT) has been predominantly performed on stable surfaces. The purpose of this pilot study was to examine the effects of a 7-week lower-body PT program on stable vs. unstable surfaces. This type of exercise condition may be denoted as metastable equilibrium.
Methods
Thirty-three physically active male sport science students (age: 24.1 ± 3.8 years) were randomly assigned to a PT group (n = 13) exercising on stable (STAB) and a PT group (n = 20) on unstable surfaces (INST). Both groups trained countermovement jumps, drop jumps, and practiced a hurdle jump course. In addition, high bar squats were performed. Physical fitness tests on stable surfaces (hexagonal obstacle test, countermovement jump, hurdle drop jump, left-right hop, dynamic and static balance tests, and leg extension strength) were used to examine the training effects.
Results
Significant main effects of time (ANOVA) were found for the countermovement jump, hurdle drop jump, hexagonal test, dynamic balance, and leg extension strength. A significant interaction of time and training mode was detected for the countermovement jump in favor of the INST group. No significant improvements were evident for either group in the left-right hop and in the static balance test.
Conclusions
These results show that lower body PT on unstable surfaces is a safe and efficient way to improve physical performance on stable surfaces.
We report on the development of an on-chip RPA (recombinase polymerase amplification) assay with simultaneous multiplex isothermal amplification and detection on a solid surface. The isothermal RPA was applied to amplify specific target sequences from the pathogens Neisseria gonorrhoeae, Salmonella enterica and methicillin-resistant Staphylococcus aureus (MRSA) using genomic DNA. Additionally, a positive plasmid control was established as an internal control. The four targets were amplified simultaneously in a quadruplex reaction. The amplicon is labeled during on-chip RPA by reverse oligonucleotide primers coupled to a fluorophore. Both amplification and spatially resolved signal generation take place on immobilized forward primers bound to epoxy-silanized glass surfaces in a pump-driven hybridization chamber. The combination of microarray technology and sensitive isothermal nucleic acid amplification at 38 °C allows for multiparameter analysis on a rather small area. The on-chip RPA was characterized in terms of reaction time, sensitivity and inhibitory conditions. A successful enzymatic reaction is completed in <20 min and results in detection limits of 10 colony-forming units for methicillin-resistant Staphylococcus aureus and Salmonella enterica and 100 colony-forming units for Neisseria gonorrhoeae. The results show this method to be useful for point-of-care testing and to enable simplified and miniaturized nucleic acid-based diagnostics.
Background
Nucleic acid amplification is the most sensitive and specific method to detect Plasmodium falciparum. However, the polymerase chain reaction remains laboratory-based and has to be conducted by trained personnel. Furthermore, the power supply required for the thermocycling process and the costly equipment necessary for the read-out are difficult to provide in resource-limited settings. This study aims to develop and evaluate a combination of isothermal nucleic acid amplification and simple lateral flow dipstick detection of the malaria parasite for point-of-care testing.
Methods
A specific fragment of the 18S rRNA gene of P. falciparum was amplified in 10 min at a constant 38°C using the isothermal recombinase polymerase amplification (RPA) method. With a unique probe system added to the reaction solution, the amplification product can be visualized on a simple lateral flow strip without further labelling. The combination of these methods was tested for sensitivity and specificity with various Plasmodium and other protozoa/bacterial strains, as well as with human DNA. Additional investigations were conducted to analyse the temperature optimum, reaction speed and robustness of this assay.
Results
The lateral flow RPA (LF-RPA) assay exhibited a high sensitivity and specificity. Experiments confirmed a detection limit as low as 100 fg of genomic P. falciparum DNA, corresponding to a sensitivity of approximately four parasites per reaction. All investigated P. falciparum strains (n = 77) were positively tested while all of the total 11 non-Plasmodium samples, showed a negative test result. The enzymatic reaction can be conducted under a broad range of conditions from 30-45°C with high inhibitory concentration of known PCR inhibitors. A time to result of 15 min from start of the reaction to read-out was determined.
Conclusions
Combining the isothermal RPA with lateral flow detection is an approach to improve molecular diagnostics for P. falciparum in resource-limited settings. The system requires little or no instrumentation for the nucleic acid amplification reaction, and the read-out is possible with the naked eye. Showing the same sensitivity and specificity as comparable diagnostic methods while simultaneously increasing reaction speed and dramatically reducing assay requirements, the method has the potential to become a true point-of-care test for the malaria parasite.
We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337-355) and transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space in complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
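The two core CROM operations — assigning snapshots to their nearest centroid (Voronoi cell) and estimating the Markov transition matrix from the time-ordered cluster labels — can be sketched as follows (toy low-dimensional data; in practice the centroids come from k-means on the full snapshot set):

```python
def cluster_labels(snapshots, centroids):
    """Assign each snapshot (a feature vector) to its nearest centroid,
    i.e. to its centroidal Voronoi cell."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda k: dist2(s, centroids[k]))
            for s in snapshots]

def transition_matrix(labels, k):
    """Row-stochastic matrix P[i][j] = Prob(cluster j follows cluster i),
    estimated by counting consecutive label pairs."""
    counts = [[0] * k for _ in range(k)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.0 for c in row])
    return P
```

Iterating the resulting matrix on an initial cluster-probability vector then propagates the cluster populations forward in time, which is the Markov-model step of the framework.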
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase “Target of Rapamycin” (TOR) has emerged as a major signaling node that integrates the sensing of numerous growth signals with the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, in contrast to observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. To this end, a cultivation system that allowed synchronous growth under largely unperturbed conditions in a fermenter was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number and size distribution, and for their starch content. Furthermore, we applied mass spectrometric analysis for profiling of the primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% lower cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong changes in carbon and nitrogen partitioning towards rapid conversion into carbon and nitrogen stores, namely an accumulation of starch, triacylglycerol and arginine. Interestingly, the conversion of carbon into triacylglycerol appears to have occurred faster than that into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in that of starch.
This study shows, for the first time, a complex picture of dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
Anomalous diffusion is frequently described by scaled Brownian motion (SBM), a Gaussian process with a power-law time-dependent diffusion coefficient. Its mean squared displacement is ⟨x²(t)⟩ ≃ 2K(t)t with K(t) ≃ t^(α−1) for 0 < α < 2. SBM may provide a seemingly adequate description in the case of unbounded diffusion, for which its probability density function coincides with that of fractional Brownian motion. Here we show that free SBM is weakly non-ergodic but does not exhibit a significant amplitude scatter of the time-averaged mean squared displacement. More severely, we demonstrate that under confinement, the dynamics encoded by SBM is fundamentally different from both fractional Brownian motion and continuous time random walks. SBM is highly non-stationary and cannot provide a physical description for particles in a thermalised stationary system. Our findings have direct impact on the modelling of single-particle tracking experiments, in particular under confinement inside cellular compartments or when optical tweezers tracking methods are used.
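The power-law growth of the ensemble-averaged MSD can be checked with a direct simulation of SBM (a sketch, not from the paper; the specific prefactor K(t) = α·t^(α−1), which makes the MSD equal 2t^α, and the discretisation step are illustrative choices):

```python
import random

def sbm_trajectory(alpha, n_steps, dt=0.01, seed=0):
    """Scaled Brownian motion: independent Gaussian increments with
    variance 2*K(t)*dt, where K(t) = alpha * t**(alpha - 1), so that
    the ensemble MSD grows as 2 * t**alpha."""
    rng = random.Random(seed)
    x, t, path = 0.0, 0.0, [0.0]
    for _ in range(n_steps):
        t += dt
        K = alpha * t ** (alpha - 1.0)
        x += rng.gauss(0.0, (2.0 * K * dt) ** 0.5)
        path.append(x)
    return path

def ensemble_msd(alpha, n_steps, n_traj=2000, dt=0.01):
    """Ensemble-averaged mean squared displacement at the final time."""
    return sum(sbm_trajectory(alpha, n_steps, dt, seed=i)[-1] ** 2
               for i in range(n_traj)) / n_traj
```

The weak non-ergodicity discussed above shows up when one instead computes the time-averaged MSD along single trajectories: its mean grows linearly in the lag time, unlike the ensemble average, even though the trajectory-to-trajectory scatter stays small.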
Recently, interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed into an area generally referred to as learning analytics. Higher education leaders are recognising the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence still need to be generated within the fast-evolving field of learning analytics. In this paper, we introduce a holistic learning analytics framework. Based on this framework, student, learning, and curriculum profiles have been developed which include relevant static and dynamic parameters for facilitating the learning analytics framework. Based on the theoretical model, an empirical study was conducted to validate the parameters included in the student profile. The paper concludes with practical implications and issues for future research.