Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
In the late Palaeozoic fore-arc system of north-central Chile at latitudes 31-32°S, three lithotectonic units (from west to east) are telescoped within a short distance by a Mesozoic strike-slip event (derived peak P-T conditions in brackets): (1) the basally accreted Choapa Metamorphic Complex (CMC; 350-430 °C, 6-9 kbar), (2) the frontally accreted Arrayan Formation (AF; 280-320 °C, 4-6 kbar) and (3) the retrowedge basin of the Huentelauquen Formation (HF; 280-320 °C, 3-4 kbar). In the CMC, Ar-Ar spot ages locally date white-mica formation at peak P-T conditions and during early exhumation at 279-242 Ma. In a local garnet mica-schist intercalation (570-585 °C, 11-13 kbar), Ar-Ar spot ages refer to the ascent from the subduction channel at 307-274 Ma. Portions of the CMC were isobarically heated to 510-580 °C at 6.6-8.5 kbar. The age of peak P-T conditions in the AF can only vaguely be approximated at ≥310 Ma by relict fission-track ages, consistent with the observation that frontal accretion occurred prior to basal accretion. Zircon fission-track dating indicates cooling below ~280 °C at ~248 Ma in the CMC and the AF, when a regional unconformity also formed. Ar-Ar white-mica spot ages in parts of the CMC and within the entire AF and HF point to heterogeneous resetting during Mesozoic extensional and shortening events at ~245-240 Ma, ~210-200 Ma, ~174-159 Ma and ~142-127 Ma. The zircon fission-track ages are locally reset at 109-96 Ma. All resetting of Ar-Ar white-mica ages is proposed to have occurred by in situ dissolution/precipitation at low temperature in the presence of locally penetrating hydrous fluids. Hence syn- and post-accretionary events in the fore-arc system can still be distinguished and dated in spite of its complex heterogeneous post-accretional overprint.
The exhibition "Die Geschichte des Standortes Potsdam-Golm 1935 bis 1991" ("The History of the Potsdam-Golm Site, 1935 to 1991") traces the eventful history of what is today a university and science campus. Its origins lie in the General-Wever-Kaserne, a barracks built in 1935. From the end of the Second World War until German reunification, both the Soviet army and the Ministry for State Security used the grounds. Topics covered include the military central region of Brandenburg, the development of the secret-service college from 1951 to 1990, teaching at this institution, student life and research activities, as well as the use of the site after 1990.
The exhibition consists of 13 panels illustrated with numerous photographs.
SXP 1062 is an exceptional case of a young neutron star in a wind-fed high-mass X-ray binary associated with a supernova remnant. A unique combination of measured spin period, its derivative, luminosity and young age makes this source a key probe for the physics of accretion and neutron star evolution. Theoretical models proposed to explain the properties of SXP 1062 shall be tested with new data.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA), together with its extension, dynamic FBA, has proven instrumental for analyzing the robustness and dynamics of metabolic networks by employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
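At its core, FBA is a linear program: choose a flux vector v that satisfies the steady-state constraint S·v = 0 and flux bounds while maximizing an objective such as biomass production. The following minimal sketch illustrates this on a hypothetical one-metabolite toy network, with a brute-force grid search standing in for a proper LP solver:

```python
def fba_toy(uptake=10.0, step=0.5):
    """Toy FBA: metabolite A is produced by uptake flux v1 and consumed by
    biomass flux v2 and export flux v3. Steady state requires v1 - v2 - v3 = 0;
    the objective is to maximize v2. A real FBA model would hand the full
    stoichiometric matrix and flux bounds to an LP solver instead."""
    grid = [i * step for i in range(int(uptake / step) + 1)]
    best_v2, best_v = -1.0, None
    for v2 in grid:
        for v3 in grid:
            if abs(uptake - v2 - v3) < 1e-9:   # stoichiometric constraint S.v = 0
                if v2 > best_v2:
                    best_v2, best_v = v2, (uptake, v2, v3)
    return best_v2, best_v

biomass, fluxes = fba_toy()   # the optimum routes all uptake into biomass
```

Dynamic FBA then re-solves such a program at each time step, propagating the external metabolite concentrations between solutions.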
Recent PIC simulations of relativistic electron-positron (electron-ion) jets injected into a stationary medium show that particle acceleration occurs in the shocked regions. The simulations show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields and for particle acceleration. These magnetic fields contribute to the electrons' transverse deflection behind the shock. The "jitter" radiation from deflected electrons in turbulent magnetic fields has properties different from synchrotron radiation calculated in a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and/or spectral structure of gamma-ray bursts, relativistic jets in general, and supernova remnants. In order to calculate radiation from first principles and go beyond the standard synchrotron model, we have used PIC simulations. We present synthetic spectra to compare with the spectra obtained from Fermi observations.
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is however well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulations, which require extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
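For purely first-order networks the LNA is exact: in the textbook birth-death model of gene expression (production rate k, degradation rate γ) the stationary copy-number distribution is Poissonian and the Fano factor is exactly 1. A minimal sketch with hypothetical rates, contrasting the LNA moments with an exact stochastic simulation (this is the baseline calculation, not iNA's higher-order expansion):

```python
import random

def lna_moments(k, gamma):
    """LNA stationary moments for dm/dt = k - gamma*m: the 1-D Lyapunov
    equation 2*gamma*var = k + gamma*mean yields a Fano factor of 1."""
    mean = k / gamma
    var = (k + gamma * mean) / (2 * gamma)
    return mean, var

def gillespie_birth_death(k, gamma, n_steps=50000, seed=7):
    """Exact stochastic simulation of the same birth-death model."""
    random.seed(seed)
    m, samples = 0, []
    for _ in range(n_steps):
        total = k + gamma * m
        if random.random() < k / total:   # birth with probability k/total
            m += 1
        else:                             # otherwise a degradation event
            m -= 1
        samples.append(m)
    burn = samples[n_steps // 2:]         # discard the transient
    mean = sum(burn) / len(burn)
    var = sum((x - mean) ** 2 for x in burn) / len(burn)
    return mean, var
```

Replacing linear degradation with a bimolecular (nonlinear) reaction makes the simulated Fano factor deviate from the LNA value, which is precisely the regime that iNA's higher-order terms address.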
In this paper, we determine necessary and sufficient conditions for Bruck-Reilly and generalized Bruck-Reilly ∗-extensions of arbitrary monoids to be regular, coregular and strongly π-inverse. These semigroup classes have applications in various fields of mathematics, such as matrix theory, discrete mathematics and p-adic analysis (especially in operator theory). In addition, while regularity and coregularity have many applications concerning boundaries (again in operator theory), inverse monoids and Bruck-Reilly extensions combine fixed-point results from algebra, topology and geometry within the scope of this journal.
Communicating location-specific information to pedestrians is a challenging task which can be aided by user-friendly digital technologies. In this paper, landmark visibility analysis, as a means for developing more usable pedestrian navigation systems, is discussed. Using an algorithmic framework for image-based 3D analysis, this method integrates a 3D city model with identified landmarks and produces raster visibility layers for each one. This output enables an Android phone prototype application to indicate the visibility of landmarks from the user's actual position. Tested in the field, the method achieves sufficient accuracy for the context of use and improves navigation efficiency and effectiveness.
The genetic code is degenerate; thus, protein evolution does not uniquely determine the coding sequence. One of the puzzles in evolutionary genetics is therefore to uncover evolutionary driving forces that result in specific codon choice. In many bacteria, the first 5-10 codons of protein-coding genes are often codons that are less frequently used in the rest of the genome, an effect that has been argued to arise from selection for slowed early elongation to reduce ribosome traffic jams. However, genome analysis across many species has demonstrated that the region shows reduced mRNA folding consistent with pressure for efficient translation initiation. This raises the possibility that unusual codon usage is a side effect of selection for reduced mRNA structure. Here we discriminate between these two competing hypotheses, and show that in bacteria selection favours codons that reduce mRNA folding around the translation start, regardless of whether these codons are frequent or rare. Experiments confirm that primarily mRNA structure, and not codon usage, at the beginning of genes determines the translation rate.
TRAPID
(2013)
Transcriptome analysis through next-generation sequencing technologies allows the generation of detailed gene catalogs for non-model species, at the cost of new challenges with regards to computational requirements and bioinformatics expertise. Here, we present TRAPID, an online tool for the fast and efficient processing of assembled RNA-Seq transcriptome data, developed to mitigate these challenges. TRAPID offers high-throughput open reading frame detection, frameshift correction and includes a functional, comparative and phylogenetic toolbox, making use of 175 reference proteomes. Benchmarking and comparison against state-of-the-art transcript analysis tools reveals the efficiency and unique features of the TRAPID system. TRAPID is freely available at http://bioinformatics.psb.ugent.be/webtools/trapid/.
We study the origin, parameter optimization, and thermodynamic efficiency of isothermal rocking ratchets based on fractional subdiffusion within a generalized non-Markovian Langevin equation approach. A corresponding multi-dimensional Markovian embedding dynamics is realized using a set of auxiliary Brownian particles elastically coupled to the central Brownian particle (see video on the journal web site). We show that anomalous subdiffusive transport emerges due to an interplay of nonlinear response and viscoelastic effects for fractional Brownian motion in periodic potentials with broken space-inversion symmetry and driven by a time-periodic field. The anomalous transport becomes optimal for a subthreshold driving when the driving period matches a characteristic time scale of interwell transitions. It can also be optimized by varying temperature, the amplitude of the periodic potential and the driving strength. The useful work done against a load shows a parabolic dependence on the load strength. It grows sublinearly with time, and the corresponding thermodynamic efficiency decays algebraically in time because the energy supplied by the driving field scales linearly with time. However, it compares well with the efficiency of normal diffusion rocking ratchets on an appreciably long time scale.
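The Markovian embedding rests on approximating the power-law memory kernel of the fractional dynamics by a sum of exponentials, each of which is realized by one auxiliary, elastically coupled degree of freedom. A minimal sketch of this kernel approximation with hypothetical parameters (geometrically spaced relaxation rates ν_i = b^(-i)):

```python
import math

def embedded_kernel(t, alpha=0.5, b=2.0, n=20):
    """Sum-of-exponentials approximation of the power-law kernel t**(-alpha).
    Each term corresponds to one auxiliary Markovian variable with relaxation
    rate nu_i = b**(-i); the prefactor log(b)/Gamma(alpha) normalizes the sum
    so that it tracks t**(-alpha) over many decades of t."""
    s = sum((b ** -i) ** alpha * math.exp(-(b ** -i) * t)
            for i in range(-n, n + 1))
    return s * math.log(b) / math.gamma(alpha)

# Over several decades in t the ratio to the exact power law stays close to 1.
ratios = [embedded_kernel(t) * t ** 0.5 for t in (1.0, 3.0, 10.0, 30.0, 100.0)]
```

Finer spacing (smaller b) or more terms (larger n) extends the range and accuracy of the approximation, at the price of more auxiliary particles in the embedding.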
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine because it allows for enabling or disabling certain biological outcomes. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches allow for addressing large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach aiming at the computation of minimal interventions in logical signaling networks relying on Kleene's three-valued logic and fixpoint semantics. We address this problem within answer set programming and show that it greatly outperforms previous work using dedicated algorithms.
Understanding the magnetic configuration of the source regions of coronal mass ejections (CMEs) is vital in order to determine the trigger and driver of these events. Observations of four CME productive active regions are presented here, which indicate that the pre-eruption magnetic configuration is that of a magnetic flux rope. The flux ropes are formed in the solar atmosphere by the process known as flux cancellation and are stable for several hours before the eruption. The observations also indicate that the magnetic structure that erupts is not the entire flux rope as initially formed, raising the question of whether the flux rope is able to undergo a partial eruption or whether it undergoes a transition in its specific flux-rope configuration shortly before the CME.
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to extract the maximum quantitative information related to the short-term dynamics of geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These new findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that by using magnetogram regularity to reflect magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
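The scaling exponent β is obtained as (minus) the slope of a straight-line fit to the power spectrum in log-log coordinates. A minimal sketch of this fit on a synthetic, noise-free power-law spectrum (the frequency range is a hypothetical stand-in for observatory data):

```python
import math

def fit_scaling_exponent(freqs, power):
    """Least-squares slope of log10 S(f) versus log10 f; for a spectrum
    S(f) ~ f**(-beta) the estimate is beta = -slope."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in power]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

freqs = [10 ** (-3 + 0.05 * i) for i in range(60)]   # 1 mHz up to ~1 Hz
power = [f ** -2.0 for f in freqs]                   # synthetic S(f) ~ f^-2
beta = fit_scaling_exponent(freqs, power)            # recovers beta = 2
```

The wavelet analysis mentioned above generalizes this global fit to local, time-resolved exponents by fitting the same power law within sliding windows of the wavelet scalogram.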
In various biological systems and small scale technological applications particles transiently bind to a cylindrical surface. Upon unbinding the particles diffuse in the vicinal bulk before rebinding to the surface. Such bulk-mediated excursions give rise to an effective surface translation, for which we here derive and discuss the dynamic equations, including additional surface diffusion. We discuss the time evolution of the number of surface-bound particles, the effective surface mean squared displacement, and the surface propagator. In particular, we observe sub- and superdiffusive regimes. A plateau of the surface mean-squared displacement reflects a stalling of the surface diffusion at longer times. Finally, the corresponding first passage problem for the cylindrical geometry is analysed.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed by using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head is of the form penalty(S,V,C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. We succeeded either in improving the bounds or in producing the same bounds for many combinations of problem instances and formulations, compared with the previous best known bounds.
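The hard/soft split can be illustrated outside ASP as well: hard constraints prune assignments outright, while soft constraints contribute penalty costs that the solver minimizes. A deliberately tiny brute-force sketch with hypothetical lectures, two timeslots and two rooms (in practice an ASP solver replaces this enumeration):

```python
import itertools

lectures = ["algebra", "logic", "calculus"]   # hypothetical toy data
slots, rooms = [0, 1], ["r1", "r2"]

def penalty(assignment):
    """Soft constraint: one penalty unit for each lecture placed in slot 1."""
    return sum(1 for slot, _room in assignment.values() if slot == 1)

def solve():
    """Hard constraint: no two lectures share the same (slot, room) pair."""
    places = list(itertools.product(slots, rooms))
    best = None
    for combo in itertools.product(places, repeat=len(lectures)):
        if len(set(combo)) < len(combo):       # room clash -> infeasible
            continue
        assignment = dict(zip(lectures, combo))
        if best is None or penalty(assignment) < penalty(best):
            best = assignment
    return best

timetable = solve()   # only two slot-0 places exist, so one lecture pays a penalty
```

In the ASP encoding described above, the same structure appears declaratively: the room-clash check becomes an integrity constraint, and the penalty function becomes rules deriving penalty(S,V,C) atoms whose costs the solver minimizes.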
During an unusually massive filament eruption on 7 June 2011, SDO/AIA imaged for the first time significant EUV emission around a magnetic reconnection region in the solar corona. The reconnection occurred between magnetic fields of the laterally expanding CME and a neighbouring active region. A pre-existing quasi-separatrix layer was activated in the process. This scenario is supported by data-constrained numerical simulations of the eruption. Observations show that dense cool filament plasma was re-directed and heated in situ, producing coronal-temperature emission around the reconnection region. These results provide the first direct observational evidence, supported by MHD simulations and magnetic modelling, that a large-scale re-configuration of the coronal magnetic field takes place during solar eruptions via the process of magnetic reconnection.
Background: With increasing age neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented into clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program that was developed by an interdisciplinary expert panel on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy older adults (n = 54) aged 65 to 80 years will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Falls Efficacy Scale - International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min of actual training time. One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response-relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated feasible, safe, and easy-to-administer exercise program for fall prevention.
We study the thermal Markovian diffusion of tracer particles in a 2D medium with spatially varying diffusivity D(r), mimicking recently measured, heterogeneous maps of the apparent diffusion coefficient in biological cells. For this heterogeneous diffusion process (HDP) we analyse the mean squared displacement (MSD) of the tracer particles, the time averaged MSD, the spatial probability density function, and the first passage time dynamics from the cell boundary to the nucleus. Moreover we examine the non-ergodic properties of this process which are important for the correct physical interpretation of time averages of observables obtained from single particle tracking experiments. From extensive computer simulations of the 2D stochastic Langevin equation we present an in-depth study of this HDP. In particular, we find that the MSDs along the radial and azimuthal directions in a circular domain obey anomalous and Brownian scaling, respectively. We demonstrate that the time averaged MSD stays linear as a function of the lag time and the system thus reveals a weak ergodicity breaking. Our results will enable one to rationalise the diffusive motion of larger tracer particles such as viruses or submicron beads in biological cells.
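The HDP can be sketched with a simple Euler scheme for the overdamped Langevin equation with multiplicative noise. The diffusivity form D(r) = D0(r0 + r) and all parameters below are hypothetical stand-ins, and the Itô/Stratonovich subtleties of real HDP modelling are glossed over:

```python
import math, random

def simulate_hdp(n_traj=200, n_steps=400, dt=1e-3, d0=1.0, r0=0.1, seed=3):
    """2-D Langevin dynamics with space-dependent diffusivity D(r) = d0*(r0 + r).
    Returns the ensemble-averaged MSD sampled at every tenth step."""
    random.seed(seed)
    msd = [0.0] * (n_steps // 10)
    for _ in range(n_traj):
        x = y = 0.0
        for step in range(n_steps):
            r = math.hypot(x, y)
            sigma = math.sqrt(2.0 * d0 * (r0 + r) * dt)  # local noise amplitude
            x += sigma * random.gauss(0.0, 1.0)
            y += sigma * random.gauss(0.0, 1.0)
            if step % 10 == 0:
                msd[step // 10] += (x * x + y * y) / n_traj
    return msd

msd = simulate_hdp()   # grows with lag time; the scaling depends on D(r)
```

Decomposing the displacements into radial and azimuthal components, and averaging single trajectories over lag time instead of over the ensemble, yields the anomalous-versus-Brownian scaling and the weak ergodicity breaking discussed above.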
In the field of disk-based parallel database management systems, a great variety of solutions exists, based either on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory inside a server and accessing that of a remote server to a single order of magnitude, or even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage and our self-designed and implemented in-memory AnalyticsDB as the relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface which allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion for site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace- vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the permitted resource allocation.
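The flavour of such a system model can be conveyed by a back-of-the-envelope cost comparison for a scan: either pull the whole relation across the network and scan it locally, or push the scan into the storage node and ship only the qualifying rows. All rates below are hypothetical placeholders, not measured RAMCloud or AnalyticsDB figures:

```python
def pull_cost(rows, row_bytes, net_bw, local_scan_rate):
    """Data pull: transfer the full relation, then scan it in local memory."""
    return rows * row_bytes / net_bw + rows / local_scan_rate

def push_cost(rows, selectivity, row_bytes, net_bw, remote_scan_rate):
    """Operator push: scan inside the storage node, transfer only matches."""
    return rows / remote_scan_rate + rows * selectivity * row_bytes / net_bw

# Hypothetical parameters: 1M rows of 100 B each, 1 GB/s network,
# 100M rows/s local scan, 50M rows/s scan inside the storage system.
rows, row_bytes, net_bw = 1e6, 100, 1e9
pull = pull_cost(rows, row_bytes, net_bw, local_scan_rate=1e8)
push_selective = push_cost(rows, 0.01, row_bytes, net_bw, remote_scan_rate=5e7)
push_full = push_cost(rows, 1.0, row_bytes, net_bw, remote_scan_rate=5e7)
# A selective scan favours pushing; a non-selective one favours pulling.
```

The crossover point between the two strategies shifts with selectivity, network bandwidth and the relative scan rates, which is exactly why the optimal site selection can differ per operator within one query.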
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is being used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast-crash recovery feature of RAMCloud can be leveraged to provide high availability, e.g. a server crash during query execution only delays the query response for about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses the cell activates several stress-specific molecules such as sigma factors. They reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes binding between sigma factors and core RNAP at equilibrium, transcription, non-specific binding to DNA and the modulation of the availability of the molecular components.
Association of housekeeping sigma factor to RNAP is generally favored by its abundance and higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell modulates the transcriptional efficiency in a reversible manner through several strategies such as anti-sigma factors, 6S RNA and generally any kind of transcriptional regulators (e.g. activators or inhibitors). By shifting the outcome of sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant their in vitro measurements are in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of increased core RNAP availability upon the shut-down of ribosomal RNA transcription during stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible, but also displays hypersensitivity arising from sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during that global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
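The equilibrium core of such a model reduces to competitive binding: each holoenzyme obeys [Eσi] = [E][σi]/K_i, coupled to mass conservation for core RNAP and each sigma factor. A minimal sketch solving this by fixed-point iteration (all concentrations and dissociation constants are hypothetical, in arbitrary units):

```python
def holoenzyme_levels(e_tot, sigma_tot, kd, n_iter=500):
    """Competitive binding of sigma factors to a shared core RNAP pool.
    Mass conservation gives E = e_tot / (1 + sum_i s_i / (K_i + E)), which is
    iterated to a fixed point; then complex_i = E * s_i / (K_i + E)."""
    e_free = e_tot
    for _ in range(n_iter):
        e_free = e_tot / (1.0 + sum(s / (k + e_free)
                                    for s, k in zip(sigma_tot, kd)))
    complexes = [e_free * s / (k + e_free) for s, k in zip(sigma_tot, kd)]
    return e_free, complexes

# Housekeeping sigma: abundant (2.0) and tight-binding (Kd = 0.1);
# alternative sigma: scarce (0.5) and weaker-binding (Kd = 1.0).
e_free, (house, alt) = holoenzyme_levels(1.0, [2.0, 0.5], [0.1, 1.0])
```

Anti-sigma factors, 6S RNA and related regulators enter such a model by reducing the effective sigma or core pools, thereby shifting the outcome of the competition exactly as described above.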
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation of the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt, Gunzburger & Lee, Comput. Meth. Appl. Mech. Engng, vol. 196, 2006a, pp. 337-355) and transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider, Eckhardt & Vollmer, Phys. Rev. E, vol. 75, 2007, art. 066313). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space in complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponent (FTLE) and entropic methods. This CROM framework is applied to the Lorenz attractor (as illustrative example), to velocity fields of the spatially evolving incompressible mixing layer and the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
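The two-step CROM procedure described above (clustering of snapshots into centroids, then a Markov model of the inter-cluster transitions) can be sketched on toy data. The snapshot trajectory, cluster count and all numerical values below are illustrative stand-ins, not the flow data of the study:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy "snapshot" sequence: a noisy periodic trajectory in a 2D state space
# (stand-in for time-resolved flow snapshots).
t = np.linspace(0, 40 * np.pi, 4000)
snapshots = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.standard_normal((4000, 2))

# Step 1: cluster the snapshots into k representative states (centroids),
# which partition state space into centroidal Voronoi cells.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(snapshots)
labels = km.labels_                        # cluster index of each snapshot

# Step 2: estimate the Markov transition matrix P[i, j] = Prob(j at t+dt | i at t)
# by counting consecutive cluster visits.
P = np.zeros((k, k))
for i, j in zip(labels[:-1], labels[1:]):
    P[i, j] += 1
P /= P.sum(axis=1, keepdims=True)          # row-normalise to probabilities

# Cluster probabilities: the stationary distribution is the left eigenvector
# of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(P.shape, round(float(pi.sum()), 6))
```

From here, refined analyses (e.g. spectra of P, finite-time Lyapunov exponents, entropic measures) operate on the centroids and the transition matrix.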
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a light-weight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
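Portfolio-based algorithm selection of the kind claspfolio 2 integrates maps instance features to a choice of solver. A minimal sketch with made-up feature vectors and runtimes, using a simple k-nearest-neighbour selector (one of many selection approaches such a framework can support, not claspfolio 2's actual implementation):

```python
import numpy as np

# Hypothetical training data: 2D instance feature vectors and measured
# runtimes (seconds) of three solver configurations on each instance.
features = np.array([[0.1, 5.0], [0.2, 4.8], [3.0, 0.5], [2.8, 0.7], [1.5, 2.5]])
runtimes = np.array([[1.0, 9.0, 5.0],
                     [1.2, 8.5, 5.5],
                     [7.0, 0.8, 4.0],
                     [6.5, 0.9, 4.2],
                     [3.0, 3.1, 2.0]])

def select_solver(x, features, runtimes, k=3):
    """k-nearest-neighbour selection: pick the solver with the lowest
    mean runtime on the k training instances closest to x."""
    d = np.linalg.norm(features - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.argmin(runtimes[nearest].mean(axis=0)))

print(select_solver(np.array([0.15, 4.9]), features, runtimes))  # -> 0
```

In a full system, the feature vectors would come from a feature generator (claspfolio 2 uses a light-weight clasp run), and a pre-solving schedule would be run before selection.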
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in the traditional least-square slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of similar to 7 MPa that was independently determined according to force balance in this region (Luttrell et al., J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method for solving inverse problems arising in atmospheric remote sensing, particularly for the retrieval of spheroidal particle distribution. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
This study investigates the spatial and temporal distributions of 14 key arboreal taxa and their driving forces during the last 22,000 calendar years before AD 1950 (kyr BP) using a taxonomically harmonized and temporally standardized fossil pollen dataset with a 500-year resolution from the eastern part of continental Asia. Logistic regression was used to estimate pollen abundance thresholds for vegetation occurrence (presence or dominance), based on modern pollen data and present ranges of 14 taxa in China. Our investigation reveals marked changes in spatial and temporal distributions of the major arboreal taxa. The thermophilous (Castanea, Castanopsis, Cyclobalanopsis, Fagus, Pterocarya) and eurythermal (Juglans, Quercus, Tilia, Ulmus) broadleaved tree taxa were restricted to the current tropical or subtropical areas of China during the Last Glacial Maximum (LGM) and spread northward since c. 14.5 kyr BP. Betula and conifer taxa (Abies, Picea, Pinus), in contrast, retained a wider distribution during the LGM and showed no distinct expansion direction during the Late Glacial. Since the late mid-Holocene, the abundance but not the spatial extent of most trees decreased. The changes in spatial and temporal distributions for the 14 taxa are a reflection of climate changes, in particular monsoonal moisture, and, in the late Holocene, human impact. The post-LGM expansion patterns in eastern continental China seem to be different from those reported for Europe and North America, for example, the westward spread of eurythermal broadleaved taxa.
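The threshold-estimation step can be sketched as follows. The calibration data below are synthetic, and the 0.5-probability crossing is one natural choice of threshold, not necessarily the exact criterion used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical calibration data: modern pollen abundance (%) of one taxon
# and whether the taxon is present in the vegetation at that site.
# The data are generated from a sigmoid with a true threshold at 5 %.
abundance = rng.uniform(0, 20, 500)
presence = (rng.uniform(0, 1, 500) < 1 / (1 + np.exp(-(abundance - 5.0)))).astype(int)

clf = LogisticRegression().fit(abundance.reshape(-1, 1), presence)

# The abundance threshold is where the fitted probability crosses 0.5:
# logit = b0 + b1*x = 0  =>  x = -b0 / b1
threshold = -clf.intercept_[0] / clf.coef_[0, 0]
print(round(threshold, 1))   # close to the true value of 5.0
```

With such a fitted threshold per taxon, each fossil pollen sample can be classified as indicating presence (or dominance) of that taxon.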
We report on the development of an on-chip RPA (recombinase polymerase amplification) with simultaneous multiplex isothermal amplification and detection on a solid surface. The isothermal RPA was applied to amplify specific target sequences from the pathogens Neisseria gonorrhoeae, Salmonella enterica and methicillin-resistant Staphylococcus aureus (MRSA) using genomic DNA. Additionally, a positive plasmid control was established as an internal control. The four targets were amplified simultaneously in a quadruplex reaction. The amplicon is labeled during on-chip RPA by reverse oligonucleotide primers coupled to a fluorophore. Both amplification and spatially resolved signal generation take place on immobilized forward primers bound to epoxy-silanized glass surfaces in a pump-driven hybridization chamber. The combination of microarray technology and sensitive isothermal nucleic acid amplification at 38 °C allows for a multiparameter analysis on a rather small area. The on-chip RPA was characterized in terms of reaction time, sensitivity and inhibitory conditions. A successful enzymatic reaction is completed in <20 min and results in detection limits of 10 colony-forming units for methicillin-resistant Staphylococcus aureus and Salmonella enterica and 100 colony-forming units for Neisseria gonorrhoeae. The results show this method to be useful with respect to point-of-care testing and to enable simplified and miniaturized nucleic acid-based diagnostics.
Background
Nucleic acid amplification is the most sensitive and specific method to detect Plasmodium falciparum. However, the polymerase chain reaction remains laboratory-based and has to be conducted by trained personnel. Furthermore, the power required for the thermocycling process and the costly equipment necessary for the read-out are difficult to cover in resource-limited settings. This study aims to develop and evaluate a combination of isothermal nucleic acid amplification and simple lateral flow dipstick detection of the malaria parasite for point-of-care testing.
Methods
A specific fragment of the 18S rRNA gene of P. falciparum was amplified in 10 min at a constant 38°C using the isothermal recombinase polymerase amplification (RPA) method. With a unique probe system added to the reaction solution, the amplification product can be visualized on a simple lateral flow strip without further labelling. The combination of these methods was tested for sensitivity and specificity with various Plasmodium and other protozoa/bacterial strains, as well as with human DNA. Additional investigations were conducted to analyse the temperature optimum, reaction speed and robustness of this assay.
Results
The lateral flow RPA (LF-RPA) assay exhibited a high sensitivity and specificity. Experiments confirmed a detection limit as low as 100 fg of genomic P. falciparum DNA, corresponding to a sensitivity of approximately four parasites per reaction. All investigated P. falciparum strains (n = 77) tested positive, while all 11 non-Plasmodium samples tested negative. The enzymatic reaction can be conducted under a broad range of conditions from 30-45°C and in the presence of high concentrations of known PCR inhibitors. A time to result of 15 min from start of the reaction to read-out was determined.
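The quoted correspondence between 100 fg of genomic DNA and roughly four parasites can be checked from the genome size (about 23.3 Mb for P. falciparum 3D7, an assumed value here) and the average mass of a DNA base pair:

```python
# Mass of one haploid P. falciparum genome and the number of genome copies
# contained in the 100 fg detection limit.
AVOGADRO = 6.022e23           # 1/mol
BP_MASS = 650                 # g/mol, average mass of one DNA base pair
GENOME_BP = 23.3e6            # assumed P. falciparum 3D7 genome size, base pairs

genome_mass_fg = GENOME_BP * BP_MASS / AVOGADRO * 1e15   # grams -> femtograms
parasites = 100 / genome_mass_fg
print(round(genome_mass_fg, 1), round(parasites))        # ~25 fg/genome -> ~4
```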
Conclusions
Combining the isothermal RPA and the lateral flow detection is an approach to improve molecular diagnostics for P. falciparum in resource-limited settings. The system requires little or no instrumentation for the nucleic acid amplification reaction, and the read-out is possible with the naked eye. Showing the same sensitivity and specificity as comparable diagnostic methods while increasing reaction speed and dramatically reducing assay requirements, the method has the potential to become a true point-of-care test for the malaria parasite.
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases to families was possible which extended functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding for protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Pulsar wind nebulae (PWNe) are the most abundant TeV gamma-ray emitters in the Milky Way. The radiative emission of these objects is powered by fast-rotating pulsars, which donate parts of their rotational energy into winds of relativistic particles. This thesis presents an in-depth study of the detected population of PWNe at high energies. To outline general trends regarding their evolutionary behaviour, a time-dependent model is introduced and compared to the available data. In particular, this work presents two exceptional PWNe which stand out from the rest of the population, namely the Crab Nebula and N 157B. Both objects are driven by pulsars with extremely high rotational energy loss rates. Accordingly, they are often referred to as energetic twins. Modelling the non-thermal multi-wavelength emission of N 157B gives access to specific properties of this object, like the magnetic field inside the nebula. Comparing the derived parameters to those of the Crab Nebula reveals large intrinsic differences between the two PWNe. Possible origins of these differences are discussed in context of the resembling pulsars.
Compared to the TeV gamma-ray regime, the number of detected PWNe is much smaller in the MeV-GeV gamma-ray range. In the latter range, the Crab Nebula stands out by the recent detection of gamma-ray flares. In general, the measured flux enhancements on short time scales of days to weeks were not expected in the theoretical understanding of PWNe. In this thesis, the variability of the Crab Nebula is analysed using data from the Fermi Large Area Telescope (Fermi-LAT). For the presented analysis, a new gamma-ray reconstruction method is used, providing a higher sensitivity and a lower energy threshold compared to previous analyses. The derived gamma-ray light curve of the Crab Nebula is investigated for flares and periodicity. The detected flares are analysed regarding their energy spectra, and their variety and commonalities are discussed. In addition, a dedicated analysis of the flare which occurred in March 2013 is performed. The derived short-term variability time scale is roughly 6h, implying a small region inside the Crab Nebula to be responsible for the enigmatic flares. The most promising theories explaining the origins of the flux eruptions and gamma-ray variability are discussed in detail.
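The step from a roughly 6 h variability timescale to a small emitting region follows from light-travel-time causality. A quick check (ignoring relativistic boosting, which would relax the limit):

```python
# Causality limit on the size of the flaring region: a source varying
# coherently on timescale dt can be at most ~c*dt across.
C = 2.998e8          # speed of light, m/s
AU = 1.496e11        # astronomical unit, m
dt = 6 * 3600        # 6 h variability timescale, s

r_max_au = C * dt / AU
print(round(r_max_au, 1))   # ~43 AU
```

A region of a few tens of AU is tiny compared with the parsec-scale nebula, which is why the flares point to a compact emission zone.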
In the technical part of this work, a new analysis framework is presented. The introduced software, called gammalib/ctools, is currently being developed for the future CTA observatory. The analysis framework is extensively tested using data from the H.E.S.S. experiment. To conduct proper data analysis in the likelihood framework of gammalib/ctools, a model describing the distribution of background events in H.E.S.S. data is presented. The software provides the infrastructure to combine data from several instruments in one analysis. To study the gamma-ray emitting PWN population, data from Fermi-LAT and H.E.S.S. are combined in the likelihood framework of gammalib/ctools. In particular, the spectral peak, which usually lies in the overlap energy regime between these two instruments, is determined with the presented analysis framework. The derived measurements are compared to the predictions from the time-dependent model. The combined analysis supports the conclusion of a diverse population of gamma-ray emitting PWNe.
Magnetite is an iron oxide, which is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides because it contains divalent and trivalent iron. Consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Furthermore, magnetic properties are used in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is exploited in biosensing and hyperthermia, or is used in storage media.
Magnetic properties are strongly size-dependent, and achieving size control under preferably mild synthesis conditions is of interest in order to obtain particles with the required properties. By using a custom-made setup, it was possible to synthesize stable single domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, the mechanistic details of this process are still incompletely understood.
Formation of magnetite from solution was shown to occur from nanoparticulate matter rather than solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic or thermodynamic data. Synthesis of magnetite nanoparticles at different temperatures was performed, and the Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol-1, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
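The activation energy follows from the slope of an Arrhenius plot, ln k versus 1/T. A sketch with synthetic rate constants generated from the quoted value (the temperatures and prefactor below are made up for illustration):

```python
import numpy as np

R = 8.314                     # gas constant, J/(mol K)

# Synthetic rate constants at several synthesis temperatures, generated
# from ln k = ln A - Ea/(R*T) with the activation energy quoted above.
T = np.array([278.0, 288.0, 298.0, 308.0, 318.0])   # K (assumed values)
Ea_true = 28.4e3                                    # J/mol
k = 1e6 * np.exp(-Ea_true / (R * T))                # arbitrary prefactor

# Fit ln k against 1/T; the slope is -Ea/R.
slope, _ = np.polyfit(1 / T, np.log(k), 1)
Ea_fit = -slope * R / 1.0
print(round(Ea_fit / 1e3, 1))   # -> 28.4 (kJ/mol)
```

Wait, note the slope of ln k versus 1/T is -Ea/R, so Ea = -slope * R; the code above recovers the input value from the synthetic data.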
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also depend on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process have not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones and the lower the storage temperature, the lower the measured oxidation. In addition, the magnetic properties of the altered particles are not decreased dramatically, thus suggesting that this alteration will not impact the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology due to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferentially bind to (111)-magnetite faces. The aim was to control the shape of magnetite nanoparticles during the formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions. Magnetite and further trivalent oxides were found.
This work helps to clarify basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that alteration of magnetite nanoparticles is mainly based on oxidation to maghemite and does not significantly influence the magnetic properties. Finally, the biomimetic experiments help to clarify the role of MamP within the bacteria, and a first step was taken toward morphology control in magnetite formation via co-precipitation.
Research in rodents has shown that dietary vitamin A reduces body fat by enhancing fat mobilisation and energy utilisation; however, their effects in growing dogs remain unclear. In the present study, we evaluated the development of body weight and body composition and compared observed energy intake with predicted energy intake in forty-nine puppies from two breeds (twenty-four Labrador Retriever (LAB) and twenty-five Miniature Schnauzer (MS)). A total of four different diets with increasing vitamin A content between 5.24 and 104.80 mu mol retinol (5000-100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy were fed from the age of 8 weeks up to 52 (MS) and 78 weeks (LAB). The daily energy intake was recorded throughout the experimental period. The body condition score was evaluated weekly using a seven-category system, and food allowances were adjusted to maintain optimal body condition. Body composition was assessed at the age of 26 and 52 weeks for both breeds and at the age of 78 weeks for the LAB breed only using dual-energy X-ray absorptiometry. The growth curves of the dogs followed a breed-specific pattern. However, data on energy intake showed considerable variability between the two breeds as well as when compared with predicted energy intake. In conclusion, the data show that energy intakes of puppies particularly during early growth are highly variable; however, the growth pattern and body composition of the LAB and MS breeds are not affected by the intake of vitamin A at levels up to 104.80 mu mol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal).
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach of tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated by a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS and accelerometer. The experimental results show that the baseline shifts of the accelerometer are automatically corrected, and high precision coseismic information of strong ground motion can be obtained in real-time. Additionally, the convergence and precision of the PPP is improved by the combined solution.
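One way to realise "baseline shifts estimated as a random walk" is to augment the filter state with the shift parameter. Below is a minimal Kalman-filter sketch on simulated 1D data (a single constant baseline, invented noise levels); the actual PPP solution estimates many more parameters, but the state-augmentation idea is the same:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 2000

# Simulated ground truth: sinusoidal ground motion; the accelerometer has a
# constant baseline shift of 0.5 m/s^2, GPS positions carry 1 cm noise.
t = np.arange(n) * dt
pos_true = 0.1 * np.sin(2 * np.pi * 0.5 * t)
acc_true = -0.1 * (2 * np.pi * 0.5) ** 2 * np.sin(2 * np.pi * 0.5 * t)
baseline = 0.5
acc_meas = acc_true + baseline + 0.01 * rng.standard_normal(n)
gps_meas = pos_true + 0.01 * rng.standard_normal(n)

# Kalman filter with state [position, velocity, baseline]; the baseline is
# modelled as a random walk (process noise only), as in the PPP approach.
F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt**2, dt, 0.0])      # input matrix for measured accel
H = np.array([[1.0, 0.0, 0.0]])           # GPS observes position only
Q = np.diag([1e-8, 1e-6, 1e-6])           # process noise (incl. random walk)
Rm = np.array([[1e-4]])                   # GPS noise variance, (1 cm)^2

x, P = np.zeros(3), np.eye(3)
for i in range(n):
    x = F @ x + B * acc_meas[i]           # predict, driven by measured accel
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
    x = x + (K @ (gps_meas[i] - H @ x)).ravel()   # update with GPS position
    P = (np.eye(3) - K @ H) @ P

print(round(float(x[2]), 2))   # estimated baseline, close to 0.5
```

Because the baseline enters the dynamics through double integration, even centimetre-level GPS noise constrains it tightly within a few seconds of data.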
Bats are important components in tropical mammal assemblages. Unravelling the mechanisms allowing multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to indirectly examine whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4 ‰ in delta N-15 and was structured into two trophic levels with phytophagous Pteropodidae as primary consumers (c. 3 ‰ enriched over plants) and different insectivorous bats as secondary consumers (c. 4 ‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of delta C-13 versus delta N-15 bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
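Trophic position is commonly read off the nitrogen-isotope axis using a fixed per-level enrichment. A sketch with an assumed 3.4 ‰ enrichment factor (a commonly used mean; the study reports 3-4 ‰ steps) and invented delta values, not the measured Ankarana data:

```python
# Trophic position from nitrogen isotopes: each trophic level enriches
# delta-15N by roughly 3-4 per mil; 3.4 is an assumed mean value here.
ENRICHMENT = 3.4    # per mil per trophic level (assumption)

def trophic_level(d15n_consumer, d15n_base, base_level=1.0):
    """Estimate trophic position relative to a baseline (e.g. plants)."""
    return base_level + (d15n_consumer - d15n_base) / ENRICHMENT

# Invented values: plants at 2 per mil, a frugivore ~3 per mil above plants,
# an insectivore ~4 per mil above the frugivore.
print(round(trophic_level(5.0, 2.0), 1), round(trophic_level(9.0, 2.0), 1))
```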
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain quantifying flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. Therefore, it is recommended to frequently consider land use scenarios and economic developments when assessing future flood risks. Further, an adapted and sustainable risk management is necessary to encounter rising flood losses, in which non-structural measures are becoming more and more important. The case study demonstrates that adaptation by non-structural measures such as stricter land use regulations or enhancement of private precaution is capable of reducing flood risk by around 30 %. Ignoring flood risks, in contrast, always leads to further increasing losses (with our assumptions, by 17 %). These findings underline that private precaution and land use regulation could be taken into account as low cost adaptation strategies to global climate change in many flood prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
In the aftermath of the severe flooding in Central Europe in August 2002, a number of changes in flood policies were launched in Germany and other European countries, aiming at improved risk management. The question arises as to whether these changes have already had an impact on the residents' ability to cope with floods, and whether flood-affected private households are now better prepared than they were in 2002. Therefore, computer-aided telephone interviews with private households in Germany that suffered from property damage due to flooding in 2005, 2006, 2010 or 2011 were performed and analysed with respect to flood awareness, precaution, preparedness and recovery. The data were compared to a similar investigation conducted after the flood in 2002.
After the flood in 2002, the level of private precautions taken increased considerably. One contributing factor is the fact that, in general, a larger proportion of people knew that they were at risk of flooding. The best level of precaution was found before the flood events in 2006 and 2011. The main reason for this might be that residents had more experience with flooding than residents affected in 2005 or 2010. Yet, overall, flood experience and knowledge did not necessarily result in building retrofitting or flood-proofing measures, which are considered as mitigating damages most effectively. Hence, investments still need to be stimulated in order to reduce future damage more efficiently.
Early warning and emergency responses were substantially influenced by flood characteristics. In contrast to flood-affected people in 2006 or 2011, people affected by flooding in 2005 or 2010 had to deal with shorter lead times and therefore had less time to take emergency measures. Yet, the lower level of emergency measures taken also resulted from the people's lack of flood experience and insufficient knowledge of how to protect themselves. Overall, it was noticeable that these residents suffered from higher losses. Therefore, it is important to further improve early warning systems and communication channels, particularly in hilly areas with rapid-onset flooding.
The B fields in OB stars (BOB) survey is an ESO large programme collecting spectropolarimetric observations for a large number of early-type stars in order to study the occurrence rate, properties, and ultimately the origin of magnetic fields in massive stars. As of July 2014, a total of 98 objects were observed over 20 nights with FORS2 and HARPSpol. Our preliminary results indicate that the fraction of magnetic OB stars with an organised, detectable field is low. This conclusion, now independently reached by two different surveys, has profound implications for any theoretical model attempting to explain the field formation in these objects. We discuss in this contribution some important issues addressed by our observations (e.g., the lower bound of the field strength) and the discovery of some remarkable objects.
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the vision of organic photovoltaics becoming competitive with inorganic solar cells including the realization of low-cost and large-area organic photovoltaics. For the best performing organic materials systems, the yield of charge generation can be very efficient. However, a detailed understanding of the free charge carrier generation mechanisms at the donor acceptor interface and the energy loss associated with it needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells into commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on a comprehensive understanding of charge generation, recombination and extraction in organic bulk heterojunction solar cells, summarized in six chapters on the cumulative basis of seven individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To this end, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, leading to an examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, which enhances domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by the threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, was designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that charge carrier generation in F-PCPDTBT:PCBM blends becomes more efficient and that the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to enlarged polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient under open-circuit conditions. The size of the polymer domains correlates well with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT increases the PCE from 3.6% to 6.1% due to an enhanced fill factor, short-circuit current and open-circuit voltage. Further optimization of the blend ratio, active layer thickness and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer fill factor despite a further reduction of geminate and non-geminate recombination losses. To analyze this finding further, a new technique was developed that measures the effective extraction mobility at charge carrier densities and electric fields comparable to solar cell operating conditions, building on the bias-enhanced charge extraction technique. With knowledge of the carrier density under different field and illumination conditions, a conclusive picture is attained of the changes in charge carrier dynamics that lead to the differences in fill factor upon fluorination of PCPDTBT. The more efficient charge generation and reduced recombination with fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and an efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends show only moderate fill factors of 54% caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field dependence of free charge generation using the time delayed collection field technique in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field dependence of free charge generation are found to be unaffected by the excitation energy, including direct charge transfer excitation below the optical band gap. To access the otherwise undetectable absorption at the energies of relaxed charge transfer emission, the absorption was reconstructed from the CT emission induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at the energies of charge transfer emission was identical to that for excitations well above the optical band gap. Thus, generation proceeds via the splitting of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies, obtained by using different fullerene derivatives. A direct correlation is found between the efficiency of free carrier generation and the energy difference between the relaxed charge transfer state and the charge separated state. These findings suggest new guidelines for future material design: new high-efficiency materials require a minimum energetic offset between the charge transfer state and the charge separated state, while keeping the HOMO (and LUMO) level difference between donor and acceptor as small as possible.
Dual-normal logic programs
(2015)
Disjunctive Answer Set Programming is a powerful declarative programming paradigm with complexity beyond NP. Identifying classes of programs for which the consistency problem is in NP is of interest from a theoretical standpoint and can potentially lead to improvements in the design of answer set programming solvers. One such class consists of dual-normal programs, in which the number of positive body atoms in proper rules is at most one. Unlike other classes of programs, dual-normal programs have received little attention so far. In this paper we study this class. We relate dual-normal programs to propositional theories and to normal programs by presenting several inter-translations. With the translation from dual-normal to normal programs at hand, we introduce the novel class of body-cycle free programs, which are in many respects dual to head-cycle free programs. We establish the expressive power of dual-normal programs in terms of SE- and UE-models, and compare them to normal programs. We also discuss the complexity of deciding whether dual-normal programs are strongly and uniformly equivalent.
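As a hypothetical illustration (clingo-style ASP syntax; the predicate names are invented, not taken from the paper), the following disjunctive program satisfies the dual-normal restriction: every proper rule has at most one positive body atom, while negative body atoms are unrestricted.

```
% disjunctive rule with exactly one positive body atom (r)
p ; q :- r, not s.
% rule with no positive body atoms
s :- not p, not q.
% fact
r.
```

A rule such as `p :- r, t.` (two positive body atoms) would fall outside the class.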
The Strange-tailed Tyrant Alectrurus risora (Aves: Tyrannidae) is an endemic species of southern South American grasslands that has suffered a 90% reduction of its original distribution due to habitat transformation. This has led the species to be classified as globally Vulnerable. By the beginning of the last century, populations were partially migratory and moved south during the breeding season. Currently, the main breeding population inhabits the Ibera wetlands in the province of Corrientes, north-east Argentina, where it is resident all year round. There are two remaining small populations, in the province of Formosa, north-east Argentina, and in southern Paraguay, which are separated from the main population by the Parana-Paraguay River and its continuous riverine forest habitat. The populations of Corrientes and Formosa are separated by 300 km, and the grasslands between them are non-continuous due to habitat transformation. We used mtDNA sequences and eight microsatellite loci to test whether there was evidence of genetic isolation between the Argentinean populations. We found no evidence of genetic structure between populations (Phi(ST) = 0.004, P = 0.32; Fst = 0.01, P = 0.06), which can be explained either by retained ancestral polymorphism or by dispersal between populations. We found no evidence for a recent demographic bottleneck in nuclear loci. Our results indicate that these populations could be managed as a single conservation unit on a regional scale. Conservation actions should be focused on preserving the remaining network of areas with natural grasslands to guarantee reproduction and dispersal and to prevent further decline of populations.
Translation of protein from mRNA is a complex multi-step process that occurs at a non-uniform rate. Variability in ribosome speed along an mRNA enables refinement of the proteome and plays a critical role in protein biogenesis. Detailed single-protein studies have found both tRNA abundance and mRNA secondary structure to be key modulators of the translation elongation rate, but recent genome-wide ribosome profiling experiments have not observed a significant influence of either on translation efficiency. Here we provide evidence that this results from an inherent trade-off between these factors. We find that codons pairing to high-abundance tRNAs are preferentially used in regions of high secondary structure content, while codons read by significantly less abundant tRNAs are located in weakly structured regions. By considering long stretches of high and low mRNA secondary structure in Saccharomyces cerevisiae and Escherichia coli and comparing them to randomized-gene models and experimental expression data, we were able to distinguish clear selective pressures and increased protein expression for specific codon choices. The trade-off between secondary structure and tRNA-concentration-based codon choice allows the two factors to compensate for each other's independent effects on translation, helping to smooth overall translational speed and reducing the chance of potentially detrimental points of excessively slow or fast ribosome movement.
GEOMAGIA50.v3
(2015)
Background: GEOMAGIA50.v3 for sediments is a comprehensive online database providing access to published paleomagnetic, rock magnetic, and chronological data obtained from lake and marine sediments deposited over the past 50 ka. Its objective is to catalogue data that will improve our understanding of changes in the geomagnetic field, physical environments, and climate.
Findings: GEOMAGIA50.v3 for sediments builds upon the structure of the pre-existing GEOMAGIA50 database for magnetic data from archeological and volcanic materials. A strong emphasis has been placed on the storage of geochronological data, and it is the first magnetic archive that includes comprehensive radiocarbon age data from sediments. The database will be updated as new sediment data become available.
Conclusions: The web-based interface for the sediment database is located at http://geomagia.gfz-potsdam.de/geomagiav3/SDquery.php. This paper is a companion to Brown et al. (Earth Planets Space, doi:10.1186/s40623-015-0232-0, 2015) and describes the data types, structure, and functionality of the sediment database.
For a long time, the analysis of ancient human DNA represented one of the most controversial disciplines in an already controversial field of research. Scepticism in this field was only matched by the long-lasting controversy over the authenticity of ancient pathogen DNA. This ambiguous view of ancient human DNA had a dichotomous root. On the one hand, the interest in ancient human DNA is great because such studies touch on the history and evolution of our own species. On the other hand, because these studies deal with samples from our own species, results are easily compromised by contamination of the experiments with modern human DNA, which is ubiquitous in the environment. Consequently, some of the most disputed studies published (apart, perhaps, from early reports on million-year-old dinosaur or amber DNA) reported DNA analyses of human subfossil remains. However, the development of so-called next- or second-generation sequencing (SGS) in 2005 and the technological advances associated with it have generated new confidence in the genetic study of ancient human remains. The ability to sequence shorter DNA fragments than is possible with PCR amplification coupled to traditional Sanger sequencing, together with very high sequencing throughput, has both reduced the risk of sequencing modern contamination and provided tools to evaluate the authenticity of DNA sequence data. The field is now rapidly developing, providing unprecedented insights into the evolution of our own species and past human population dynamics, as well as the evolution and history of human pathogens and epidemics. Here, we review how recent technological improvements have rapidly transformed ancient human DNA research from a highly controversial subject into a central component of modern anthropological research. We also discuss potential future directions of ancient human DNA research.
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2015)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width (TRW) chronologies as an observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as an observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of two variables and (3) bounded response windows leading to a “thresholded response”. We generate pseudo-TRW observations from a chaotic two-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill compared to the use of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which exposes multiple possible representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as an observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
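The principle of limiting factors can be sketched in a few lines of Python. This is a minimal illustration, not VSL's calibrated formulation: the threshold values and function names below are invented for the example. Growth is the smaller of a temperature response and a moisture response, each a bounded piecewise-linear ramp; this "min" is the fuzzy-logic conjunction referred to above, and its non-differentiable kink is the kind of non-smoothness the DA experiments identify as harmful. A smoother FL alternative (the product t-norm) is shown alongside it.

```python
import numpy as np

def ramp(x, lo, hi):
    """Bounded piecewise-linear response: 0 below lo, 1 above hi."""
    return np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

def vsl_growth(temp, moist, t_lo=2.0, t_hi=12.0, m_lo=0.1, m_hi=0.5):
    """Growth rate under the principle of limiting factors:
    the smaller (more limiting) of the two responses wins,
    i.e. a fuzzy-logic 'min' conjunction of the two factors.
    Threshold values are illustrative assumptions only."""
    g_t = ramp(temp, t_lo, t_hi)
    g_m = ramp(moist, m_lo, m_hi)
    return np.minimum(g_t, g_m)

def vsl_growth_smooth(temp, moist, t_lo=2.0, t_hi=12.0, m_lo=0.1, m_hi=0.5):
    """A smoother fuzzy-logic representation (product t-norm) that
    avoids the non-differentiable kink of the 'min' operator."""
    return ramp(temp, t_lo, t_hi) * ramp(moist, m_lo, m_hi)
```

A pseudo-TRW value for one year would then be the time average of this rate over a growing season, which is the "time averaging" feature of the observation operator; the "switching recording" feature corresponds to the min selecting either the temperature or the moisture branch depending on which factor is limiting.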