A phagocyte-specific Irf8 gene enhancer establishes early conventional dendritic cell commitment
(2011)
Haematopoietic development is a complex process that is strictly hierarchically organized. Within it, the phagocyte lineages form a very heterogeneous cell compartment with specialized functions in innate immunity and in the induction of adaptive immune responses. Their generation from a common precursor must be tightly controlled. Interference with lineage formation programs, for example by mutation or by changes in the expression levels of transcription factors (TFs), can cause leukaemia. However, the molecular mechanisms driving specification into distinct phagocytes remain poorly understood. In the present study I identify the transcription factor Interferon Regulatory Factor 8 (IRF8) as the specification factor of dendritic cell (DC) commitment in early phagocyte precursors. Employing an IRF8 reporter mouse, I characterized the distinct Irf8 expression pattern during haematopoietic lineage diversification and isolated a novel bone marrow resident progenitor which selectively differentiates into CD8α+ conventional dendritic cells (cDCs) in vivo. This progenitor strictly depends on Irf8 expression to properly establish its transcriptional DC program while suppressing a lineage-inappropriate neutrophil program. Moreover, I demonstrated that Irf8 expression during this cDC commitment step depends on a newly discovered myeloid-specific cis-enhancer which is controlled by the haematopoietic transcription factors PU.1 and RUNX1. Interference with their binding abrogates Irf8 expression and subsequently disturbs cell fate decisions, demonstrating the importance of these factors for proper phagocyte development. Collectively, these data delineate a transcriptional program establishing cDC fate choice with IRF8 at its centre.
A systems biological approach towards the molecular basis of heterosis in Arabidopsis thaliana
(2011)
Heterosis is defined as the superiority in performance of heterozygous genotypes compared to their corresponding genetically different homozygous parents. This phenomenon has been known since the beginning of the last century and has been widely used in plant breeding, but the underlying genetic and molecular mechanisms are not well understood. In this work, a systems biological approach based on molecular network structures is proposed to contribute to the understanding of heterosis. Hybrids are likely to contain additional regulatory possibilities compared to their homozygous parents and may therefore be able to respond correctly to a larger number of environmental challenges, which leads to higher adaptability and thus to the heterosis phenomenon. In the network hypothesis for heterosis presented in this work, more regulatory interactions are expected in the molecular networks of the hybrids than in those of the homozygous parents. Partial correlations were used to assess this difference in the global interaction structure of regulatory networks between the hybrids and the homozygous genotypes. The network hypothesis was tested on metabolite profiles as well as gene expression data of the two parental Arabidopsis thaliana accessions C24 and Col-0 and their reciprocal crosses, plants known to show a heterosis effect in their biomass phenotype. The hypothesis was confirmed for mid-parent and best-parent heterosis for both hybrids in our experimental metabolite as well as gene expression data. It was shown that this result is influenced by the cutoffs used during the analyses: too strict filtering resulted in sets of metabolites and genes for which the network hypothesis does not hold for either hybrid, regarding mid-parent as well as best-parent heterosis. In an over-representation analysis, the genes that show the largest heterosis effects according to our network hypothesis were compared to genes of heterotic quantitative trait loci (QTL) regions. Separately for each hybrid, and for mid-parent as well as best-parent heterosis, a significantly larger overlap between the resulting gene lists of the two approaches towards biomass heterosis was detected than expected by chance. This suggests that each heterotic QTL region contains many genes influencing biomass heterosis in the early development of Arabidopsis thaliana. Furthermore, this integrative analysis led to a confinement of, and increased confidence in, the group of candidate genes for biomass heterosis in Arabidopsis thaliana identified by both approaches.
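As an illustration of the statistical machinery behind this comparison: partial correlations, controlling for all other variables, can be read off the inverse covariance (precision) matrix. The sketch below is a minimal example of that standard identity; the data are random placeholders, not the thesis's metabolite or expression profiles.

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from samples (rows) x variables (columns).

    Uses the standard identity: with P = inv(cov), the partial correlation
    between variables i and j given all others is -P_ij / sqrt(P_ii * P_jj).
    """
    precision = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Placeholder data: 40 plants x 10 metabolites (random, for illustration only).
rng = np.random.default_rng(0)
profiles = rng.normal(size=(40, 10))
print(partial_correlations(profiles).round(2))
```

Under the network hypothesis, one would then compare the number and strength of significant partial correlations between the hybrid and parental data sets.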
This thesis focuses on the physics of neutron stars and their description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in critical processes, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In previous works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to the one of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables, via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but a difference is, of course, present at the numerical level. Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behaviour of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena in the head-on collision of neutron stars was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass neutron stars, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity and the polytropic index in the EOS.
In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter. In this case they also found the appearance of type-I critical behaviour, and additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass neutron stars using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with what is found in the literature, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
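Schematically, in simplified notation (with U the conserved variables, F^r and F^z the fluxes, and S the sources; my notation, not necessarily the code's), the two ways of writing the axisymmetric balance law can be contrasted as follows; the last line is the standard form of the type-I lifetime scaling:

```latex
% Standard formulation: the 1/r terms are moved to the source
\partial_t \mathbf{U} + \partial_r \mathbf{F}^r + \partial_z \mathbf{F}^z
  = \mathbf{S} - \frac{1}{r}\,\mathbf{F}^r

% New formulation: conserved variables rescaled by r, no explicit 1/r term
\partial_t \left(r\,\mathbf{U}\right) + \partial_r \left(r\,\mathbf{F}^r\right)
  + \partial_z \left(r\,\mathbf{F}^z\right) = r\,\mathbf{S}

% Type-I critical scaling: lifetime of the near-critical solution
\tau(P) \simeq -\gamma \,\ln\left|P - P^{\star}\right| + \mathrm{const}
```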
"Kinderwelt ist Bewegungswelt" (Schmidt, 1997, S. 156, zitiert nach Schmidt, Hartmann-Tews & Brettschneider, 2003, S. 31). Das kindliche Bewegungsverhalten hat sich bereits im Grundschulalter verändert, so dass sich Bewegungsaktivitäten von Kindern erheblich unterscheiden und keineswegs mehr verallgemeinert werden können. Richtet man den Fokus auf die Frage „Wie bewegt sind unsere Kinder?“ so scheint diese von den Medien bereits beantwortet zu sein, da dort von ansteigendem Bewegungsmangel der heutigen Kinder gegenüber früheren Generationen berichtet wird. Wenn es in den Diskussionen um den Gesundheitszustand unserer Kinder geht, nimmt die körperlich-sportliche Aktivität eine entscheidende Rolle ein. Bewegungsmangel ist hierbei ein zentraler Begriff der in der Öffentlichkeit diskutiert wird. Bei der Betrachtung der einzelnen Studien fällt auf, dass deutliche Defizite in der Messung der körperlich-sportlichen Aktivität bestehen. Zentraler Kritikpunkt in den meisten Studien ist die subjektive Erfassung der körperlich-sportlichen Aktivität. Ein Großteil bisheriger Untersuchungen zum Bewegungsverhalten basiert auf Beobachtungen, Befragungen oder Bewegungstagebüchern. Diese liefern ausschließlich zum Teil subjektive Einschätzungen der Kinder oder Eltern über die tatsächliche Bewegungszeit und -intensität. Das objektive Erfassen der Aktivität bzw. Inaktivität ist zwar seit einigen Jahren zentraler Gegenstand vieler Studien, dennoch gilt es, dieses noch sachkundiger zu lösen, um subjektive und objektive Daten zu vergleichen. Um dem Bewegungsmangel der heutigen Kinder entgegenzuwirken, sind empirisch abgesicherte Erkenntnisse über die Bedingungsfaktoren und die Folgen des veränderten Bewegungsverhaltens dringend nötig. Die Quer- und Längsschnittuntersuchung umfasst die Bereiche Anthropometrie, die Erfassung der körperlich-sportlichen Aktivität und die Herzfrequenzmessung über 24h. Für die Studie konnten 106 Jungen und Mädchen im Zeitraum von Januar 2007 bis April 2009 rekrutiert und überprüft werden. Die physiologischen Parameter wurden mit Hilfe des ACTIHEART-Messsytems aufgezeichnet und berechnet. Die Ergebnisse zur körperlich-sportlichen Aktivität wurden in die Untersuchungsabschnitte Schulzeit gesamt, Pause, Sportunterricht, Nachmittag und 24h unterteilt. Durch das Messsystem werden die Bewegungsaktivität und die Herzfrequenz synchron aufgezeichnet. Das System nimmt die Beschleunigungswerte des Körpers auf und speichert sie im frei wählbaren Zeitintervall, Short oder Long Term, in Form von „activity counts“ ab. Das Messsytem berechnet weiterhin die Intensität körperlicher Aktivität.
Algorithmic Trading
(2011)
The electronification of financial markets has advanced considerably in recent years; practically every exchange operates an electronic trading system. In this context, the term algorithmic trading describes a phenomenon in which computer programs replace humans in securities trading, helping to make investment decisions or to execute transactions. Algorithmic trading itself is only one of many innovations that have shaped the development of exchange trading; others include the invention of telegraphy, the telephone, the fax, and electronic securities settlement. Today the question is no longer whether computer programs are used in exchange trading, but where the line between fully automated trading (by computers) and manual trading (by humans) runs. In researching algorithmic trading, science faces the problem that no information about these computer programs is publicly accessible. The idea of this dissertation was to circumvent this problem and to extract information about algorithmic trading indirectly from the analysis of (fund) returns. Johannes Gomolka therefore investigates the research question of whether statements about computer-driven securities trading (in short: algorithmic trading) can be derived from the analysis of (fund) returns. To answer this question, the author formulates a new definition of algorithmic trading and distinguishes two basic functions of the computer programs, buy-side and sell-side algorithmic trading (decision support and transaction support). For his empirical investigation, Gomolka draws on the multifactor model for style analysis of Fung and Hsieh (1997). This model makes it possible to decompose the time series of fund returns into interpretable basic components and to assign a substantive meaning to the individual regression factors. The results of this dissertation show that style analysis does allow statements about algorithmic trading to be derived from the analysis of (fund) returns. These statements are not of a technical nature, however, but are limited to the analysis of trading strategies (investment styles).
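In schematic form (the notation here is mine, not the author's), a returns-based style regression in the spirit of Fung and Hsieh (1997) decomposes a fund return series into style-factor exposures:

```latex
R_t = \alpha + \sum_{k=1}^{K} \beta_k\, F_{k,t} + \varepsilon_t
```

where R_t is the fund return in period t, F_{k,t} are the returns of K style factors, the loadings beta_k measure the exposure to each style, and alpha captures the unexplained component. Statements about algorithmic trading are then confined to what the estimated loadings reveal about investment styles.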
In this work, new fluorinated and non-fluorinated mono- and bifunctional trithiocarbonates of the structures Z-C(=S)-S-R and Z-C(=S)-S-R-S-C(=S)-Z were synthesized for use as chain transfer agents (CTAs) in the RAFT process. All newly synthesized CTAs were tested for their efficiency in moderating the free radical polymerization process by polymerizing styrene (M3). Besides characterization of the homopolymers by GPC measurements, end-group analysis of the synthesized block copolymers was performed via 1H- and 19F-NMR and, in some cases, UV-vis spectroscopy, enabled by attaching suitable fluorinated moieties to the Z- and/or R-groups of the CTAs. Symmetric triblock copolymers of type BAB and non-symmetric fluorine end-capped polymers were accessible using the RAFT process in just one or two polymerization steps. In particular, the RAFT process enabled the controlled polymerization of hydrophilic monomers such as N-isopropylacrylamide (NIPAM) (M1) and N-acryloylpyrrolidine (NAP) (M2) for the A-blocks, and of the hydrophobic monomers styrene (M3), 2-fluorostyrene (M4), 3-fluorostyrene (M5), 4-fluorostyrene (M6) and 2,3,4,5,6-pentafluorostyrene (M7) for the B-blocks. The properties of the BAB-triblock copolymers were investigated in dilute, concentrated and highly concentrated aqueous solutions using DLS, turbidimetry, 1H- and 19F-NMR, rheology, determination of the CMC, foam height and surface tension measurements, and microscopy. Furthermore, their ability to stabilize emulsions and microemulsions and the wetting behaviour of their aqueous solutions on different substrates were investigated. The micelle formation of the fluorine end-functionalized polymers was studied by DLS measurements in dilute organic solution. All investigated BAB-triblock copolymers were able to form micelles and showed surface activity at room temperature in dilute aqueous solution. The aqueous solutions displayed moderate foam formation. With different types and concentrations of oils, the formation of emulsions could be detected using a light microscope. A boosting effect in microemulsions upon adding BAB-triblock copolymers could not be found. At elevated polymer concentrations, the formation of hydrogels was demonstrated by rheology measurements.
The present thesis introduces an iterative expert-based Bayesian approach for assessing greenhouse gas (GHG) emissions from the 2030 German new vehicle fleet and for quantifying the impacts of their main drivers. A first set of expert interviews was carried out in order to identify technologies which may help to lower car GHG emissions and to quantify their emission reduction potentials. Moreover, experts were asked for their probability assessments that the different technologies will be widely adopted, as well as for important prerequisites that could foster or hamper their adoption. Drawing on the results of these expert interviews, a Bayesian Belief Network (BBN) was built which explicitly models three vehicle types: Internal Combustion Engine Vehicles (which include mild and full Hybrid Electric Vehicles), Plug-In Hybrid Electric Vehicles, and Battery Electric Vehicles. The conditional dependencies of twelve central variables within the BBN - battery energy, fuel and electricity consumption, relative costs, and sales shares of the vehicle types - were quantified by experts from German car manufacturers in a second series of interviews. Each of the seven second-round interviews yields an expert's individually specified BBN. The BBNs were run for different hypothetical 2030 scenarios which differ, e.g., in regard to battery development, regulation, and fuel and electricity GHG intensities. The thesis delivers results both on the subject of the investigation and on its method. On the subject level, it was found that the different experts expect 2030 German new car fleet emissions to be at 50 to 65% of 2008 new fleet emissions under the baseline scenario. They can be further reduced to 40 to 50% of the 2008 fleet emissions through a combination of a higher share of renewables in the electricity mix, a larger share of biofuels in the fuel mix, and a stricter regulation of car CO2 emissions in the European Union. Technically, 2030 German new car fleet GHG emissions can be reduced to a minimum of 18 to 44% of 2008 emissions, a development which cannot be triggered by any combination of measures modeled in the BBN alone but needs further commitment. Out of a wealth of existing BBNs, few have been specified by individual experts through elicitation, and to my knowledge, none of them has been employed for analyzing perspectives for the future. On the level of methods, this work shows that expert-based BBNs are a valuable tool for making experts' expectations for the future explicit and amenable to the analysis of different hypothetical scenarios. BBNs can also be employed for quantifying the impacts of main drivers, and they have been demonstrated to be a valuable tool for iterative stakeholder-based science approaches.
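To illustrate the mechanics of such a scenario query (not the thesis's actual twelve-variable network or the elicited probabilities; all node names and numbers below are invented placeholders), a discrete BBN can be evaluated by simple enumeration:

```python
# Toy two-node BBN: scenario -> BEV sales share -> fleet emissions level.
# Structure and probabilities are illustrative placeholders, not elicited values.

p_share = {  # P(sales share | scenario)
    "baseline": {"low": 0.6, "high": 0.4},
    "pro_ev":   {"low": 0.2, "high": 0.8},
}
p_emis = {   # P(emissions relative to the 2008 fleet | sales share)
    "low":  {"50-65%": 0.7, "40-50%": 0.3},
    "high": {"50-65%": 0.3, "40-50%": 0.7},
}

def emissions_distribution(scenario):
    """Marginalize out the share: P(e | scenario) = sum_s P(e|s) P(s|scenario)."""
    dist = {}
    for share, p_s in p_share[scenario].items():
        for emis, p_e in p_emis[share].items():
            dist[emis] = dist.get(emis, 0.0) + p_s * p_e
    return dist

for scenario in ("baseline", "pro_ev"):
    print(scenario, emissions_distribution(scenario))
```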
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighbourhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments, which makes it difficult to calculate such data numerically along similar lines. A quasilinear elliptic system of equations is presented which we expect can be used to construct vacuum initial data which are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions; it is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when one tries to improve this result to show the existence of solutions that are asymptotically static at higher order; the problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions on asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed, in a way that opens the possibility to perform numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition analogous results are obtained; it turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications. The question whether the results obtained so far can be used again to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity at any given order thus remains open. The answer requires further analysis and perhaps new methods.
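For reference, the model problem mentioned here is the classical Helmholtz decomposition of a vector field, schematically:

```latex
v = \nabla \phi + \nabla \times A, \qquad \Delta \phi = \nabla \cdot v
```

The difficulties alluded to in the text concern the decay rates that may be prescribed for the potentials at infinity: roughly, solving the Poisson equation for the scalar potential under fast fall-off requirements is where solvability issues analogous to the surjectivity problem above appear.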
Most of the microelectronic circuits fabricated today are synchronous, i.e. they are driven by one or several clock signals. Synchronous circuit design faces several fundamental challenges such as high-speed clock distribution, integration of multiple cores operating at different clock rates, reduction of power consumption, and dealing with voltage, temperature, manufacturing and runtime variations. Asynchronous or clockless design plays a key role in alleviating these challenges; however, the design and test of asynchronous circuits is much more difficult in comparison to their synchronous counterparts. A driving force for a widespread use of asynchronous technology is the availability of mature EDA (Electronic Design Automation) tools which provide an entire automated design flow, starting from an HDL (Hardware Description Language) specification and yielding the final circuit layout. Even though there has been much progress in developing such EDA tools for asynchronous circuit design during the last two decades, their maturity level as well as their acceptance is still not comparable with tools for synchronous circuit design. In particular, logic synthesis (which implies the application of Boolean minimisation techniques) for the entire control path of a system can significantly improve the efficiency of the resulting asynchronous implementation, e.g. in terms of chip area and performance. However, logic synthesis, in particular for asynchronous circuits, suffers from complexity problems. Signal Transition Graphs (STGs) are labelled Petri nets which are widely used to specify the interface behaviour of speed-independent (SI) circuits - a robust subclass of asynchronous circuits. STG decomposition is a promising approach to tackle complexity problems like state space explosion in logic synthesis of SI circuits. The (structural) decomposition of STGs is guided by a partition of the output signals and generates a usually much smaller component STG for each partition member, i.e. a component STG with a much smaller state space than the initial specification. However, decomposition can result in component STGs that in isolation have so-called irreducible CSC conflicts (i.e. these components are no longer SI-synthesisable) even if the specification has none. A new approach is presented to avoid such conflicts by introducing internal communication between the components. So far, STG decompositions have been guided by the finest output partitions, i.e. one output per component; however, this might not yield optimal circuit implementations. Efficient heuristics are presented to determine coarser partitions leading to improved circuits in terms of chip area. Correctness proofs are given for the new algorithms, and their implementations are incorporated into the decomposition tool DESIJ. The presented techniques are successfully applied to several benchmarks - including 'real-life' specifications arising in the context of control resynthesis - and delivered promising results.
Business Process Management (BPM) emerged as a means to control, analyse, and optimise business operations. Conceptual models are of central importance for BPM. Most prominently, process models define the behaviour that is performed to achieve a business value. In essence, a process model is a mapping of properties of the original business process to the model, created for a purpose. Different modelling purposes, therefore, result in different models of a business process. Against this background, the misalignment of process models often observed in the field of BPM is no surprise. Even if the same business scenario is considered, models created for strategic decision making differ significantly in content from models created for process automation. Despite their differences, process models that refer to the same business process should be consistent, i.e., free of contradictions. Apparently, there is a trade-off between the strictness of a notion of consistency and the appropriateness of process models serving different purposes. Existing work on consistency analysis builds upon behaviour equivalences and hierarchical refinements between process models. Hence, these approaches are computationally hard and do not offer the flexibility to gradually relax consistency requirements towards a certain setting. This thesis presents a framework for the analysis of behaviour consistency that takes a fundamentally different approach. As a first step, an alignment between corresponding elements of related process models is constructed. Then, the thesis conducts behavioural analysis grounded in a relational abstraction of the behaviour of a process model, its behavioural profile. Different variants of these profiles are proposed, along with efficient computation techniques for a broad class of process models. Using behavioural profiles, the consistency of an alignment between process models is judged by different notions and measures. The consistency measures are also adjusted to assess the conformance of process logs that capture the observed execution of a process. Further, this thesis proposes various complementary techniques to support consistency management. It elaborates on how to implement consistent change propagation between process models, addresses the exploration of behavioural commonalities and differences, and proposes a model synthesis for behavioural profiles.
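As a rough illustration of the relational abstraction involved - a sketch derived from recorded traces, not the thesis's definition over process models - a behavioural profile classifies each pair of activities as being in strict order, exclusive, or interleaving:

```python
from itertools import combinations

def behavioural_profile(traces):
    """Classify activity pairs from a set of traces (activity sequences).

    Weak order: a ~> b iff a occurs before b in at least one trace.
    Strict order: a ~> b but not b ~> a; exclusive: neither; interleaving: both.
    """
    weak, activities = set(), set()
    for trace in traces:
        activities.update(trace)
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                weak.add((a, b))
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in weak, (b, a) in weak
        if ab and ba:
            profile[(a, b)] = "interleaving"
        elif ab:
            profile[(a, b)] = "strict order"
        elif ba:
            profile[(a, b)] = "reverse strict order"
        else:
            profile[(a, b)] = "exclusive"
    return profile

# Hypothetical log: two traces of a claim-handling process.
log = [["receive", "check", "approve"], ["receive", "check", "reject"]]
print(behavioural_profile(log))
```

Consistency of an alignment can then be judged, for instance, by the share of aligned activity pairs whose profile relations agree across the two models.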
Taxation of business profits in the light of the concept of a consumption-oriented income tax
(2011)
The dissertation addresses the fiscal consequences of consumption-oriented taxes that tax business profits irrespective of legal form. In the empirical part, the object of investigation is narrowed down to the allowance for corporate equity (ACE), a profit tax with a deduction for a notional return on equity. The investigation rests on theoretical considerations and on the author's own simulation analysis. The focus is on two causally linked categories: the design of the tax base on the one hand and the fulfilment of the fiscal function on the other. The main goal of the work is to examine the fiscal consequences of a profit-tax base modified according to the concept of consumption orientation. The fiscal consequences are assessed on the basis of the following four areas: (1) theoretical concepts of consumption-oriented income taxation, (2) previous implementations of concepts of consumption-oriented profit taxation, (3) previous studies of consumption-oriented profit taxation, and (4) the author's own simulation of the fiscal consequences of a consumption-oriented profit tax. To reach the main goal, eight research problems formulated as sub-questions are solved. They concern the theoretical exposition as well as the empirical investigation, and they correspond to the individual steps of the investigation carried out in the successive chapters of the work. Based on an analysis of the existing scientific findings and the practical implementations of the concept of consumption-oriented taxes, the following main hypothesis was put forward: the loss of tax revenue, which is a direct effect of designing the tax base according to the concept of consumption orientation, does not preclude the fiscal function of profit taxes. The procedure for verifying the main hypothesis examines three sub-hypotheses: the hypothesis of the zero tax, the hypothesis of the differentiated revenue loss, and the hypothesis of the concentration of tax liability. The dissertation uses empirical data from three sources. They cover a subset of the enterprises active in Poland in the years 2004-2008 and make it possible to carry out a simulation analysis of the revenue loss. The analysis employs the methodology of micro- and group simulation, an approach rather rarely encountered in previous research on business taxation. The results show that revenues from the personal and corporate income taxes are reduced considerably by the modification of the tax base. The relatively large fiscal importance of the two taxes is nevertheless preserved, and the revenue loss that would occur directly after the introduction of a consumption-oriented tax reform would be the "price" of a better, less distorting tax base. The dissertation delivers results relevant to the design of tax policy in Poland as well as in other countries, which seems particularly significant in the context of the frequently discussed restructuring of the system of income and profit taxation. Moreover, the work provides a starting point for further, in-depth studies of possible designs of income and profit taxes and of their consequences.
The method of tax simulation can be developed further and applied in other analyses of the potential consequences of tax reforms.
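To make the mechanics of the simulated base modification concrete: under an allowance for corporate equity, a notional return on equity is deducted from the profit-tax base. The micro-simulation sketch below uses invented firm data and rates; it only illustrates the kind of revenue-loss computation performed, not the actual data or parameters of the study.

```python
# Toy micro-simulation of the revenue loss from an allowance for corporate
# equity (ACE). Firm data, tax rate and notional rate are placeholders.
firms = [  # (profit, equity), in thousands
    (500.0, 2000.0),
    (120.0, 300.0),
    (80.0, 1500.0),   # allowance exceeds profit: ACE base drops to zero
]
TAX_RATE = 0.19       # hypothetical flat profit-tax rate
NOTIONAL_RATE = 0.05  # hypothetical notional return on equity

def revenue(base_fn):
    """Total tax revenue for a given definition of the tax base."""
    return sum(TAX_RATE * base_fn(profit, equity) for profit, equity in firms)

conventional = revenue(lambda p, e: max(p, 0.0))
ace = revenue(lambda p, e: max(p - NOTIONAL_RATE * e, 0.0))
print(f"conventional base: {conventional:.1f}")
print(f"ACE base:          {ace:.1f}")
print(f"revenue loss:      {conventional - ace:.1f}")
```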
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a respective model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge within an ever-changing business environment: once a process is changed, all its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction emerged as an operation reducing the number of stored process models and facilitating model management. Business process model abstraction is an operation preserving essential process properties and leaving out insignificant details in order to retain information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and on model transformations supporting these use cases. This thesis systematically approaches the problem of business process model abstraction, shaping the outcome into a framework. We investigate the current industry demand for abstraction, summarizing it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case where the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of the process model structure. Further, abstraction methods considering the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models - a barely researched challenge. The designed abstraction methods are validated against sets of industrial process models, and implementation aspects of the methods are discussed. Second, we develop a novel model transformation which, combined with the related activity discovery, allows flexible non-hierarchical abstraction. In this way the thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
The Casimir-Polder interaction between a single neutral atom and a nearby surface, arising from the (quantum and thermal) fluctuations of the electromagnetic field, is a cornerstone of cavity quantum electrodynamics (cQED) and theoretically well established. Recently, Bose-Einstein condensates (BECs) of ultracold atoms have been used to test the predictions of cQED. The purpose of the present thesis is to upgrade single-atom cQED with the many-body theory needed to describe trapped atomic BECs. Tools and methods are developed in a second-quantized picture that treats atom and photon fields on the same footing. We formulate a diagrammatic expansion using correlation functions for both the electromagnetic field and the atomic system. The formalism is applied to investigate, for BECs trapped near surfaces, dispersion interactions of the van der Waals-Casimir-Polder type and the bosonic stimulation of the spontaneous decay of excited atomic states. We also discuss a phononic Casimir effect, which arises from the quantum fluctuations in an interacting BEC.
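For orientation, the standard single-atom asymptotics that the many-body treatment generalizes read, for a ground-state atom at distance z from a surface (a textbook summary, not a result of the thesis):

```latex
U(z) \simeq -\frac{C_3}{z^3} \quad \text{(nonretarded van der Waals regime)},
\qquad
U(z) \simeq -\frac{C_4}{z^4} \quad \text{(retarded Casimir-Polder regime)}
```

with the coefficients C_3 and C_4 set by the atomic polarizability and the surface response.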
Non-mycorrhizal fungal endophytes are able to internally colonize roots without causing visible disease symptoms, establishing neutral or mutualistic associations with plants. These fungi, known as non-clavicipitaceous endophytes, have a broad host range of monocot and eudicot plants and are highly diverse. Some of them promote plant growth and confer increased abiotic-stress tolerance and disease resistance. In view of such possible effects on host plants, the aim was to isolate and characterize native fungal root endophytes from tomato (Lycopersicon esculentum Mill.) and to analyze their effects on plant development, plant resistance, and fruit yield and quality, together with the model endophyte Piriformospora indica. Fifty-one new fungal strains were isolated from disinfected tomato roots from four different crop sites in Colombia. These isolates were roughly characterized, and fourteen potential endophytes were further analyzed with respect to their taxonomy, their root colonization capacity and their impact on plant growth. Sequencing of the ITS region of the ribosomal RNA gene cluster and in-depth morphological characterisation revealed that they correspond to different phylogenetic groups within the phylum Ascomycota. Nine different morphotypes were described, including six dark septate endophytes (DSE) that did not correspond to the Phialocephala group. Detailed confocal microscopy analysis showed various colonization patterns of the endophytes inside the roots, ranging from epidermal penetration to hyphal growth through the cortex. Tomato pot experiments under glasshouse conditions showed that they affect plant growth differentially, depending on colonization time and inoculum concentration. Three new isolates (two unknown fungal endophytes, DSE48 and DSE49, and one identified as Leptodontidium orchidicola) with neutral or positive effects were selected and tested in several experiments for their influence on vegetative growth, fruit yield and quality, and their ability to diminish the impact of the pathogen Verticillium dahliae on tomato plants. Although plant growth promotion by all three fungi was observed in young plants, vegetative growth parameters were not affected after 22 weeks of cultivation, except for a reproducible increase in root diameter by the endophyte DSE49. Additionally, L. orchidicola increased the biomass and glucose content of tomato fruits, but only at an early date of harvest and at a certain level of root colonization. Concerning bioprotective effects, the endophytes DSE49 and L. orchidicola significantly decreased disease symptoms caused by the pathogen V. dahliae, but only at a low dose of the pathogen. In order to analyze whether the model root endophytic fungus Piriformospora indica could be suitable for application in production systems, its impact on tomato was evaluated. Similarly to the new fungal isolates, significant differences in vegetative growth parameters were only observable in young plants, but protection against V. dahliae could be seen in one experiment even at a high dose of the pathogen. Like the DSE L. orchidicola, P. indica increased the number and biomass of marketable tomatoes only at the beginning of fruit setting, but this did not lead to a significantly higher total yield. Whether the effects on growth are due to a better mineral nutrition of the plant was analyzed in barley in comparison to the arbuscular mycorrhizal fungus Glomus mosseae.
While the mycorrhizal fungus increased nitrogen and phosphate uptake of the plant, no such effect was observed for P. indica. In summary, this work shows that many different fungal endophytes can also be isolated from roots of crops and that these isolates can have positive effects on early plant development. This does, however, not lead to an increase in total yield or to an improvement of fruit quality of tomatoes under greenhouse conditions.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from the RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, in order to provide constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe spectroscopically ~10^6 stars by the end of 2012 and to measure their radial velocities, atmospheric parameters and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multiobject spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimations depends on the reliability of the atomic and atmospheric parameters adopted (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. We then improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on; the modifications removed some systematic errors in stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines, both of which measure chemical abundances assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first determines element abundances from equivalent widths of absorption lines; since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second exploits chi^2 minimization between observed and model spectra; thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue. This pipeline provides abundances with uncertainties of about 0.2 dex for spectra with signal-to-noise ratio S/N > 40 and about 0.3 dex for spectra with 20 < S/N < 40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~ -0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that an efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disk. Our analysis shows that the two explanations are not mutually exclusive.
Future follow-up high-resolution spectroscopic observations will clarify their role in the evolution of the Galactic disk.
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, these molecules act on a length scale for which thermal collisions have a considerable impact on their motion; in this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description of the energy conversion by the molecular motor myosin V, using many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single-molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process to describe the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and of treating the mechanical and chemical processes on equal grounds. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles. In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions, which reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description of the myosin V motor as studied in single-motor experiments.
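To illustrate the modelling style - a Markov jump process over chemomechanical states - here is a minimal sketch of a single effective stepping cycle with Bell-type force-dependent rates, simulated with the Gillespie algorithm. All rates, the load-distribution factor, and the loads are illustrative placeholders, not the fitted parameters of the thesis.

```python
import math
import random

KT = 4.1       # thermal energy at room temperature [pN nm]
STEP = 36.0    # myosin V step size [nm]
W_F0, W_B0 = 10.0, 0.1  # zero-load forward/backward rates [1/s] (placeholders)
THETA = 0.5             # load-distribution factor (placeholder)

def rates(force):
    """Bell-type force dependence of the stepping rates (illustrative)."""
    w_f = W_F0 * math.exp(-THETA * force * STEP / KT)
    w_b = W_B0 * math.exp((1.0 - THETA) * force * STEP / KT)
    return w_f, w_b

def simulate(force, t_max=10.0, seed=1):
    """Gillespie simulation of the stepping; returns the mean velocity [nm/s]."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    while t < t_max:
        w_f, w_b = rates(force)
        total = w_f + w_b
        t += rng.expovariate(total)  # exponential waiting time to the next step
        x += STEP if rng.random() < w_f / total else -STEP
    return x / t

for f in (0.0, 0.5, 1.0, 2.0):  # external load [pN]
    print(f"F = {f:.1f} pN: v = {simulate(f):.0f} nm/s")
```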
The complete consumption of the oceanic domain of a tectonic plate by subduction into the upper mantle results in continent subduction, although continental crust is typically of lower density than the upper mantle. Thus, the sites of former oceanic domains (named suture zones) are generally decorated with stratigraphic sequences deposited along continental passive margins that were metamorphosed under low-grade, high-pressure conditions, i.e., low temperature/depth ratios (< 15 °C/km) with respect to geothermal gradients in tectonically stable regions. Throughout the Mesozoic and Cenozoic (i.e., since ca. 250 Ma), the Mediterranean realm was shaped by the closure of the Tethyan Ocean, which likely consisted of numerous oceanic domains and microcontinents. However, the exact number and position of Tethyan oceans and continents (i.e., the Tethyan palaeogeography) remain debated. This is particularly the case for Western and Central Anatolia, where a continental fragment was accreted to the southern composite margin of Eurasia sometime between the Late Cretaceous and the early Cenozoic. The most frontal part of this microcontinent experienced subduction-related metamorphism around 85-80 Ma, and collision-related metamorphism affected more external parts around 35 Ma. This unusually long period between subduction- and collision-related metamorphism (ca. 50 Ma) in units ascribed to the same continental edge constitutes a crucial issue to address in order to unravel how Anatolia was assembled. The Afyon Zone is a tectono-sedimentary unit exposed south of, and structurally below, the frontal high-pressure belt. It is composed of a Mesozoic sedimentary sequence deposited on top of a Precambrian to Palaeozoic continental substratum, which can be traced from Northwestern to southern Central Anatolia along a possible Tethyan suture. Whereas the Afyon Zone was originally defined as a low-pressure metamorphic unit, high-pressure minerals (mainly Fe-Mg-carpholite in metasediments) were recently reported from its central part. These findings shattered previous conceptions of the tectono-metamorphic evolution of the Afyon Zone in particular, and of the entire region in general, and shed light on the necessity to revise the regional extent of subduction-related metamorphism by re-inspecting the petrology of poorly studied metasediments. For this purpose, I re-evaluated the metamorphic evolution of the entire Afyon Zone starting from field observations. Low-grade, high-pressure mineral assemblages (Fe-Mg-carpholite and glaucophane) are reported throughout the unit. Well-preserved carpholite-chloritoid assemblages are useful for improving our understanding of mineral relations and transitions in the FeO-MgO-Al2O3-SiO2-H2O system during the rocks' travel down to depth (prograde metamorphism). Inspection of petrographic textures, minute variations in mineral composition, and Mg-Fe distribution among carpholite-chloritoid assemblages documents multistage mineral growth, accompanied by a progressive enrichment in Mg and strong element partitioning. Using an updated database of mineral thermodynamic properties, I modelled the pressure and temperature conditions that are consistent with the textural and chemical observations. Carpholite-bearing assemblages in the Afyon Zone record a temperature increase from 280 to 380 °C between 0.9 and 1.1 GPa (equivalent to a depth of 30-35 km).
In order to further constrain regional geodynamics, first radiometric ages were determined in close association with pressure-temperature estimates for the Afyon Zone, as well as for two other tectono-sedimentary units from the same continental passive margin (the Ören and Kurudere-Nebiler Units from SW Anatolia). For age determination, I employed 40Ar-39Ar geochronology on white mica in carpholite-bearing rocks. For thermobarometry, a multi-equilibrium approach was used, based on quartz-chlorite-mica and quartz-chlorite-chloritoid associations formed at the expense of carpholite-bearing assemblages, i.e., during the exhumation from the subduction zone. This combination allows deciphering the significance of the calculated radiometric ages in terms of metamorphic conditions. The results show that the Afyon Zone and the Ören Unit represent a latest Cretaceous high-pressure metamorphic belt, whereas the Kurudere-Nebiler Unit was affected by subduction-related metamorphism around 45 Ma and cooled down after collision-related metamorphism around 26 Ma. The results provided in the present thesis, together with those from the literature, allow a better understanding of continental amalgamation in Western Anatolia. It is shown that at least two distinct oceanic branches, whereas only one was previously considered, closed during continuous north-dipping subduction between 92 and 45 Ma. Between 85-80 and 70-65 Ma, a narrow continental domain (including the Afyon Zone) was buried in a subduction zone within the northern oceanic strand. Parts of the subducted continental crust were exhumed while the upper oceanic plate was transported southwards. Subduction of the underlying lithosphere persisted, leading to the closure of the southern oceanic branch and to the subduction of the front of a second continental domain (including the Kurudere-Nebiler Unit). This was followed by a continental collision stage characterized by the cessation of subduction, crustal thickening, and the detachment of the subducting oceanic slab from the accreted continental lithosphere. The present study supports the view that in the late Mesozoic the East Mediterranean realm had a complex tectonic configuration similar to present-day Southeast Asia or the Caribbean, with multiple coexisting oceanic basins, microcontinents and subduction zones.
Complete protection against flood risks by structural measures is impossible; therefore, flood prediction is important for flood risk management. Good explanatory power of flood models requires a meaningful representation of bio-physical processes, so there is great interest in improving the process representation. Progress in hydrological process understanding is achieved through a learning cycle that starts with a critical assessment of an existing model for a given catchment. The assessment highlights deficiencies of the model, from which useful additional data requirements are derived, giving a guideline for new measurements. These new measurements may in turn lead to improved process concepts, which are finally summarized in an updated hydrological model. In this thesis I demonstrate such a learning cycle, focusing on the advancement of model evaluation methods and on more cost-effective measurements. For a successful model evaluation, I propose that three questions should be answered: 1) When does a model reproduce observations in a satisfactory way? 2) If model results deviate, of what nature is the difference? 3) Which model components most likely cause these differences? To answer the first two questions, I developed a new method to assess the temporal dynamics of model performance (TIGER: TIme series of Grouped Errors). This method is powerful in highlighting recurrent patterns of insufficient model behaviour over long simulation periods. I answered the third question with an analysis of the temporal dynamics of parameter sensitivity (TEDPAS). Calculating TEDPAS requires an efficient method for sensitivity analysis; I used the Fourier Amplitude Sensitivity Test, which has an efficient sampling scheme. Together, TIGER and TEDPAS provide a powerful tool for model assessment. Applying WaSiM-ETH to the Weisseritz catchment as a case study, I found insufficient process descriptions for the snow dynamics and for the recession during dry periods in late summer and fall. Focusing on snow dynamics, reasons for poor model performance can be a poor representation of snow processes in the model, poor data on snow cover, or both. To obtain an improved data set on snow cover, time series of snow height and temperature were collected with a cost-efficient method based on temperature measurements at multiple levels at each location. An algorithm was developed to simultaneously estimate snow height and cold content from these measurements; both are relevant quantities for spring flood forecasting. Spatial variability was observed at the local and the catchment scale with an adjusted sampling design. At the local scale, samples were collected on two perpendicular transects of 60 m length and analysed with geostatistical methods. The range determined from fitted theoretical variograms was within the extent of the sampling design for 80% of the plots. No patterns were found that would explain the random variability and spatial correlation at the local scale. At the watershed scale, the locations of the extensive field campaign were selected according to a stratified sampling design to capture the combined effects of elevation, aspect and land use. Snow height is mainly affected by plot elevation; the expected influence of aspect and land use was not observed.
To better understand the deficiencies of the snow module in WaSiM-ETH, a simple degree-day model implementing the same approach was checked for its capability to reproduce the data. The degree-day model was capable of explaining the temporal variability for plots with a continuous snow pack over the entire snow season, if parameters were estimated for single plots. However, the processes described in the simple model are not sufficient to represent multiple accumulation-melt cycles, as observed in the lower catchment. Thus, the combined spatio-temporal variability at the watershed scale is not captured by the model. Further tests of improved concepts for the representation of snow dynamics at the Weisseritz are required. From the data I suggest including at least rain on snow and redistribution by wind as additional processes to better describe the spatio-temporal variability. Alternatively, an energy-balance snow model could be tested. Overall, the proposed learning cycle is a useful framework for targeted model improvement. The advanced model diagnostics are valuable for identifying model deficiencies and guiding field measurements. The additional data collected throughout this work help to deepen the understanding of the processes in the Weisseritz catchment.
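A degree-day (temperature-index) snow model of the kind tested here fits in a few lines. The sketch below uses hypothetical parameter values and made-up forcing; it only illustrates the accumulation-melt logic discussed above. Note that rain falling on an existing snow pack is simply discarded here - one of the very limitations the text proposes to address.

```python
def degree_day_snow(precip, temp, ddf=3.0, t_crit=0.0):
    """Simple degree-day snow model.

    precip: precipitation per time step [mm], temp: air temperature [deg C],
    ddf: degree-day factor [mm / (deg C * day)], t_crit: snow/rain and melt
    threshold [deg C]. Returns the snow water equivalent series [mm].
    """
    swe, out = 0.0, []
    for p, t in zip(precip, temp):
        if t <= t_crit:
            swe += p                                  # accumulate as snow
        else:
            swe = max(swe - ddf * (t - t_crit), 0.0)  # melt ~ excess temperature
        out.append(swe)
    return out

# Made-up daily forcing: a cold, snowy week followed by a warm spell.
precip = [5, 10, 0, 8, 0, 0, 0, 2, 0, 0]
temp = [-3, -5, -1, -2, 1, 3, 5, 2, 6, 8]
print(degree_day_snow(precip, temp))
```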
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous, altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Attention was turned to the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of the years 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months following the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapour profiles within the lower troposphere. Combining KARL data with data from other instruments on site, namely radiosondes, a sun photometer, a Micro Pulse LIDAR, and a tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere. The two spring periods, March and April 2007 and 2009, were first analyzed on the basis of meteorological parameters, such as local temperature and relative humidity profiles as well as large-scale pressure patterns and air mass origin regions. Here it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic, suggesting the occurrence of aerosol transformation processes during the transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origins: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles; the scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 offered the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption.
In both years, the stratospheric AOD remained higher than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct, thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude. Three case studies were performed, all of which revealed rather large refractive indices of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but also on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data from other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
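For orientation, the AOD values quoted above correspond, in essence, to the vertical integral of the aerosol extinction coefficient over the profile. Below is a minimal sketch of that relation, assuming an extinction profile has already been retrieved from the LIDAR signal; the example profile is purely illustrative.

```python
import numpy as np

def aerosol_optical_depth(z, alpha):
    """Trapezoidal integration of an extinction profile alpha(z) [1/m]
    over altitude z [m]. The profile is assumed to be already retrieved
    from the LIDAR signal; the example data below are illustrative."""
    return float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z)))

z = np.linspace(450.0, 12_000.0, 200)          # altitude grid [m]
alpha = 1e-5 * np.exp(-(z - 450.0) / 2000.0)   # decaying toy profile [1/m]
print(f"AOD = {aerosol_optical_depth(z, alpha):.3f}")
```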
The WHO defines health as "not merely the absence of disease but, framed positively as a natural right, a state of complete physical, mental and social well-being". On this basis, the WHO, together with the Ministry of Health in Syria, has run the "Healthy Villages" programme since 1996. It aims to improve the economic, social and health conditions of the rural population and, in particular, to reduce the large disparity between urban and rural areas. The project undertakes to analyse the influence of the programme on economic and health parameters in comparison with control villages, drawing on survey data collected in Syria. The evaluation of the survey in the present study showed that the programme achieved its goals with regard to improving health and quality of life. The "Healthy Villages" programme was also successful, with the exception of the northern and eastern regions, in reducing the number of working children, in encouraging women to take up employment or complete a degree, and in lowering the illiteracy rate.
When glutathione peroxidase 2 (GPx2) was discovered, it was initially assumed that the function of this enzyme at the crypt base of the colon consisted solely in the reduction of H2O2. Subsequent research showed that GPx2 is also upregulated in various tumour tissues, and it is debated whether its effect in tumours should be classified as pro- or anticarcinogenic. Several in vitro and in vivo experiments demonstrated anti-inflammatory properties of GPx2; on the basis of these findings, additional functions of GPx2 are currently being discussed. In this work, the physiological function of GPx2 was examined more closely by comparing wild-type and GPx2 knockout mice with respect to changes in enzyme expression and colon morphology. Three selenium diets were fed: selenium-deficient, selenium-adequate and selenium-supplemented. Under physiological conditions, the mitotic rate is highest at the crypt base of the colon, within the proliferative zone, whereas the majority of apoptotic cells are found at the crypt tip. Knockout of GPx2 led to a significant increase in the apoptosis rate at the crypt base, with the largest effect on the selenium-deficient diet. In this case a change in colon morphology was even documented, since the shift of the proliferative zone towards the crypt tip resulted in elongated crypts. In the wild type, no apoptoses were detected at the crypt base. In contrast to GPx2, GPx1 is expressed at the crypt tip under physiological conditions and is no longer detectable under selenium deficiency. Knockout of GPx2 increased GPx1 expression at the crypt base on all three selenium diets. This overexpression of GPx1 at the crypt base presumably serves to compensate for the loss of GPx2 at this site. Since massive apoptosis was nevertheless detected there, GPx1 cannot fully compensate for the function of GPx2. These results indicate that the function of GPx2 is not limited to the reduction of H2O2; rather, a role in maintaining cellular homeostasis can be postulated. A further part of this work addressed the question of how GPx2 influences inflammation-associated colon carcinogenesis. In the AOM/DSS model used for this purpose, the carcinogenic process is driven by inflammation. In both wild-type and GPx2 knockout mice, the inflammatory status of the colon was assessed and the numbers of ACF and tumours were compared. The colon of GPx2 knockout mice was considerably more inflamed than that of the wild type. These results confirm the anti-inflammatory function postulated for GPx2. Normally, an increase in the number of mitoses leads to regeneration of the inflamed tissue. The loss of GPx2, however, presumably interferes with the course of inflammation, for example by slowing tissue regeneration through the extremely high apoptosis rate at the crypt base. Furthermore, GPx2 knockout mice tended to develop more tumours; colon inflammation thus correlated with tumour development. The loss of GPx2 presumably favoured both tumour initiation and tumour progression. However, the expression of GPx2 also stimulated tumour growth.
It can be concluded that adequate GPx2 expression protects against inflammation and thus lowers the risk of colon cancer. Whether GPx2 acts pro- or anticarcinogenically overall, however, presumably depends on the stage of colon carcinogenesis.
This book compares the strategies of biological systems with military strategies. The central question is whether, beyond systemic commonalities, there are also shared or similar structural patterns and similar process sequences, for example both in the biological defence mechanism of the immune system and in insect colonies as well as in military processes. Against this background, there are gaps in the theory of strategy, especially in military science, because the systems approach is not applied consistently, as this book demonstrates repeatedly. A general understanding of strategy as deliberate, planned action must be abandoned. Starting from the methods of analogy and comparison, the theoretical part of the book explains general systems theory. The concept of strategy is examined, as are the concepts of structure and process and approaches from Clausewitz's philosophy of war. The starting point, and ultimately also the end point, of these considerations is, besides the necessary broad understanding of strategy, above all the concept of the organization, its environment and the interactions between them. Both the interaction of environment and system and their interdependence through structural coupling are described. The interplay, and the resulting complexity, of the five components of perception, information and command together with space and time in a social system make the classical ends-means-purpose relation of Clausewitz's definition of strategy appear too narrow. By way of a brief review of the methods of social network analysis (SNA), the broad and deep analytical framework for measurement and for achieving transparency in organizations is presented. SNA, as a branch of network and graph theory, is integrated into general systems theory. It constitutes a forward-looking method for studying networks such as the Internet (Facebook, Xing, etc.). The theoretical framework presented serves at the same time as a method for system comparison and can be used as a procedural model for future strategy development. The subsequent system comparison is carried out with several examples. Starting from the cell as the basic unit, structures and processes of the immune system are compared with those of military structures, because over the course of evolution they have achieved enormous feats of reaction, adaptation and optimization. The comparison pursues the question of whether systemic ground rules exist in these areas of strategy and organization. The example of the interaction between parasite and host shows that, given the embedding of strategy within systems, every advance and every victory can only be relative. The analogy between viruses and bacteria, and the development of the concept of social mimicry, lead to an extended understanding of the strategy of terrorists in social systems. The basic pattern of deceiving and penetrating systems is illustrated, as is the influencing and redirection of processes and structures within a system through communication and the implantation of codes.
Using the example of the immune system and the formation of various communication and control mechanisms of cell systems, as well as examples of swarm formation and the organization of social insects, a multitude of heuristic pointers are found for new approaches to the organization of armed forces and their command and control. In addition to developing a fundamental concept of strategy based on perception and selection as the basic process by which strategy is generated, a differentiated view of concepts such as redundancy and robustness is gained, together with a relativizing perspective on risk, danger and damage. The comparison with the immune system reveals simple examples of information storage and transmission, which also illustrate bypass capabilities as well as decentralized principles of escalation and de-escalation. By analogy, these principles open up wide scope for rethinking and restructuring security architectures. Moreover, the spatial distribution of information and forces can be identified as a common fundamental problem in the development and effectiveness of strategies, in nature as well as in the military. The study also shows how cells deal with misdirected processes and structures. The analogy points to the need for a change in how errors are handled and in their traceability and reversibility in the broadest sense. The book furthermore opens up a new understanding of the state, the separation of powers and institutions within a social system. The results are also transferable to other fields of research, to organizations and to the most diverse social systems, opening up a broad spectrum of applications for future strategic studies.
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its center. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, such as its stellar velocity dispersion, luminosity or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improve our knowledge of the census of SMBHs and of the coevolution of black holes and galaxies. The first route I follow is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars, which are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, the space density of AGN as a function of luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad-band B_J magnitudes and AGN broad Halpha emission line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshift and the space density of less luminous AGN peaks at lower redshift. However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate of the black hole, and of the respective distribution functions, the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique and fully accounting for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower-mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects. These can potentially bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g.
an AGN luminosity bias, an active fraction bias and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for this bias. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects. Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques to determine the black hole mass in quiescent galaxies is stellar dynamical modeling. This method employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. It can reveal the presence of the black hole and give its mass if the sphere of the black hole's gravitational influence is spatially resolved. However, the presence of a dark matter halo is usually ignored in the dynamical modeling, potentially biasing the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, including a dark matter halo. For galaxies whose black hole's sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well resolved sphere of influence.
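The maximum likelihood idea referred to in this abstract can be illustrated schematically: each observed object contributes the probability of its properties under the intrinsic distribution function, normalized by the selection-weighted integral of that distribution. The toy Python sketch below fits a Gaussian distribution of log black hole masses to a simulated sample observed through a soft flux-limit-like selection; all distributions, parameters and the selection function are illustrative assumptions, not those of the HES analysis.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def selection(m, m_lim=8.0, width=0.3):
    """Toy selection probability for an object of log-mass m (illustrative)."""
    return 1.0 / (1.0 + np.exp(-(m - m_lim) / width))

def phi(m, mu, sigma):
    """Intrinsic distribution: Gaussian in log black hole mass (toy choice)."""
    return np.exp(-0.5 * ((m - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def neg_log_likelihood(params, m_obs):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Normalization: selection-weighted integral of the intrinsic distribution
    norm, _ = quad(lambda m: phi(m, mu, sigma) * selection(m), 4.0, 12.0)
    return -np.sum(np.log(phi(m_obs, mu, sigma) * selection(m_obs) / norm))

# Simulate an intrinsic population and push it through the selection
rng = np.random.default_rng(1)
m_true = rng.normal(7.5, 0.8, 20_000)
m_obs = m_true[rng.random(m_true.size) < selection(m_true)]

res = minimize(neg_log_likelihood, x0=[8.0, 0.5], args=(m_obs,),
               method="Nelder-Mead")
print(res.x)  # should approximately recover the intrinsic (7.5, 0.8)
```

Without the selection term in the likelihood, the fit would be pulled towards the biased observed sample; including it is what makes the estimate of the intrinsic distribution unbiased.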
In a prospective longitudinal study, the career entry of physicians (N = 185) was examined as a normative critical life event. They were surveyed a total of three times at six-month intervals during the first year after completing their studies (T1: within the first two weeks after the state examination, T2: shortly after starting work, T3: on average 9.5 months after starting work). The results first showed that recently licensed junior physicians who felt comparatively poorly prepared for the profession by their studies evaluated their upcoming career entry more negatively and already experienced more strain beforehand. The appraisal of career entry mediated the relationship between poor preparation and strain, while work-specific coping functionality buffered the relationship between poor preparation and the appraisal of career entry. The problem of perceived poor preparation became evident in the longitudinal analysis: it predicted higher strain at the second measurement point, i.e. after career entry. The analysis of the development of strain across the three measurement points revealed only few changes. A clear increase in mean depressiveness scores across career entry (T1-T2) did emerge; on other strain indicators, however, there was no direct effect of starting work, nor was there any adaptation of the junior physicians to their new situation in the sense of decreasing strain later on (T2-T3). In explaining interindividual differences in strain during the study period, the workload arising with career entry was, as expected, positively associated with strain at the second and third measurement points. The workload-strain relationship held only cross-sectionally, however; the longitudinal analysis showed no effect of T2 workload on T3 strain. Initial differences in psychological resources had a direct effect on strain at T2 and partly also moderated the relationship between workload and strain: higher resilience and perceived social support predicted lower strain after career entry. Junior physicians who reported a higher workload but had more functional coping behaviour in the work context were less strained shortly after career entry than highly burdened junior physicians with less functional coping. Decreases in psychological resources across career entry had a direct, inherently unfavourable effect on strain at the third measurement point. They also interacted with the workload present at that time in predicting strain: junior physicians with higher workloads whose coping functionality and perceived social support had declined from the first to the third measurement point were the most strained at the end of the study period.
With regard to the effects of career entry on the personality of the junior physicians, unfavourable changes were found: mean levels of psychological resources (resilience, perceived social support with respect to work) as well as of the Big Five factors decreased. Interindividual differences in these changes could be traced back to strain shortly after career entry (T2) and to its development in the following months (T2-T3): those junior physicians who had reacted to career entry with comparatively high strain, or whose strain increased subsequently, showed correspondingly unfavourable changes. Taken together, the results highlight the following problem: junior physicians who enter the profession with weaker personality-based protection react with greater strain and are then also the ones whose personality changes unfavourably during the first months of work. Junior physicians with low psychological resources are therefore not only particularly vulnerable to developing strain under demanding working conditions; their comparatively high strain in turn further reduces the protective and buffering potential of their personality. The result is an unfavourable accentuation of an already comparatively resource-poor personality, which increases vulnerability to future strain. From these results, a need for support of young physicians in the sensitive and formative career-entry phase can be derived. Besides improving their working conditions, sensitizing young physicians in good time to the workload-strain relationship, providing regular supervision and, above all, competence-oriented and resource-strengthening feedback from mentors and supervisors form the basis for junior physicians to stay healthy themselves and to experience medical work, despite its persistently high potential for strain, as fulfilling and satisfying.
After the collapse of the Soviet Union, the former member states started the transformation process. The transformation from a planned to a market economy includes not only the adaptation of the economy to the new market rules but also profound social and political change; such processes therefore present huge challenges to the affected societies. The transformational recession in Georgia was significantly deepened by the civil war and by the ethnic conflicts in Abkhazia and South Ossetia. During these conflicts the business and technical infrastructure was damaged and much of it completely destroyed; poverty and political instability predominated. Trade relations with the member countries of the Council for Mutual Economic Assistance (Comecon) were cut off. Moreover, the conflict in South Ossetia hampered the power supply from Russia, and the conflict in Abkhazia the production of and trade in tea and citrus fruits, which were major trade commodities at that time. At the beginning of the 1990s, the Georgian government, with the assistance of international organizations such as the International Monetary Fund and the World Bank, started to elaborate political and economic reforms. The reforms covered several aspects, such as the transfer of public assets into private hands through privatization, the liberalization of the domestic market and of trade, and the creation of market-oriented institutions. Because of shortcomings in implementation, neither economic nor political transformation was achieved. For instance, at the start of the market-oriented reforms, policy makers' awareness of the importance of entrepreneurship, in particular of small and medium-sized enterprises, for the economy was low. The absence of previous experience prevented the elaboration of appropriate policy instruments and methods for developing a competitive market economy, and the stimulation of the private sector was generally neglected. This had a severe effect on political, social and economic problems, which still hampers the development of a middle class in Georgia. The presented research indicates that productive entrepreneurship is a driving force of an economy. Entrepreneurial activities on the one hand facilitate resource allocation and on the other, through the development of new products and services, spur competition. Furthermore, they contribute to technological improvement through innovation, create jobs and thus boost the economic and social development of a particular region or country. However, it is important that the legal and institutional framework is set up appropriately. Unlike mature market economies, Georgia is not characterized by a well-developed sector of small and medium-sized businesses. Most existing SMEs operate in local markets and predominantly in the shadow economy. It is also noteworthy that small businesses in Georgia tend to be so-called "mom and pop" businesses rather than innovative, growth-oriented ones; they are mostly engaged in trade and craft. In addition to their poor performance, the business activity of SMEs is very centralized, with the vast majority operating in the capital, Tbilisi. The poor performance of small and medium-sized businesses in Georgia and their neglect by market forces are due, among other things, to the armed conflicts and state failure.
As at the beginning of the transformation process, the state still fails to this day to provide the necessary conditions, such as the rule of law, the protection of property rights and competition, and a transparent and uncorrupted public administration. The result is a weak middle class, which in turn has a negative impact on economic development and the democratization process in Georgia.
Does it have to be trees? : Data-driven dependency parsing with incomplete and noisy training data
(2011)
We present a novel approach to training data-driven dependency parsers on incomplete annotations. Our parsers are simple modifications of two well-known dependency parsers, the transition-based Malt parser and the graph-based MST parser. While previous work on parsing with incomplete data has typically couched the task in frameworks of unsupervised or semi-supervised machine learning, we essentially treat it as a supervised problem. In particular, we propose what we call agnostic parsers which hide all fragmentation in the training data from their supervised components. We present experimental results with training data that was obtained by means of annotation projection. Annotation projection is a resource-lean technique which allows us to transfer annotations from one language to another within a parallel corpus. However, the output tends to be noisy and incomplete due to cross-lingual non-parallelism and error-prone word alignments. This makes the projected annotations a suitable test bed for our fragment parsers. Our results show that (i) dependency parsers trained on large amounts of projected annotations achieve higher accuracy than the direct projections, and that (ii) our agnostic fragment parsers perform roughly on a par with the original parsers which are trained only on strictly filtered, complete trees. Finally, (iii) when our fragment parsers are trained on artificially fragmented but otherwise gold standard dependencies, the performance loss is moderate even with up to 50% of all edges removed.
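One simple way to realize such an "agnostic" treatment, sketched below under assumptions of our own (a plausible reading of the idea, not necessarily the implementation used in the thesis), is to attach every token left unattached by projection to an artificial ROOT node, so that the supervised parser always sees a fully connected tree:

```python
def agnostic_tree(heads):
    """Hide fragmentation from a supervised dependency parser.

    heads[i] is the 1-based head of token i+1, or None where annotation
    projection provided no head; unattached tokens become children of an
    artificial ROOT (index 0), yielding an ordinary training tree.
    """
    return [h if h is not None else 0 for h in heads]

# Example: a 6-token sentence where projection left tokens 3 and 5 unattached
partial = [2, 0, None, 2, None, 5]
print(agnostic_tree(partial))  # [2, 0, 0, 2, 0, 5]
```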
Dryland vulnerability : typical patterns and dynamics in support of vulnerability reduction efforts
(2011)
The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. Due to their worldwide coverage, the global results allow the evaluation of a specific system’s vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. 
The model resulted in a cyclic trajectory encompassing four states with differing degrees of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as “Avança Brasil”. Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.
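The cluster-based pattern recognition described here can be sketched as grouping regions by standardized indicator values. Below is a minimal illustration with scikit-learn, where the indicator names and the choice of seven clusters mirror the global study, but the data, preprocessing and algorithm details are assumptions for the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Illustrative indicator matrix: one row per (sub-national) region,
# columns roughly following the study's dimensions.
indicators = ["poverty", "water_stress", "soil_degradation",
              "agro_constraints", "isolation"]
X = rng.random((500, len(indicators)))  # placeholder data

# Standardize so no single indicator dominates the distance metric
X_std = StandardScaler().fit_transform(X)

# Seven clusters, as in the global vulnerability analysis
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X_std)

for k in range(7):
    members = X[labels == k]
    print(f"pattern {k}: n={len(members)}, "
          f"mean profile={members.mean(axis=0).round(2)}")
```

The mean indicator profile per cluster is what would then be interpreted as a "typical pattern" of vulnerability and validated against independent case studies.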
For several years, public administration has been using e-government application systems to support its administrative processes more intensively with modern information technology. Since public administration is bound to a particular degree by law and statute in its actions, the connection between laws and legal regulations on the one hand and the information technology used to support administrative tasks on the other is becoming stronger and more widespread. From a software engineering perspective, this connection is a special form of requirements traceability, so-called pre-requirements specification traceability (pre-RS traceability), since it concerns aspects that are relevant before the requirements have been incorporated into a specification (origins of requirements). The approach of this thesis contributes to pre-requirements specification traceability for e-government application systems. To this end, it combines current developments and standards (in particular of the World Wide Web Consortium and the Object Management Group) from the fields of requirements traceability, the Semantic Web, ontology languages and model-driven software development. The solution comprises a dedicated ontology of administrative action, which is used with the techniques, methods and tools of the Semantic Web to annotate relevant origins of requirements in the texts of legal regulations with a defined semantics. Building on this, the Ontology Definition Metamodel (ODM) is used to map the annotations, as special individuals of an ontology, onto elements of the Unified Modeling Language (UML). This yields a new model type, the Pre-Requirements Model (PRM), which formalizes the stage preceding the requirements specification. Models of this type can also be used to formalize aspects that do not, or do not fully, follow from the text of the legal regulation. Furthermore, the model offers a connection to model-driven software development. The thesis therefore proposes an extension of the Model Driven Architecture (MDA): in addition to the established model types Computation Independent Model (CIM), Platform Independent Model (PIM) and Platform Specific Model (PSM), the use of the PRM could bring advantages for traceability. If the MDA is extended to the pre-requirements stage with the PRM, the PRM can be transformed into a CIM as an initial requirements specification using the MOF Query/View/Transformation standard (QVT). As part of the QVT standard, the recording of traceability information during model transformations is mandatory. To bridge the semantic gap between PRM and CIM, special auxiliary models are used, analogous to the use of the platform model (PM) in the PIM-to-PSM transformation. For this purpose, the reference models developed in the project "E-LoGo" at the University of Potsdam are employed. By recording the mapping of annotated text elements onto elements of the PRM and the transformation of PRM elements into CIM elements, end-to-end traceability in the stage preceding the requirements specification can be achieved.
The approach is based on so-called traceability documentation in the form of linked hypertext documents, generated by means of XSL stylesheets and connected to the graphical representation of the diagram (e.g. UML use case or class diagrams). The approach comprehensively supports horizontal traceability between elements of different models, both forwards and backwards. It also offers vertical traceability, relating elements of the same model and of different model versions to each other. Beyond the obvious benefits of end-to-end pre-requirements specification traceability (e.g. analysing the impact of a change in legislation, or taking the full context of a requirement into account when prioritizing it), this thesis offers a first starting point for a feedback loop in the legislative process. If, for example, several equivalent design options for a law are under consideration, the effects of each option can be analysed and the effort of implementing it in e-government applications can be taken into account as a selection criterion. The amendment of the NKRG that came into force on 16 March 2011 already makes such an analysis of the so-called "compliance burden" (Erfüllungsaufwand) mandatory for parts of administrative action. For this analysis, the present work can offer an approach for arriving at well-founded statements about the effort of changing deployed e-government application systems.
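The bookkeeping behind such traceability documentation can be pictured as a store of directed links from annotated passages in legal texts to PRM elements and onwards to CIM elements, which can then be traversed for impact analysis. The following Python sketch is purely illustrative; the link kinds, element names and the cited paragraph are invented for the example and are not the thesis's tooling:

```python
from dataclasses import dataclass, field

@dataclass
class TraceLink:
    source: str   # e.g. an annotated passage in a legal text, or a PRM element
    target: str   # e.g. a PRM element, or a CIM element
    kind: str     # e.g. "annotation->PRM" or "PRM->CIM"

@dataclass
class TraceStore:
    links: list = field(default_factory=list)

    def record(self, source, target, kind):
        self.links.append(TraceLink(source, target, kind))

    def impact_of(self, passage):
        """Follow links forward: which model elements depend on a passage?"""
        affected, frontier = set(), {passage}
        while frontier:
            nxt = {l.target for l in self.links
                   if l.source in frontier} - affected
            affected |= nxt
            frontier = nxt
        return affected

store = TraceStore()
store.record("§3(1) of an example act", "PRM:ApplicationProcess",
             "annotation->PRM")
store.record("PRM:ApplicationProcess", "CIM:UseCase SubmitApplication",
             "PRM->CIM")
# Impact analysis of a (hypothetical) change to §3(1):
print(store.impact_of("§3(1) of an example act"))
```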
Nature is subject to constant change and is only seemingly in equilibrium. Environmental parameters such as temperature, humidity or solar irradiation fluctuate on time scales from seconds to millions of years, sometimes with considerable amplitude. Species, as parts of an ecosystem, have to cope with these environmental changes. For ecologists it is of interest how individual responses to environmental change manifest themselves in the dynamic behaviour of an entire population, and whether that behaviour is predictable. The demography of a population plays a decisive role here, since it is the outcome of growth and mortality processes, and it is precisely these processes that are strongly influenced by the environment. But how exactly do environmental changes affect the behaviour of whole populations? What does the transient behaviour look like? As a result of environmental influences, so-called cohorts form in populations: age or size classes that are over-represented in terms of the number of individuals. If, for example, the old and young individuals of a population die because of an exceptionally harsh winter, the population subsequently consists mainly of individuals of intermediate age; it has, so to speak, been synchronized. Such a population tends to show regular fluctuations (oscillations) in its density, since the alternating phases of individual development and reproduction are now passed through synchronously by a large proportion of the individuals: at times the population grows, and at times it declines according to its mortality. In experiments with phytoplankton populations I was able to show that this oscillating behaviour can be described with the concept of synchronization used in physics. Synchronous behaviour is one of the most widespread phenomena in nature and can be observed, for example, in synchronously swinging bridges, in the generation of laser light, or in the form of rhythmic applause at a concert. How strong the fluctuations are depends both on the magnitude of the environmental change and on the demographic state of the population before the change. Two populations living in different habitats may be affected equally strongly by an environmental change, yet their subsequent responses can differ substantially if they were in very different demographic states beforehand. Moreover, certain mechanisms relevant to a population's behaviour only come into play when environmental conditions change. In experiments, for example, the population density dropped by about 50 percent after resource availability was doubled. The reason for this counterintuitive behaviour could be explained by the increased uptake of resources: an algal cell thereby improves its own condition, but its reproduction is delayed as a result, and the population density declines according to its losses or mortality. Furthermore, two or more spatially separated populations can be synchronized by environmental influences. This is known as the Moran effect. Suppose a population lives on each of two widely separated islands. There is no exchange between them, and yet a comparison of their time series reveals great similarity.
The supra-regional climate synchronizes the local environmental influences, which in turn determine the behaviour of the respective population. The Moran effect states that the similarity between the populations equals that between the environmental influences, or is smaller. My results confirm this and show, moreover, that the populations can even be more similar to one another than the environmental influences if the influences fluctuate with different strengths.
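The Moran effect statement above can be illustrated with a toy simulation of two uncoupled populations driven by correlated environmental noise; the linear model and all parameters below are illustrative, not the experimental phytoplankton system of the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 5000, 0.7                      # time steps, environmental correlation

# Correlated environmental noise for the two habitats
env = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Two identical, uncoupled AR(1) populations (log-density deviations)
a = 0.5                                 # strength of density dependence
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = a * x[t - 1] + env[t]

env_corr = np.corrcoef(env[:, 0], env[:, 1])[0, 1]
pop_corr = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
# For identical linear dynamics both correlations coincide (the classic
# Moran theorem); habitat-specific dynamics or unequal noise strengths,
# as discussed in the thesis, change this picture.
print(f"environment: {env_corr:.2f}, populations: {pop_corr:.2f}")
```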
Motivation | The societal and economic needs of East Africa rely entirely on the availability of water, which is governed by the regular onset and retreat of the rainy seasons. Fluctuations in the amount of rainfall have tremendous impacts, causing widespread famine, disease outbreaks and human migrations. Efforts towards high-resolution forecasting of seasonal precipitation and hydrological systems are therefore needed, which requires high-frequency short- to long-term analyses of the available climate data; these are presented in this doctoral thesis in three different studies. 15,000 years - Suguta Valley | The main study of this thesis concentrated on understanding humidity changes within the last African Humid Period (AHP, 14.8-5.5 ka BP). The nature and causes of intensity variations of the West African (WAM) and Indian Summer Monsoons (ISM) during the AHP, especially their exact influence on regional climate relative to each other, are currently intensely debated. Here, I present a high-resolution multiproxy lake-level record spanning the AHP from the remote Suguta Valley in the northern Kenya Rift, located between the WAM and ISM domains. During the AHP, the presently desiccated valley was filled by a palaeo-lake 300 m deep and 2200 km2 in extent, produced by an increase in precipitation of only 26%. The record explains the synchronous onset of large lakes in the East African Rift System (EARS) with the longitudinal shift of the Congo Air Boundary (CAB) over the East African and Ethiopian Plateaus, as the direct consequence of an enhanced atmospheric pressure gradient between East Africa and India due to a precessionally forced northern hemisphere insolation maximum. Pronounced and abrupt lake-level fluctuations during the generally wet AHP are explained by small-scale solar irradiation changes weakening this pressure gradient and thus the atmospheric moisture availability, preventing the CAB from reaching the study area. The termination of the AHP, in turn, occurred in a non-linear manner due to a change towards an equatorial insolation maximum about 6.5 ka ago, which extended the AHP over Ethiopia and West Africa. 200 years - Lake Naivasha | The second part of the thesis focused on the analysis of a 200-year-old sediment core from Lake Naivasha in the Central Kenya Rift, one of the very few present-day freshwater lakes in East Africa. The results revealed and confirmed that the applicability of proxy records for palaeo-climate reconstruction over the last 100 years, a period of increasing industrialization and therefore of growing human impact on the sites containing the proxy records, is broadly limited. Since the middle of the 20th century, intense anthropogenic activity around Lake Naivasha has led to cultural eutrophication, which has overprinted the influence of natural climate variation on the lake that is usually inferred from proxy records such as diatoms, transfer functions, and geochemical and sedimentological analyses, as used in this study. The results underline the need for proxy records from remote, unsettled areas to contribute pristine data sets to current debates about anthropogenically induced global warming over the past 100 years. 14 years - East African Rift | In order to avoid human-influenced data sets and to validate spatial and temporal heterogeneities of proxy records from East Africa, the third part of the thesis concentrated on the most recent 14 years (1996-2010), detecting climate variability using remotely sensed rainfall data.
The advancements in the spatial coverage and temporal resolution of rainfall data allow a better understanding of the influencing climate mechanisms and help to better interpret proxy records from the EARS in order to reconstruct past climate conditions. The study focuses on the dynamics of the intraseasonal rainfall distribution within the catchments of eleven lake basins in the EARS that are often used for palaeo-climate studies. We discovered that rainfall in adjacent basins exhibits highly complex intraseasonal variability and biennial to triennial precipitation patterns, and is not even necessarily correlated, often showing opposite trends. The variability among the watersheds is driven by the complex interaction of topography, in particular the shape, length and elevation of the catchment and its location relative to the East African Rift System, and the predominant influence of the ITCZ or CAB, whose locations and intensities depend on the strength of low-pressure cells over India, SST variations in the Atlantic, Pacific and Indian Oceans, QBO phases and the 11-year solar cycle. Among all the seasons we observed, January-September is the season of the highest and most complex rainfall variability, especially for the East African Plateau basins, most likely due to the irregular penetration and sensitivity of the CAB.
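The basin-to-basin comparison described here boils down to correlating rainfall time series between catchments. A minimal sketch, with made-up data standing in for the satellite rainfall product and invented basin names, might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly rainfall series for a few basins (mm/month);
# in the study these would come from remotely sensed rainfall data.
basins = ["basin_A", "basin_B", "basin_C"]
months = 14 * 12                      # 1996-2010 at monthly resolution
rain = rng.gamma(shape=2.0, scale=40.0, size=(len(basins), months))

# Pairwise Pearson correlations between basin series
corr = np.corrcoef(rain)
for i in range(len(basins)):
    for j in range(i + 1, len(basins)):
        print(f"{basins[i]} vs {basins[j]}: r = {corr[i, j]:+.2f}")
```

Low or negative coefficients in such a matrix are exactly the kind of result that signals uncorrelated or opposing rainfall trends between adjacent basins.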
The impact of global warming on human water resources is attracting increasing attention. No other region in the world is as strongly affected by changes in water supply as the tropics. Especially in Africa, the availability of and access to water are more crucial to existence (basic livelihoods and economic growth) than anywhere else on Earth. In East Africa, rainfall is mainly influenced by the migration of the Inter-Tropical Convergence Zone (ITCZ) and by the El Niño Southern Oscillation (ENSO), with more rain and floods during El Niño and severe droughts during La Niña. Forecasting East African rainfall in a warming world requires a better understanding of the response of ENSO-driven variability to the mean climate. Unfortunately, existing meteorological data sets are too short or incomplete to establish a precise evaluation of future climate. From Lake Challa near Mount Kilimanjaro, we report records from a laminated lake sediment core spanning the last 25,000 years. Analysis of a monthly emptied sediment trap confirms the annual origin of the laminations and demonstrates that the varve thicknesses are strongly linked to the duration and strength of the windy season. Given the modern control of seasonal ITCZ location on wind and rain in this region, and the inverse relation between the two, thicker varves represent windier and thus drier years. El Niño (La Niña) events are associated with wetter (drier) conditions in East Africa and decreased (increased) surface wind speeds. On this basis, the thickness of the varves can be used as a tool to reconstruct a) annual rainfall, b) wind-season strength, and c) ENSO variability. Within this thesis, I found evidence for centennial-scale changes in ENSO-related rainfall variability during the last three millennia, abrupt changes in variability during the Medieval Climate Anomaly and the Little Ice Age, and an overall reduction in East African rainfall and its variability during the Last Glacial period. Climate model simulations support forward extrapolation from these lake-sediment data, indicating that a future Indian Ocean warming will enhance East Africa's hydrological cycle and its interannual rainfall variability. Furthermore, I compared geochemical analyses of the sediment trap samples with a broad range of limnological, meteorological and geological parameters to characterize the impact of sedimentation processes from the in-situ rocks to the deposited sediments. As a result, an excellent calibration of the existing μXRF data from Lake Challa over the entire 25,000-year profile was provided. The climate development during the last 25,000 years as reconstructed from the Lake Challa sediments is in good agreement with other studies and highlights the complex interactions between long-term orbital forcing, atmosphere, ocean and land-surface conditions. My findings help to understand how abrupt climate changes occur and how these changes correlate with climate changes elsewhere on Earth.
Salty taste has evolved to maintain electrolyte homeostasis, serving as a detector for salt-containing food. In rodents, salty taste involves at least two transduction mechanisms. One is sensitive to the drug amiloride and specific for Na+, involving the epithelial sodium channel (ENaC). A second rodent transduction pathway, which is triggered by various cations, is amiloride-insensitive and remains poorly understood to date. Studies in primates showed amiloride-sensitive as well as amiloride-insensitive gustatory responses to NaCl, implying a role for both salt taste transduction pathways in humans. However, sensory studies in humans point to largely amiloride-insensitive sodium taste perception, and an involvement of ENaC in human sodium taste perception had not been demonstrated so far. In this study, ENaC subunit protein and mRNA could be localized to human taste bud cells (TBC). Basolateral αβγ-ENaC ion channels are thus likely present in TBC of circumvallate papillae, possibly mediating basolateral sodium entry. Similarly, basolateral βγ-ENaC might play a role in fungiform TBC. Strikingly, the δ-ENaC subunit was confined to taste bud pores of both papillae, likely mediating gustatory sodium entry into TBC, either apically or paracellularly via tight junctions. However, the regional separation of δ-ENaC and βγ-ENaC in fungiform and circumvallate TBC indicates the presence of an unknown interaction partner necessary for assembly into functional ion channels. Screening of a macaque taste tissue cDNA library revealed neither polypeptides that assemble into a functional cation channel by interacting with δ-ENaC or βγ-ENaC nor ENaC-independent salt taste receptor candidates. Thus, ENaC subunits are likely involved in human taste transduction, while the exact composition and identity of the amiloride-(in)sensitive salt taste receptors remain unclear. The localization of δ-ENaC in human taste pores strongly suggests a role in human taste transduction. In contrast, δ-ENaC is classified as the pseudogene Scnn1d in mouse. However, no experimentally detected sequences are annotated, although evidence exists for parts of Scnn1d-derived mRNAs. To elucidate whether Scnn1d might be involved in rodent salt taste perception, this study evaluated whether Scnn1d is a gene or a transcribed pseudogene in mice. Comparative mapping of human SCNN1D to mouse chromosome 4 revealed the complete Scnn1d sequence as well as its pseudogenization by Mus-specific endogenous retroviruses. Moreover, tissue-specific transcription of the unitary Scnn1d pseudogene was found in mouse vallate papillae, kidney and testis and led to the identification of nine Scnn1d transcripts. In vitro translation experiments showed that Scnn1d transcripts are coding-competent for short polypeptides, possibly present in vivo. However, no sodium-channel-like function or sodium-channel-modulating activity was evident for Scnn1d transcripts and/or the derived polypeptides. An involvement of mouse δ-ENaC in sodium taste transduction is therefore unlikely, pointing to species-specific differences in salt taste transduction mechanisms.
Image processing applications place particular demands on the executing computer system. On the one hand, high computational performance is required; on the other hand, high flexibility is advantageous, since development tends to be an experimental and interactive process. For new applications, developers tend to choose a computing architecture they know well rather than the architecture best suited to the application. Image processing algorithms are inherently parallel, yet conventional embedded image processing systems are mostly based on sequentially operating processors. In contrast to this mismatch, highly efficient systems can be built from a targeted synergy of software and hardware components. Constructing such systems is complex, however, and many solutions, such as coarse-grained architectures or application-specific programming languages, are often too academic for industrial use. The present work aims to contribute to reducing the complexity of hardware-software systems and thus to make the development of high-performance systems-on-chip for image processing simpler and more economical. Care was taken to keep the effort for familiarization, development and extensions low. A design flow was conceived and implemented that enables the software developer to accelerate computations with hardware components and to prototype the underlying embedded system completely. Complex image processing applications requiring an operating system, such as distributed camera sensor networks, are considered. The software employed is based on Linux and the image processing library OpenCV. The distribution of the computations across software and hardware components, and the resulting scheduling and generation of the computing architecture, are performed automatically. A design space exploration based on answer set programming yields advantages in modelling and extensibility. The system software is synthesized with OpenEmbedded/Bitbake, and the generated on-chip architectures are realized on FPGAs.
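The automatic distribution of computations across software and hardware rests on design space exploration. The thesis formulates this with answer set programming; purely as an illustration of the underlying decision problem, the sketch below performs an exhaustive search over task-to-resource assignments under a simple latency/area cost model (all tasks, costs and the area budget are invented for the example):

```python
from itertools import product

# Illustrative pipeline tasks with (sw_time, hw_time, hw_area) per task;
# all numbers are invented for the sketch, units are arbitrary.
tasks = {"undistort": (8.0, 2.0, 30),
         "filter":    (6.0, 1.5, 25),
         "features":  (9.0, 3.0, 40),
         "classify":  (4.0, 4.0, 20)}

FPGA_AREA = 70  # available hardware area (illustrative budget)

best = None
for assignment in product(("sw", "hw"), repeat=len(tasks)):
    mapping = dict(zip(tasks, assignment))
    area = sum(tasks[t][2] for t, m in mapping.items() if m == "hw")
    if area > FPGA_AREA:
        continue                      # violates the area constraint
    latency = sum(tasks[t][0] if m == "sw" else tasks[t][1]
                  for t, m in mapping.items())
    if best is None or latency < best[0]:
        best = (latency, mapping)

print(best)  # fastest feasible hardware/software partition
```

Answer set programming replaces this brute-force loop with declarative constraints and an optimizing solver, which is what makes the approach easy to model and extend.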
The perception of flavour is based on the interplay of different sensory impressions such as taste, smell and touch. This complexity of gustatory perception makes it difficult to answer the question of how taste information is transmitted from the mouth to the brain, processed and encoded. Analyses of the neuronal processing of taste information have mostly been carried out with bitter stimuli in the mouse model. Although the mouse genome is known to encode 35 functional bitter receptors, a ligand had been identified for only two of them. To create a better basis for animal experiments, 16 of the 35 mouse bitter receptors were heterologously expressed in HEK293T cells and functionally characterized in calcium imaging experiments. The data show that the functional spectrum of mouse bitter receptors is narrower than that of their human counterparts, refuting the claim that human and murine orthologous receptors respond to the same range of ligands. The interpretation of animal data and its transferability to humans are thus complicated not only by the complexity of taste but also by species differences. The complexity of taste rests, among other things, on the fact that taste compounds rarely occur in isolation, so that a multitude of information must be encoded. To exclude such tastant-associated stimuli from the analysis of gustatory communication pathways, opsins, which can be activated by light of specific wavelengths, were to be used for the selective replacement of taste receptors. To evaluate the functionality of these intended knockout-knockin models, which required coupling of opsins to the taste-specific G protein gustducin, oocytes of the clawed frog Xenopus laevis were analysed for this interaction using the two-electrode voltage clamp technique. Following a positive assessment of this coupling, three mouse lines were generated that expressed photoreceptors in the coding region of a specific taste receptor (Tas1r1, Tas1r2, Tas2r114). RT-PCR, in situ hybridization and immunohistochemical experiments demonstrated the successful knockout of the receptor genes and the knockin of the opsins. Demonstrating the functionality of the opsins in the gustatory system will be the subject of future analyses. If the light sensitivity of taste receptor cells in these mouse models is successfully demonstrated, a system would be available that would make it possible to identify gustatory neuronal networks and brain areas attributable to a pure taste- and quality-specific stimulus.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of a true partnership. At the core of the conflict is a struggle over resource rents between energy-producing, energy-consuming and transit countries. Supposedly secondary aspects, however, are also of great importance: they comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies with cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests; in this respect, it opts for a different approach to globalisation, refusing the role of a mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue his own goals, which might differ from one's own interests. The question is: how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be a preferable solution.
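The game-theoretic intuition invoked here (mutual defection leaves everyone worse off, while tit for tat can stabilize cooperation in a repeated game) can be illustrated with a toy iterated prisoner's dilemma; the payoff numbers below are the textbook values, not estimates from the thesis:

```python
# Toy iterated prisoner's dilemma: tit for tat vs. unconditional defection.
# Payoffs (row player, column player) with C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): stable cooperation
print(play(always_defect, always_defect))  # (10, 10): everyone worse off
print(play(tit_for_tat, always_defect))    # (9, 14): exploitation stays bounded
```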
Mathematical modeling of biological phenomena has experienced increasing interest since new high-throughput technologies give access to growing amounts of molecular data. Such modeling approaches are especially suited to testing hypotheses which are not yet experimentally accessible, or to guiding an experimental setup. One particular line of research investigates the evolutionary dynamics responsible for today's composition of organisms. Computer simulations either propose an evolutionary mechanism and thus reproduce a recent finding, or rebuild an evolutionary process in order to learn about its mechanism. The quest for evolutionary fingerprints in metabolic and gene-coexpression networks is the central topic of this cumulative thesis, based on four published articles. An understanding of the actual origin of life will probably remain an insoluble problem. However, one can argue that after a first simple metabolism had evolved, the further evolution of metabolism occurred in parallel with the evolution of the sequences of the catalyzing enzymes. Indications of such a coevolution can be found by correlating the change in sequence between two enzymes with their distance on the metabolic network, which is obtained from the KEGG database. We observe a small but significant correlation, primarily between nearest neighbors. This indicates that enzymes catalyzing subsequent reactions tend to be descended from the same precursor. Since this correlation is relatively small, one can at least assume that, even if new enzymes are not "genetic children" of the immediately preceding enzymes, they are certainly descended from one of the already existing ones. Following this hypothesis, we introduce a model of enzyme-pathway coevolution. By iteratively adding enzymes, this model explores the metabolic network in a manner similar to diffusion. By implementing a Gillespie-like algorithm we are able to introduce a tunable parameter that controls the weight of sequence similarity when choosing a new enzyme. Furthermore, this method also defines a time difference between successive evolutionary innovations in terms of a new enzyme. Overall, these simulations generate putative time courses of the evolutionary walk on the metabolic network. By a time-series analysis, we find that the acquisition of new enzymes occurs in bursts, which are more pronounced when the influence of sequence similarity is higher. This behavior strongly resembles punctuated equilibrium, i.e., the observation that new species also tend to appear in bursts rather than in a gradual manner. Thus, our model helps to establish a better understanding of punctuated equilibrium by giving a potential description at the molecular level. From the time courses we also extract a tentative order of new enzymes, metabolites, and even organisms. The consistency of this order with previous findings provides evidence for the validity of our approach. While the sequence of a gene is directly subject to mutations, its expression profile might also change indirectly through evolutionary events in the cellular interplay. Gene coexpression data are readily accessible by microarray experiments and are commonly illustrated using coexpression networks, where genes are nodes and become linked once they show a significant coexpression. Since the large number of genes makes an illustration of the entire coexpression network difficult, clustering helps to show the network on a meta-level. Various clustering techniques already exist.
However, we introduce a novel one which maintains control of the cluster sizes and thus ensures proper visual inspection. An application of the method to Arabidopsis thaliana reveals that genes causing a severe phenotype often show a functional uniqueness in their network vicinity. This leads to 20 genes of so far unknown phenotype which are nevertheless suggested to be essential for plant growth. Of these, six indeed provoke such a severe phenotype, as shown by mutant analysis. By inspecting the degree distribution of the A. thaliana coexpression network, we identified two characteristics. The distribution deviates from the frequently observed power law by a sharp truncation which follows an over-representation of highly connected nodes. For a better understanding, we developed an evolutionary model which mimics the growth of a coexpression network by gene duplication, subject to a strong selection criterion, and by slight mutational changes in the expression profile. Despite the simplicity of our assumptions, we can reproduce the observed properties in A. thaliana as well as in E. coli and S. cerevisiae. The over-representation of high-degree nodes could be identified with mutually well-connected genes of similar functional families: zinc fingers (PF00096), flagella, and ribosomes, respectively. In conclusion, these four manuscripts demonstrate the usefulness of mathematical models and statistical tools as a source of new biological insight. While the clustering approach for gene coexpression data leads to the phenotypic characterization of so far unknown genes and thus supports genome annotation, our model approaches offer explanations for observed properties of the coexpression network and furthermore substantiate punctuated equilibrium as an evolutionary process through a deeper understanding of an underlying molecular mechanism.
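The Gillespie-like exploration described above lends itself to a compact illustration. The following is a minimal sketch under stated assumptions, not the thesis implementation: `neighbors` encodes adjacency of enzymes on the metabolic network (e.g. derived from KEGG), `similarity` holds pairwise sequence similarities in (0, 1], and the tunable exponent `beta` controls the weight of sequence similarity when choosing the next enzyme; all names are hypothetical.

```python
import random

def coevolution_walk(neighbors, similarity, start, n_steps, beta=1.0, seed=0):
    """Gillespie-like sketch of enzyme-pathway coevolution: iteratively
    acquire enzymes adjacent to the already-acquired set, preferring
    candidates similar in sequence to an existing enzyme."""
    rng = random.Random(seed)
    acquired, t, record = {start}, 0.0, [(0.0, start)]
    for _ in range(n_steps):
        # candidates: network neighbours of the acquired set, not yet acquired
        frontier = {c for e in acquired for c in neighbors[e]} - acquired
        if not frontier:
            break
        # propensity: best similarity to any acquired enzyme, weighted by beta
        # (beta = 0 reduces the walk to pure network diffusion)
        rates = {c: max(similarity[c][e] for e in acquired) ** beta
                 for c in frontier}
        total = sum(rates.values())
        t += rng.expovariate(total)  # exponential waiting time (Gillespie step)
        pick = rng.choices(list(rates), weights=list(rates.values()))[0]
        acquired.add(pick)
        record.append((t, pick))
    return record  # putative time course of enzyme acquisitions
```

Bursts of innovation, the analogue of punctuated equilibrium discussed above, would then appear as clusters of short waiting times in the returned record.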
In this thesis, chemical reactions under hydrothermal conditions were explored, with an emphasis on green chemistry. Water at high temperature and pressure acts as a benign solvent. The motivation to work under hydrothermal conditions was grounded in the tunability of physicochemical properties with temperature, e.g. of the dielectric constant, density or ion product, which often resulted in surprising reactivity. Another cornerstone was the implementation of the principles of green chemistry. Besides the use of water as solvent, this included the employment of a sustainable feedstock and the sensible use of resources by minimizing waste and harmful intermediates and additives. To evaluate the feasibility of hydrothermal conditions for chemical synthesis, exemplary reactions were performed. These were carried out in a continuous flow reactor, allowing for precise control of reaction conditions and for kinetics measurements. In most experiments a temperature of 200 °C in combination with a pressure of 100 bar was chosen; in some cases the temperature was raised to 300 °C. Water in this subcritical range can also be found in nature at hydrothermal vents on the ocean floor, and on the primitive Earth environments with such conditions were present in far greater numbers. We therefore tested whether biologically important carbohydrates could be formed at high temperature from the simple, probably prebiotic precursor formaldehyde. Indeed, this formose reaction could be carried out successfully, although the yield was lower compared to the counterpart reaction under ambient conditions. However, striking differences regarding selectivity and necessary catalysts were observed. At moderate temperatures, bases and catalytically active cations like Ca2+ are necessary, and the main products are hexoses and pentoses, which accumulate due to their higher stability. In contrast, in high-temperature water no catalyst was necessary; a slightly alkaline solution was sufficient. Hexoses were only formed in negligible amounts, whereas pentoses and the shorter carbohydrates accounted for the major fraction. Amongst the pentoses there was some preference for the formation of ribose, and even deoxy sugars could be detected in traces. The observation that catalysts can be avoided was successfully transferred to another reaction. In a green chemistry approach, platform chemicals must be produced from sustainable resources. Carbohydrates, for instance, can be employed as a basis: they can be transformed to levulinic acid and formic acid, which can both react via a transfer hydrogenation to the green solvent and biofuel gamma-valerolactone. This second reaction usually requires catalysis by Ru or Pd, which are neither sustainable nor low-priced. Under hydrothermal conditions these heavy metals could be avoided and replaced by cheap salts, taking advantage of the temperature dependence of the acid dissociation constant. Simple sulfate was recognized as a temperature-switchable base. With this additive, high yields could be achieved while simultaneously preventing waste. In contrast to conventional bases, which create salt upon neutralization, a temperature-switchable base becomes neutral again when cooled down and can thus be reused. This adds another sustainable feature to the high atom economy of the presented hydrothermal synthesis. In a last study, complex decomposition pathways of biomass were investigated.
Gas chromatography in conjunction with mass spectrometry has proven to be a powerful tool for the identification of unknowns. It was observed that several acids were formed when carbohydrates were treated with bases at high temperature. This procedure was also applied to digest wood. Afterwards it was possible to ferment the solution, and a good yield of methane was obtained. This has to be regarded in light of the fact that wood can practically not be used as a feedstock in a biogas plant; hydrothermal pretreatment is thus an efficient means of employing such materials as well. The reaction network of the hydrothermal decomposition of glycine was also investigated, using isotope-labeled compounds as references for the unambiguous identification of unknowns. This refined analysis allowed the identification of several new molecules and pathways not yet described in the literature. In summary, synthesis in high-temperature water offered several advantages. Many catalysts that are absolutely necessary under ambient conditions could either be completely avoided or replaced by cheap, sustainable alternatives. In this respect water is not only a green solvent, but also helps to prevent waste and preserve resources.
Regulation of gene transcription plays a major role in mediating cellular responses and physiological behavior in all known organisms. The finding that similar genes are often regulated in a similar manner (co-regulated or "co-expressed") has motivated several "guilt-by-association" approaches to reverse-engineering cellular transcriptional networks using gene expression data as a compass. This kind of study has been considerably assisted in recent years by the development of high-throughput transcript measurement platforms, specifically gene microarrays and next-generation sequencing. In this thesis, I describe several approaches for improving the extraction and interpretation of the information contained in microarray-based gene expression data, through four steps: (1) microarray platform design, (2) microarray data normalization, (3) gene network reverse engineering based on expression data and (4) experimental validation of expression-based guilt-by-association inferences. In the first part, a test case is presented, aimed at the generation of a microarray for Thellungiella salsuginea, a salt- and drought-resistant close relative of the model plant Arabidopsis thaliana; the transcript models for this organism were generated from a combination of publicly available ESTs and newly generated ad hoc next-generation sequencing data. Since the design of a microarray platform requires highly reliable and non-redundant transcript models, these issues are addressed consecutively, and several different technical solutions are proposed. In the second part I describe how inter-array correlation artifacts are generated by the common microarray normalization methods RMA and GCRMA, together with the technical and mathematical characteristics underlying the problem. A solution is proposed in the form of a novel normalization method, called tRMA. The third part of the thesis deals with expression-based gene network reverse engineering. It is shown how different centrality measures in reverse-engineered gene networks can be used to distinguish specific classes of genes, in particular essential genes in Arabidopsis thaliana, and how the use of conditional correlation can add a layer of understanding to the information flow processes underlying transcript regulation. Furthermore, several network reverse engineering approaches are compared, with a particular focus on the LASSO, a linear regression derivative rarely applied before in global gene network reconstruction, despite its theoretical advantages in robustness and interpretability over more standard methods. The performance of the LASSO is assessed through several in silico analyses dealing with the reliability of the inferred gene networks. In the final part, the LASSO and other reverse engineering methods are used to experimentally identify novel genes involved in two independent scenarios: the seed coat mucilage pathway in Arabidopsis thaliana and hypoxic tuber development in Solanum tuberosum. In both cases an interesting complementarity between methods is shown, which strongly suggests a general use of hybrid approaches for transcript expression-based inferences. In conclusion, this work has helped to improve our understanding of gene transcription regulation through a better interpretation of high-throughput expression data. Some of the network reverse engineering methods described in this thesis have been included in a tool (CorTo) for gene network reverse engineering and annotated visualization from custom transcription datasets.
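To make the LASSO-based reverse engineering idea concrete: each gene is regressed on all remaining genes, and the sparse set of nonzero coefficients is read as candidate regulatory edges. The sketch below uses scikit-learn and is illustrative only, not the CorTo implementation; the matrix layout and the coefficient threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_network(X, gene_names):
    """Regress each gene on all others; nonzero LASSO coefficients
    become directed candidate edges (regulator -> target).

    X: samples x genes expression matrix, assumed normalized."""
    n_genes = X.shape[1]
    edges = []
    for j in range(n_genes):
        others = np.delete(np.arange(n_genes), j)         # all predictors but gene j
        model = LassoCV(cv=5).fit(X[:, others], X[:, j])  # penalty chosen by CV
        for idx, coef in zip(others, model.coef_):
            if abs(coef) > 1e-6:  # most coefficients are exactly zero (sparsity)
                edges.append((gene_names[idx], gene_names[j], coef))
    return edges
```

The sparsity of the L1 penalty is what makes the result directly interpretable as a network: most potential edges are pruned to exactly zero rather than merely shrunk.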
To identify extreme events in the dynamics of the Indian Summer Monsoon (ISM) in the geological past, I propose a novel approach based on quantifying fluctuations of a nonlinear similarity measure, which is sensitive to time intervals with marked changes in the dynamical complexity of short time series. A mathematical relationship between the new measure and dynamical invariants of the underlying system, such as fractal dimensions and Lyapunov exponents, is derived analytically. Furthermore, I develop a statistical test to estimate the significance of the dynamical transitions identified in this way. The strengths of the method are demonstrated by uncovering bifurcation structures in paradigmatic model systems, where, in comparison to traditional Lyapunov exponents, more complex dynamical transitions can be identified. We apply the newly developed method to real measurement data in order to detect pronounced dynamical changes on millennial timescales in climate proxy records of the South Asian summer monsoon system during the Pleistocene. It turns out that many of these transitions are induced by the external influence of varying solar insolation as well as by factors internal to the climate system that act on the monsoon system (Northern Hemisphere glacial cycles and the onset of the tropical Walker circulation). Despite its applicability to general time series, the discussed approach is particularly suited to the study of short palaeoclimate time series. The rainfall over the Indian subcontinent during the ISM occurs, owing to the underlying dynamics of the atmospheric circulation and to topographic influences, in highly complex spatiotemporal patterns. I present a detailed analysis of summer monsoon rainfall over the Indian peninsula based on event synchronization (ES), a measure of nonlinear correlation between point processes such as rainfall events. Using hierarchical clustering algorithms, I first identify regions of particularly coherent or homogeneous monsoon rainfall; the time-delay patterns of rain events can also be reconstructed. In addition, I carry out further analyses based on the theory of complex networks. These studies provide valuable insights into the spatial organization, scales and structures of heavy rainfall events above the 90th and 94th percentiles during the ISM (June to September). Furthermore, I investigate the influence of various critical synoptic atmospheric systems and of the steep topography of the Himalayas on these rainfall patterns. The presented method is not only suited to visualizing the structure of extreme rainfall events, but can also identify atmospheric transport pathways of water vapour and moisture sinks over the region on decadal scales. Finally, a simple complex-network-based procedure for deciphering the spatial fine structure and temporal evolution of monsoon rainfall extremes during the past 60 years is presented.
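Event synchronization, the measure underlying the rainfall analysis above, counts event pairs in two series that fall within a locally adapted delay window. Below is a minimal sketch following the commonly cited definition of Quian Quiroga et al. (2002); the array conventions and the handling of boundary events are simplifying assumptions.

```python
import numpy as np

def event_sync(t1, t2, tau_max=np.inf):
    """Event synchronization between two sorted arrays of event times
    (e.g. days with rainfall above the 90th percentile at two sites).
    Returns the synchronization strength Q in [0, 1] and the delay
    asymmetry q; boundary events are skipped for simplicity."""
    c12 = c21 = 0.0
    for i in range(1, len(t1) - 1):
        for j in range(1, len(t2) - 1):
            dt = t1[i] - t2[j]
            # dynamical delay: half the minimum of the four surrounding gaps
            tau = 0.5 * min(t1[i + 1] - t1[i], t1[i] - t1[i - 1],
                            t2[j + 1] - t2[j], t2[j] - t2[j - 1], tau_max)
            if 0 < dt <= tau:
                c12 += 1.0                 # event in t2 precedes event in t1
            elif dt == 0:
                c12 += 0.5; c21 += 0.5     # simultaneous events share the count
            elif -tau <= dt < 0:
                c21 += 1.0
    n = np.sqrt((len(t1) - 2) * (len(t2) - 2))
    return (c12 + c21) / n, (c21 - c12) / n
```

Applying this measure to all pairs of rainfall grid cells and thresholding the resulting Q matrix would yield the adjacency matrix of a climate network of the kind analysed above.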
During reading, oculomotor processes guide the eyes over the text, and the visual information recorded is accessed, evaluated and processed. Only by retrieving the meaning of a word from long-term memory, and by connecting and storing the information about each individual word, is it possible to access the semantic meaning of a sentence. Memory, and in particular working memory, therefore plays a pivotal role in the basic processes of reading. This dissertation investigates to what extent different demands on memory and memory capacity affect eye movement behavior during reading. The widely used reading span task paradigm, in which participants read and evaluate individual sentences, served for the experimental examination of the research questions. The results indicate that working memory processes have a direct effect on various eye movement measures. For example, a high working memory load reduced the perceptual span during reading, and the lower the individual working memory capacity of the reader, the stronger was the influence of working memory load on sentence processing.
This work addresses issues in the automatic preprocessing of historical German input text for use by conventional natural language processing techniques. Conventional techniques cannot adequately account for historical input text due to their reliance on a fixed application-specific lexicon keyed by contemporary orthographic surface form on the one hand, and the lack of consistent orthographic conventions in historical input text on the other. Historical spelling variation is treated here as an error-correction problem or "canonicalization" task: an attempt to automatically assign each (historical) input word a unique extant canonical cognate, thus allowing direct application-specific processing (tagging, parsing, etc.) of the returned canonical forms without the need for any additional application-specific modifications. In the course of the work, various methods for automatic canonicalization are investigated and empirically evaluated, including conflation by phonetic identity, conflation by lemma instantiation heuristics, canonicalization by weighted finite-state rewrite cascade, and token-wise disambiguation by a dynamic Hidden Markov Model.
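To make the canonicalization task concrete, here is a deliberately naive sketch: a handful of hypothetical deterministic rewrite rules stand in for the weighted rewrite cascades investigated in the thesis, and membership in a contemporary full-form lexicon decides acceptance. Rules and example words are illustrative only.

```python
import re

# Illustrative (and far from complete) rewrite rules for historical
# German spelling variation, applied as a naive ordered cascade.
RULES = [
    (r"th", "t"),    # "thun" -> "tun"
    (r"ey", "ei"),   # "seyn" -> "sein"
    (r"uo", "u"),    # "guot" -> "gut"
]

def canonicalize(word, lexicon):
    """Toy canonicalizer: accept the first rewrite found in a contemporary
    lexicon, else return the input unchanged. Real systems score weighted
    finite-state cascades and disambiguate token-wise, e.g. with an HMM."""
    if word in lexicon:
        return word
    candidate = word
    for pattern, replacement in RULES:
        candidate = re.sub(pattern, replacement, candidate)
        if candidate in lexicon:
            return candidate
    return word

print(canonicalize("seyn", {"sein", "tun", "gut"}))  # -> "sein"
```

A deterministic cascade like this cannot resolve ambiguous variants; that is where the token-wise Hidden Markov Model disambiguation mentioned above comes in.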
Human-induced alterations of the environment are causing biotic changes worldwide, including the extinction of species and a mixing of once disparate floras and faunas. One type of biological community that is expected to be particularly affected by environmental alterations is the herb layer plant community of fragmented forests, such as those in the west European lowlands. However, our knowledge about current changes in species diversity and composition in these communities is limited due to a lack of adequate long-term studies. In this thesis, I resurveyed the herb layer communities of ancient forest patches in the Weser-Elbe region (NW Germany) after two decades, using 175 semi-permanent plots. The general objectives were (i) to quantify changes in plant species diversity, also considering between-community (β) and functional diversity, (ii) to determine shifts in species composition in terms of species' niche breadth and functional traits, and (iii) to find indications of the most likely environmental drivers of the observed changes. These objectives were pursued in four independent research papers (Chapters 1-4) whose results were brought together in a General Discussion. Alpha diversity (species richness) increased by almost four species on average, whereas β diversity tended to decrease (Chapter 1). The latter is interpreted as a beginning floristic homogenization. The observed changes were primarily the result of a spread of native habitat generalists that are able to tolerate broad pH and moisture ranges. The changes in α and β diversity were only significant when species abundances were neglected (Chapters 1 and 2), demonstrating that the diversity changes resulted mainly from gains and losses of low-abundance species. This is one of the first studies in temperate Europe to demonstrate floristic homogenization of forest plant communities at a larger than local scale. The diversity changes found at the taxonomic level did not result in similar changes at the functional level (Chapter 2). The likely reason is that these communities are functionally "buffered": individual communities encompass most of the functional diversity of the regional pool, i.e., they are already functionally rich, while they are functionally redundant among each other, i.e., they are already homogeneous. Independent of taxonomic homogenization, the abundance of 30 species decreased significantly (Chapter 4). These species included 12 ancient forest species (i.e., species closely tied to forest patches with a habitat continuity of > 200 years) and seven species listed on the Red List of endangered plant species in NW Germany. If these decreases continue over the next decades, local extinctions may result. This biotic impoverishment would seriously conflict with regional conservation goals. Community assembly mechanisms changed at the local level, particularly at sites that experienced disturbance by forest management activities between the sampling periods (Chapter 3). Disturbance altered community assembly mechanisms in two ways: (i) it relaxed environmental filters and allowed the coexistence of different reproduction strategies, as reflected by a higher diversity of reproductive traits at the time of the resurvey, and (ii) it enhanced light availability and tightened competitive filters, which limited the functional diversity with respect to canopy height and selected for taller species.
Thirty-one winner and 30 loser species, which had significantly increased or decreased in abundance, respectively, were characterized by various functional traits and ecological performances to find indications of the most likely environmental drivers of the observed floristic changes (Chapter 4). Winner species had higher seed longevity, flowered later in the season and more often had an oceanic distribution compared to loser species. Loser species tended to have a higher specific leaf area, to be more susceptible to deer browsing and to have a performance optimum at higher soil pH values compared to winner species. Multiple logistic regression analyses indicated that disturbances due to forest management interventions were the primary cause of the species shifts. As one of the first European resurvey studies, this study provides indications that an enhanced browsing pressure due to increased deer densities, as well as increasingly warmer winters, are important drivers. The study failed to demonstrate that eutrophication and acidification due to atmospheric deposition substantially drive herb layer changes; the restriction of the sample to the most base-rich sites in the region is discussed as a likely reason. Furthermore, the decline of several ancient forest species is discussed as an indication that the forest patches are still paying off their "extinction debt", i.e., exhibiting a delayed response to forest fragmentation.
Aggregation of the amyloid β (Aβ) peptide into amyloid fibrils is associated with the outbreak of Alzheimer's disease. Early aggregation intermediates in the form of soluble oligomers are of special interest, as they are believed to be the major toxic species in the process. These oligomers are of disordered and transient nature; therefore, their detailed molecular structure is difficult to access experimentally and often remains unknown. In the present work, extensive, fully atomistic replica exchange molecular dynamics simulations were performed to study the preaggregated monomer states and early aggregation intermediates (dimers, trimers) of Aβ(25-35) and Aβ(10-35)-NH2 in aqueous solution. The folding and aggregation of Aβ(25-35) were studied at neutral pH and 293 K. Aβ(25-35) monomers mainly adopt β-hairpin conformations, characterized by a β-turn formed by residues G29 and A30 and a β-sheet between residues N27-K28 and I31-I32, in equilibrium with coiled conformations. The β-hairpin conformations served as initial configurations to model the spontaneous aggregation of Aβ(25-35). As expected, the Aβ(25-35) dimer and trimer ensembles contain many different, poorly populated conformations. Nevertheless, we were able to distinguish between disordered and fibril-like oligomers. Whereas disordered oligomers are rather compact with few intermolecular hydrogen bonds (HBs), fibril-like oligomers are characterized by the formation of large intermolecular β-sheets. In most of the fibril-like dimers and trimers, the individual peptides are fully extended, forming in- or out-of-register antiparallel β-sheets. A small fraction of fibril-like trimers contained V-shaped peptides forming parallel β-sheets. The dimensions of extended and V-shaped oligomers correspond well to the diameters of two distinct morphologies found for Aβ(25-35) fibrils. The transition from disordered to fibril-like Aβ(25-35) dimers is unfavorable but driven by energy. The lower energy of fibril-like dimers arises from favorable intermolecular HBs and other electrostatic interactions, which compete with a loss in entropy. Approximately 25 % of the entropic cost corresponds to configurational entropy; the rest relates to solvent entropy, presumably caused by hydrophobic and electrostatic effects. In contrast to the transition towards fibril-like dimers, the first step of aggregation is driven by entropy. Here, we compared structural and thermodynamic properties of the individual monomer, dimer and trimer ensembles to gain qualitative information about the aggregation process. The β-hairpin conformation observed for monomers is successively dissolved in the dimer and trimer ensembles, while intermolecular β-sheets are formed instead. As expected, the configurational entropy decreases upon aggregation. Additionally, the solvent accessible surface area (SASA), especially the hydrophobic SASA, decreases, yielding a favorable solvation free energy which overcompensates the loss in configurational entropy. In summary, the hydrophobic effect, possibly combined with electrostatic effects, yields an increase in solvent entropy, which is believed to be one major driving force towards aggregation. Spontaneous folding of the Aβ(10-35)-NH2 monomer was modeled using two force fields, GROMOS96 43a1 and OPLS/AA, and compared to primary NMR data collected at pH 5.6 and 283 K taken from the literature. Unexpectedly, the two force fields yielded significantly different main conformations.
Comparison between experimental and calculated nuclear Overhauser effect (NOE) distances is not sufficient to distinguish between the different force fields. Additionally, the comparison with scalar coupling constants suggests that the chosen protonation in both simulations corresponds to a pH lower than in the experiment. Based on this analysis we were unable to determine which force field yields the better description of this system. Dimerization of Aβ(10-35)-NH2 was studied at neutral pH and 300 K. Dimer conformations arrange in many distinct, poorly populated and rather complex alignments or interlocking patterns, which are stabilized by side-chain interactions rather than by specific intermolecular hydrogen bonds. Similar to Aβ(25-35) dimers, the transition towards β-sheet-rich, fibril-like Aβ(10-35) dimers is driven by energy competing with a loss in entropy. Here, the transition is mediated by favorable peptide-solvent and solvent-solvent interactions, mainly arising from electrostatic interactions.
Functional analyses of microtubule and centrosome-associated proteins in Dictyostelium discoideum
(2011)
Understanding the role of microtubule-associated proteins is the key to understanding the complex mechanisms regulating microtubule dynamics. This study employs the model system Dictyostelium discoideum to elucidate the role of the microtubule-associated protein TACC (transforming acidic coiled-coil) in promoting microtubule growth and stability. Dictyostelium TACC was localized at the centrosome throughout the entire cell cycle. The protein was also detected at microtubule plus ends, however, unexpectedly only during interphase and not during mitosis. The same cell cycle-dependent localization pattern was observed for CP224, the Dictyostelium XMAP215 homologue. These ubiquitous MAPs have been found to interact with TACC proteins directly and are known to act as microtubule polymerases and nucleators. This work shows for the first time in vivo that both a TACC and an XMAP215 family protein can differentially localize to microtubule plus ends during interphase and mitosis. RNAi knockdown mutants revealed that TACC promotes microtubule growth during interphase and is essential for the proper formation of astral microtubules in mitosis. In many organisms, impaired microtubule stability upon TACC depletion has been explained by the failure to efficiently recruit the TACC-binding XMAP215 protein to centrosomes or spindle poles. By contrast, fluorescence recovery after photobleaching (FRAP) analyses conducted in this study demonstrate that in Dictyostelium the recruitment of CP224 to centrosomes or spindle poles is not perturbed in the absence of TACC. Instead, CP224 could no longer be detected at the tips of microtubules in TACC mutant cells. This finding demonstrates for the first time in vivo that a TACC protein is essential for the association of an XMAP215 protein with microtubule plus ends. The GFP-TACC strains generated in this work also turned out to be a valuable tool to study the unusual microtubule dynamics in Dictyostelium. Here, microtubules exhibit a high degree of lateral bending movements but, in contrast to most other organisms, they do not obviously undergo any growth or shrinkage events during interphase. Despite this, they are affected by microtubule-depolymerizing drugs such as thiabendazole or nocodazole, which are thought to act solely on dynamic microtubules. Employing 5D fluorescence live-cell microscopy and FRAP analyses, this study suggests that Dictyostelium microtubules are dynamic only in the periphery, while they are stable at the centrosome. In recent years, the identification of previously unknown components of the Dictyostelium centrosome has made tremendous progress. A proteomic approach previously conducted by our group disclosed several uncharacterized candidate proteins which remained to be verified as genuine centrosomal components. The second part of this study focuses on the investigation of three such candidate proteins: Cenp68, CP103 and the putative spindle assembly checkpoint protein Mad1. While a GFP-CP103 fusion protein could clearly be localized to isolated centrosomes that are free of microtubules, Cenp68 and Mad1 were found to associate with the centromeres and kinetochores, respectively. The investigation of Cenp68 included the generation of a polyclonal anti-Cenp68 antibody, the screening for interacting proteins and the generation of knockout mutants, which, however, did not display any obvious phenotype. Yet, Cenp68 has turned out to be a very useful marker to study centromere dynamics during the entire cell cycle.
During mitosis, GFP-Mad1 localization strongly resembled the behavior of other Mad1 proteins, suggesting the existence of a yet uncharacterized spindle assembly checkpoint in Dictyostelium.
Within an exploratory comparative study design, it was investigated to what extent the different life experiences caused by biological factors, as well as the socialization conditions in psychosexual development, affect the integration of gender stereotypes into the cognitive (self- and other-perception), emotional (self- and other-evaluation) and behavioural aspects (norms of gender-specific behaviour) of gender identity in heterosexual, homosexual and postoperative transsexual men (N = 191), and whether identification patterns in the development of the gender-related self-concept can be derived. The cognitive aspects of the gender-related self-concept (masculinity and femininity) were measured with the GERO scale by Brengelmann and Hendrich (1990). To assess the emotional aspects and the identification patterns in the development of the gender-related self-concept, the values for the variables masculinity and femininity were first processed using the computer-based methods IDEXMONO and IDEXIDIO, which are based on Identity Structure Analysis by Weinreich (2003), and then evaluated with inferential statistics. In addition, the questionnaire measuring normative gender role orientation (NGRO) by Athenstaedt (2000) and an ad hoc demographic questionnaire were employed. The results show that the course of psychosexual development has a strong influence on the integration of gender stereotypes into gender-related self- and other-perception. In the cognitive domain, with respect to personal identity (degree of self-attribution of masculine and feminine characteristics), masculinity represents a stable and desirable variable for the formation of the gender-related self-concept in all groups. Femininity contributes most to the differentiation between the heterosexual, homosexual and transsexual groups; depending on the developmental phase, it is integrated differently into the gender-related self-concept. With respect to social identity (sense of belonging), the groups differ in the perceived similarities with both male and female persons, depending on the developmental phase. The social perception of men and women (other-perception) is more traditional among transsexuals than among hetero- and homosexuals. No significant differences emerged for self- and other-evaluation. Regarding the internalization of social norms of gender-specific behaviour, heterosexuals hold more egalitarian attitudes towards the exercise of gender roles than trans- and homosexuals. Among the socialization factors, it is noteworthy that female identification figures generally had a stronger influence on the formation of the gender-related self-concept than male identification figures. It appears, however, that homosexuals are more strongly influenced by women in the development of their gender-related self-concept than the other two groups studied. To answer the question of which self-concept-related variables and developmental factors have the greatest statistical importance for the separation and prediction of the individual groups, a discriminant analysis was computed.
The greatest discriminatory importance is held by the variables "stereotypical perception of male persons" and "ego involvement with female persons" for discriminant function 1 (separating transsexuals from hetero- and homosexuals), and by the variables "empathic identification with male persons in the past" and "increase in empathic identification with female persons" for discriminant function 2 (separating heterosexuals from homosexuals).
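For readers unfamiliar with the method, the reported discriminant analysis can be sketched generically: with three groups, linear discriminant analysis yields two discriminant functions, and the variable weights on each function indicate its discriminatory importance. The scikit-learn sketch below uses synthetic placeholder data; the predictor variables and group coding are hypothetical and do not reproduce the study's analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(191, 4))      # 4 hypothetical self-concept variables
y = rng.integers(0, 3, size=191)   # placeholder labels: 0/1/2 for the 3 groups

# Three groups -> at most two discriminant functions, matching the two
# functions reported above.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print(lda.scalings_[:, 0])  # variable weights on discriminant function 1
print(lda.scalings_[:, 1])  # variable weights on discriminant function 2
```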
The present work deals with the synthesis and characterization of mesoporous monolithic silica and its hybrid materials with ionic liquids (ILs). The silica samples were synthesized by a sol-gel process starting from a precursor compound such as tetramethyl orthosilicate. The catalyst with the lowest basicity led to the material with the smallest pore size and the largest specific surface area. Combining porous silica with ILs yields the materials class of silica ionogels. These hybrid materials unite the properties of a porous solid with those of an IL (conductivity, wide electrochemical window, good thermal stability) and offer diverse applications, e.g. in catalysis, solar and sensor technology. Optimizing these materials for their intended use requires their comprehensive characterization. Therefore, the present work investigated the thermal behaviour of silica ionogels using various 1-ethyl-3-methylimidazolium [Emim]-based ILs. Interestingly, the investigated ILs show marked changes in their thermal behaviour when confined in porous materials. While the pure ILs studied are characterized by clearly distinguishable phase transitions, considerably weaker transitions were observed for the corresponding hybrid materials. Individual phase transitions were suppressed (glass and crystallization transitions), whereas melting transitions, for example, were observed over broadened temperature ranges, in part as separate melting peaks. These investigations demonstrate clear changes in the properties of ILs in confined geometries. Solid-state NMR spectroscopy further showed that the ILs in the mesoporous silica materials exhibit an unexpectedly high mobility. The ILs can be described as quasi-liquid and show, to the best of our knowledge, the highest mobility observed so far for comparable hybrid materials. By using functionalized precursors and by choosing appropriate reaction conditions, the surface of the silica materials can be chemically functionalized and the material properties thus tuned in the desired way. In the present work, the influence of surface functionality on the thermal behaviour was investigated. For this purpose, two different functionalization routes were applied and compared. In in-situ functionalization, the chemical functionality is co-condensed into the silica material during the sol-gel synthesis via a correspondingly functionalized silane. Post-synthetic functionalization is achieved by reacting the end groups of a silica material with suitable reaction partners. To investigate the influence of the physical properties of the sample on the reaction, powdered and monolithic silica materials were compared. In the last part of the work, the versatility with which silica materials can be post-synthetically functionalized was demonstrated. Knowledge of structure-property relationships allows the properties of silica ionogels to be tailored in the desired way by a suitable combination of solid and mobile phases.
The present work is intended to contribute to the investigation of these relationships in order to exploit the potential of these interesting materials for applications.
The lakes of the East African Rift System (EARS) have been intensively studied to better understand the influence of climate change on hydrological systems. The exceptional sensitivity of these rift lakes, however, is both a challenge and an opportunity when trying to reconstruct past climate changes from changes in the hydrological budget of lake basins on timescales of 10^0 to 10^4 years. On the one hand, differences in basin geometry (shape, area, volume, depth), catchment rainfall distributions and varying erosion-deposition rates complicate the regional interpretation of paleoclimate information from lacustrine sediment proxies. On the other hand, the sensitivity of rift lakes often provides paleoclimate records of excellent quality, characterized by a high signal-to-noise ratio. This study aims at a better understanding of the climate-proxy generating process in rift lakes by parameterizing the geomorphological and hydroclimatic conditions of a particular site, providing a step towards the establishment of regional calibrations of transfer functions for climate reconstructions. Knowledge of the sensitivity of a lake basin to climate change is furthermore crucial for a better assessment of the probability of catastrophic changes in the future, which bear risks for landscapes, ecosystems, and organisms of all sorts, including humans. Part 1 of this thesis explores the effect of the morphology and the effective moisture of a lake catchment. The availability of digital elevation models (DEM) and gridded climate data sets facilitates the comparison of the morphological and hydroclimatic conditions of rift lakes. I used the hypsometric integral (HI) calculated from Shuttle Radar Topography Mission (SRTM) data to describe the morphology of ten lake basins in Kenya and Ethiopia. The aridity index (AI), describing the precipitation/evaporation balance within a catchment, was used to compare the hydroclimatic conditions of these basins. Correlating HI and AI with published Holocene lake-level variations revealed that lakes responding sensitively to relatively moderate climate change are typically graben-shaped, characterized by an HI between 0.23 and 0.30, and located under relatively humid conditions with AI > 1. These amplifier lakes, a term first introduced but not fully parameterized by Alayne Street-Perrott in the early 1980s, are without exception located on the crests of the Kenyan and Ethiopian domes. The non-amplifier lakes in the EARS either have a lower HI (0.13-0.22) and higher AI (> 1) or a higher HI (0.31-0.37) and low AI (< 1), reflecting pan-shaped morphologies with more arid hydroclimatic conditions. Part 2 of this work addresses the third important factor to be considered when using lake-level and proxy records to unravel past climate changes in the EARS: interbasin connectivity and groundwater flow through faulted and porous subsurface lithologies in a rift setting. First, I compiled the available hydrogeological data, including lithology, resistivity and water-well data, for the adjacent Naivasha and Elmenteita-Nakuru basins in the Central Kenya Rift. Using this subsurface information and established records of lake-level decline at the last wet-dry climate transition, i.e., the termination of the African Humid Period (AHP, 15 to 5 kyr BP), I used a linear decay model to estimate typical groundwater flow between the two basins. The results suggest a delayed response of the groundwater levels of ca.
5 kyr if no recharge of groundwater occurs during the wet-dry transition, whereas the lag is only 2-2.7 kyr using the modern recharge of ca. 0.52 m/yr. The estimated total groundwater flow from the higher Lake Naivasha (1,880 m a.s.l. during the AHP) to Nakuru-Elmenteita (1,770 m) was 40 cubic kilometers. This unexpectedly large volume, more than half the volume of paleo-Lake Naivasha during the Early Holocene, emphasizes the importance of groundwater in the hydrological modeling of paleo-lakes in rifts. Moreover, the subsurface connectivity of rift lakes causes a significant lag time, introducing a nonlinear component to the system that has to be considered when interpreting paleo-lake records. Part 3 of this thesis investigated the modern intraseasonal precipitation variability within eleven lake basins: those discussed in the first part of the study, excluding Lake Victoria and including Lake Tana. Remotely sensed rainfall estimates (RFE) from FEWS NET for 1996-2010 are used for the March-April-May (MAM), July-August-September (JAS), October-November (ON) and December-January-February (DJF) seasons. The seasonal precipitation is averaged and correlated with the prevailing regional and local climatic mechanisms. Results show high variability with biennial to triennial precipitation patterns. The spatial distribution of precipitation in JAS is linked to the onset and strength of the Congo Air Boundary (CAB) and to Indian Summer Monsoon (ISM) dynamics, while in ON it is related to the strength of positive ENSO and IOD phases. This study describes the influence of graben morphologies, extreme climate contrasts within catchments, and basin connectivity through faults and porous lithologies on rift lakes. Hence, it shows the importance of carefully characterizing a rift lake by these parameters before drawing conclusions about climate changes from lake-level and proxy records. Furthermore, this study highlights the exceptional sensitivity of rift lakes to relatively moderate climate change and its consequences for water availability to the biosphere, including humans.
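The hypsometric integral used in Part 1 is commonly approximated by the elevation-relief ratio, which requires only three statistics of the catchment's DEM cells. Below is a minimal sketch with a synthetic toy catchment rather than real SRTM data:

```python
import numpy as np

def hypsometric_integral(elevations):
    """Elevation-relief ratio approximation of the hypsometric integral,
    HI = (mean - min) / (max - min), over all DEM cells of a catchment."""
    z = np.asarray(elevations, dtype=float)
    return (z.mean() - z.min()) / (z.max() - z.min())

# Toy catchment: a flat graben floor at 1000 m below a linear rift shoulder
# rising to 2000 m; in this study, HI values of 0.23-0.30 flagged the
# graben-shaped 'amplifier' basins.
dem = np.concatenate([np.full(500, 1000.0), np.linspace(1000, 2000, 500)])
print(round(hypsometric_integral(dem), 2))  # -> 0.25
```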
The need for sustainable energy generation, and equally for a sustainable chemistry, forms the basic motivation of the present thesis. Through several individually investigated cases, all related to the element carbon, the work can be divided into two major topics. First, the sustainable synthesis of "useful" carbon materials employing the process of hydrothermal carbonisation (HC) is described. In the second part, the synthesis of heteroatom-containing carbon materials for electrochemical and fuel cell applications employing ionic liquid precursors is presented. On the basis of a thorough review of the literature on hydrothermolysis and the hydrothermal carbonisation of sugars, in addition to the chemistry of hydroxymethylfurfural, mechanistic considerations on the formation of hydrothermal carbon are proposed. Building on these reaction schemes, the mineral borax is introduced as an additive for the hydrothermal carbonisation of glucose. It was found to be a highly active catalyst, resulting in decreased reaction times and increased carbon yields. The chemical impact of borax is subsequently exploited for the modification of the micro- and nanostructure of hydrothermal carbon. From the borax-mediated aggregation of the primary species, widely applicable, low-density, pure hydrothermal carbon aerogels with high porosities and specific surface areas are produced. To conclude the first section of the thesis, a short series of experiments demonstrates the applicability of the HC model to "real" biowaste, i.e. watermelon waste, as a feedstock for the production of useful materials. In part two, cyano-containing ionic liquids are employed as precursors for the synthesis of high-performance, heteroatom-containing carbon materials. By varying the ionic liquid precursor and the carbonisation conditions, it was possible to design highly active non-metal electrocatalysts for the reduction of oxygen. In the direct reduction of oxygen to water (as used in polymer electrolyte fuel cells), astonishing activities are observed compared to commercial platinum catalysts. In another example, the selective and very cost-efficient electrochemical synthesis of hydrogen peroxide is presented. In a last example, the synthesis of graphitic boron carbon nitrides from the ionic liquid 1-ethyl-3-methylimidazolium tetracyanoborate is investigated in detail. Through the employment of unreactive salts as a new tool to generate high surface areas, these materials were shown for the first time to be another class of non-precious-metal oxygen reduction electrocatalysts.
Genetic variation is crucial for the long-term survival of a species, as it provides the potential for adaptive responses to environmental changes such as emerging diseases. The Major Histocompatibility Complex (MHC) is a gene family that plays a central role in the vertebrate immune system by triggering the adaptive immune response after exposure to pathogens. MHC genes have thus become highly suitable molecular markers of adaptive significance. They encode two primary classes of cell surface molecules, MHC class I and class II, that recognize short fragments of proteins of intracellular (e.g. viruses) and extracellular (e.g. bacteria, protozoa, arthropods) origin, respectively, and present them to immune cells. The high levels of MHC polymorphism frequently observed in natural populations are interpreted as an adaptation to detect and present a wide array of rapidly evolving pathogens. This variation appears to be largely maintained by positive selection driven mainly by pathogenic selective pressures. For my doctoral research I focused on MHC I and II variation in free-ranging cheetahs (Acinonyx jubatus) and leopards (Panthera pardus) on Namibian farmlands. The two felid species are sympatric and thus subject to similar pathogenic pressures, but differ in their evolutionary and demographic histories. The main aims were to investigate 1) the extent and patterns of MHC variation at the population level in both felids, 2) the association between levels of MHC variation and disease resistance in free-ranging cheetahs, and 3) the role of selection at different time scales in shaping MHC variation in both felids. Cheetahs and leopards are the largest free-ranging carnivores in Namibia. They concentrate in unprotected areas on privately owned farmlands where domestic and other wild animals also occur and the risk of pathogen transmission is increased. Thus, knowledge of the adaptive genetic variation involved in disease resistance is pertinent to the conservation of both felid species. The cheetah has been used as a classic example in conservation genetics textbooks due to its overall low levels of genetic variation. Reduced variation at MHC genes has been associated with high susceptibility to infectious diseases in cheetahs. However, increased disease susceptibility has only been observed in captive cheetahs, whereas recent studies in free-ranging Namibian cheetahs revealed a good health status. This raised the question whether the diversity at MHC I and II genes in free-ranging cheetahs is higher than previously reported. In this study, a total of 10 MHC I alleles and four MHC II alleles were observed in 149 individuals throughout Namibia. All alleles but one likely belong to functional MHC genes, as their expression was confirmed. As revealed by phylogenetic analyses, the observed alleles belong to four MHC I and three MHC II genes in the species. Signatures of historical positive selection acting on specific sites that interact directly with pathogen-derived proteins were detected in both MHC classes. Furthermore, a high genetic differentiation at MHC I was observed between Namibian cheetahs from the east-central and north-central regions, which are known to differ substantially in exposure to feline-specific viral pathogens. This suggests that the patterns of MHC I variation in the current population mirror different pathogenic selective pressures imposed by viruses.
Cheetahs showed low levels of MHC diversity compared with other mammalian species, including felids, but this does not seem to influence the current immunocompetence of free-ranging cheetahs in Namibia and contradicts the previous conclusion that the cheetah is a paradigm species of disease susceptibility. However, it cannot be ruled out that the low MHC variation might limit immunocompetence in an emerging disease scenario, because none of the remaining alleles might be able to recognize a novel pathogen. In contrast to cheetahs, leopards occur in most parts of Africa and are perhaps the most abundant big cat on the continent. Unlike some free-ranging large carnivore populations in Africa that have been afflicted by epizootics, leopards seem to have escaped such large-scale declines in the past. Currently, no information exists about the MHC sequence variation and constitution in African leopards. In this study, I characterized genetic variation at MHC I and MHC II genes in free-ranging leopards from Namibia. A total of six MHC I and six MHC II sequences were detected in 25 individuals from the east-central region. The maximum number of sequences observed per individual suggests that they correspond to at least three MHC I and three MHC II genes. Hallmarks of MHC evolution, such as historical positive selection, recombination and trans-species polymorphism, were confirmed. The low MHC variation detected in Namibian leopards is not conclusive, and further research is required to assess the extent of MHC variation in different areas of the species' geographic range. The results of this thesis will contribute to a better understanding of the evolutionary significance of the MHC and of its conservation implications in free-ranging felids. Translocation of wildlife is an increasingly used management tool for conservation purposes; it should be conducted carefully, as it may affect the ability of the translocated animals to cope with different pathogenic selective pressures.
Species respond to environmental change by dynamically adjusting their geographical ranges. Robust predictions of these changes are prerequisites for informing dynamic and sustainable conservation strategies. Correlative species distribution models (SDMs) relate species' occurrence records to prevailing environmental factors to describe the environmental niche. They have been widely applied in a global change context, as they have comparatively low data requirements and allow for rapid assessments of potential future species' distributions. However, due to their static nature, transient responses to environmental change are essentially ignored in SDMs. Furthermore, neither dispersal nor demographic processes and biotic interactions are explicitly incorporated. It has therefore often been suggested to link statistical and mechanistic modelling approaches in order to make more realistic predictions of species' distributions under scenarios of environmental change. In this thesis, I present two different ways of establishing such a link. (i) Mechanistic modelling can act as a virtual playground for testing statistical models and allows extensive exploration of specific questions. I promote this 'virtual ecologist' approach as a powerful evaluation framework for testing sampling protocols, analyses and modelling tools. I also employ such an approach to systematically assess the effects of transient dynamics and of ecological properties and processes on the prediction accuracy of SDMs for climate change projections. In this way, relevant mechanisms are identified that shape the species' response to altered environmental conditions and that should hence be considered when projecting species' distributions through time. (ii) I supplement SDM projections of potential future habitat for black grouse in Switzerland with an individual-based population model. By explicitly considering the complex interactions between habitat availability and demographic processes, this allows for a more direct assessment of the expected population response to environmental change and the associated extinction risks. However, predictions were highly variable across simulations, emphasising the need for principled evaluation tools like sensitivity analysis to assess uncertainty and robustness in dynamic range predictions. Furthermore, I identify data coverage of the environmental niche as a likely cause of contrasting range predictions between SDM algorithms. SDMs may fail to make reliable predictions for truncated and edge niches, meaning that portions of the niche are not represented in the data or that niche edges coincide with data limits. Overall, my thesis contributes to an improved understanding of uncertainty factors in predictions of range dynamics and presents ways of dealing with these. Finally, I provide preliminary guidelines for the predictive modelling of dynamic species' responses to environmental change, identify key challenges for future research and discuss emerging developments.
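As a purely illustrative sketch of the correlative SDM idea described above (not the models used in the thesis): presence and background records are related to environmental predictors, and the fitted response is then projected onto an altered climate. All data below are synthetic and the predictor semantics are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
env_now = rng.normal(size=(500, 2))            # e.g. temperature, precipitation
presence = (env_now[:, 0] > 0.5).astype(int)   # toy occurrence rule

# Fit the 'niche' on current conditions ...
sdm = LogisticRegression().fit(env_now, presence)

# ... and project it onto a scenario (uniform +1 shift in predictor 0).
# A static model like this ignores dispersal, demography and transient
# dynamics -- precisely the limitations discussed above.
env_future = env_now + np.array([1.0, 0.0])
suitability = sdm.predict_proba(env_future)[:, 1]  # projected habitat suitability
print(suitability.mean())
```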