Swallowing is a vital process whose diagnosis and therapy pose an enormous challenge. Detecting and assessing swallowing and swallowing disorders requires technically complex procedures such as videofluoroscopy (VFSS) and fiberoptic endoscopic evaluation of swallowing (FEES), which place a considerable burden on patients. Both procedures are used as the gold standard in the diagnosis of swallowing disorders, and performing them is usually the responsibility of physicians. Moreover, evaluating the diagnostic imaging material requires a sufficiently high level of experience. In therapy, functional electrical stimulation is increasingly used alongside classical therapeutic methods such as dietary modifications and swallowing maneuvers. The aim of this dissertation is the evaluation of a bioimpedance (BI) and electromyography (EMG) measurement system developed within the joint project BigDysPro. It was examined whether the BI and EMG measurement system is suitable for use in diagnostics as well as in therapy, both as a stand-alone measurement system and as part of a swallowing neuroprosthesis. In several studies, healthy subjects were examined to assess reproducibility (intra- and inter-rater reliability), the distinguishability of swallowing and head movements, and the influence of various factors (sex of the subjects, conductivity, consistency and amount of the food) on the biosignals (BI, EMG). Additional studies with patients examined, on the one hand, the influence of the electrode type; on the other hand, endoscopic (FEES) and radiological swallowing examinations (VFSS) were carried out in parallel with the BI and EMG measurements in order to test the correlation of the biosignals with the movement of anatomical structures (VFSS) and with swallowing quality (FEES). In total, 31 healthy subjects with 1819 swallows and 60 patients with 715 swallows were examined. The measurement curves showed a typical, reproducible signal course that correlated with anatomical and functional changes during the pharyngeal swallowing phase in the VFSS (r > 0.7). Features could be extracted from the bioimpedance signal that correlated with physiological characteristics of a swallow, such as delayed laryngeal closure and laryngeal elevation, and that allowed an assessment of swallowing quality in agreement with the FEES. Significant differences between swallowing and head movements and between food amounts and consistencies could be demonstrated in the biosignal courses. In contrast to food amount and consistency, the conductivity of the swallowed food, the sex of the subjects and the type of electrodes showed no significant influence on the measurement signals. The results of the evaluation show that the BI and EMG measurement system provides a novel, non-invasive method that allows a reproducible representation of the pharyngeal swallowing phase and its changes. This opens up a wide range of applications in diagnostics, e.g. long-term measurement of swallowing frequency and assessment of swallowing quality, and in therapy, e.g. use in a swallowing neuroprosthesis or as biofeedback for visualizing swallowing and swallowing disorders.
This thesis provides an overview of the historical development of public-law strict liability (öffentlich-rechtliche Gefährdungshaftung) in Germany from the 18th century to the present and analyzes its practical significance. It distinguishes between the various pieces of legislation, the case law and the theoretical approaches to public-law strict liability, and problematizes the latter in particular. Furthermore, the relationship to fundamental-rights duties of protection, to the social risk provisions, to the social-law claim to restitution (sozialrechtlicher Herstellungsanspruch) and to riot damages is discussed.
Durch Zeit und Raum
(2014)
The present study investigated the role of dysfunctional attitudes in the development of depressive symptoms in children and adolescents. According to Beck's cognitive theory of depression (1967, 1996), dysfunctional attitudes interact with stress to produce depressive symptoms. However, only few studies have examined the longitudinal relationship between dysfunctional attitudes and depressive symptoms in children and adolescents (Lakdawalla et al., 2007). It therefore remains unclear whether dysfunctional attitudes are a cause, a concomitant, or a consequence of depression. The data came from a sample of children and adolescents aged 9 to 20 years who were surveyed on dysfunctional attitudes, critical life events and depressive symptoms within the PIER study (Nt1t2 = 1,053; t1: 2011/2012, t2: 2013/2014). Cross-sectional analyses showed strong associations between dysfunctional attitudes, critical life events and depressive symptoms. A latent moderation analysis indicated a significant interaction between dysfunctional attitudes and critical life events in the prediction of depressive symptoms only among adolescents. Longitudinally, latent cross-lagged panel analyses showed, as expected, that dysfunctional attitudes and depressive symptoms become increasingly stable constructs with age and are very closely related to each other. A latent moderation analysis added to this model could not confirm Beck's cognitive model of depression in either children or adolescents. Later depressive symptoms could only be predicted by main effects of earlier depressive symptoms and of critical life events. These results suggest that dysfunctional attitudes are concomitants of depressive symptoms rather than risk factors or consequences.
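To make the moderation (diathesis-stress) hypothesis concrete, the following is a minimal sketch of an interaction test on simulated data using ordinary least squares; the study itself used latent moderation and cross-lagged panel models, and all variable names and numbers here are illustrative assumptions, not the study's specification.

```python
# Minimal sketch of a diathesis-stress moderation test (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "dep_t1": rng.normal(size=n),       # depressive symptoms, wave 1 (hypothetical)
    "dysfunc_t1": rng.normal(size=n),   # dysfunctional attitudes, wave 1
    "events_t1": rng.normal(size=n),    # critical life events, wave 1
})
# under Beck's model, later symptoms depend on the attitude x stress interaction
df["dep_t2"] = (0.5 * df["dep_t1"] + 0.2 * df["events_t1"]
                + 0.3 * df["dysfunc_t1"] * df["events_t1"]
                + rng.normal(scale=0.5, size=n))

model = smf.ols("dep_t2 ~ dep_t1 + dysfunc_t1 * events_t1", data=df).fit()
print(model.params)   # a significant dysfunc_t1:events_t1 term indicates moderation
```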
Effect of benzylglucosinolate on signaling pathways associated with type 2 diabetes prevention
(2014)
Type 2 diabetes (T2D) is a health problem throughout the world. In 2010, there were nearly 230 million individuals with diabetes worldwide, and it is estimated that in the economically advanced countries the number of cases will increase by about 50% in the next twenty years. Insulin resistance is one of the major features of T2D and is also a risk factor for metabolic and cardiovascular complications. Epidemiological and animal studies have shown that the consumption of vegetables and fruits can delay or prevent the development of the disease, although the underlying mechanisms of these effects are still unclear. Brassica species such as broccoli (Brassica oleracea var. italica) and nasturtium (Tropaeolum majus) possess a high content of bioactive phytochemicals, e.g. nitrogen-sulfur compounds (glucosinolates and isothiocyanates) and polyphenols, which are largely associated with the prevention of cancer. Isothiocyanates (ITCs) display their anti-carcinogenic potential by inducing detoxifying phase II enzymes and increasing glutathione (GSH) levels in tissues. In T2D, an increase in gluconeogenesis and triglyceride synthesis and a reduction in fatty acid oxidation, accompanied by the presence of reactive oxygen species (ROS), are observed; altogether, this is the result of an inappropriate response to insulin. Forkhead box O (FOXO) transcription factors play a crucial role in the regulation of insulin effects on gene expression and metabolism, and alterations in FOXO function could contribute to metabolic disorders in diabetes. In this study, using stably transfected human osteosarcoma cells (U-2 OS) constitutively expressing FOXO1 protein labeled with GFP (green fluorescent protein) and human hepatoma (HepG2) cell cultures, the ability of benzyl isothiocyanate (BITC), derived from benzylglucosinolate extracted from nasturtium, to modulate (i) the insulin-signaling pathway, (ii) the intracellular localization of FOXO1 and (iii) the expression of proteins involved in glucose metabolism, ROS detoxification, cell cycle arrest and DNA repair was evaluated. BITC promoted oxidative stress and, in response, induced FOXO1 translocation from the cytoplasm into the nucleus, antagonizing the insulin effect. The BITC stimulus was able to down-regulate gluconeogenic enzymes, which can be considered an anti-diabetic effect; to promote antioxidant resistance, expressed by the up-regulation of manganese superoxide dismutase (MnSOD) and detoxification enzymes; to modulate autophagy through induction of BECLIN1 and down-regulation of the mammalian target of rapamycin complex 1 (mTORC1) pathway; and to promote cell cycle arrest and DNA damage repair by up-regulation of the cyclin-dependent kinase inhibitor p21CIP and of GADD45 (growth arrest and DNA damage-inducible 45). Except for nuclear factor (erythroid-derived 2)-like 2 (NRF2) and its influence on the gene expression of detoxification enzymes, all the observed effects were independent of FOXO1, protein kinase B (AKT/PKB) and the NAD-dependent deacetylase sirtuin-1 (SIRT1). The current study provides evidence that, besides their anticarcinogenic potential, isothiocyanates might have a role in T2D prevention. The BITC stimulus mimics the fasting state, in which insulin signaling is not triggered and FOXO proteins remain in the nucleus modulating the expression of their target genes, with the advantage of a down-regulation of gluconeogenesis instead of its increase. These effects suggest that BITC might be considered a promising substance for the prevention or treatment of T2D; the factors behind its modulatory effects therefore need further investigation.
The increasing prevalence of diabetes, cardiovascular disease and several types of cancer, whose development can be traced back to overweight and lack of physical activity, is a pressing problem of our society. The associated complications increase particularly with advancing age. This makes it all the more important to understand the pathological mechanisms resulting from obesity, physical inactivity and the aging process, and the factors that influence them.
The aim of this work was to investigate the development of metabolic diseases in humans. The analysis of longitudinal data on anthropometric and metabolic parameters of the 584 participants of the prospective 'Metabolic Syndrome Berlin Potsdam Follow-up Study' revealed an increase in overweight across the whole cohort as well as a deterioration of blood pressure and glucose metabolism. We examined whether the hormone FGF21 influences the occurrence of type 2 diabetes mellitus (T2DM) or the metabolic syndrome (MetS). We were able to show that individuals who later developed MetS already had elevated FGF21 levels and a higher BMI, WHR, HbA1c and diastolic blood pressure at baseline. In addition to FGF21, vaspin was also investigated in this context. Individuals who later developed T2DM showed, besides elevated clinical parameters, a tendency towards increased levels of this hormone. With FGF21 and vaspin, two new factors for the prediction of the metabolic syndrome and of type 2 diabetes mellitus, respectively, were thus identified.
The long-term effect of weight reduction was examined in a subcohort of 60 individuals. The majority of participants in the weight-loss intervention lost weight successfully during the first six-month phase. However, a clear trend towards regaining the lost weight was observed over the five-year observation period. Of particular interest was the estimation of cardiovascular risk using the Framingham score. It became apparent that individuals with sustained weight loss had a markedly lower cardiovascular risk, whereas individuals with constant weight regain or strong weight fluctuations showed a high cardiovascular risk. Our data suggested that a successful, lasting weight reduction is statistically associated with a reduced cardiovascular risk, while participants with strong weight fluctuations or weight gain may have an increased risk.
To investigate the interaction of the molecular processes underlying weight reduction and lifespan, we used the model organism C. elegans. Continuous dietary restriction extended the lifespan of the roundworm, whereas overfeeding shortened it. Of great interest was the influence on lifespan of a temporally limited, intermittent feeding regime, analogous to weight cycling in humans. This regular alternation between ad libitum feeding and restriction had a life-extending effect whose strength depended on the frequency of the restriction. Phenomena such as weight regain are not observed in C. elegans and are presumably based on a mechanism that is evolutionarily younger and not yet present in C. elegans.
To identify new metabolic pathways that influence lifespan, metabolite profiles of genetic as well as dietary longevity models were analyzed. These analyses identified tryptophan metabolism as a new pathway, not previously in focus, that is associated with longevity.
El lunfardo : Kontaktvarietät der Migrationskultur am Rio de la Plata und in der Welt des Tango
(2014)
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Comprising an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, as well as the manifold of structural sites that enable chemical modifications, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm² V⁻¹ s⁻¹. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers, in combination with their semi-crystalline morphology, impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototype D-A copolymers were analysed: the investigation of electron transport in several copolymers and the application of low band gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes to the copolymer structure were made. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers. Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high-performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that this copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive as the electron-accepting component in organic photovoltaic cells. Solar cells were prepared from two NDI-containing copolymers, blended with the hole-transporting polymer P3HT. I demonstrate that the use of appropriate, high boiling point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit the efficiency of the devices. The major loss channel for excited states was determined by transient and steady-state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency.
The correlation of the photocurrent with microscopic morphology parameters was used to disclose the factors that limit the charge generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system. This provides an explanation for the overall low efficiencies generally observed in all-polymer solar cells.
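As a small aside on the time-of-flight transients mentioned above, the carrier mobility is commonly extracted from the transit time via the standard relation mu = d^2 / (V * t_tr). The sketch below applies this relation to purely illustrative numbers, not to data from the thesis.

```python
# Minimal sketch: mobility from a time-of-flight transit time (illustrative values).
d = 2.0e-6      # film thickness in m (hypothetical)
V = 10.0        # applied voltage in V (hypothetical)
t_tr = 4.0e-6   # transit time read from the photocurrent transient, in s

mu = d**2 / (V * t_tr)          # mobility in m^2 V^-1 s^-1
print(f"mu = {mu:.2e} m^2/Vs = {mu * 1e4:.2e} cm^2/Vs")
```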
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives, in which a small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated fairly thoroughly, but certain theoretical deficiencies remain. Our aim is to present a general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with a smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to establish sharp two-sided a priori estimates for the operators of the boundary value problems which are uniform in the small parameter. Such estimates fail to hold in the function spaces used in classical elliptic theory. To circumvent this limitation we exploit norms depending on the small parameter for functions defined on a bounded domain. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic technique can be carried out correctly in these norms. The obtained results also allow one to extend the norms to compact manifolds with boundary. We complete our investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates. In a second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. Firstly, we want to compare the asymptotic patterns of the obtained series with expansions for similar differential problems. We therefore investigate the heat equation in a bounded domain with a small parameter in front of the time derivative. In this case the characteristics touch the boundary at a finite number of points, and it is known that, in general, the solutions are not regular in a neighbourhood of such points. We suppose moreover that the boundary at such points can be non-smooth and have cuspidal singularities. We find a formal asymptotic expansion and show that when a set of parameters passes through a threshold value, the expansions fail to be asymptotic. The last part of the work is devoted to the general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution we draw on the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes admit a priori estimates of elliptic type. As a further application we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
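As an illustration of the parameter-dependent norms mentioned above, a typical choice is shown below; this is only a schematic example of the kind of norm and uniform two-sided estimate in question, and the exact norms, boundary terms and operator orders used in the thesis may differ.

```latex
% Schematic: a small-parameter-dependent Sobolev norm and a uniform two-sided estimate
\|u\|_{s,\varepsilon}^{2}
   = \sum_{|\alpha|\le s} \varepsilon^{2|\alpha|}
     \|\partial^{\alpha} u\|_{L^{2}(\Omega)}^{2},
\qquad
c\,\|u\|_{2m,\varepsilon}
   \le \|A(\varepsilon)u\|_{L^{2}(\Omega)}
       + \sum_{j} \|B_{j}(\varepsilon)u\|_{\partial\Omega,\varepsilon}
   \le C\,\|u\|_{2m,\varepsilon},
```

with constants c, C independent of the small parameter ε and boundary norms taken in suitable ε-dependent trace spaces.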
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant of the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search and reduce their search through formal channels. Due to the higher productivity of search, unemployed persons with a larger network are also expected to have a higher reservation wage than unemployed persons with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures that are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment, and the short- and medium-term effects on participation in active labor market programs (ALMP). It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced. Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed persons who find a job through vacancy information, we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive measures of active labor market policy are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training are found to increase the probability of attending education and training significantly, whereas all other programs have either no or a negative effect on training participation.
Effect heterogeneity with respect to the pre-treatment level of education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels. The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive, as they do not require parametric assumptions, their practical implementation may become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and providing practical guidelines for future applied research. In contrast to previous publications, this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail and in chronological order, with practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
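To make the IPW idea from Chapter 4 concrete, the following is a minimal sketch of a normalized inverse-probability-weighted estimate of an average treatment effect; the logistic propensity model, the trimming threshold and the variable names are generic assumptions, not the dissertation's actual specification.

```python
# Minimal sketch of a (normalized) IPW estimator under unconfoundedness.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, D, y):
    """Normalized IPW estimate of the average treatment effect.

    X: covariates (n x k), D: binary treatment indicator, y: outcome.
    """
    X, D, y = np.asarray(X, float), np.asarray(D, float), np.asarray(y, float)
    # 1) propensity score: estimated probability of treatment given covariates
    ps = LogisticRegression(max_iter=1000).fit(X, D).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)            # trim extreme scores for stable weights
    # 2) weighted outcome means of treated and controls, then their difference
    w_treated, w_control = D / ps, (1.0 - D) / (1.0 - ps)
    return np.average(y, weights=w_treated) - np.average(y, weights=w_control)
```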
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate the proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: A proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, which enhances host reliability by avoiding failures during power state changes. The technique exploits a range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for workloads with high short-term fluctuations. The range prediction is implemented in two ways: with the standard deviation and with the median absolute deviation. The range is adjusted based on an adaptive confidence window to cope with workload fluctuations. A robust VM consolidation for efficient energy and performance management that achieves an equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also contributes to a reduction in the energy consumption of the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model of the data center network to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies; this allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. A communication-aware and energy-efficient consolidation for parallel applications that enables the dynamic discovery of communication patterns and reschedules VMs via migration based on the discovered communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization instead of using information from the hosts' virtual switches or initiation from the VMs. The results show that our proposed approach reduces the network's average utilization, achieves energy savings due to a reduced number of active switches, and provides better VM performance compared to CPU-based placement.
A memory-aware VM consolidation for independent VMs, which exploits the diversity of the VMs' memory access to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration in order to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
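A minimal sketch of the range-based prediction idea described above, here using the median absolute deviation variant; the window length and the confidence factor k are illustrative assumptions rather than the dissertation's adaptive parameters.

```python
# Minimal sketch: range-based workload prediction via the median absolute deviation.
import numpy as np

def predict_range(utilization_history, window=12, k=1.5):
    """Return a (lower, upper) utilization range for the next interval."""
    recent = np.asarray(utilization_history[-window:], dtype=float)
    center = np.median(recent)
    mad = np.median(np.abs(recent - center))      # robust spread estimate
    return max(0.0, center - k * mad), min(1.0, center + k * mad)

# Example: CPU utilization of a VM over the last monitoring intervals (made-up values)
low, high = predict_range([0.42, 0.45, 0.40, 0.55, 0.48, 0.46, 0.44, 0.60,
                           0.47, 0.43, 0.49, 0.46])
print(f"expected utilization in [{low:.2f}, {high:.2f}]")
```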
Enterprise-specific in-memory data management : HYRISEc - an in-memory column store engine for OLXP
(2014)
Anti-doping institutes worldwide strive to convict athletes who use prohibited substances or methods. The test systems required for this are continuously refined, and new methods are established in response to new active substances from the pharmaceutical industry. The aim of this work was to develop a parallel multi-component assay based on antigen-antibody reactions, primarily to reduce the required sample volume and the assay time compared with a standard detection method. Besides the use of a multiplex approach and microarray technology, the accuracy of all measured parameters, the stability of the experimental setup and the performance in a single-blind study posed further challenges. The requirement that the multiplex approach should not produce false signals despite similar structures was met by a targeted combination of specific antibodies. For this purpose, Western blot experiments were successfully carried out in parallel with cross-reactivity tests on the microarray. Those antibodies that met the requirements in these experiments were used to determine the lowest detectable concentration. By optimizing the assay conditions, the use of Tween in the washing solution reduced the background fluorescence on glass as well as on plastic, thereby increasing the signal-to-background ratio. In the experiments to determine the limit of quantification, a concentration of 10 mU/ml was determined for human chorionic gonadotropin (hCG-i), 3.6 mU/ml for its beta subunit (hCG-beta) and 10 mU/ml for luteinizing hormone (LH). The value determined for hCG-i in serum corresponds to the value of 5 mU/ml in urine required by the World Anti-Doping Agency (WADA). In addition to determining the limits of quantification, these were also measured with respect to matrix effects occurring in serum and blood. As the cross-reactivity experiments on the microarray show, LH, hCG-i and hCG-β can also be measured in serum and blood. A performance analysis using a single-blind design with 130 serum samples was likewise carried out with this system. The evaluated samples were subsequently analyzed using a receiver operating characteristic (ROC) curve, and the diagnostic specificity was determined. For the LH measurements, a sensitivity and specificity of 100% were achieved; accordingly, all negative and positive samples were interpreted unambiguously. For hCG-β, a specificity of 100% and a sensitivity of 97% were likewise achieved. The hCG-i samples were measured with a specificity of 100% and a sensitivity of 97.5%. To demonstrate that this setup delivers stable signals over several weeks when identical samples are measured, a twelve-week stability test was successfully carried out for all parameters in serum and blood. In summary, a multi-component assay was successfully developed in this work as a multiplex approach on a microarray. The performance analysis and the stability test already indicate the potential applicability of this test in the context of doping analysis.
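The reported diagnostic figures follow the usual ROC-based evaluation; the sketch below illustrates, on hypothetical data, how a cut-off and the resulting sensitivity and specificity can be derived, without implying that the thesis used this particular implementation.

```python
# Minimal sketch: ROC-based cut-off selection and sensitivity/specificity (toy data).
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # 1 = positive sample
signal = np.array([0.1, 0.3, 0.8, 0.9, 0.7, 0.2, 0.85, 0.15])   # microarray signal

fpr, tpr, thresholds = roc_curve(y_true, signal)
best = np.argmax(tpr - fpr)                 # cut-off maximizing Youden's J
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(f"cut-off={thresholds[best]:.2f}, "
      f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
```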
The influenza virus infects mammals and birds. The first step in the infection cycle is the attachment of the virus, via its surface protein hemagglutinin (HA), to sugar structures on epithelial cells of the host's respiratory tract. From the three complementarity determining regions (CDRs) of the heavy chain of a monoclonal hemagglutinin-binding antibody, three linear peptides were derived. The binding properties of the three peptides were investigated experimentally by surface plasmon resonance spectroscopy. In agreement with accompanying molecular dynamics simulations, two of the three peptides (PeB and PeC) were shown to be able to bind influenza viruses of the strain X31 (H3N2 A/Aichi/2/1968), analogous to the binding capability of the antibody. The interaction of the peptide PeB, which potentially interacts with the conserved receptor binding site of HA, was then characterized in more detail. Under suitable immobilization conditions, detection of influenza viruses was possible in the diagnostically relevant range. The specificity of the PeB-virus binding was demonstrated using appropriate controls on both the analyte and the ligand side. Furthermore, the peptide PeB was able to inhibit the binding of X31 viruses to mimetics of their natural receptor, confirming the specific interaction with the receptor binding site of hemagglutinin. The primary sequence of PeB was then characterized with respect to structure-activity relationships by a complete substitution analysis in microarray format. This also yielded improved peptide variants with higher affinity and broader specificity against current influenza strains of different serotypes (e.g. H1N1/2009, H5N1/2004, H7N1/2013). Finally, using a higher-affinity peptide variant with an adapted primary sequence, influenza infection could be inhibited in vitro. The variants derived from the original peptide PeB thus represent receptor molecules for biosensor test systems as well as potential antiviral agents.
The aim of this work was the development of dye-labeled polymers that undergo a temperature-driven coil-to-globule phase transition in aqueous solution ("thermoresponsive polymers") and can translate this transition into an optical signal. Within a small temperature interval, such polymers undergo a massive change in their behavior, e.g. in their conformation and degree of swelling. These changes are associated with a switch of the solubility characteristics from hydrophilic to hydrophobic. Poly(N-isopropylacrylamide) (polyNIPAm), poly(oligoethylene glycol acrylate) (polyOEGA) and poly(oligoethylene glycol methacrylate) (polyOEGMA) were used as matrix polymers, into which suitable dyes were incorporated by copolymerization. Compact solvatochromic coumarin and naphthalimide derivatives proved particularly suitable for translating the phase transition into an optical signal: they impaired neither the polymerization behavior nor the phase transition, but responded strongly, in both color and fluorescence, to the polarity of the solvent. Furthermore, systems were developed that generate an optical signal coupled to the phase transition by means of energy transfer (FRET), using a coumarin as donor and a polythiophene as acceptor dye. It turned out that, despite apparent similarity, certain polymers respond strongly to a temperature stimulus with a change in their spectral properties, whereas others do not. The molecular causes of this were investigated. Probable reasons for the absence of a spectral change in oligo(ethylene glycol)-based polymers are, on the one hand, the lack of dehydration effectiveness due to the absence of a self-sufficient hydrogen-bonding motif and, on the other hand, the steric shielding of the dyes by the oligo(ethylene glycol) side chains. As a proof of principle for the usefulness of such systems in bioanalytics, a system was developed in which the solubility of a thermoresponsive polymer was changed by an antibody-antigen reaction. The binding of even small amounts of an antibody could thus be read out directly optically and was already visible to the naked eye.
Epitop-Kartierung von PBP2A und Identifizierung MRSA-spezifischer immunodominanter Peptidsequenzen
(2014)
This dissertation analyzes the application and effects of core elements of New Public Management (NPM) using the example of the citizen services (Bürgerdienste) of six European capitals: Berlin, Brussels, Copenhagen, Madrid, Prague and Warsaw. The focus lies on the comparison of capitals of the Central and Eastern European (CEE) member states with capitals of old EU member states. The following research hypothesis is examined: as a consequence of the fundamental social and political upheavals of the 1990s, the administrations in the capitals of the eastern EU member states introduced considerably more core elements of NPM when rebuilding their public administrations. Through the consistent establishment of customer-oriented, modern administrations and the strict application of the core elements of New Public Management, the citizen services in the capitals of eastern EU member states work more efficiently and effectively than comparable citizen services in the capitals of western EU member states. To test this hypothesis, the compared cities are first assigned to the relevant legal and administrative traditions (continental European German, Napoleonic and Scandinavian) and categorized with respect to their starting position for building a modern administration (western European administration, reunification administration and transformation administration). Subsequently, the institutional preconditions are examined, comprising a descriptive account of the cities' and administrations' history, the analysis of the organizational structures of the citizen services, the application of NPM instruments, and the internal and external perspective of NPM. It is determined whether and in what form the citizen services of the compared cities apply the core elements of NPM. The cities are then compared with respect to the application of these core elements, with a focus on the face-to-face service channel and customer orientation. The following part of the dissertation deals with the output of the citizen services, which is analyzed and compared in terms of operational results. This raises in particular the question of output volumes and productivity; the results of administrative processes are also examined, especially with regard to customer orientation. For this purpose, an efficiency comparison of the citizen services in the compared cities is carried out using a relative efficiency measurement and the Free Disposal Hull (FDH) method following Bouckaert. A concentration on popular services from the portfolio of the citizen services is necessary; therefore, the comparable services of residential registration, identity card, driving licence and passport matters are used, together with full-time equivalents, to calculate the efficiency of the citizen services. Data from the years 2009 to 2011 are used for this purpose, some of which originate from internal administrative databases. Finally, an attempt is made to incorporate the outcome into the efficiency analysis of the citizen services. In this context, the applicability of various extended best-practice methods, as well as an extension of the relative efficiency measurement and the FDH method, is examined.
The overall conclusion of the dissertation is that the citizen services in the examined capitals of the CEE states do not apply more core elements of NPM than the capitals of the western EU member states. On the contrary, Prague applies considerably fewer NPM instruments than the other compared cities, whereas Warsaw applies many NPM instruments but is always surpassed by a western European comparison city. The hypothesis that the citizen services in the capitals of the CEE states work more efficiently than comparable citizen services in the capitals of western EU member states was also refuted by the dissertation. The opposite is the case, since Prague and Warsaw show only average or poor performance in the efficiency comparison. The hypothesis put forward is thus disproved by the research results; only the good showing of Warsaw in the analysis of NPM application can confirm part of the thesis to a certain extent.
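To illustrate the FDH method mentioned above, the following is a minimal sketch of an input-oriented Free Disposal Hull efficiency score; the input and output figures are invented placeholders (e.g. full-time equivalents versus processed cases), not data from the study.

```python
# Minimal sketch of input-oriented Free Disposal Hull (FDH) efficiency scores.
import numpy as np

def fdh_input_efficiency(inputs, outputs):
    """inputs, outputs: arrays of shape (n_units, n_in) / (n_units, n_out)."""
    x, y = np.asarray(inputs, float), np.asarray(outputs, float)
    scores = []
    for i in range(len(x)):
        # units that produce at least as much of every output as unit i
        dominating = np.all(y >= y[i], axis=1)
        # smallest proportional input level among those units, relative to unit i
        ratios = np.max(x[dominating] / x[i], axis=1)
        scores.append(ratios.min())          # 1.0 = efficient, < 1.0 = inefficient
    return np.array(scores)

# Example: staff (FTE) as input, processed cases as output (made-up figures)
print(fdh_input_efficiency(inputs=[[10], [8], [12]],
                           outputs=[[5000], [5200], [4800]]))
```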
There is a high prevalence of mental and psychosomatic illnesses among teachers. Training and continuing-education programs that teach teacher-specific social competencies play an important role in promoting teachers' health. The present study evaluated the "Lehrer/innen-Coaching nach dem Freiburger Modell" (teacher coaching according to the Freiburg model), which aims to strengthen teachers' competence to handle difficult interpersonal situations within school, and especially in the classroom, actively and constructively. This is intended to reduce stress-related health burdens and to prevent the development of serious mental disorders. In the present work, two modified versions of this program were examined for the first time in a state-wide field study. The central evaluation questions concern the effectiveness of the intervention as a health-promotion measure (acceptance, effectiveness, and a comparison of the effectiveness of the two forms of intervention in state-wide use). In addition, the study aims at a comparison with the results of a predecessor study and at generating further insights into the relationship between aspects of teachers' social competence and their mental health. All teachers in Baden-Württemberg with at least 10 years of professional experience were eligible to participate. A quasi-experimental design with two measurement points was used to examine the effectiveness of the measure and to compare the two forms. Data from 314 participants could be included in the analysis of the intervention's effectiveness. The measurement instruments used in the present study were the General Health Questionnaire (GHQ-12), the Maslach Burnout Inventory (MBI-D) and the Jefferson Scale of Empathy (JSE), translated into German and adapted for teachers. The evaluation results show that participation in the "Lehrer/innen-Coaching nach dem Freiburger Modell" is associated with a significant improvement in the health-related dependent variables. Particularly noteworthy is the pronounced improvement in mental health as measured by the GHQ-12. The result of the pre-post comparison of the health scores of both intervention groups was also confirmed in comparison with a no-intervention group: in line with the hypothesis, participants showed a significantly stronger improvement in mental health than non-participants (no-intervention group). The two intervention modes, "compact form" and "short form", proved equally effective with regard to the improvement of teachers' health. Moreover, the results of the participant survey show that the measure was well received by the target group. Acceptance by the target group is, of course, an essential prerequisite for the effectiveness of a voluntary behavioral-prevention measure. As further findings of the study show, teachers' mental health is substantially related to an intact interpersonal relationship with the students, to successful interaction within the teaching staff characterized by mutual support, and to correspondingly supportive leadership behavior by the school administration.
This highlights the particular importance of successful relationship-building in schools and in the classroom. With regard to the approach taken in the present study, some methodological limitations concerning the design are discussed. In addition, the outlook of the evaluation study points out how, by linking the present program with further health-prevention measures addressing behavior, working conditions and leadership, the mental health of teachers could be strengthened even further in the future.
Nowadays, software systems are getting more and more complex. To tackle this challenge, a wide variety of techniques, such as design patterns, service-oriented architectures (SOA), software development processes, and model-driven engineering (MDE), are used to improve productivity while time to market and product quality stay stable. Several of these techniques are used in parallel to profit from their benefits. While the use of sophisticated software development processes is standard today, MDE is only now being adopted in practice. However, research has shown that the application of MDE is not always successful. It is not fully understood when the advantages of MDE can be exploited and to what degree MDE can also be disadvantageous for productivity. Further, when different techniques that aim to affect the same factor (e.g. productivity) are combined, the question arises whether these techniques really complement each other or, on the contrary, cancel out each other's effects. This raises the concrete question of how MDE and other techniques, such as the software development process, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be reached for the results. This includes two questions: first, whether MDE's impact on productivity is similar for all approaches of adopting MDE in practice; second, whether MDE's impact on productivity for one approach of using MDE in practice remains stable over time. The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future. To enable a differentiated discussion about MDE, the term "MDE setting" is introduced. An MDE setting refers to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages, and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced. This is a process modeling language that allows one to reason about how relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided. These techniques allow identifying changeability risks and assessing the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes). To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, these data are used to show that MDE settings cover the whole spectrum concerning their impact on changeability and their interrelation with software development processes: it is neither rare that MDE settings are neutral with respect to processes, nor is it rare that MDE settings have an impact on processes.
Similarly, the impact on changeability differs considerably. Second, a taxonomy of the evolution of MDE settings is introduced. In that context it is discussed to what extent different types of changes to an MDE setting can influence that MDE setting's impact on changeability and its interrelation with processes. The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The MDE settings captured from practice are used to show that structural evolution exists and is common. In addition, some examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: first, the observed diversity of MDE settings demonstrates the need for the analysis techniques presented in this thesis; second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for the identification of risks relating to productivity impacts.
Reading is a complex cognitive task based on the analysis of visual stimuli. Due to the physiology of the eye, only a small number of letters around the fixation position can be extracted with high visual acuity, while the visibility of words and letters outside this so-called foveal region quickly drops with increasing eccentricity. As a consequence, saccadic eye movements are needed to repeatedly shift the fovea to new words for visual word identification during reading. Moreover, even within a fixated word, fixation positions near the word center are superior to other fixation positions for efficient word recognition (O'Regan, 1981; Brysbaert, Vitu, & Schroyens, 1996). Thus, most reading theories assume that readers aim specifically at word centers during reading (for a review see Reichle, Rayner, & Pollatsek, 2003). However, saccades' landing positions within words during reading are in fact systematically modulated by the distance of the launch site from the word center (McConkie, Kerr, Reddix, & Zola, 1988). In general, it is largely unknown how readers identify the center of upcoming target words, and there is no computational model of the sensorimotor translation of the decision for a target word into spatial word-center coordinates. Here we present a series of three studies which aim at advancing the current knowledge about the computation of saccade target coordinates during saccade planning in reading. Based on large corpus analyses, we first identified word skipping as a further factor, beyond launch-site distance, with a likewise systematic and surprisingly large effect on within-word landing positions. Most importantly, we found that the end points of saccades following a skipped word are shifted two and more letters to the left compared to one-step saccades (i.e., from word N to word N+1) with equal launch-site distances. We then present evidence from a single-saccade experiment suggesting that the word-skipping effect results from highly automatic low-level perceptual processes, which are essentially based on the localization of blank spaces between words. Finally, in the third part, we present a Bayesian model of the computation of the word center from primary sensory measurements of inter-word spaces. We demonstrate that the model simultaneously accounts for launch-site and saccade-type contingent modulations of within-word landing positions in reading. Our results show that the spatial saccade target during reading is the result of complex estimations of the word center based on incomplete sensory information, which also leads to specific systematic deviations of saccades' landing positions from the word center. Our results have important implications for current reading models and experimental reading research.
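As a rough illustration of the Bayesian idea described above, the sketch below fuses a noisy sensory estimate of the word centre with prior knowledge in a simple Gaussian-Gaussian model; the parameters and the exact functional form are assumptions for illustration, not the model specification from the thesis.

```python
# Minimal sketch: Bayesian fusion of a noisy word-centre measurement with a prior.
import numpy as np

def posterior_center(measured_center, eccentricity,
                     prior_mean=7.0, prior_sd=3.0, noise_per_letter=0.3):
    """Posterior mean/sd of the word-centre position (letters from fixation)."""
    # sensory noise is assumed to grow with the retinal eccentricity of the target
    sensory_sd = noise_per_letter * eccentricity
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)   # reliability weighting
    mean = w * measured_center + (1 - w) * prior_mean
    sd = np.sqrt((prior_sd**2 * sensory_sd**2) / (prior_sd**2 + sensory_sd**2))
    return mean, sd

# A far target (large eccentricity) is pulled more strongly towards the prior,
# yielding systematic shifts of the landing position of the kind described above.
print(posterior_center(measured_center=9.0, eccentricity=4))
print(posterior_center(measured_center=9.0, eccentricity=10))
```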
This research deals with the articulation between the anthropological and sociological dimensions of the philosophical anthropology of Helmuth Plessner (1892-1985). It proceeds along three axes. I endeavour (1) to offer a synthesis of Plessner's philosophical anthropology in order (2) to reconstruct the conditions of possibility of the social at the human stage of the organic. The third axis (3), finally, corresponds to the analysis of the structural limits of the social on the basis of its two constitutive dimensions: the individual (ontogenetic, behavioral and inter-personal limits) and the collective (cultural, intra- and inter-cultural limits).
Magnetite is an iron oxide which is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides because it contains both divalent and trivalent iron. Consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Furthermore, the magnetic properties are used in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is exploited in biosensing and hyperthermia, or is used in storage media.
Magnetic properties are strongly size-dependent, and achieving size control under preferably mild synthesis conditions is of interest in order to obtain particles with the required properties. By using a custom-made setup, it was possible to synthesize stable single-domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, a mechanistic understanding of the details is still incomplete.
Formation of magnetite from solution was shown to occur from nanoparticulate matter rather than from solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic and thermodynamic data. Synthesis of magnetite nanoparticles at different temperatures was performed, and an Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol⁻¹, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
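The Arrhenius analysis mentioned above can be illustrated with a short sketch: the activation energy follows from the slope of ln(k) against 1/T. The rate constants used below are placeholders, not the measured growth rates from this work.

```python
# Minimal sketch: activation energy from an Arrhenius plot, ln k = ln A - Ea/(R*T).
import numpy as np

R = 8.314                                        # gas constant, J mol^-1 K^-1
T = np.array([288.0, 298.0, 308.0, 318.0])       # synthesis temperatures in K
k = np.array([1.1e-3, 1.6e-3, 2.3e-3, 3.2e-3])   # hypothetical growth rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)   # linear fit of ln k vs 1/T
Ea = -slope * R / 1000.0                                # activation energy in kJ/mol
print(f"Ea = {Ea:.1f} kJ/mol")
```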
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process have not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones, and the lower the storage temperature, the less oxidation is measured. In addition, the magnetic properties of the altered particles are not decreased dramatically, suggesting that this alteration will not impact the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology due to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferentially bind to (111) magnetite faces. The aim was to control the shape of magnetite nanoparticles during their formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein, and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions. Magnetite and further trivalent iron oxides were found.
This work advances the understanding of basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that the alteration of magnetite nanoparticles is mainly due to oxidation to maghemite and does not significantly influence the magnetic properties. Finally, the biomimetic experiments help to clarify the role of MamP within the bacteria, and a first step was taken towards morphology control in magnetite formation via co-precipitation.
This thesis examines what happens when, in non-profit organisations (NPOs), the ideal of civic engagement meets the practices of volunteer management. The starting point of this question is a twofold diagnosis: on the one hand, NPOs increasingly rely on volunteer management, driven by several factors including scarce resources, competition and imitation effects. With this human resource management method, inspired by business administration but developed for NPOs, they aim to recruit more and better volunteers and to structure their deployment more efficiently. On the other hand, many NPOs have at the same time committed themselves to the goal of civic engagement. They thereby respond to the demand, voiced in politics and academia, that civil society should compensate for strained public budgets and satisfy the growing desire for participation among large parts of the population through a new culture of citizen involvement. On closer inspection, however, it becomes apparent that volunteer management follows an economic logic of action, whereas civic engagement is an expression of the logic of action of civil society. Under current conditions the two are compatible neither in theory nor in practice. To reconcile the two developments, volunteer management must be rethought under the banner of the civic. The thesis substantiates this argument both theoretically and empirically. The theoretical part is divided into three parts. First, the concept of the NPO is delimited more closely; to this end, the existing literature on the third sector and non-profit organisations is condensed into an operationalisable concept of the NPO. Subsequently, current trends in the NPO field are identified which show that NPOs are indeed often characterised by conflicting logics of action, among them an economic and a civic one. The two following chapters then each examine one of these two logics. First, the guiding model of civic engagement is defined more precisely as an expression of a civil-society logic of action; it turns out that this concept is often used very loosely. The thesis therefore draws on the debate in political theory on civil society and citizenship and forges from it a qualified definition of civic engagement that is essentially oriented towards the ideal of socio-political participation and civic competence. In the third and final chapter of the theoretical part, this is contrasted with the economic logic of action in the form of the theory of volunteer management. It quickly becomes apparent that its basic principles, contrary to what is often claimed, conflict with these qualified ideals of participation and civic competence. The empirical analysis then traces the contradictions between civic engagement and volunteer management in practice in eight interviews. The results of this study can be summarised in five points: 1. Volunteer management is essentially oriented towards a single figure: the gain or loss of volunteer labour. 2. Volunteer management installs a comprehensive system for selecting "suitable" volunteers. 3. A positive aspect is the institutionalised responsiveness that volunteer management introduces into NPOs. 4. Volunteer management is closely tied to the claim to control the work of volunteers. The inherent wilfulness of engagement, the need for latitude, the possibility of experimentation, and the volunteers' claim to participate in decisions or even to act in a self-organised and self-responsible way recede into the background. 5. The interviews reveal a strong economisation of engagement: volunteers are regarded as a resource, their engagement is recorded statistically as a "donation of time", and their (service) work is valued in monetary terms. In the course of this, managerialism gains increasing influence on the work in NPOs and establishes a strongly hierarchical relationship: while the volunteer manager acts, the volunteer becomes the object of management techniques. That this runs counter to the ideal of participation goes without saying. In view of this diagnosis, namely that actually existing volunteer management is incompatible with the ideal of civic engagement in the narrower sense, the conclusion formulates proposals for a civically oriented, engagement-sensitive volunteer management.
In this work, I built a four-wave mixing setup for the time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements in which ultra-short laser pulses are used; there, the frequency shift remains obscured by the relatively broad bandwidth of these laser pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons, so the dynamics of these wavepackets could be studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
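As a schematic illustration of the general principle (not the thesis's full derivation), homodyne detection mixes the transmitted probe field $E_{\mathrm{pr}}$ with the fields $E_{+}$ and $E_{-}$ scattered off the two counter-propagating polariton wavepackets, so the detected signal is dominated by the interference (cross) terms,
\[
I_{\mathrm{det}}(\tau) \propto \left| E_{\mathrm{pr}} + E_{+}(\tau) + E_{-}(\tau) \right|^{2}
\approx \left| E_{\mathrm{pr}} \right|^{2}
+ 2\,\mathrm{Re}\!\left[ E_{\mathrm{pr}}^{*} E_{+}(\tau) \right]
+ 2\,\mathrm{Re}\!\left[ E_{\mathrm{pr}}^{*} E_{-}(\tau) \right],
\]
where the two cross terms oscillate at the polariton frequency as a function of the pump-probe delay $\tau$ and carry the Stokes- and anti-Stokes-shifted contributions, respectively.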
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth’s surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space—from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both a boon and a bane of geomorphic process rates. Indeed, processes of landscape downwearing can be traced far back in time, helping us to understand the Earth’s evolution. Yet this benefit may turn into a drawback due to scaling issues if these rates are to be compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting off at the local scale of individual hillslopes where considerable amounts of debris are generated from rock instantaneously: Rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deduction of paleoclimatic or tectonic implications. -> Grain size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, in order to answer the question of whether incision into the western Tibetan Plateau margin is currently active or not. -> We find an approximately one-order-of-magnitude upstream decay—from 110 to 10 mm kyr^-1—of cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy mineral analysis of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150–200 km downstream of the topographic plateau margin, we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay of rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation of the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors for Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of these: aridity index, topographic steepness index, and precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our consequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient—being representative on the timescales that our cosmogenic Be-10-derived denudation rates integrate over—generally appear to be better suited as predictors than climatic and tectonic metrics based on decadal records.
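The sketch below illustrates the general idea of quantile regression on synthetic basin data; it uses plain linear quantile regression via statsmodels rather than the additive quantile regression of the study, and none of the numbers are from the thesis.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the 297-basin inventory (illustrative only):
# denudation rate E [mm/kyr] responds heteroscedastically to channel
# steepness ksn, which is exactly the situation where quantile regression
# reveals more than a single ordinary-least-squares fit.
rng = np.random.default_rng(1)
ksn = rng.uniform(20, 400, 297)
E = 0.4 * ksn * rng.lognormal(0.0, 0.5, ksn.size)

X = sm.add_constant(ksn)
for q in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(E, X).fit(q=q)
    print(f"q={q}: intercept={fit.params[0]:.1f}, slope={fit.params[1]:.2f}")
```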
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. In addition, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a remedy to this problem. Achieving such simulations is the goal of the projects Cosmic flows and CLUES. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained with the galaxy luminosity-rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs reaching out to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method to these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation. The latter is then reversed to relocate reconstructed three-dimensional constraints to the positions of their precursors in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing the observational biases. By carrying out tests on mock catalogs built from cosmological simulations, a method to minimize observational bias was derived. Finally, for the first time, cosmological simulations were constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe. The major attractors and voids are simulated at positions approaching the observed positions to within a few megaparsecs, thus reaching the limit imposed by linear theory.
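In its usual form (given here for orientation, not quoted from the thesis), the Zel'dovich approximation maps initial (Lagrangian) positions $\mathbf{q}$ to evolved positions $\mathbf{x}$ via the displacement field $\boldsymbol{\psi}$ and the linear growth factor $D(t)$,
\[
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\psi}(\mathbf{q}),
\]
so that inverting it, $\mathbf{q} \simeq \mathbf{x}_{0} - D(t_{0})\,\boldsymbol{\psi}$, relocates the reconstructed constraints from their present positions $\mathbf{x}_{0}$ to the positions of their precursors in the initial field.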
Transcription factors (TFs) are ubiquitous gene expression regulators and play essential roles in almost all biological processes. This Ph.D. project is primarily focused on the functional characterisation of MYB112, a member of the R2R3-MYB TF family from the model plant Arabidopsis thaliana. This gene was selected due to its increased expression during senescence, based on previous qRT-PCR expression profiling experiments of 1880 TFs in Arabidopsis leaves at three developmental stages (15 mm leaf, 30 mm leaf and 20% yellowing leaf). MYB112 promoter GUS fusion lines were generated to further investigate the expression pattern of MYB112. Employing transgenic approaches in combination with metabolomics and transcriptomics, we demonstrate that MYB112 plays a major role in the regulation of plant flavonoid metabolism. We report enhanced and impaired anthocyanin accumulation in MYB112 overexpressors and MYB112-deficient mutants, respectively. Expression profiling reveals that MYB112 acts as a positive regulator of the transcription factor PAP1, leading to increased anthocyanin biosynthesis, and as a negative regulator of MYB12 and MYB111, which both control flavonol biosynthesis. We also identify MYB112 early responsive genes using a combination of several approaches. These include gene expression profiling (Affymetrix ATH1 microarrays and qRT-PCR) and transactivation assays in leaf mesophyll cell protoplasts. We show that MYB112 binds to an 8-bp DNA fragment containing the core sequence (A/T/G)(A/C)CC(A/T)(A/G/T)(A/C)(T/C). By electrophoretic mobility shift assay (EMSA) and chromatin immunoprecipitation coupled to qPCR (ChIP-qPCR) we demonstrate that MYB112 binds in vitro and in vivo to the MYB7 and MYB32 promoters, revealing them as direct downstream target genes. MYB TFs have previously been reported to play an important role in controlling flavonoid biosynthesis in plants. Many factors acting upstream of the anthocyanin biosynthesis pathway show enhanced expression levels during nitrogen limitation or at elevated sucrose content. In addition to the mentioned conditions, other environmental parameters including salinity or high light stress may trigger anthocyanin accumulation. In contrast to several other MYB TFs affecting anthocyanin biosynthesis pathway genes, MYB112 expression is not controlled by nitrogen limitation or carbon excess, but rather is stimulated by salinity and high light stress. Thus, MYB112 constitutes a previously uncharacterised regulatory factor that modifies anthocyanin accumulation under conditions of abiotic stress.
Polyadenylation is a decisive 3’ end processing step during the maturation of pre-mRNAs. The length of the poly(A) tail has an impact on mRNA stability, localization and translatability. Accordingly, many eukaryotic organisms encode several copies of canonical poly(A) polymerases (cPAPs). The disruption of cPAPs in mammals results in lethality. In plants, reduced cPAP activity is non-lethal. Arabidopsis encodes three nuclear cPAPs, PAPS1, PAPS2 and PAPS4, which are constitutively expressed throughout the plant. Recently, the detailed analysis of Arabidopsis paps1 mutants revealed a subset of genes that is preferentially polyadenylated by the cPAP isoform PAPS1 (Vi et al. 2013). Thus, the specialization of cPAPs might allow the regulation of different sets of genes in order to optimally face developmental or environmental challenges.
To gain insights into cPAP-based gene regulation in plants, the phenotypes of Arabidopsis cPAP mutants under different conditions are characterized in detail in the following work. An involvement of all three cPAPs in flowering time regulation and stress response regulation is shown. While paps1 knockdown mutants flower early, paps4 and paps2 paps4 knockout mutants exhibit a moderate late-flowering phenotype. PAPS1 promotes the expression of the major flowering inhibitor FLC, presumably by specific polyadenylation of an FLC activator. PAPS2 and PAPS4 exhibit partially overlapping functions and ensure timely flowering by repressing FLC and at least one other, unidentified flowering inhibitor. The latter two cPAPs act in a novel regulatory pathway downstream of the autonomous pathway component FCA and act independently of the polyadenylation factors and flowering time regulators CstF64 and FY. Moreover, PAPS1 and PAPS2/PAPS4 are implicated in different stress response pathways in Arabidopsis. Reduced activity of the poly(A) polymerase PAPS1 results in enhanced resistance to osmotic and oxidative stress. At the same time, paps1 mutants are cold-sensitive. In contrast, PAPS2/PAPS4 are not involved in the regulation of osmotic or cold stress, but paps2 paps4 loss-of-function mutants exhibit enhanced sensitivity to oxidative stress provoked in the chloroplast. Thus, both PAPS1 and PAPS2/PAPS4 are required to maintain a balanced redox state in plants. PAPS1 seems to fulfil this function in concert with CPSF30, a polyadenylation factor that regulates alternative polyadenylation and tolerance to oxidative stress.
The individual paps mutant phenotypes and the cPAP-specific genetic interactions support the model of cPAP-dependent polyadenylation of selected mRNAs. The high similarity of the polyadenylation machineries in yeast, mammals and plants suggests that similar regulatory mechanisms might be present in other organism groups. The cPAP-dependent developmental and physiological pathways identified in this work allow the design of targeted experiments to better understand the ecological and molecular context underlying cPAP-specialization.
One of the most significant current discussions in astrophysics relates to the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10^18 eV has Galactic origins. But even though the knowledge about potential sources of cosmic rays is quite poor above ~10^15 eV, that is, the “knee” of the cosmic-ray spectrum, up to the knee there seems to be a wide consensus that supernova remnants are the most likely candidates. Evidence of this comes from observations of non-thermal X-ray radiation, requiring synchrotron electrons with energies up to 10^14 eV, precisely in the remnants of supernovae. To date, however, there is no conclusive evidence that they produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is, however, complicated by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission itself ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes in a particular shell-type supernova remnant, namely RX J1713.7–3946, using observations of the LAT instrument onboard the Fermi Gamma-Ray Space Telescope. Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading to the conclusion that the leptonic scenario is instead the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario, however, does not rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant, but it suggests that the ambient density may not be high enough to produce a significant hadronic γ-ray emission. Further investigations involving other supernova remnants using the improved background developed in this work could allow compelling population studies, and hence prove or disprove the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough regarding the identification of the radiation mechanisms could finally be achieved with a new generation of instruments such as CTA.
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the vision of organic photovoltaics becoming competitive with inorganic solar cells and of realizing low-cost, large-area organic photovoltaics. For the best-performing organic material systems, the yield of charge generation can be very high. However, a detailed understanding of the free charge carrier generation mechanisms at the donor-acceptor interface and of the associated energy loss still needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells into commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on a comprehensive understanding of charge generation, recombination and extraction in organic bulk heterojunction solar cells, summarized in six chapters on the cumulative basis of seven individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, which enhances domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by a threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, was designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to larger polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at conditions of open circuit. The size of the polymer domains correlates well with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT causes the PCE to increase from 3.6 to 6.1% due to an enhanced fill factor, short-circuit current and open-circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
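For reference, the Langevin reduction factor mentioned here is commonly defined relative to the textbook Langevin recombination coefficient (this definition is given for orientation and is not quoted from the thesis),
\[
k_{\mathrm{L}} = \frac{q\,(\mu_{e} + \mu_{h})}{\varepsilon_{0}\varepsilon_{r}},
\qquad
\gamma = \frac{k_{2}}{k_{\mathrm{L}}},
\]
where $k_{2}$ is the measured bimolecular (non-geminate) recombination coefficient, $\mu_{e}$ and $\mu_{h}$ the electron and hole mobilities, and $\varepsilon_{0}\varepsilon_{r}$ the permittivity of the blend; $\gamma < 1$ quantifies the suppression of recombination below the encounter-limited rate.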
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer fill factor despite a further reduction of geminate and non-geminate recombination losses. To further analyze this finding, a new technique was developed that measures the effective extraction mobility at charge carrier densities and electric fields comparable to solar cell operating conditions. This method builds on the bias-enhanced charge extraction technique. With knowledge of the carrier density under different electric field and illumination conditions, a conclusive picture of the changes in charge carrier dynamics leading to differences in the fill factor upon fluorination of PCPDTBT is attained. The more efficient charge generation and reduced recombination upon fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and an efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends have only moderate fill factors of 54% caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field dependence of free charge generation using the time delayed collection field technique in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field dependence of free charge generation are found to be unaffected by the excitation energy, including direct charge transfer excitation below the optical band gap. To access the non-detectable absorption at energies of the relaxed charge transfer emission, the absorption was reconstructed from the CT emission induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at energies of the charge transfer emission was identical to that for excitations with energies well above the optical band gap. Thus, generation proceeds via the split-up of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies, obtained by using different fullerene derivatives. A direct correlation is found between the efficiency of free carrier generation and the energy difference of the relaxed charge transfer state relative to the energy of the charge-separated state. These findings provide new guidelines for future material design, as new high-efficiency materials require a minimum energetic offset between the charge transfer and the charge-separated state while keeping the HOMO level (and LUMO level) difference between donor and acceptor as small as possible.
The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to outline the occurrence of a major increase in both subsidence and sedimentation rates at 5.45–5.33 Ma, leading to the deposition of almost 1500 km³ of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from and north of the SE margin of the Central Anatolian Plateau. A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by the comparison of the Adana Basin subsidence curve with the subsidence curve of the Mut Basin, a mainly Neogene basin located on top of the southern margin of the Central Anatolian Plateau, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau's southern margin. Several fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau. We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not (as usual) restricted to the description of the deformation of three-dimensional bodies in three-dimensional space, but one can also describe the deformation of membranes and deformation in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy be invariant under the action of arbitrary diffeomorphisms on the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow the presence of electromagnetic fields. Usually, in works on electroelasticity the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of the constitutive relations describing the material. Unfortunately, the opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present. Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. Thus, we show that one can replace the use of the entropy inequality by the demand that, for a given process, balance of energy is invariant under the action of arbitrary diffeomorphisms on the surrounding space and under linear rescalings of the temperature. On the one hand, this demand also yields the desired restrictions on the form of the constitutive relations. On the other hand, it needs much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes. This time, our result is, like theirs, only valid for bodies that have the same dimension as the surrounding space.
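For orientation, the classical thermo-mechanical master balance of energy from which such covariance arguments start can be written schematically (without the electromagnetic contributions and manifold structure that the thesis adds) as
\[
\frac{d}{dt}\int_{U} \rho\left(e + \tfrac{1}{2}\langle v, v\rangle\right) dV
= \int_{U} \rho\left(\langle b, v\rangle + r\right) dV
+ \int_{\partial U} \left(\langle t, v\rangle + h\right) dA ,
\]
with internal energy $e$, velocity $v$, body force $b$, heat supply $r$, surface traction $t$ and surface heat flux $h$; demanding its invariance under arbitrary spatial diffeomorphisms then yields the remaining balance laws.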
Different methods for determining ground-penetrating radar (GPR) wave velocities were developed and successfully applied. The methods make use of statistical techniques and swarm-intelligence algorithms. It was shown that the new methods yield faster, more precise and more reproducible GPR wave velocity estimates than conventional approaches.
Improved GPR wave velocity values make it possible to correct the distorted three-dimensional images of the uppermost ten metres of the subsurface that can be generated from GPR data. In these corrected images, realistic depths of layers or objects in the subsurface can be measured more reliably. In addition, more precise wave velocities improve the determination of soil parameters such as water content or clay fraction. The presented methods allow a quantitative specification of the errors of the determined wave velocities and of the depths and soil parameters derived from them. The advantages of these newly developed methods for characterising the uppermost metres of the subsurface were demonstrated on field examples.
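As a minimal illustration of the kind of problem involved (not the algorithms developed in the thesis), the sketch below estimates a GPR wave velocity and diffractor depth by fitting a diffraction hyperbola with a toy particle swarm optimizer; all numbers are synthetic.

```python
import numpy as np

# Toy sketch: fit the diffraction hyperbola  t(x) = 2*sqrt((x-x0)^2 + d^2)/v
# to synthetic travel-time picks with a simple particle swarm optimizer.
rng = np.random.default_rng(2)

x = np.linspace(-2.0, 2.0, 41)                      # antenna positions [m]
v_true, d_true, x0 = 0.1, 1.2, 0.0                  # v in m/ns, depth in m
t_obs = 2 * np.sqrt((x - x0)**2 + d_true**2) / v_true
t_obs += rng.normal(0, 0.2, t_obs.size)             # noisy picks [ns]

def misfit(p):                                      # p = (v, d)
    t = 2 * np.sqrt((x - x0)**2 + p[1]**2) / p[0]
    return np.sum((t - t_obs)**2)

# particle swarm: positions, velocities, personal and global bests
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
lo, hi = [0.05, 0.2], [0.2, 3.0]                    # search bounds for (v, d)
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"estimated v = {gbest[0]:.3f} m/ns, depth = {gbest[1]:.2f} m")
```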
The present study examines the societal role of current mathematics education at German general-education schools from a critical sociological perspective. The focus of interest is the socialisation experienced through mathematics education. The study comprises, among other things, a discussion of the literature, the elaboration of a sociological framework based on the work of Michel Foucault, and two sub-studies on the sociology of logic and of calculation. Finally, dispositifs of the mathematical are described, setting out in what way, and with what personal and societal consequences, current mathematics education establishes a particular mindset.
The characterization of exoplanets is a young and rapidly expanding field in astronomy. It includes a method called transmission spectroscopy that searches for planetary spectral fingerprints in the light received from the host star during the event of a transit. This technique allows for conclusions on the atmospheric composition at the terminator region, the boundary between the day and night side of the planet. Although observationally a big challenge, first attempts in the community have been successful in the detection of several absorption features in the optical wavelength range, for example a Rayleigh-scattering slope and absorption by sodium and potassium. However, other objects show a featureless spectrum, indicative of a cloud or haze layer of condensates masking the probed atmospheric layers.
In this work, we performed transmission spectroscopy by spectrophotometry of three Hot Jupiter exoplanets. When we began the work on this thesis, optical transmission spectra were available for only two exoplanets. Our main goal was to enlarge the current sample of probed objects in order to learn by comparative exoplanetology whether certain absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and HAT-P-32b, for which the detection of atmospheric signatures is feasible with current ground-based instrumentation. In addition, we monitored the host stars of all three objects photometrically to correct for influences of stellar activity if necessary.
The obtained measurements of the three objects all favor featureless spectra. A variety of atmospheric compositions can explain the lack of a wavelength-dependent absorption. But the broad trend of featureless spectra in planets over a wide range of temperatures, found in this work and in similar studies recently published in the literature, favors an explanation based on the presence of condensates, even at very low concentrations, in the atmospheres of these close-in gas giants. This result points towards the general conclusion that the capability of transmission spectroscopy to determine the atmospheric composition is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HAT-P-19b. Our monitoring campaigns allowed for the detection of the stellar rotation period of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper limits on their potential variability. The calculated upper limits of systematic effects of starspots on the derived transmission spectra were found to be negligible for all three targets.
Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions on how to improve the robustness of results in future work.
The monsoon is an important component of the Earth’s climate system. It plays a vital role in the development and sustenance of the largely agro-based economy of India. A better understanding of past variations in the Indian Summer Monsoon (ISM) is necessary to assess its nature under global warming scenarios. However, our knowledge of the spatiotemporal patterns of past ISM strength, as inferred from proxy records, is limited due to the lack of high-resolution paleo-hydrological records from the core monsoon domain.
In this thesis I aim to improve our understanding of Holocene ISM variability in the core ‘monsoon zone’ (CMZ) of India. To achieve this goal, I first sought to understand the modern hydrology and thereafter to reconstruct the Holocene monsoonal hydrology, by studying surface sediments and a high-resolution sedimentary record from the saline-alkaline Lonar crater lake, central India. My approach relies on analyzing stable carbon and hydrogen isotope ratios of sedimentary lipid biomarkers to track past hydrological changes.
In order to evaluate the relationship between the modern ecosystem and the hydrology of the lake, I studied the distribution of lipid biomarkers in the modern ecosystem and compared it to lake surface sediments. The major plants of the dry deciduous mixed forest produced a greater amount of leaf wax n-alkanes and a greater fraction of n-C31 and n-C33 alkanes relative to n-C27 and n-C29. Relatively high average chain length (ACL) values (29.6–32.8) for these plants seem common for vegetation from an arid and warm climate. Additionally, I found that human influence and the resulting nutrient supply increase lake primary productivity, leading to an unusually high concentration of tetrahymanol, a biomarker for salinity and water column stratification, in the nearshore sediments. Due to this inhomogeneous deposition of tetrahymanol in modern sediments, I hypothesize that lake level fluctuations, in addition to source changes, may affect aquatic lipid biomarker distributions in lacustrine sediments.
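For reference, ACL is commonly defined as the concentration-weighted mean carbon chain length of the long-chain n-alkanes (here assumed to run over the odd-numbered homologues n-C27 to n-C33 reported above):
\[
\mathrm{ACL} = \frac{\sum_{n} n\,[C_{n}]}{\sum_{n} [C_{n}]},
\]
where $n$ is the carbon number and $[C_{n}]$ the concentration of the n-alkane with that chain length.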
I reconstructed centennial-scale hydrological variability associated with changes in the intensity of the ISM based on a record of leaf wax and aquatic biomarkers and their stable carbon (δ13C) and hydrogen (δD) isotopic composition from a 10 m long sediment core from the lake. I identified three main periods of distinct hydrology over the Holocene in central India. The period between 10.1 and 6 cal. ka BP was likely the wettest during the Holocene. Lower ACL index values (29.4 to 28.6) of leaf wax n-alkanes and their negative δ13C values (–34.8‰ to –27.8‰) indicated the dominance of woody C3 vegetation in the catchment, and negative δDwax (average for leaf wax n-alkanes) values (–171‰ to –147‰) argue for a wet period due to an intensified monsoon. After 6 cal. ka BP, a gradual shift to less negative δ13C values (particularly for the grass derived n-C31) and appearance of the triterpene lipid tetrahymanol, generally considered as a marker for salinity and water column stratification, marked the onset of drier conditions. At 5.1 cal. ka BP increasing flux of leaf wax n-alkanes along with the highest flux of tetrahymanol indicated proximity of the lakeshore to the center due to a major lake level decrease. Rapid fluctuations in abundance of both terrestrial and aquatic biomarkers between 4.8 and 4 cal. ka BP indicated an unstable lake ecosystem, culminating in a transition to arid conditions. A pronounced shift to less negative δ13C values, in particular for n-C31 (–25.2‰ to –22.8‰), over this period indicated a change of dominant vegetation to C4 grasses. Along with a 40‰ increase in leaf wax n-alkane δD values, which likely resulted from less rainfall and/or higher plant evapotranspiration, I interpret this period to reflect the driest conditions in the region during the last 10.1 ka. This transition led to protracted late Holocene arid conditions and the establishment of a permanently saline lake. This is supported by the high abundance of tetrahymanol. A late Holocene peak of cyanobacterial biomarker input at 1.3 cal. ka BP might represent an event of lake eutrophication, possibly due to human impact and the onset of cattle/livestock farming in the catchment.
The most intriguing feature of the mid-Holocene driest period is the high-amplitude, rapid fluctuation in δDwax values, probably due to a change in the moisture source and/or precipitation seasonality. I hypothesize that orbitally induced weakening of the summer solar insolation and the associated reorganization of the general atmospheric circulation were responsible for an unstable hydroclimate in the mid-Holocene in the CMZ.
My findings shed light on the sequence of changes during mean-state changes of the monsoonal system, once an insolation-driven threshold has been passed, and show that small changes in solar insolation can be associated with major environmental changes and large fluctuations in moisture source, a scenario that may be relevant with respect to future changes in the ISM system.
Mergers are a central building block of industrial economics. This book investigates the influence of the spatial dimension on a merger. A basic model is developed and, beyond that, a large number of extensions are presented. The reader is thus given the opportunity to gain a deep understanding of mergers under spatial competition.
The work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. Modeled from a principal-agent perspective, the managing referee boards can be seen as the principal. They aim at facilitating a fair competition in accordance with the existing rules and regulations. In doing so, the referees are assigned as impartial agents on the pitch. The coaches take on a non-legitimate, principal-like role, trying to influence the referees even though they do not have the formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and the forms of influencing attempts by coaches. The referee questionnaire addressed the questions of whether referees notice possible influencing attempts and how they react to them.
The results were put into relation with official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and the match result.
It is found that there is a slight effect on the referee’s decisions. However, this effect is rather disadvantageous for the influencing coach and there is no evidence for an impact on the match result itself.
Animal and human faeces from agriculture and households contain numerous obligate and opportunistic pathogenic microorganisms whose concentration varies, among other things, with the health status of the group in question. Besides pathogens, however, faeces also contain essential plant nutrients (276) and have served for millennia (63) as fertiliser for crops. Yet with the careless use of pathogen-laden faecal fertiliser, the risk of infection for humans and animals also rises. This hazard increases with the global interconnection of agriculture, e.g. through the import of contaminated feed and food (29).
The present thesis presents the lactic acid fermentation of cattle slurry and sewage sludge as an alternative hygienisation method to pasteurisation in biogas plants and to conventional composting.
A decline of the Gram-negative bacterial flora as well as of enterococci, moulds and yeasts below the detection limit of 3 log10 CFU/g is observed, while at the same time the concentration of Lactobacillaceae increases a thousandfold. Furthermore, it is shown that pathogenic bacteria such as Staphylococcus aureus, Salmonella spp., Listeria monocytogenes, EHEC O:157 and vegetative Clostridium perfringens cells are inactivated within 3 days. ECBO viruses and roundworm eggs are inactivated within 7 and 56 days, respectively. To clarify the cause of the observed hygienisation, the fermented material was analysed for volatile fatty acids and for changes in pH. It was found that the measured values are not the sole cause of pathogen die-off; rather, an additional bactericidal effect through a presumed formation of bacteriocins is considered. The parasiticidal effect is attributed to the physical conditions of the fermentation.
The methodological basis rests on analyses using numerous classical culture-based techniques, such as viable cell counting. In addition, MALDI-TOF mass spectrometry and classical PCR in combination with gradient gel electrophoresis are used to describe cultivable bacterial floras and to sample non-cultivable bacterial floras, respectively.
Besides the hygienisation aspects, the suitability of the method for agricultural use is also considered. This is reflected in particular in the composition of the material to be fermented, which was optimised for enhanced humus accumulation in arable soil. Moreover, the mass-loss balance during lactic acid fermentation is compared with those of composting and of processing in a biogas plant and is assessed positively, since at a total of 2.45% it lies well below the existing alternatives (73, 138, 458). Lower losses of organic material during hygienisation lead to a larger usable amount of fertiliser, which, owing to its organic origin, can contribute to an increase of the humus fraction in arable soil (56, 132).
Cyanobacteria produce about 40 percent of the world’s primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in one of the most common bloom-forming cyanobacteria, Microcystis aeruginosa.
In a first step, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, was a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. The underestimation of total microcystin contents when neglecting the protein fraction was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of this secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of all 501 detected non-redundant metabolites, 85 (17 percent) accumulated to significantly different levels in the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentrations and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between species of cyanobacteria. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high-light-treated cells and by the quantification of storage compounds. While Microcystis accumulated mainly glycogen, to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results show that the characterization of species-specific metabolic features should receive more attention with regard to the biotechnological use of cyanobacteria.
Linked Open Data (LOD) comprises many, often large, public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied to sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies. Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
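The toy sketch below illustrates one such mining configuration on hypothetical triples: subjects act as transactions and their predicates as items, so frequent predicate co-occurrences yield association rules with support and confidence. It is a simplification for illustration, not the thesis's implementation.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical RDF triples (subject, predicate, object).
triples = [
    ("Alice", "birthPlace", "Berlin"), ("Alice", "birthDate", "1980"),
    ("Bob",   "birthPlace", "Paris"),  ("Bob",   "birthDate", "1975"),
    ("Bob",   "spouse",     "Carol"),  ("Carol", "birthDate", "1979"),
]

# Configuration: subjects are transactions, their predicates are items.
transactions = defaultdict(set)
for s, p, o in triples:
    transactions[s].add(p)
n = len(transactions)

# Count support of single predicates and predicate pairs.
support = defaultdict(int)
for preds in transactions.values():
    for p in preds:
        support[frozenset([p])] += 1
    for pair in combinations(sorted(preds), 2):
        support[frozenset(pair)] += 1

# Rules A -> B with confidence = supp(A u B) / supp(A).
for pair, supp in support.items():
    if len(pair) != 2:
        continue
    a, b = sorted(pair)
    for lhs, rhs in ((a, b), (b, a)):
        conf = supp / support[frozenset([lhs])]
        print(f"{{{lhs}}} -> {{{rhs}}}  support={supp / n:.2f}  confidence={conf:.2f}")
```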
Inferring gene regulatory networks and cellular phases from time-resolved transcriptomics data
(2014)
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software; the methodological work mostly targets the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets and thus allows a discussion of the source inversion problem at different scales. The first application deals with mining-induced seismicity, where the source parameter determination is addressed at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely the target of automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered here as a weak-seismicity case, since such moderate seismicity is generally analyzed only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial. For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, the moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights some characteristic geometrical features of the fault planes, which show a general consistency with the orientation of the slab. The additional inversion for source duration allowed us to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.
Anorexia nervosa and unipolar affective disorders are common and severe disorders in child and adolescent psychiatry whose pathogenesis has not yet been fully elucidated. Various studies have shown marked impairments of cognitive functions in adult patients, whereas adolescent patients appear to show only milder cognitive impairments. The prevalence of anorexia nervosa and unipolar affective disorders rises markedly with the onset of adolescence. It can be assumed that cognitive dysfunctions that already emerge at this age could substantially impair the further course of illness into adulthood, treatment outcomes and prognosis; a higher risk of chronification must also be assumed. The present work therefore examined cognitive functions in adolescent female patients with anorexia nervosa and in patients with unipolar affective disorders. Cognitive functions in patients with anorexia nervosa were assessed before and after weight gain. Underlying biological mechanisms were also examined. In addition, the specificity of cognitive dysfunctions for the two disorders was investigated, and sex-related differences were explored in patients with unipolar affective disorders. In total, 47 female patients with anorexia nervosa (mean age 16.3 ± 1.6 years), 39 patients with unipolar affective disorders (mean age 15.5 ± 1.3 years) and 78 control subjects (mean age 16.5 ± 1.3 years) were included in the study. All participants completed a neuropsychological test battery assessing cognitive flexibility as well as visual and psychomotor processing speed. In addition to an intelligence screening, the severity of depressive symptoms and general psychological distress were recorded. The results suggest that adolescent patients with anorexia nervosa show only mild impairments of cognitive functions, both in the acutely underweight state and after weight gain. In the acutely underweight state, clear associations emerged between the appetite-regulating peptide agouti-related protein and cognitive flexibility, but not between agouti-related protein and visual or psychomotor processing speed. In the comparison of anorexia nervosa and unipolar affective disorders, membership of the anorexia nervosa patient group predicted a risk of cognitive dysfunction. It also emerged that adolescent patients with unipolar affective disorders performed only marginally worse than healthy controls, and only in psychomotor processing speed; there was, however, a general sex-related advantage for female participants in visual and psychomotor processing speed. The present findings underline the need to assess cognitive functions in adolescent patients with anorexia nervosa and unipolar affective disorders in routine clinical diagnostics. Patients could benefit from specific therapy programmes that mitigate or preventively address impairments of cognitive functions.
The European Parliament is without doubt the most powerful parliamentary assembly at the supranational level. This raises the question of how decisions are made in this parliament and how they can be justified. That is the main concern of this work, which draws on sociological approaches to explaining social action in order to answer this question and thereby opens a new avenue for observing parliamentary action. In doing so, it works out how important it is, when analysing political decision-making processes, to consider how political problems are interpreted by actors and presented to negotiating partners. Using the decision-making processes on the Services Directive, the REACH chemicals regulation and the TDIP (CIA) committee in the 2004–2009 legislative term as case studies, the social mechanism behind agreements in the European Parliament is described. Culture, as interpretation of the world, thus becomes the key to understanding political decisions at the supranational level.
Organic semiconductors possess novel, remarkable material properties that make them interesting both for fundamental research and for current technological development (e.g. organic light-emitting diodes, organic solar cells). Owing to the strong conformational freedom of the conjugated polymer chains, the multitude of possible arrangements and the weak intermolecular interactions usually lead to low structural order in the solid state. At the same time, the morphology has a direct influence on the electronic structure of organic semiconductors, which usually manifests itself in a marked reduction of the charge-carrier mobility compared with their inorganic counterparts. The mobility of charges in the semiconductor is thus one of the limiting factors for the performance and efficiency of functional organic devices. In 2009 a new donor/acceptor copolymer based on naphthalene diimide and bithiophene, P(NDI2OD‑T2), was introduced, which is distinguished by its exceptionally high charge-carrier mobility. In this work the charge-carrier mobility in P(NDI2OD‑T2) is determined, and the transport is found to be characterized by a low degree of energetic disorder. Although this material was initially described as amorphous, a detailed analysis of the optical properties of P(NDI2OD‑T2) shows that ordered precursors of supramolecular structures (aggregates) already exist in solution. Quantum-chemical calculations corroborate the observed spectral changes. Using NMR spectroscopy, the formation of the aggregates can be confirmed independently of optical spectroscopy. Analytical ultracentrifugation of P(NDI2OD‑T2) solutions suggests that aggregation takes place within individual chains, accompanied by a reduction of the hydrodynamic radius. The formation of supramolecular structures also plays a significant role in film formation and at the same time prevents the preparation of amorphous P(NDI2OD‑T2) films. By chemical modification of the P(NDI2OD‑T2) chain and different processing methods, a change in the degree of crystallinity and, at the same time, in the orientation of the crystalline domains was achieved and quantified by X-ray diffraction. High-resolution electron microscopy measurements directly image the lattice planes and their embedding in the semicrystalline structures. Combining the different methods yields an overall picture of the short- and long-range order in P(NDI2OD‑T2). By measuring the electron mobility of these layers, the anisotropy of charge transport along the crystallographic directions of P(NDI2OD‑T2) is characterized and the importance of the intramolecular interaction for efficient charge transport is worked out. At the same time, it becomes clear how the use of larger and planar functional groups leads to higher charge-carrier mobilities, which, compared with classical semicrystalline polymers, are less sensitive to structural disorder in the film.
Large-scale floodplain sediment dynamics in the Mekong Delta : present state and future prospects
(2014)
The Mekong Delta (MD) sustains the livelihood and food security of millions of people in Vietnam and Cambodia. It is known as the “rice bowl” of South East Asia and has one of the world’s most productive fisheries. Sediment dynamics play a major role in the high productivity of agriculture and fishery in the delta. However, the MD is threatened by climate change, sea level rise and unsustainable development activities in the Mekong Basin. Despite its importance and the expected threats, the understanding of present and future sediment dynamics in the MD is very limited. This is a consequence of its large extent, the intricate system of rivers, channels and floodplains and the scarcity of observations. This thesis therefore aimed at (1) the quantification of suspended sediment dynamics and associated sediment-nutrient deposition in floodplains of the MD, and (2) an assessment of the impacts of likely future boundary changes on the sediment dynamics in the MD. The applied methodology combines field experiments and numerical simulation to quantify and predict the sediment dynamics in the entire delta in a spatially explicit manner. The experimental part consists of a comprehensive procedure to monitor the quantity and spatial variability of sediment and associated nutrient deposition for large and complex river floodplains, including an uncertainty analysis. The measurement campaign deployed 450 sediment mat traps in 19 floodplains over the MD for a complete flood season. The data also support the quantification of nutrient deposition in floodplains based on laboratory analysis of nutrient fractions of the trapped sediment. The main findings are that the distributions of grain size and nutrient fractions of suspended sediment are homogeneous over the Vietnamese floodplains, but the sediment deposition within and between ring-dike floodplains shows very high spatial variability due to a high level of human interference. The experimental findings provide the essential data for setting up and calibrating a large-scale sediment transport model for the MD. For the simulation studies a large-scale hydrodynamic model was developed in order to quantify large-scale floodplain sediment dynamics. The complex river-channel-floodplain system of the MD is described by a quasi-2D model linking a hydrodynamic and a cohesive sediment transport model. The floodplains are described as quasi-2D representations linked to rivers and channels modeled in 1D by using control structures. The model setup, based on the experimental findings, ignored erosion and re-suspension processes due to the very high degree of human interference during the flood season. A two-stage calibration with six objective functions was developed in order to calibrate both the hydrodynamic and sediment transport modules. The objective functions include hydraulic and sediment transport parameters in main rivers, channels and floodplains. The model results show, for the first time, the spatio-temporal distribution of sediment and associated nutrient deposition rates in the whole MD. The patterns of sediment transport and deposition are quantified for different sub-systems. The main factors influencing spatial sediment dynamics are the network of rivers, channels and dike rings, sluice gate operations, the magnitude of the floods and tidal influences. The superposition of these factors leads to high spatial variability of sediment transport and deposition, in particular in the Vietnamese floodplains.
Depending on the flood magnitude, annual sediment loads reaching the coast vary from 48% to 60% of the sediment load at Kratie, the upper boundary of the MD. Deposited sediment varies from 19% to 23% of the annual load at Kratie in the Cambodian floodplains, and from 1% to 6% in the compartmented and diked floodplains in Vietnam. Annually deposited nutrients (N, P, K), which are associated with the sediment deposition, provide on average more than 50% of the mineral fertilizers typically applied for rice crops in non-flooded ring-dike compartments in Vietnam. This large-scale quantification provides a basis for estimating the benefits of the annual Mekong floods for agriculture and fishery, for assessing the impacts of future changes on the delta system, and for further studies on coastal deposition/erosion. For the estimation of future prospects, a sensitivity-based approach is applied to assess the response of floodplain hydraulics and sediment dynamics to changes in the delta boundaries, including hydropower development, climate change in the Mekong River Basin and effective sea level rise. The developed sediment model is used to simulate the mean sediment transport and sediment deposition in the whole delta system for the baseline (2000-2010) and future (2050-2060) periods. For each driver we derive a plausible range of future changes and discretize it into five levels, resulting in altogether 216 possible factor combinations. Our results thus cover all plausible future pathways of sediment dynamics in the delta based on current knowledge. The uncertainty of the range of the resulting impacts can be decreased once more information on these drivers becomes available. Our results indicate that hydropower development dominates the changes in sediment dynamics of the Mekong Delta, while sea level rise has the smallest effect. The floodplains of the Vietnamese Mekong Delta are much more sensitive to the changes than the other subsystems of the delta. In terms of median changes of the three combined drivers, the inundation extent is predicted to increase slightly, but the overall floodplain sedimentation would be reduced by approximately 40%, while the sediment load to the sea would diminish to half of the current rates. These findings provide new and valuable information on the possible impacts of future development on the delta, and indicate the most vulnerable areas. Thus, the presented results are a significant contribution to the ongoing international discussion on hydropower development in the Mekong Basin and its impact on the Mekong Delta.
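To make the reported percentage ranges concrete, the following minimal sketch partitions a purely hypothetical annual sediment load at Kratie according to the shares given above; the absolute load value is an assumption for illustration only.

```python
# Worked example (illustrative numbers): partitioning an annual sediment load at
# Kratie according to the percentage ranges reported above.
kratie_load_mt = 160.0  # hypothetical annual suspended sediment load [million tonnes]

shares = {
    "reaching the coast":             (0.48, 0.60),
    "Cambodian floodplain deposits":  (0.19, 0.23),
    "Vietnamese floodplain deposits": (0.01, 0.06),
}

for name, (lo, hi) in shares.items():
    print(f"{name}: {lo * kratie_load_mt:.0f}-{hi * kratie_load_mt:.0f} Mt/yr")
```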
The subsurface upper Palaeozoic sedimentary successions of the Loppa High half-graben and the Finnmark platform in the Norwegian Barents Sea (southwest Barents Sea) were investigated using 2D/3D seismic datasets combined with well and core data. These sedimentary successions represent a case of mixed siliciclastic-carbonate depositional systems, which formed during the earliest phase of the Atlantic rifting between Greenland and Norway. During the Carboniferous and Permian the southwest part of the Barents Sea was located along the northern margin of Pangaea, which experienced a northward drift at a speed of ~2–3 mm per year. This gradual shift in paleolatitudinal position is reflected by changes in regional climatic conditions: from warm-humid in the early Carboniferous, changing to warm-arid in the middle to late Carboniferous and finally to colder conditions in the late Permian. Such changes in paleolatitude and climate resulted in major changes in the style of sedimentation, including variations in the type of carbonate factories. The upper Palaeozoic sedimentary succession is composed of four major depositional units, comprising chronologically the Billefjorden Group, dominated by siliciclastic deposition in extensional tectonically controlled wedges, the Gipsdalen Group, dominated by warm-water carbonates, stacked buildups and evaporites, the Bjarmeland Group, characterized by cool-water carbonates as well as by the presence of buildup networks, and the Tempelfjorden Group, characterized by fine-grained sedimentation dominated by biological silica production. In the Loppa High, the integration of a core study with multi-attribute seismic facies classification made it possible to highlight the main sedimentary unconformities and to map the spatial extent of a buried paleokarst terrain. This geological feature is interpreted to have formed during a protracted episode of subaerial exposure occurring between the late Palaeozoic and the middle Triassic. Based on seismic sequence stratigraphy analysis, the palaeogeography of the Loppa High basin in time and space was furthermore reconstructed, and a new and more detailed tectono-sedimentary model for this area was proposed. In the Finnmark platform area, a detailed core analysis of two main exploration wells, combined with key 2D seismic sections located along the main depositional profile, allowed the evaluation of depositional scenarios for the two main lithostratigraphic units: the Ørn Formation (Gipsdalen Group) and the Isbjørn Formation (Bjarmeland Group). During the mid-Sakmarian, two major changes were observed between the two formations: (1) a variation in the type of the carbonate factories, which is interpreted to be depth-controlled, and (2) a change in platform morphology, which evolved from a distally steepened ramp to a homoclinal ramp. The results of this study may help support future reservoir characterization of the upper Palaeozoic units in the Barents Sea, particularly in the Loppa High half-graben and the Finnmark platform area.
What is a radical? Somebody who goes against mainstream opinions? An agitator who suggests transforming society at the risk of endangering its harmony? In the political context of the British Isles at the end of the eighteenth century, the word radical had a negative connotation. It referred to the Levellers and the English Civil War, and it recalled a period of history that was experienced as traumatic. Its stigmas were still vivid in the minds of the political leaders of the time. The reign of Cromwell was certainly the main reason for the general aversion to any form of virulent contestation of power, especially when it contained political claims.
In the English political context, radicalism can be understood as the different campaigns for parliamentary reforms establishing universal suffrage. However, it became evident that not all those who supported such a reform originated from the same social class or shared the same ideals. As a matter of fact, the reformist associations and their leaders often disagreed with each other. Edward Royle and James Walvin claimed that radicalism could not be analyzed historically as a concept, because it was not a homogeneous movement, nor did it have common leaders and a clear ideology. For them, radicalism was merely a loose concept, “a state of mind rather than a plan of action.”
At the beginning of the nineteenth century, the newspaper The Northern Star used the word radical in a positive way to designate a person or a group of people whose ideas conformed to those of the newspaper. An opponent of parliamentary reform, however, would use the same word in a negative way, in which case the word radical conveyed a notion of menace. From the very beginning, the term radical covered a large spectrum of ideas and conceptions. In fact, the plurality of what the word conveys is the main characteristic of what a radical is. As a consequence, because the radicals tended to distinguish themselves by their plurality and their differences rather than by common features, it seems impossible to define what radicalism (whose suffix in -ism implies that it designates a doctrine, an ideology) is. Nevertheless, the term is accepted by historians today. From the mid-twentieth century onwards, it was taken for granted that radicalism was a movement that fitted the democratic precepts (universal suffrage, freedom of speech) of our modern world.
Let us first look at radicalism as a convenient way to designate the different popular movements appealing to universal suffrage during the period 1792-1848. Through the succession of men and associations we can easily observe a long-lasting radical state of mind: Cartwright, Horne Tooke, Thomas Hardy, Francis Burdett, William Cobbett, Henry Hunt, William Lovett, Bronterre O’Brien, Feargus O’Connor, the London Society for Constitutional Information (SCI), the London Corresponding Society (LCS), the Hampden Clubs, the Chartists, etc. These organizations and people acknowledged having many things in common and being inspired by one another in carrying out their activities. These influences can be seen in the language and the political ideology that British historians call "constitutionalist", but also in the political organization of extra-parliamentary societies. Most of the radicals were eager to redress injustices and, in practice, they were inspired by a plan of action drawn from the pamphlets of the True Whigs of the eighteenth century. We contest the argument that the radicals lacked coherence and imagination or that they did not know how to put their ambitions into practice. In fact, their innovative forms of protest left a mark on history and found many successors in the twentieth century. The radicals’ hesitations were the result of prohibitive legislation that regulated the life of associations and of the refusal of the authorities to cooperate with them.
As mentioned above, the term radical was widely used, and the contemporaries of the period stretching from the French Revolution to Chartism never had to quarrel about the notions the word radical covered. However, this does not imply that all radicals were the same or that they belonged to the same entity. Like Horne Tooke, the Reverend and ultra-Tory Stephens was considered a radical, as were the shoemaker Thomas Hardy and the extravagant aristocrat Francis Burdett. Whether one belonged to the aristocracy, the middle class, the lower class or the Church, nothing could prevent one from being a radical. Indeed, anybody could be a radical in his own way. Radicalism was wide enough to embrace everybody, from revolutionary reformers to paternalistic Tories.
We were interested in clarifying the meaning of the term radical because its inclusive nature has been overlooked by historians. That is why the term radical figures in the original title of our dissertation, Les voix/voies radicales (radical voices/ways to radicalism). In the French title, the two words voix/voies are homonymous; the first, voix (voice), corresponds to people, the second, voies (ways), refers to ideas. By this we wanted to show that the word radical belongs to the sphere of ideas and common experience, but also to the nature of human beings.
Methodology
The thesis focuses less on the question of class and its formation than on the circumstances that brought people to change their destiny and that of their fellows, or to modernize the whole society. We challenged the work of E.P. Thompson, who in his famous book, The Making of the English Working Class, defined the radical movements in accordance with an idea of class.
How could a simple shoemaker, Thomas Hardy, become the centre of attention during a trial in which he was accused of being the mastermind of a modern revolution? What brought William Cobbett, an ultra-Tory, self-taught intellectual, to gradually espouse the cause of universal suffrage at a time when it was unpopular to do so? Why did a whole population gather to hear Henry Hunt, a gentleman farmer whose background did not destine him to become the champion of the people? It seemed that the easiest way to answer these questions and to understand the nature of the popular movements consisted in studying the lives of their leaders. We aimed at reconstructing the universe which surrounded the principal actors of the reform movements, as if we were privileged witnesses of these times.
The idea of associating the biographies of historical characters over a period of more than fifty years arose when we realized that key events of the reform movements echoed each other, such as the trial of Thomas Hardy in 1794 and the Peterloo massacre of 1819. The more we learned about the major events of radicalism and the lives of their leaders, the more we were intrigued. Finally, one could ask whether being a radical was not, after all, a question of character rather than one of class. The different popular movements in favour of a parliamentary reform were in fact far more inclusive and diversified than historians traditionally lead us to believe. For instance, once he had managed to gather a sufficient number of members of the popular classes, Thomas Hardy planned to hand control of his association to an intellectual elite led by Horne Tooke.
Moreover, supporters of the radical reforms followed leaders whose backgrounds were completely different from theirs. For example, O’Connor claimed royal descent from the ancient kings of Ireland. William Cobbett, owner of a popular newspaper, was proud of his origins as a farmer. William Lovett, close to the liberals and a few members of parliament, came from a very poor family of fishermen. We have thus brought together the lives of these five men, Thomas Hardy, William Cobbett, Henry Hunt, William Lovett and Feargus O’Connor, in order to compose a sort of saga of the radicals. This association gives us a better idea of the characteristics of the different movements in which they participated, but also throws light on the circumstances of their formation and their failures, on the particular atmosphere which prevailed at these times, on the men who influenced these epochs, and finally on the marks they left. These men were at the heart of a whole network and in contact with other actors of peripheral movements. They gathered around themselves close and loyal fellows with whom they shared many struggles, but also quarrelled and exchanged strong words.
The original part of our approach is reflected in the choice not to study the fluctuations of the radical movements in a linear fashion, where the story follows a strict chronology. We decided to split the main issue of the thesis into different topics. To do so, we have simply described the lives of the people who inspired these movements. Each historical figure occupies a chapter, and the general story follows a chronological progression. Sometimes we had to go back in time or discuss the same events in different chapters when the main protagonists lived in the same period.
Radical movements were influenced by people of different backgrounds. What united them above all was their wish to obtain a normalization of the political world, to redress injustices and to obtain parliamentary reform. We paid particular attention to the moments where the lives of these men corresponded to an intense activity of the radical movement or to a transition in its ideas and organization. We were not so much interested in their feelings about secondary topics, nor in their affective relations. Furthermore, we had little interest in their opinions on matters not connected to our topic, unless they helped us better understand their personalities. We have purposely reduced the description of our protagonists to their radical sphere. Of course, we discussed their background and their intellectual development; people are prone to reversals of opinion, and the case of Cobbett is the most striking one.
The lives of these personalities coincided with particular moments of the radical movement, such as the first popular political associations, the first open-air mass meetings, the first popular newspapers, etc. We wanted to emphasize the personalities of those who delivered speeches and who were present in the radical associations. One could argue that the drawback of focusing on a particular person is the high risk of overlooking events and people who were not part of his world. However, it was essential to depart from the kind of analysis or chronicle which had prevailed in studies of the radical movements, as we aimed at offering a point of view that complemented the preceding works on the topic. In order to do so, we have deliberately put the human character of the radical movement at the centre of our work and used the techniques of biography as a narrative thread.
Conclusion
The life of each historical figure that we have portrayed corresponded to a particular epoch of the radical movement. Comparing the speeches of the radical leaders over a long period of time, we noticed that the radical ideology evolved. The principles of the Rights of Man faded away and gave place to more concrete reasoning, such as the right to benefit from one’s own labour. This transition is characterized by the Chartist period of Feargus O’Connor. This does not mean that collective memory and radical tradition ceased to play an important part. Constitutional rhetoric and popular myths always appealed to the popular classes; indeed, through them they defined their identity and justified their claims to universal suffrage.
We focused on the lives of a few influential leaders of radicalism in order to understand its evolution and its nature. The description of their lives constituted our narrative thread and enabled us to maintain consistency in our thesis. While the chapters are independent of one another, events and speeches correspond to each other. Sometimes we might believe we were witnessing a repetition of facts and events, as if history were repeating itself endlessly. However, like technical progress, the spirit of the age, the Zeitgeist, undergoes changes and mutations. These features are fundamental elements for comprehending historical phenomena; the latter cannot be reduced to a philosophical, sociological, or historical concept. History is a science with the particularity that the physical reality of its phenomena has a human dimension. As a consequence, it is essential not to lose touch with the human aspect of history when one pursues studies and intellectual activities on a historical phenomenon.
We decided to take a route opposite to the one taken by many historians. We first identified influential people from different epochs before entering into conceptual analysis. Thanks to this compilation of radical leaders, a new and fresh look at the understanding of radicalism became possible. Of course, we were not the first to have studied them, but we ordered them following a chronology, much as Plutarch enjoyed juxtaposing Greek and Roman historical figures. Through this technique we wanted to highlight the features of the radical leaders’ speeches, personalities and epochs, but also their differences. Finally, we tried to draw the outlines and the heart of the different radical movements in order to follow the ways that led to radicalism. We do not pretend to have offered an original and exclusive definition of radicalism; we mainly wanted to understand the nature of what defines somebody as a radical and to explain why thousands of people decided to believe in such men. Moreover, we wanted to distance ourselves from the ideological debate of the Cold War, which also permeated the interpretation of past events. Too often, the history of radicalism was either narrated with a form of revolutionary nostalgia or in order to praise the merits of liberalism.
While the great mass meetings ended in the mid-nineteenth century with the fall of Chartism, the practice spread throughout the world in the twentieth century. Incidentally, the Arab Spring at the beginning of the twenty-first century demonstrated that a popular platform was the best way for the people to claim their rights and destabilize a political system which they found too authoritarian. Through protest the people express an essential quality of revolt, which is an expression of emancipation from fear. From then on, a despotic regime loses the psychological terror which helped it to maintain itself in power. The balance of power between the government and its people also takes a new turn. The radicals won this psychological victory more than 150 years ago, and yet universal suffrage was obtained only a century later. From the acceptance of the principles of liberty to their cultural practice, a long route has to be taken to change people’s minds. It is a wearisome struggle for the most vulnerable people. In the light of Western history, fundamental liberties must be constantly defended. Paradoxically, revolt is an essential and constitutive element of the maintenance of democracy.
Mars is one of the best candidates among planetary bodies for supporting life. The presence of water in the form of ice and atmospheric vapour together with the availability of biogenic elements and energy are indicators of the possibility of hosting life as we know it. The occurrence of permanently frozen ground (permafrost) is a common phenomenon on Mars, and it shows multiple morphological analogies with terrestrial permafrost. Despite the extremely inhospitable conditions, highly diverse microbial communities inhabit terrestrial permafrost in large numbers. Among these are methanogenic archaea, which are anaerobic chemotrophic microorganisms that meet many of the metabolic and physiological requirements for survival in the martian subsurface. Moreover, methanogens from Siberian permafrost are extremely resistant to different types of physiological stress as well as to simulated martian thermo-physical and subsurface conditions, making them promising model organisms for potential life on Mars. The main aims of this investigation are to assess the survival of methanogenic archaea under Mars conditions, focusing on methanogens from Siberian permafrost, and to characterize their biosignatures by means of Raman spectroscopy, a powerful technology for microbial identification that will be used in the ExoMars mission. For this purpose, methanogens from Siberian permafrost and non-permafrost habitats were subjected to simulated martian desiccation by exposure to an ultra-low subfreezing temperature (−80 °C) and to Mars regolith (S-MRS and P-MRS) and atmospheric analogues. They were also exposed to different concentrations of perchlorate, a strong oxidant found in martian soils. Moreover, the biosignatures of methanogens were characterized at the single-cell level using confocal Raman microspectroscopy (CRM). The results showed survival and methane production in all methanogenic strains under simulated martian desiccation. After exposure to subfreezing temperatures, Siberian permafrost strains had a faster metabolic recovery, whereas the membranes of non-permafrost methanogens remained intact to a greater extent. The strain Methanosarcina soligelidi SMA-21 from Siberian permafrost showed significantly higher methane production rates than all other strains after exposure to martian soil and atmospheric analogues, and all strains survived the presence of perchlorate at the concentration found on Mars. Furthermore, CRM analyses revealed remarkable differences in the overall chemical composition of permafrost and non-permafrost strains of methanogens, regardless of their phylogenetic relationship. The convergence of the chemical composition in non-sister permafrost strains may be the consequence of adaptations to the environment, and could explain their greater resistance compared to the non-permafrost strains. As part of this study, Raman spectroscopy was evaluated as an analytical technique for remote detection of methanogens embedded in a mineral matrix. This thesis contributes to the understanding of the survival limits of methanogenic archaea under simulated martian conditions to further assess the hypothetical existence of life similar to methanogens in the martian subsurface. In addition, the overall chemical composition of methanogens was characterized for the first time by means of confocal Raman microspectroscopy, with potential implications for astrobiological research.
Monoclonal antibodies (mAbs) are engineered immunoglobulin G (IgG) molecules that have been used for more than 20 years as targeted therapies in oncology, infectious diseases and (auto-)immune disorders. Their protein nature greatly influences their pharmacokinetics (PK), which shows typical linear and non-linear behaviors.
While it is common to use empirical modeling to analyze clinical PK data of mAbs, there is neither clear consensus nor guidance on how, on the one hand, to select the structure of classical compartment models and, on the other hand, to interpret PK parameters mechanistically. The mechanistic knowledge present in physiologically-based PK (PBPK) models is likely to support rational classical model selection, and thus a methodology to link empirical and PBPK models is desirable. However, published PBPK models for mAbs are quite diverse with respect to the physiology of distribution spaces and the parameterization of the non-specific elimination involving the neonatal Fc receptor (FcRn) and endogenous IgG (IgGendo). The remarkable discrepancy between the simplicity of biodistribution data and the complexity of published PBPK models translates into parameter identifiability issues.
In this thesis, we address this problem with a simplified PBPK model derived from a hierarchy of more detailed PBPK models and based on simplifications of the tissue distribution model. With the novel tissue model we are breaking new ground in mechanistic modeling of mAb disposition: we demonstrate that binding to FcRn is indeed linear and that it is not possible to infer which tissues are involved in the unspecific elimination of wild-type mAbs. We also provide a new approach to predict tissue partition coefficients based on mechanistic insights: we directly link tissue partition coefficients (Ktis) to data-driven and species-independent published antibody biodistribution coefficients (ABCtis), thus ensuring extrapolation from pre-clinical species to humans with the simplified PBPK model. We further extend the simplified PBPK model to account for a target, which is relevant to characterize the non-linear clearance due to mAb-target interaction.
With model reduction techniques, we reduce the dimensionality of the simplified PBPK model to design 2-compartment models, thus guiding classical model development with a physiological and mechanistic interpretation of the PK parameters. We finally derive a new scaling approach for anatomical and physiological parameters in PBPK models that translates inter-individual variability into the design of mechanistic covariate models with a direct link to classical compartment models, which is especially useful for population PK analysis during clinical development.
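A minimal sketch of the kind of mechanistically interpretable 2-compartment model such a reduction can yield is given below; it combines a linear (non-specific) clearance with a saturable, target-mediated elimination term. All parameter names and values are hypothetical placeholders, not results from the thesis.

```python
# Minimal 2-compartment mAb model with linear plus target-mediated (Michaelis-Menten)
# elimination; parameters are illustrative placeholders.
from scipy.integrate import solve_ivp

# parameters (hypothetical): volumes [L], clearances [L/day], MM constants
V1, V2 = 3.0, 2.5          # central / peripheral volumes
CL, Q = 0.2, 0.5           # linear clearance, inter-compartmental clearance
Vmax, Km = 1.0, 0.5        # target-mediated elimination capacity [mg/day], constant [mg/L]

def rhs(t, y):
    C1, C2 = y[0] / V1, y[1] / V2          # concentrations from amounts
    linear = CL * C1                        # non-specific (FcRn-protected) clearance
    tmdd = Vmax * C1 / (Km + C1)            # saturable, target-mediated clearance
    dist = Q * (C1 - C2)                    # distribution between compartments
    return [-linear - tmdd - dist, dist]

dose_mg = 100.0
sol = solve_ivp(rhs, (0, 28), [dose_mg, 0.0], dense_output=True)
print(sol.y[0, -1] / V1)  # central concentration [mg/L] after 28 days
```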
Mathematical modeling of biological systems is a powerful tool to systematically investigate the functions of biological processes and their relationship with the environment. To obtain accurate and biologically interpretable predictions, a modeling framework has to be devised whose assumptions best approximate the examined scenario and which copes with the trade-off in the complexity of the underlying mathematical description: attention to detail versus high coverage. Correspondingly, the system can be examined in detail on a smaller scale or in a simplified manner on a larger scale. In this thesis, the role of photosynthesis and its related biochemical processes in the context of plant metabolism was dissected by employing modeling approaches ranging from kinetic to stoichiometric models. The Calvin-Benson cycle, as the primary pathway of carbon fixation in C3 plants, is the initial step for producing starch and sucrose, which are necessary for plant growth. Based on an integrative analysis for model ranking applied to the largest compendium of (kinetic) models for the Calvin-Benson cycle, those suitable for the development of metabolic engineering strategies were identified. Driven by the question why starch rather than sucrose is the predominant transitory carbon storage in higher plants, the metabolic costs for their synthesis were examined. The incorporation of the maintenance costs for the involved enzymes provided model-based support for the preference of starch as transitory carbon storage, exploiting only the stoichiometry of the synthesis pathways. Many photosynthetic organisms have to cope with processes which compete with carbon fixation, such as photorespiration, whose impact on plant metabolism is still controversial. A systematic model-oriented review provided a detailed assessment of the role of this pathway in inhibiting the rate of carbon fixation, bridging carbon and nitrogen metabolism, shaping the C1 metabolism, and influencing redox signal transduction. The demand for understanding photosynthesis in its metabolic context calls for the examination of the related processes of the primary carbon metabolism. To this end, the Arabidopsis core model was assembled via a bottom-up approach. This large-scale model can be used to simulate photoautotrophic biomass production, as an indicator for plant growth, under so-called optimal, carbon-limiting and nitrogen-limiting growth conditions. Finally, the introduced model was employed to investigate the effects of the environment, in particular of nitrogen, carbon and energy sources, on the metabolic behavior. This resulted in a purely stoichiometry-based explanation for the experimental evidence for preferred simultaneous acquisition of nitrogen in both forms, as nitrate and ammonium, for optimal growth in various plant species. The findings presented in this thesis provide new insights into the behavior of plant systems, further support existing opinions for which experimental evidence is mounting, and posit novel hypotheses for further directed large-scale experiments.
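As an illustration of the purely stoichiometry-based reasoning used for the large-scale analyses, the following sketch performs flux balance analysis on a toy three-reaction network (not the Arabidopsis core model): the biomass flux is maximized subject to steady state and an uptake bound.

```python
# Minimal flux balance analysis sketch on a toy network:
# maximize the biomass reaction subject to steady state S.v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Columns: v_uptake, v_conversion, v_biomass; rows: metabolites A, B
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by the biomass reaction
])

bounds = [(0, 10), (0, None), (0, None)]   # uptake limited to 10 units (e.g. carbon supply)
c = [0, 0, -1]                              # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun)    # -> 10.0, limited by the uptake bound
```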
Bitter taste presumably serves mammals to detect and avoid toxic substances. However, bitter compounds can also be healthy, or they are often readily ingested with food. Whether they can be distinguished by taste is, however, a matter of debate. Bitter compounds are detected by oral bitter taste receptors, the TAS2Rs (human) or Tas2rs (murine). There is, however, growing evidence in the literature that Tas2rs are not only expressed in extragustatory organs but might also fulfil important functions there, which in turn requires elucidating their modes of action, which are not yet fully understood. It is, for instance, still unknown whether all Tas2rs identified as functional so far really serve gustatory functions.
As part of the characterization of newly generated mouse lines genetically modified in the locus of the bitter taste receptor Tas2r131, the present work examined the gustatory and extragustatory expression of Tas2r131. The finding that Tas2r131 was detected not only in fungiform papillae, circumvallate and foliate papillae (VP+FoP), palate, nasopalatine duct, vomeronasal organ and epiglottis, but also in the thymus, testes and caput epididymis, in brain areas and in the geniculate ganglion, formed the basis for further studies. The present work also shows that Tas2r108, Tas2r126, Tas2r135, Tas2r137 and Tas2r143 are expressed in blood, pointing to a heterogeneous function of the Tas2rs. In addition, the expression of all 35 Tas2rs described as functional was demonstrated for the first time in the gustatory VP+FoP epithelium of C57BL/6 mice, underlining their relevance as functional taste receptors.
Further investigations into a possible ability to discriminate between bitter compounds, carried out in taste papillae of mice with fluorescently labelled or ablated Tas2r131 cells, showed that Tas2r131-expressing cells form a subpopulation of Tas2r cells. Moreover, ordered Tas2r expression patterns exist within the bitter cells that follow the chromosomal location of their genes. Isolated bitter cells respond heterogeneously to known bitter compounds. And mice with an ablated Tas2r131 cell population still possess other Tas2r cells and can accordingly hardly taste some bitter compounds any more, while still tasting others very well. These findings demonstrate the existence of distinct gustatory Tas2r cell populations, which provide the prerequisite for detecting bitter compounds heterogeneously. Whether this forms the basis for divergent behaviour towards harmful and harmless, or even beneficial, bitter compounds can in future be tested in behavioural experiments with the help of the Tas2r expression patterns presented here.
Bitter taste perception in mammals thus emerges as a highly complex mechanism, whose multilayered nature is once again illustrated by the heterogeneous Tas2r expression and function patterns newly revealed here.
This work investigates nonlinear coupling mechanisms of acoustic oscillators that can lead to synchronization. Building on the questions raised in previous work, theoretical and experimental studies as well as numerical simulations are used to identify the elements of sound generation in the organ pipe and the mechanisms of mutual interaction between organ pipes. From this, a nonlinearly coupled model of self-excited oscillators, based entirely on the fundamental principles of aeroacoustics and fluid dynamics, is developed for the first time to describe the behaviour of two interacting organ pipes. The model calculations are compared with the experimental findings. It turns out that the developed oscillator model correctly describes sound generation and the coupling mechanisms of organ pipes in large parts. In particular, it clarifies the cause of the nonlinear relationship between coupling strength and synchronization of the coupled two-pipe system, which manifests itself in a nonlinear shape of the Arnold tongue. With the insights gained, the influence of the room on the sound generation of organ pipes is considered. For this purpose, numerical simulations of the interaction of an organ pipe with various room geometries, such as plane, convex, concave and serrated geometries, are examined as examples. The influence of swell boxes on the sound generation and timbre of the organ pipe is also studied. In further, novel synchronization experiments with identically tuned organ pipes, as well as with mixtures, synchronization is investigated for different horizontal and vertical pipe spacings in the plane of sound radiation. The spatially isotropic discontinuities observed here for the first time in the oscillation behaviour of the coupled pipe systems point to distance-dependent switching between anti-phase and in-phase synchronization regimes. Finally, the possibility is documented of reproducing the phenomenon of synchronization of two organ pipes realistically by numerical simulations, that is, by treating the compressible Navier-Stokes equations with appropriate boundary and initial conditions. This, too, is a novelty.
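A strongly simplified stand-in for such a coupled system of self-excited oscillators (not the aeroacoustically derived model of the thesis) is a pair of detuned, linearly coupled Van der Pol oscillators; the sketch below integrates them and compares their oscillation rates as a crude synchronization check. All parameter values are illustrative assumptions.

```python
# Two linearly coupled Van der Pol oscillators as a toy model of mutually
# interacting self-excited sound generators; parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0                      # strength of the self-excitation (nonlinearity)
w1, w2 = 1.00, 1.02           # slightly detuned natural frequencies
kappa = 0.05                  # coupling strength between the two "pipes"

def rhs(t, y):
    x1, v1, x2, v2 = y
    a1 = mu * (1 - x1**2) * v1 - w1**2 * x1 + kappa * (x2 - x1)
    a2 = mu * (1 - x2**2) * v2 - w2**2 * x2 + kappa * (x1 - x2)
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0, 500), [1.0, 0.0, 0.5, 0.0], max_step=0.05)

# Crude synchronization check: compare zero-crossing rates of the two oscillators
# over the last part of the run (equal rates indicate frequency locking).
def rate(x, t):
    crossings = np.where(np.diff(np.sign(x)) > 0)[0]
    return len(crossings) / (t[-1] - t[0])

tail = sol.t > 250
print(rate(sol.y[0, tail], sol.t[tail]), rate(sol.y[2, tail], sol.t[tail]))
```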
It is generally agreed that stars typically form in open clusters and stellar associations, but little is known about the structure of the open cluster system. Do open clusters and stellar associations form in isolation, or do they prefer to form in groups and complexes? Open cluster groups and complexes could verify that star-forming regions are larger than expected, which would explain the chemical homogeneity over large areas in the Galactic disk. They would also define an additional level in the hierarchy of star formation and could be used as tracers for the scales of fragmentation in giant molecular clouds. Furthermore, open cluster groups and complexes could affect Galactic dynamics and should be considered in investigations and simulations of dynamical processes, such as radial migration, disc heating, differential rotation, kinematic resonances, and spiral structure.
In the past decade there were a few studies on open cluster pairs (de La Fuente Marcos & de La Fuente Marcos 2009a,b,c) and on open cluster groups and complexes (Piskunov et al. 2006). The former only considered spatial proximity for the identification of the pairs, while the latter also required the tangential velocities of the members to be similar. In this work I used the full set of 6D phase-space information to draw a more detailed picture of these structures. For this purpose I utilised the most homogeneous cluster catalogue available, namely the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005a,b), which contains parameters for 650 open clusters and compact associations, as well as for their uniformly selected members. Additional radial velocity (RV) and metallicity ([M/H]) information on the members was obtained from the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006; Kordopatis et al. 2013) for 110 and 81 clusters, respectively. The RAVE sample was cleaned considering quality parameters and flags provided by RAVE (Matijevič et al. 2012; Kordopatis et al. 2013). To ensure that only real members were included in the mean values, the cluster membership, as provided by Kharchenko et al. (2005a,b), was also considered for the stars cross-matched in RAVE.
6D phase-space information could be derived for 432 out of the 650 COCD objects, and I used an adaptation of the Friends-of-Friends algorithm, as used in cosmology, to identify potential groupings. The vast majority of the 19 identified groupings were pairs, but I also found four groups of 4-5 members and one complex with 15 members. For the verification of the identified structures, I compared the results to a randomly selected subsample of the catalogue of the Milky Way global survey of Star Clusters (MWSC; Kharchenko et al. 2013), which became available recently and was used as a reference sample. Furthermore, I implemented Monte-Carlo simulations with randomised samples created from two distinct input distributions for the spatial and velocity parameters: on the one hand, assuming a uniform distribution in the Galactic disc, and on the other hand, assuming the COCD data distributions to be representative of the whole open cluster population.
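The Friends-of-Friends idea used here can be sketched in a few lines: two clusters are linked if both their spatial and their velocity separations fall below chosen linking lengths, and groupings are the connected components of the resulting graph. The implementation and the linking lengths below are illustrative assumptions, not those of the thesis.

```python
# Minimal Friends-of-Friends sketch on 6D phase-space data (positions + velocities).
import numpy as np

def friends_of_friends(xyz, uvw, r_link, v_link):
    """xyz: (N, 3) positions [pc]; uvw: (N, 3) velocities [km/s]."""
    n = len(xyz)
    labels = np.arange(n)                      # each cluster starts in its own group

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]      # path halving
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dr = np.linalg.norm(xyz[i] - xyz[j])
            dv = np.linalg.norm(uvw[i] - uvw[j])
            if dr < r_link and dv < v_link:
                labels[find(i)] = find(j)      # union of the two groups

    return np.array([find(i) for i in range(n)])

# Toy example: three clusters, two of them close in both position and velocity.
xyz = np.array([[0, 0, 0], [80, 20, 10], [900, 500, 100]], dtype=float)
uvw = np.array([[10, 5, 0], [11, 6, 1], [-30, 40, 5]], dtype=float)
print(friends_of_friends(xyz, uvw, r_link=100.0, v_link=5.0))
```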
The results suggested that the majority of the identified pairs are rather chance alignments, but the groups and the complex seemed to be genuine. A comparison of my results to the pairs, groups and complexes proposed in the literature yielded a partial overlap, most likely because of selection effects and the different parameters considered. This provides further verification of the existence of such structures.
The characteristics of the found groupings favour a scenario in which the members of an open cluster grouping originate from a common giant molecular cloud and formed in a single, but possibly sequential, star formation event. Moreover, the fact that the young open cluster population showed smaller spatial separations between nearest neighbours than the old cluster population indicated that the lifetime of open cluster groupings is most likely comparable to that of the Galactic open cluster population itself. Still, even among the old open clusters I could identify groupings, which suggests that the detected structures could in some cases be more long-lived than one might think.
In this thesis I could only present a pilot study on structures in the Galactic open cluster population, since the data sample used was highly incomplete. For further investigations a far more complete sample would be required. One step in this direction would be to use data from large current surveys, like SDSS, RAVE, Gaia-ESO and VVV, as well as including results from studies on individual clusters. Later the sample can be completed by data from upcoming missions, like Gaia and 4MOST. Future studies using this more complete open cluster sample will reveal the effect of open cluster groupings on star formation theory and their significance for the kinematics, dynamics and evolution of the Milky Way, and thereby of spiral galaxies.
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and the development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) through surface subsidence due to the loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if its stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, which would accelerate warming and induce more permafrost thaw and carbon release. To assess this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (concerning degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- What is the susceptibility of the Yedoma region's carbon for future decomposition?
To address these three research questions, an interdisciplinary approach was applied, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling. To provide a panarctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted with a multi-modal grain-size distribution, indicating various (re-)transport processes. This contradicts the popular pure loess deposition hypothesis for the origin of Yedoma permafrost. The absence of large-scale grinding processes via glaciers and ice sheets in the northeast Siberian lowlands, processes which are necessary to create loess as a material source, suggests a polygenetic origin of the Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs, Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt organic carbon, or a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Despite the fact that these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
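The bootstrapping idea behind the quoted uncertainty ranges can be sketched as follows: site-level carbon densities are resampled with replacement, the pool is recomputed for each resample, and the spread of the resulting estimates yields a confidence interval. The densities, region volume and interval below are invented for illustration and do not reproduce the thesis calculations.

```python
# Illustrative bootstrap of an uncertainty range for a regional carbon pool.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-site organic carbon densities [kg C / m3] and a deposit volume [km3]
site_density = np.array([18.0, 22.5, 30.1, 15.3, 25.7, 19.9, 27.4, 21.0])
region_volume_km3 = 10_000.0

def pool_gt(densities):
    """Carbon pool in gigatonnes from mean density and total deposit volume."""
    mean_kg_per_m3 = densities.mean()
    return mean_kg_per_m3 * region_volume_km3 * 1e9 / 1e12   # km3 -> m3, kg -> Gt

estimates = np.array([
    pool_gt(rng.choice(site_density, size=site_density.size, replace=True))
    for _ in range(10_000)
])
low, high = np.percentile(estimates, [2.5, 97.5])
print(f"carbon pool: {pool_gt(site_density):.0f} Gt (95% CI {low:.0f}-{high:.0f} Gt)")
```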
When the carbon is thawed and reintroduced into the active carbon cycle, the quality of the organic matter becomes relevant. Investigations of Yedoma and thermokarst organic matter quality showed that Yedoma and thermokarst organic matter exhibit no depth-dependent quality trend. This is evidence that, after freezing, the ancient organic matter is preserved in a state of constant quality. The applied alkane and fatty-acid-based biomarker proxies, including the carbon-preference and the higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different qualities of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences shows that organic matter biodegradability depends on different decomposition trajectories and on the previous decomposition/incorporation history. Finally, the fate of the organic matter has been assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt and ~310 Gt by the years 2100 and 2300, respectively. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100. The model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well preserved and therefore available for decomposition after thaw. The missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When this organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to global warming by the year 2300.
The objective of this research is to provide communities of deaf and functionally illiterate users with applications that have easy-to-use interfaces, enabling them to work without human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate people cannot access the services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary was developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in Italian Sign Language as output. The dictionary contains 3082 signs as a set of avatar animations, each linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN). LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN makes it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS was transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner (a schematic sketch of this pipeline is given below). The translation procedure was implemented in the Java Application Building Center (jABC), a framework for eXtreme Model-Driven Design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction.
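The Italian-to-LIS translation pipeline referred to above can be pictured as a chain of independent services. The sketch below is purely schematic: the actual system is realized as service-oriented processes in jABC, and the module names, toy lexicon, and data types used here are placeholders, not the thesis implementation.

```python
from dataclasses import dataclass

# Schematic of a modular text-to-sign-language pipeline: each stage is an
# independent, replaceable service, mirroring the parser -> semantic
# interpreter -> generator -> spatial allocation planner decomposition.
@dataclass
class SignPlan:
    glosses: list[str]    # LIS glosses in signing order
    locations: list[str]  # spatial placement for each sign

def parse(italian_text: str) -> list[str]:
    """Placeholder parser: tokenize the Italian input."""
    return italian_text.lower().split()

def interpret(tokens: list[str]) -> list[str]:
    """Placeholder semantic interpreter: map tokens to LIS glosses."""
    lexicon = {"casa": "HOUSE", "grande": "BIG"}  # toy lexicon
    return [lexicon.get(t, t.upper()) for t in tokens]

def generate(glosses: list[str]) -> list[str]:
    """Placeholder generator: reorder glosses into target sign order."""
    return glosses

def plan_space(glosses: list[str]) -> SignPlan:
    """Placeholder spatial allocation planner: assign signing locations."""
    return SignPlan(glosses, ["neutral"] * len(glosses))

def translate(italian_text: str) -> SignPlan:
    return plan_space(generate(interpret(parse(italian_text))))

print(translate("casa grande"))
```

Keeping each stage behind its own interface is what allows individual modules to be swapped or reused, which is the reusability benefit attributed to the service-oriented transformation.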
In a second study, we developed four different interfaces to analyze the usability and the effect of online assistance (consistent help) for functionally illiterate users, comparing different help modes (textual, vocal and virtual character) with respect to the performance of semi-literate users. In the newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested (the standard SUS scoring is sketched below). The results show that the performance of semi-literate users improved significantly when online assistance was available. The dissertation thus introduces a development approach in which virtual characters are used as additional support for barely literate or otherwise challenged users. Such components enhance the utility of an application by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in this domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
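For reference, the System Usability Scale rating mentioned above is computed with the standard Brooke (1996) scoring scheme. The sketch below shows that calculation with made-up ratings, not data from the Rwandan studies.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten 1-5 ratings.

    Standard Brooke scoring: odd-numbered (positively worded) items contribute
    (rating - 1), even-numbered (negatively worded) items contribute
    (5 - rating); the sum is scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # index 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative ratings for one participant (not data from the study)
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # -> 85.0
```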
This thesis investigates the influence of ionic liquids both on the recombination process of photolytically generated lophyl radicals and on photoinduced polymerization. The focus was on pyrrolidinium-based ionic liquids as well as polymerizable imidazolium-based ionic liquids. Using UV-Vis spectroscopy, the recombination kinetics of the lophyl radicals generated photolytically from o-Cl-HABI were followed at different temperatures in the ionic liquids and, for comparison, in selected organic solvents, and the rate constants of radical recombination were determined. The recombination process was characterized in particular by means of the activation parameters obtained via the Eyring equation (a sketch of this evaluation is given below). It could be shown that, in contrast to the organic solvents, recombination of the lophyl radicals in the ionic liquids occurs to a large extent within the solvent cage. Furthermore, several possible co-initiators for the use of o-Cl-HABI as a radical generator in photoinduced polymerizations were examined by photocalorimetric measurements. In this context, a new aspect of chain transfer from the lophyl radical to the heterocyclic co-initiator was also presented. In addition, photoinduced polymerizations using an initiator system consisting of o-Cl-HABI as radical generator and a heterocyclic co-initiator were studied in the ionic liquids. These investigations comprise, on the one hand, photocalorimetric measurements of the photoinduced polymerization of polymerizable imidazolium-based ionic liquids. On the other hand, the photoinduced polymerization of methyl methacrylate in pyrrolidinium-based ionic liquids was investigated. The influence of parameters such as time, temperature, viscosity, the solvent cage effect, and the alkyl chain length on the cation of the ionic liquids on the yields, molar masses and molar mass distributions of the polymers was examined.
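The evaluation of temperature-dependent rate constants via the Eyring equation, as described above, amounts to a linear fit of ln(k/T) against 1/T. The following sketch illustrates this with hypothetical rate constants; the values, units and temperature range are not those of the o-Cl-HABI measurements.

```python
import numpy as np

# Eyring analysis: ln(k/T) = ln(k_B/h) + dS/R - dH/(R*T), so a linear fit of
# ln(k/T) vs 1/T yields the activation enthalpy and entropy.
k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J*s
R   = 8.314462618     # J/(mol*K)

T = np.array([293.15, 303.15, 313.15, 323.15])   # K
k = np.array([1.2e3, 2.1e3, 3.5e3, 5.6e3])       # hypothetical rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
dH = -slope * R                          # activation enthalpy, J/mol
dS = (intercept - np.log(k_B / h)) * R   # activation entropy, J/(mol*K)
print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```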
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission expected from the shower model (a simplified sketch of this comparison is given below). To suppress the dominant background of charged cosmic rays, events are selected according to several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results for this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The results emphasise the capability of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold ever reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
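The core idea of the model-based reconstruction, comparing the measured camera image with the Cherenkov light predicted by the semi-analytical shower model, can be caricatured as a pixel-wise likelihood fit. The sketch below uses a plain Poisson term for illustration only; the actual H.E.S.S. likelihood additionally accounts for night-sky background and photosensor resolution, and the function here is an assumption, not the experiment's code.

```python
import numpy as np

def neg_log_likelihood(measured_pe, predicted_pe):
    """Simplified Poisson negative log-likelihood summed over camera pixels.

    measured_pe:  observed photoelectron counts per pixel
    predicted_pe: expectation from the shower model for trial parameters
                  (direction, impact point, energy, depth of shower maximum)
    The constant log(n!) term is dropped because it does not depend on the
    model parameters being fitted.
    """
    mu = np.clip(predicted_pe, 1e-9, None)   # avoid log(0) in empty pixels
    return np.sum(mu - measured_pe * np.log(mu))

# In a fit, this value would be minimized over the shower parameters that
# enter the model prediction; the best-fit parameters define the
# reconstructed γ-ray properties.
```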