Background/Purpose
Muscular reflex responses of the lower extremities to sudden gait disturbances are related to postural stability and injury risk. Chronic ankle instability (CAI) has been shown to affect the activity of distal leg muscles during walking, but its effects on proximal leg muscle activity, on both the injured (IN) and uninjured (NON) side, remain unclear. The aim was therefore to compare motor control strategies at the ipsilateral and contralateral proximal joints during unperturbed and perturbed walking between individuals with CAI and matched controls.
Materials and methods
In a cross-sectional study, 13 participants with unilateral CAI and 13 controls (CON) walked on a split-belt treadmill with and without random left- and right-sided perturbations. EMG amplitudes of lower-extremity muscles were analyzed 200 ms after perturbations, as well as 200 ms before and 100 ms after (Post100) heel contact during walking. Onset latencies were analyzed at heel contact and after perturbations. Statistical significance was set at alpha≤0.05, and 95% confidence intervals were applied to determine group differences. Cohen's d effect sizes were calculated to evaluate the magnitude of differences.
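The group-comparison statistics named above (Cohen's d with a pooled standard deviation, and a normal-approximation 95% confidence interval for the mean difference) can be sketched in a few lines. The sample amplitudes below are illustrative values, not study data.

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def ci95_mean_diff(a, b):
    """Normal-approximation 95% CI for the difference of means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    diff = ma - mb
    return diff - 1.96 * se, diff + 1.96 * se

# Illustrative EMG amplitudes (arbitrary units), NOT data from the study
cai = [1.8, 2.1, 2.4, 2.0, 2.2]
con = [1.5, 1.6, 1.9, 1.7, 1.4]
d = cohens_d(cai, con)
lo, hi = ci95_mean_diff(cai, con)
```

A CI excluding zero signals a group difference; the d value grades its size (the paper's range d = 0.30–1.09 spans small to large effects).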
Results
Participants with CAI showed increased EMG amplitudes for the NON rectus abdominis at Post100 and shorter latencies for the IN gluteus maximus after heel contact compared to CON (p<0.05). Overall, leg muscles (rectus femoris, biceps femoris, and gluteus medius) activated earlier and less bilaterally (d = 0.30–0.88), and trunk muscles (bilateral rectus abdominis and NON erector spinae) activated earlier and more in the CAI group than in the CON group (d = 0.33–1.09).
Conclusion
Unilateral CAI bilaterally alters the motor control strategy around the proximal joints. Neuromuscular training for the muscles whose motor control strategy is altered by CAI could be taken into consideration when planning rehabilitation.
Flux-P
(2012)
Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be measured directly and have to be estimated, for instance via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps to be carried out manually, restricting its use for data interpretation to a rather small number of experiments. In this paper, we present Flux-P, an approach to automating and standardizing 13C-based metabolic flux analysis using the Bio-jETI workflow framework. Using the FiatFlux software as an example, it demonstrates how services can be created that carry out the different analysis steps autonomously, and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis with high quality and reproducibility. Besides significantly accelerating and standardizing the data analysis, the agile workflow-based realization supports flexible changes to the analysis workflows at the user level, making it easy to perform custom analyses.
The German labor market's reaction to the Great Recession of 2008/09 was relatively mild, especially compared to other countries. The reason lies not only in the specific type of the recession, which was favorable to the structure of the German economy, but also in a series of labor market reforms initiated between 2002 and 2005 that altered, inter alia, labor supply incentives. However, irrespective of this mild response, the German labor market will soon have to face a number of substantial challenges. Female labor supply still lies well below that of other countries, and a massive demographic change over the next 50 years will have substantial effects on labor supply as well as on the pension system. In addition, owing to skill-biased technological change over the next decades, firms will face problems finding employees with adequate skills. The aim of this paper is threefold. First, we outline why the German labor market reacted in such a mild fashion, describe current economic trends in the labor market in light of general trends in the European Union, and identify some of the main associated challenges. Thereafter, the paper analyzes recent reforms of the main institutional settings of the labor market that influence labor supply. Finally, based on the status quo of these institutional settings, the paper gives a brief overview of strategies to adequately meet the challenges in terms of labor supply and to ensure economic growth in the future.
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies in the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
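The saturation of the F(ab)-mediated effect with increasing affinity can be illustrated with a minimal equilibrium-binding sketch. This is not the authors' multi-level model (which couples a systems biology model to pharmacokinetic compartments); it is a simple 1:1 receptor-occupancy calculation with hypothetical numbers.

```python
def receptor_inhibition(conc_nM, kd_nM):
    """Fractional receptor occupancy at equilibrium for simple 1:1 binding.
    Occupied receptors are assumed unable to signal, so occupancy serves
    as a crude proxy for the F(ab)-mediated inhibitory effect."""
    return conc_nM / (conc_nM + kd_nM)

conc = 10.0  # fixed antibody concentration in nM (illustrative)
# Decreasing Kd = increasing affinity
effects = {kd: receptor_inhibition(conc, kd) for kd in (10.0, 1.0, 0.1, 0.01)}
# The marginal gain from each tenfold affinity improvement shrinks:
gains = [effects[0.1] - effects[1.0], effects[0.01] - effects[0.1]]
```

Even in this toy setting the effect approaches its ceiling as Kd falls, mirroring the model's conclusion that raising affinity beyond a point yields diminishing returns.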
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows detecting dependencies between reaction fluxes of metabolic networks at steady-state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with the experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present a new method F2C2 for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
All's well that ends well
(2012)
The transition from cell proliferation to cell expansion is critical for determining leaf size. Andriankaja et al. (2012) demonstrate that in leaves of dicotyledonous plants, a basal proliferation zone is maintained for several days before abruptly disappearing, and that chloroplast differentiation is required to trigger the onset of cell expansion.
This study provides a detailed analysis of the mid-Holocene to present-day precipitation change in the Asian monsoon region. We compare for the first time results of high-resolution climate model simulations with a standardised set of mid-Holocene moisture reconstructions. Changes in the simulated summer monsoon characteristics (onset, withdrawal, length and associated rainfall) and the mechanisms causing the Holocene precipitation changes are investigated. According to the model, most parts of the Indian subcontinent received more precipitation (up to 5 mm/day) during the mid-Holocene than at present. This is related to a stronger Indian summer monsoon accompanied by an intensified vertically integrated moisture flux convergence. The East Asian monsoon region exhibits local inhomogeneities in the simulated annual precipitation signal. The sign of this signal depends on the balance of decreased pre-monsoon and increased monsoon precipitation during the mid-Holocene compared to the present. Hence, rainfall changes in the East Asian monsoon domain are not solely associated with modifications in the summer monsoon circulation but also depend on changes in the mid-latitudinal westerly wind system that dominates the circulation during the pre-monsoon season. The proxy-based climate reconstructions confirm the regional dissimilarities in the annual precipitation signal and agree well with the model results. Our results highlight the importance of including the pre-monsoon season in climate studies of the Asian monsoon system and point out the complex response of this system to the Holocene insolation forcing. The comparison with a coarse climate model simulation reveals that this complex response can only be resolved in high-resolution simulations.
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lian…dou. We show that these constructions have the same internal syntax as regular clefts and lian…dou sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lian…dou constructions, respectively, rather than as independent constructions. Additionally, we also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
The size of plant organs, such as leaves and flowers, is determined by an interaction of genotype and environmental influences. Organ growth occurs through the two successive processes of cell proliferation followed by cell expansion. A number of genes influencing either or both of these processes and thus contributing to the control of final organ size have been identified in the last decade. Although the overall picture of the genetic regulation of organ size remains fragmentary, two transcription factor/microRNA-based genetic pathways are emerging in the control of cell proliferation. However, despite this progress, fundamental questions remain unanswered, such as the problem of how the size of a growing organ could be monitored to determine the appropriate time for terminating growth. While genetic analysis will undoubtedly continue to advance our knowledge about size control in plants, a deeper understanding of this and other basic questions will require including advanced live-imaging and mathematical modeling, as impressively demonstrated by some recent examples. This should ultimately allow the comparison of the mechanisms underlying size control in plants and in animals to extract common principles and lineage-specific solutions.
Background
The aim of this study was to determine the general appearance of normal axillary lymph nodes (LNs) in real-time tissue sonoelastography and to explore the method's potential value in the prediction of LN metastases.
Methods
Axillary LNs in healthy volunteers (n=165) and metastatic LNs in breast cancer patients (n=15) were examined by palpation, B-mode ultrasound, Doppler ultrasound and sonoelastography (assessing the elasticity of the cortex and the medulla). The elasticity distributions were compared, and sensitivity (SE) and specificity (SP) were calculated. In an exploratory analysis, positive and negative predictive values (PPV, NPV) were calculated based upon the estimated prevalence of LN metastases in different risk groups.
Results
In the elastogram, the LN cortex was significantly harder than the medulla in both healthy (p=0.004) and metastatic LNs (p=0.005). Comparing healthy and metastatic LNs, there was no difference in the elasticity distribution of the medulla (p=0.281), but the cortex was significantly harder in metastatic LNs (p=0.006). The SE of clinical examination, B-mode ultrasound, Doppler ultrasound and sonoelastography was 13.3%, 40.0%, 14.3% and 60.0%, respectively; SP was 88.4%, 96.8%, 95.6% and 79.6%, respectively. The highest SE was achieved by the disjunctive combination of B-mode and elastographic features (cortex >3 mm in B-mode or blue cortex in the elastogram; SE=73.3%). The highest SP was achieved by the conjunctive combination of B-mode ultrasound and elastography (cortex >3 mm in B-mode and blue cortex in the elastogram; SP=99.3%).
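The exploratory PPV/NPV analysis follows directly from Bayes' rule once sensitivity, specificity and a pre-test prevalence are fixed. A minimal sketch, using the reported elastography performance but hypothetical prevalence values (the study's actual risk-group prevalences are not given here):

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity and pre-test prevalence."""
    tp = sens * prevalence              # true positive mass
    fp = (1 - spec) * (1 - prevalence)  # false positive mass
    fn = (1 - sens) * prevalence        # false negative mass
    tn = spec * (1 - prevalence)        # true negative mass
    return tp / (tp + fp), tn / (tn + fn)

# Reported sonoelastography performance: SE = 60.0 %, SP = 79.6 %
for prev in (0.1, 0.3, 0.5):  # hypothetical risk groups
    ppv, npv = predictive_values(0.600, 0.796, prev)
```

The same SE/SP pair yields very different PPVs across risk groups, which is why the abstract frames predictive values per estimated prevalence.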
Conclusions
Sonoelastography is a feasible method to visualize the elasticity distribution of LNs. Moreover, sonoelastography is capable of detecting elasticity differences between the cortex and medulla, and between metastatic and healthy LNs. Therefore, sonoelastography yields additional information about axillary LN status and can improve the PPV, although this method is still experimental.
The distinctness of, and overlap between, pea genotypes held in several Pisum germplasm collections have been used to determine their relatedness and to test previous ideas about the genetic diversity of Pisum. Our characterisation of genetic diversity among 4,538 Pisum accessions held in 7 European Genebanks has identified sources of novel genetic variation, and both reinforces and refines previous interpretations of the overall structure of genetic diversity in Pisum. Molecular marker analysis was based upon presence/absence polymorphism of retrotransposon insertions scored by high-throughput microarray and SSAP approaches. We conclude that the diversity of Pisum constitutes a broad continuum, with graded differentiation into sub-populations that display various degrees of distinctness. The most distinct genetic groups correspond to the named taxa, while the cultivars and landraces of Pisum sativum can be divided into two broad types, one of which is strongly enriched for modern cultivars. The addition of germplasm sets from six European Genebanks, chosen to represent high diversity, to a single collection previously studied with these markers resulted in modest additions to the overall diversity observed, suggesting that the great majority of the total genetic diversity collected for the Pisum genus has now been described. Two interesting sources of novel genetic variation have been identified. Finally, we propose reference sets of core accessions, with a range of sample sizes, to represent Pisum diversity for future study and exploitation by researchers and breeders.
The closer the better
(2012)
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA), together with its extension, dynamic FBA, has proven instrumental for analyzing the robustness and dynamics of metabolic networks, employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
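The general shape of a dynamic-FBA iteration, as referenced above, can be sketched with a toy model. This is not the authors' regulatory on/off minimization: the inner flux-allocation step, which in the paper is an optimization problem, is replaced here by a closed-form Michaelis–Menten uptake rate, and all parameter values are illustrative.

```python
def dynamic_fba_toy(s0=10.0, x0=0.1, vmax=8.0, km=2.0, yield_=0.4,
                    dt=0.05, steps=200):
    """Euler integration of substrate S and biomass X over time.
    The per-step 'FBA' flux is a Michaelis-Menten uptake stand-in."""
    s, x = s0, x0
    traj = []
    for _ in range(steps):
        v = vmax * s / (km + s)        # uptake flux per unit biomass
        growth = yield_ * v            # growth rate derived from uptake
        s = max(s - v * x * dt, 0.0)   # substrate consumed, clamped at 0
        x = x + growth * x * dt        # biomass increases
        traj.append((s, x))
    return traj

traj = dynamic_fba_toy()
```

The output is a time-resolved metabolic state (flux implies both concentration and biomass trajectories), which is the kind of object the proposed methods predict for genuinely internally perturbed networks.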
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that multivariate statistical approaches are highly recommended to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
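The tenfold cross-validation scheme used to evaluate the regression models can be sketched in outline. Only the fold-splitting logic is shown; a trivial mean predictor stands in for the Random Forest, and the target values are made up for illustration.

```python
def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# Toy target values; a mean predictor stands in for the Random Forest
y = [5.1, 5.6, 4.9, 6.0, 5.4, 5.8, 5.2, 5.9, 5.0, 5.7, 5.3, 5.5]
errors = []
for train, test in k_fold_indices(len(y), k=10):
    pred = sum(y[i] for i in train) / len(train)   # "fit" on training folds
    errors.extend(abs(y[i] - pred) for i in test)  # score the held-out fold
mae = sum(errors) / len(errors)
```

Each observation is held out exactly once, so the averaged error estimates out-of-sample performance, the quantity the reported accuracy of 0.47 refers to.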
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve model accuracy; however, external validation remains desirable. Although not all metabolites in the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, not a single metabolite but a complex pattern of metabolites drives the prediction, reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. We therefore highly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
Portal alumni
(2012)
The past year at the University of Potsdam was also marked by the institution's twentieth anniversary. It was founded on 15 July 1991, and during a festival week professors, staff and students duly celebrated this jubilee. Since the founding of Brandenburg's largest university, its scientific renown, reputation and attractiveness have grown steadily. Especially in recent years it has sharpened its profile, above all in the cognitive sciences, the geosciences and the life sciences; teacher training also holds a prominent place. Internationally recognized research areas, science prizes, a successful third-party funding record and, not least, the construction development at all three campuses are visible indicators of the successful development the University of Potsdam has undergone over the past two decades. In this issue of Portal Alumni, the three former presidents and various other protagonists look back at different aspects of the university's development. The growing number of graduates leaving the university also testifies to its success. This issue of Portal Alumni therefore presents graduates and their academic and professional paths in more detail, offering at the same time a kaleidoscopic review of 20 years of study at the University of Potsdam.
Portal Wissen = Raum
(2012)
With "Portal Wissen" we invite you to discover research at the University of Potsdam in all its diversity. The first issue revolves around "spaces": spaces in which research is conducted, spaces that are yet to be explored, spaces that science opens up or makes accessible, and also the spaces science needs in order to flourish. Research measures spaces: "Science is made by people," wrote the physicist Werner Heisenberg. Conversely, one could say that science shapes people, devotes itself to them, influences them. "Portal Wissen" has explored this relationship. We met researchers, asked them how their questions become projects, and accompanied them along the often winding path to their goal. This issue pays particular attention to the profile area "Kulturelle Begegnungsräume" (spaces of cultural encounter), a dedicated research profile area of the University of Potsdam.
Research has spaces: laboratories, libraries, greenhouses and archives – this is where science is at home. All these places are as unique as the scientists who work in them and the investigations that take place there. Only the vision of how a problem might be solved turns simple rooms into "laboratory spaces". We have opened their doors to show what – and who – lies behind them.
Research opens up spaces: when science succeeds, it moves us and advances us. On the way from the laboratory into everyday life, scientific findings sometimes face hurdles that are not apparent at first glance. In any case, their application is a primary starting point of science, the drive and motivation of every researcher. "Portal Wissen" shows which "spaces of practice" emerge from the translation of research results – where we fully expect them, and where perhaps we do not.
Research makes spaces accessible: on expeditions, field trials and excursions, almost any environment becomes a mobile laboratory. Science thus opens up access even to places that in many other respects seem closed or inaccessible. We smuggled ourselves into researchers' travel bags to join voyages of discovery leading far away – above all to Africa. At the same time, we observed how "development spaces" can be opened up from Potsdam, or at least how their surveying can begin there.
Research needs spaces: science finally has two genders. Never before have so many women worked in research as today. Nevertheless, this is no reason to rest: across Germany, only one in five professorships is currently held by a woman. "Portal Wissen" looks at the "development spaces" women have created for themselves in science and beyond – and where such spaces are denied to them. We wish you a stimulating read, and hope that you too find a space that inspires you.
Prof. Dr. Robert Seckler
Vice President for Research and Early-Career Researchers
Background: The detection of immunogenic proteins remains an important task in the life sciences, as it furthers the understanding of pathogenicity, reveals new potential vaccine candidates and broadens the spectrum of biomarkers applicable in diagnostic tools. Traditionally, immunoscreenings of expression libraries with polyclonal sera on nitrocellulose membranes, or screenings of whole-proteome lysates by 2-D gel electrophoresis, are performed. However, these methods have considerable disadvantages: screening expression libraries to expose novel antigens from bacteria often leads to an abundance of false positive signals owing to the high cross-reactivity of polyclonal antibodies towards proteins of the expression host. Here, a method is presented that overcomes many disadvantages of these older procedures.
Results: Four proteins previously described as immunogenic were successfully confirmed as immunogenic with our method, and one protein with no previously known immunogenic behaviour showed potential immunogenicity. We incorporated a fusion tag upstream of our genes of interest and attached the expressed fusion proteins covalently to microarrays. This enhances the specific binding of the proteins compared to nitrocellulose and thus helps to reduce the number of false positives significantly. It enables us to screen for immunogenic proteins in a shorter time, with more samples and with statistical reliability. We validated our method using several known genes from Campylobacter jejuni NCTC 11168.
Conclusions: The method presented offers a new approach for screening bacterial expression libraries to reveal novel proteins with immunogenic features. It could provide a powerful and attractive alternative to existing methods and help to detect and identify vaccine candidates, biomarkers and potential virulence-associated factors with immunogenic behaviour, furthering the knowledge of the virulence and pathogenicity of the studied bacteria.
The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.
The safe upper limit for the inclusion of vitamin A in complete diets for growing dogs is uncertain, with the result that current recommendations range from 5.24 to 104.80 μmol retinol (5000 to 100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy (ME). The aim of the present study was to determine the effect of feeding four concentrations of vitamin A to puppies from weaning until 1 year of age. A total of forty-nine puppies of two breeds, Labrador Retriever and Miniature Schnauzer, were randomly assigned to one of four treatment groups. Following weaning at 8 weeks of age, puppies were fed a complete food supplemented with retinyl acetate diluted in vegetable oil, fed at 1 ml oil/100 g diet, to achieve an intake of 5.24, 13.10, 78.60 or 104.80 μmol retinol (5000, 12 500, 75 000 or 100 000 IU vitamin A)/4184 kJ (1000 kcal) ME. Fasted blood and urine samples were collected at 8, 10, 12, 14, 16, 20, 26, 36 and 52 weeks of age and analysed for markers of vitamin A metabolism and markers of safety, including haematological and biochemical variables, bone-specific alkaline phosphatase, cross-linked carboxyterminal telopeptides of type I collagen, and dual-energy X-ray absorptiometry. Clinical examinations were conducted every 4 weeks. Data were analysed by means of a mixed model analysis with Bonferroni corrections for multiple endpoints. There was no effect of vitamin A concentration on any of the parameters, with the exception of total serum retinyl esters, and no effect of dose on the number, type and duration of adverse events. We therefore propose that 104.80 μmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal) is a suitable safe upper limit for use in the formulation of diets designed for puppy growth.
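The Bonferroni correction for multiple endpoints mentioned above is simple: with m endpoints, each raw p-value is tested against alpha/m (equivalently, multiplied by m and capped at 1). A minimal sketch with illustrative p-values, not values from the study:

```python
def bonferroni(p_values, alpha=0.05):
    """Return (Bonferroni-adjusted p-values capped at 1, significance flags)."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    significant = [p <= alpha / m for p in p_values]
    return adjusted, significant

# Hypothetical raw p-values for four endpoints (illustration only)
raw = [0.001, 0.012, 0.030, 0.200]
adj, sig = bonferroni(raw)
```

With four endpoints the per-test threshold drops to 0.0125, so a raw p of 0.030 that would pass an unadjusted alpha of 0.05 is no longer declared significant.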
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
Recent PIC simulations of relativistic electron-positron (electron-ion) jets injected into a stationary medium show that particle acceleration occurs in the shocked regions. Simulations show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields and for particle acceleration. These magnetic fields contribute to the electrons' transverse deflection behind the shock. The "jitter" radiation from deflected electrons in turbulent magnetic fields has properties different from synchrotron radiation calculated in a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and/or spectral structure of gamma-ray bursts, relativistic jets in general, and supernova remnants. In order to calculate radiation from first principles and go beyond the standard synchrotron model, we have used PIC simulations. We present synthetic spectra to compare with the spectra obtained from Fermi observations.
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
In the late Palaeozoic fore-arc system of north-central Chile at latitudes 31-32 degrees S (from the west to the east), three lithotectonic units are telescoped within a short distance by a Mesozoic strike-slip event (derived peak P-T conditions in brackets): (1) the basally accreted Choapa Metamorphic Complex (CMC; 350-430 degrees C, 6-9 kbar), (2) the frontally accreted Arrayan Formation (AF; 280-320 degrees C, 4-6 kbar) and (3) the retrowedge basin of the Huentelauquen Formation (HF; 280-320 degrees C, 3-4 kbar). In the CMC, Ar-Ar spot ages locally date white-mica formation at peak P-T conditions and during early exhumation at 279-242 Ma. In a local garnet mica-schist intercalation (570-585 degrees C, 11-13 kbar), Ar-Ar spot ages refer to the ascent from the subduction channel at 307-274 Ma. Portions of the CMC were isobarically heated to 510-580 degrees C at 6.6-8.5 kbar. The age of peak P-T conditions in the AF can only vaguely be approximated at ≥310 Ma by relict fission-track ages, consistent with the observation that frontal accretion occurred prior to basal accretion. Zircon fission-track dating indicates cooling below ~280 degrees C at ~248 Ma in the CMC and the AF, when a regional unconformity also formed. Ar-Ar white-mica spot ages in parts of the CMC and within the entire AF and HF point to heterogeneous resetting during Mesozoic extensional and shortening events at ~245-240 Ma, ~210-200 Ma, ~174-159 Ma and ~142-127 Ma. The zircon fission-track ages are locally reset at 109-96 Ma. All resetting of Ar-Ar white-mica ages is proposed to have occurred by in situ dissolution/precipitation at low temperature in the presence of locally penetrating hydrous fluids. Hence syn- and post-accretionary events in the fore-arc system can still be distinguished and dated in spite of its complex heterogeneous post-accretional overprint.
This article investigates the nature of preposition copying and preposition pruning structures in present-day English. We begin by illustrating the two phenomena and consider how they might be accounted for in syntactic terms, and go on to explore the possibility that preposition copying and pruning arise for processing reasons. We then report on two acceptability judgement experiments examining the extent to which native speakers of English are sensitive to these types of 'error' in language comprehension. Our results indicate that preposition copying creates redundancy rather than ungrammaticality, whereas preposition pruning creates processing problems for comprehenders that may render it unacceptable in timed (but not necessarily in untimed) judgement tasks. Our findings furthermore illustrate the usefulness of combining corpus studies and experimentally elicited data for gaining a clearer picture of usage and acceptability, and the potential benefits of examining syntactic phenomena from both a theoretical and a processing perspective.
Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension.
SXP 1062 is an exceptional case of a young neutron star in a wind-fed high-mass X-ray binary associated with a supernova remnant. A unique combination of measured spin period, its derivative, luminosity and young age makes this source a key probe for the physics of accretion and neutron star evolution. Theoretical models proposed to explain the properties of SXP 1062 shall be tested with new data.
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high performance Boolean solving capacities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between ASP and CP solver through elaborated learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
Mineral chemistry and thermobarometry of the staurolite-chloritoid schists from Poshtuk, NW Iran
(2012)
The Poshtuk metapelitic rocks in northwestern Iran underwent two main phases of regional and contact metamorphism. Microstructures, textural features and field relations indicate that these rocks underwent a polymetamorphic history. The dominant metamorphic assemblage of the metapelites is garnet, staurolite, chloritoid, chlorite, muscovite and quartz, which grew mainly syntectonically during the later contact metamorphic event. Peak metamorphic conditions of this event were reached at about 580 °C and ∼3–4 kbar, indicating that it occurred under high-temperature and low-pressure conditions (HT/LP metamorphism), reflecting the high heat flow in this part of the crust. This event was mainly controlled by advective heat input through magmatic intrusions into all levels of the crust. These extensive Eocene metamorphic and magmatic activities can be associated with the early Alpine Orogeny, which resulted in this area from the convergence between the Arabian and Eurasian plates, and the Cenozoic closure of the Tethys oceanic tract(s).
Much of our knowledge about the solar dynamo is based on sunspot observations. It is thus desirable to extend the set of positional and morphological data of sunspots into the past. Gustav Spörer observed in Germany from Anklam (1861–1873) and Potsdam (1874–1894). He left detailed prints of sunspot groups, which we digitized and processed to mitigate artifacts left in the print by the passage of time. After careful geometrical correction, the sunspot data are now available as synoptic charts for almost 450 solar rotation periods. Individual sunspot positions can thus be precisely determined and spot areas can be accurately measured using morphological image processing techniques. These methods also allow us to determine tilt angles of active regions (Joy’s law) and to assess the complexity of an active region.
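The tilt-angle measurement mentioned above (Joy's law) can be sketched very simply. The following is an illustrative computation, not the authors' actual pipeline: it ignores area weighting, projection effects and the hemisphere-dependent sign conventions, and the example coordinates are invented.

```python
import math

def tilt_angle_deg(leading, trailing):
    """Tilt of the line joining the leading and trailing parts of an
    active region, in degrees, measured against the east-west direction.
    Each argument is an illustrative (longitude, latitude) pair in
    heliographic degrees; sign conventions vary in the literature."""
    dlon = trailing[0] - leading[0]
    dlat = trailing[1] - leading[1]
    return math.degrees(math.atan2(dlat, dlon))

# A trailing part 5° poleward of and 10° behind the leading part:
print(round(tilt_angle_deg((0.0, 10.0), (10.0, 15.0)), 1))  # 26.6
```

In practice the two "parts" would be area-weighted centroids of the leading and trailing spot groups extracted by the morphological image processing the abstract refers to.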
The clumping of massive star winds is an established paradigm, which is confirmed by multiple lines of evidence and is supported by stellar wind theory. We use the results from time-dependent hydrodynamical models of the instability in the line-driven wind of a massive supergiant star to derive the time-dependent accretion rate onto a compact object in the Bondi-Hoyle-Lyttleton approximation. The strong density and velocity fluctuations in the wind result in strong variability of the synthetic X-ray light curves. Photoionization of inhomogeneous winds differs from the photoionization of smooth winds: the degree of ionization is affected by the wind clumping. The wind clumping must also be taken into account when comparing the observed and model spectra of the photoionized stellar wind.
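The Bondi-Hoyle-Lyttleton rate referred to above scales as Ṁ ∝ ρ/v³, which is why density and velocity fluctuations of the clumps translate directly into accretion (and hence X-ray) variability. A minimal sketch of that scaling, neglecting the sound-speed term and using illustrative numbers that are not from the paper:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bhl_accretion_rate(m_x, rho, v_rel):
    """Bondi-Hoyle-Lyttleton accretion rate (kg/s) onto a compact object
    of mass m_x (kg) in a wind of density rho (kg/m^3) streaming past at
    relative speed v_rel (m/s); the sound speed is neglected."""
    return 4.0 * math.pi * G**2 * m_x**2 * rho / v_rel**3

m_ns = 2.8e30  # ~1.4 solar masses, an illustrative neutron-star mass
base = bhl_accretion_rate(m_ns, 1e-15, 1.0e6)
# Doubling the clump density doubles the rate; doubling the wind speed
# suppresses it by a factor of 8:
print(round(bhl_accretion_rate(m_ns, 2e-15, 1.0e6) / base, 3))  # 2.0
print(round(bhl_accretion_rate(m_ns, 1e-15, 2.0e6) / base, 3))  # 0.125
```

The steep v⁻³ dependence is what makes the velocity fluctuations in the hydrodynamical wind models at least as important as the density fluctuations for the resulting light-curve variability.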
Much previous experimental research on morphological processing has focused on surface and meaning-level properties of morphologically complex words, without paying much attention to the morphological differences between inflectional and derivational processes. Realization-based theories of morphology, for example, assume specific morpholexical representations for derived words that distinguish them from the products of inflectional or paradigmatic processes. The present study reports results from a series of masked priming experiments investigating the processing of inflectional and derivational phenomena in native (L1) and non-native (L2) speakers in a non-Indo-European language, Turkish. We specifically compared regular (Aorist) verb inflection with deadjectival nominalization, both of which are highly frequent, productive and transparent in Turkish. The experiments demonstrated different priming patterns for inflection and derivation, specifically within the L2 group. Implications of these findings are discussed both for accounts of L2 morphological processing and for the controversial linguistic distinction between inflection and derivation.
This study investigates phenomena that have been claimed to be indicative of Specific Language Impairment (SLI) in German, focusing on subject-verb agreement marking. Longitudinal data from fourteen German-speaking children with SLI, seven monolingual and seven Turkish-German successive bilingual children, were examined. We found similar patterns of impairment in the two participant groups. Both the monolingual and the bilingual children with SLI had correct (present vs. preterit) tense marking and produced syntactically complex sentences such as embedded clauses and wh-questions, but were limited in reliably producing correct agreement-marked verb forms. These contrasts indicate that agreement marking is impaired in German-speaking children with SLI, without any necessary concurrent deficits in either the CP-domain or in tense marking. Our results also show that it is possible to identify SLI from an early successive bilingual child's performance in one of her two languages.
Restrictions on addition
(2012)
Children up to school age have been reported to perform poorly when interpreting sentences containing restrictive and additive focus particles by treating sentences with a focus particle in the same way as sentences without it. Careful comparisons between results of previous studies indicate that this phenomenon is less pronounced for restrictive than for additive particles. We argue that this asymmetry is an effect of the presuppositional status of the proposition triggered by the additive particle. We tested this in two experiments with German-learning three- and four-year-olds using a method that made the exploitation of the information provided by the particles highly relevant for completing the task. Three-year-olds already performed remarkably well with sentences both with auch 'also' and with nur 'only'. Thus, children can consider the presuppositional contribution of the additive particle in their sentence interpretation and can exploit the restrictive particle as a marker of exhaustivity.
Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI: one group consistently produced significantly more errors of CLI on both tasks than the other (thereby replicating our findings from a bilingual picture naming task). This striking group difference in language control ability can only be explained by differences in cognitive control, not in language proficiency or language mode.
Background
Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome.
Methods
From February 2009 to June 2010, 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested with respect to their measurability, sensitivity to change and propensity to be influenced by rehabilitation.
Results
The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale).
Conclusion
The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.
Early acquisition of a second language influences the development of language abilities and cognitive functions. In the present study, we used functional Magnetic Resonance Imaging (fMRI) to investigate the impact of early bilingualism on the organization of the cortical language network during sentence production. Two groups of adult multilinguals, proficient in three languages, were tested on a narrative task; early multilinguals acquired the second language before the age of three years, late multilinguals after the age of nine. All participants learned a third language after nine years of age. Comparison of the two groups revealed substantial differences in language-related brain activity for early as well as late acquired languages. Most importantly, early multilinguals preferentially activated a fronto-striatal network in the left hemisphere, whereas the left posterior superior temporal gyrus (pSTG) was activated to a lesser degree than in late multilinguals. The same brain regions were highlighted in previous studies when a non-target language had to be controlled. Hence the engagement of language control in adult early multilinguals appears to be influenced by the specific learning and acquisition conditions during early childhood. Remarkably, our results reveal that the functional control of early and subsequently later acquired languages is similarly affected, suggesting that language experience has a pervasive influence into adulthood. As such, our findings extend the current understanding of control functions in multilinguals.
The project of public-reason liberalism faces a basic problem: publicly justified principles are typically too abstract and vague to be directly applied to practical political disputes, whereas applicable specifications of these principles are not uniquely publicly justified. One solution could be a legislative procedure that selects one member from the eligible set of inconclusively justified proposals. Yet if liberal principles are too vague to select sufficiently specific legislative proposals, can they, nevertheless, select specific legislative procedures? Based on the work of Gerald Gaus, this article argues that the only candidate for a conclusively justified decision procedure is a majoritarian or otherwise ‘neutral’ democracy. If the justification of democracy requires an equality baseline in the design of political regimes and if justifications for departure from this baseline are subject to reasonable disagreement, a majoritarian design is justified by default. Gaus’s own preference for super-majoritarian procedures is based on disputable specifications of justified liberal principles. These procedures can only be defended as a sectarian preference if the equality baseline is rejected, but then it is not clear how the set of justifiable political regimes can be restricted to full democracies.
This thesis focuses on the electronic properties of the new material class of topological insulators. Spin- and angle-resolved photoelectron spectroscopy have been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the application of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators is reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with the introduction of the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close discussion with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is, with high probability, identified as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited by the finite angular resolution and the lack of detectability of the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided through the protection by time reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed when exposing Bi2Te3 to oxygen. But while the n-type shift of Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below it. The effect is found to be reversible upon warming the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a similar behavior as in the case of Ag for both Bi2Se3 and Bi2Te3. In this case, however, the robustness is unexpected, since magnetic impurities are capable of breaking time reversal symmetry, which should open a gap in the surface state at the Dirac point and in turn remove the protection. We argue that the absence of a gap in the surface state must be attributed to a lack of magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but nevertheless higher than one might expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes off which electrons may scatter. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected from electron-electron interaction in Fermi liquid theory.
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, and that this influence is stronger independent of the sign of the doping. We argue that this observation suggests a minor contribution of the warping to increased scattering rates, in contrast to current belief. This is additionally confirmed by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates despite the much stronger warping in Bi2Te3.
In the last chapter we report a strong circular dichroism in the angle distribution of the photoemission signal of the surface state of Bi2Te3. We show that the color pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence on the strength and even the sign of the effect when varying the photon energy. The sign change is qualitatively confirmed by means of one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Together, experiment and theory unambiguously uncover the dichroism in these systems as a final-state effect, and the question in the title of the chapter has to be answered in the negative: circular dichroism in the angle distribution is not a new spin-sensitive technique.
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country: what institutional arrangements exist at the outset, and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) affect good governance. Three characteristics central to good governance, namely transparency, participation and accountability, are studied in the research.
A number of key findings emerged. Good governance in Hanoi and Berlin represents the two extremes of the scale: governance in Berlin is almost at the top, while governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is formal and forced, elections in Berlin are fair and free. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include citizenry budget, citizen activity, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Lacking transparency and participation, the quality of public service in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" - in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear to see how transparency, participation and accountability are interconnected and influence each other. Without a free and fair election as well as participation of non-governmental organisations, civil organisations, and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key differences in formal institutions (regulative and cognitive) between Berlin and Hanoi reflect the three main principles: rule of law vs. rule by law, pluralism vs. monopoly Party in politics and social market economy vs. market economy with socialist orientation.
In Berlin the logic of appropriateness and codes of conduct are respect for laws, respect of individual freedom and ideas and awareness of community development. People in Berlin take for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the mind of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with the habits of the centrally planned economy (lying, dependence, passivity) and with traditional values (hierarchy, harmony, family, collectivism), influencing the behaviour of those involved.
In Hanoi, “doing the right thing”, such as complying with the law, has not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
This article deals with Spanish modal adverbs and verbs of cognitive attitude (Capelli 2007) and their epistemic and/or evidential use. The article is based on the hypothesis that the study of the use of these linguistic devices has to be highly context-sensitive, as it is not always (only) the sentence level that has to be examined in order to determine whether a given adverb or verb of cognitive attitude is used evidentially or epistemically. In this article, therefore, the context is used to determine which meaning aspects of an element are encoded and which are contributed by the context. The data were retrieved from the daily newspaper El País; the present study is nevertheless qualitative rather than quantitative. My corpus analysis indicates that it is not possible to differentiate between the linguistic categories of evidentiality and epistemic modality in every case, although it is indeed possible in the vast majority of cases. In verbs of cognitive attitude, evidentiality and epistemic modality seem to be two interwoven categories, while for modal adverbs it is usually possible to separate the categories and to distinguish between the different subtypes of evidentiality, such as visual evidence, hearsay and inference.
In older research literature, the prose epics emerging from the court of Elisabeth of Lorraine and Nassau-Saarbrücken have repeatedly been accused of lacking structure and literariness. By contrast, this article shows that narrative principles of seriality generate the complex structure of the voluminous ›Loher und Maller‹: literary strategies of repetition and variation organize the text on different levels. Recurring narrative structures, thematic constellations and motivations as well as lexical stereotypes are part of this comprehensive principle of seriality. Not triviality and insufficiency, but structural and narrative complexity and lexical accumulation of significance characterize ›Loher und Maller‹.
Contents: Editorial (Dr. Roswitha Lohwaßer); Responding to change. Challenges for counselling and support systems in the context of the demands placed on schools (Dr. Götz Bieber, Bernd Jankofsky); Qualifying teachers, pooling forces. Reflections on the development of university-based in-service teacher training in Lower Saxony (Dr. Jens Winkel, Ulrike Heinrichs); Being prepared not only in subject matter but also didactically. When is in-service teacher training successful? (Elke Dengler); The teacher TÜV: a quality check from the students' perspective (Jorid Engler); Bringing together what belongs together: university research and in-service teacher training (Dr. Charlotte Gemsa, Martina Rode); Making knowledge transfer exciting. Fascinating numerical experiments with polynomials: in-service training for mathematics teachers (Dr. Wolfgang Schöbel); A wow effect that sparks enthusiasm for maths. Expectations of in-service training for mathematics teachers (Jörg Schulz); Joint learning by students and lecturers. What Potsdam can learn from Oldenburg's team research (Dr. Benjamin Apelojg); Design thinking as an innovation method for project work. An experiment at the Helmholtz-Gymnasium (Andrea Scheer, Johannes Erdmann); Seeing developments as an opportunity. Learning cultures and portfolio work in teacher education (Dr. Mark-Oliver Carl); A review of the Tage der Lehrerbildung 2012
This introductory contribution to the thematic focus outlines the background of the international climate negotiations and presents the results of the Copenhagen Accord. Given the failure of the Copenhagen conference, the timely conclusion of a legally binding global climate agreement must be considered unlikely. In the future, climate policy will increasingly be made at the national and transnational levels.
Farewell to KyotoPlus?
(2012)
The outcome of the Copenhagen climate summit is a bitter disappointment for the EU. It failed to live up to its leadership ambitions in global climate protection and to use the conference to set the course for a legally binding climate agreement for the period after 2012. The Union therefore faces fundamental strategic questions about the direction of its climate policy.
Failed Climate Policy?
(2012)
The 2009 Copenhagen climate summit was eagerly anticipated, yet it achieved no more than a minimal consensus. The author offers an actor-centred interpretation of the Copenhagen Accord and asks whether the negotiations set a precedent: were they a one-off failure of multilateral diplomacy, or a foretaste of the routine of 21st-century world politics?
Civil society helped turn the Copenhagen climate conference into a media event. Far from the large demonstrations, non-governmental organizations (NGOs) have for years enjoyed good access to the international climate negotiations. Using Chile as an example, the article shows how NGOs feed their positions into political processes through professional lobbying. They operate in a field of tension between cooperation with, and instrumentalization by, political decision-makers.
How to Finance Climate Protection?
(2012)
Financing climate protection requires the targeted use of public funds, which includes significantly improving the framework conditions for private financial flows. Based on a problem analysis, the authors identify key parameters for this leverage effect: public start-up financing can thus lay the groundwork for private investment. This is discussed using the example of the International Climate Initiative of the German Federal Environment Ministry.
China and India
(2012)
The article analyses the new role of rising emerging economies in the international climate negotiations, using China and India as examples. In Copenhagen, their rejection of binding greenhouse gas reduction targets was interpreted as obstructionism by both countries. China and India can maintain their position because their increased weight in the multipolar world order and the inaction of the leading industrialized countries strengthen their bargaining power. The author discusses opportunities for cooperation at the subnational level that can circumvent the blocking stance of national governments.
The author discusses the opportunities and risks of integrating the Global South into international climate policy. For a long time, developing countries had contributed least to climate change while being most strongly affected by it. By now, however, these countries themselves contribute substantially to climate change. Their governments nevertheless play for time, expecting resource transfers, which also exacerbates old problems of rent-seeking.
Climate Policy at an End?
(2012)
Introduction
(2012)
The exhibition "Die Geschichte des Standortes Potsdam-Golm 1935 bis 1991" ("The History of the Potsdam-Golm Site, 1935 to 1991") traces the eventful history of what is now a university and research campus. Its origins lie in the General-Wever-Kaserne, a barracks built in 1935. From the end of the Second World War until German reunification, both the Soviet army and the Ministry for State Security used the site. Topics include the military central region of Brandenburg, the development of the secret service college from 1951 to 1990, teaching at this institution, student life and research activities, as well as the use of the site after 1990.
The exhibition comprises 13 panels illustrated with numerous photographs.
German parties usually avoid contested elections for the party chairmanship. Nevertheless, the 1995 SPD party conference in Mannheim saw an unexpected open contest for the top office. The unintended failure to stage the party's "unity" triggered an outbreak of the hitherto suppressed struggles over the chairmanship. The Mannheim conference exemplifies the connection between staging, discipline, and the informal rules by which power is constructed within a party. Using this conference as a case study, the present work shows how contested party leaders can hold on to office against internal resistance, and where this strategy can fail. From a figurational perspective, staging is understood as a necessity of media-mediated party competition for votes. Staging requires self-discipline and the coordinated action of party members; within the party, it thereby creates mutual dependence, which is intensified by the media's concentration on a few top politicians. The majority of office holders and functionaries depend on the leadership's media-effective appearances, while the leadership in turn needs the members' support for its media success. This mutual dependence generates both typical concerns and opportunities for putting the other interest group under pressure. Image problems of the chairman, as violated expectations, provide the occasion for intra-party power struggles, in which the party leadership can use the staging of "unity" in particular to prevent open personnel debates. Since options for action and their limits are constantly recreated by the actors themselves, intra-party disciplining always carries the risk of failure.
By tracing disciplining and the reasons for its contingency, the present work aims to contribute to a theory of informal rules of power in organizations with weakly developed structures of authority. The first part develops the connection between staging and power through the concepts of theatricality and figuration. The second part examines typical constellations of contemporary parliamentary democracy with regard to typical relationship-mediated interpretations of situations, options for action, and their limits. The third part reconstructs the contingent process of an intra-party power struggle using the example of the 1995 Mannheim party conference.
Eye movements are a powerful tool to examine cognitive processes. However, in most paradigms little is known about the dynamics present in sequences of saccades and fixations. In particular, the control of fixation durations has been widely neglected in most tasks. As a notable exception, both spatial and temporal aspects of eye-movement control have been thoroughly investigated during reading. There, the scientific discourse has been dominated by three controversies: (i) the role of oculomotor vs. cognitive processing in eye-movement control, (ii) the serial vs. parallel processing of words, and (iii) the control of fixation durations. The main purpose of this thesis was to investigate eye movements in tasks that require sequences of fixations and saccades. While reading phenomena served as a starting point, we examined eye guidance in non-reading tasks with the aim of identifying general principles of eye-movement control. In addition, the investigation of eye movements in non-reading tasks helped refine our knowledge about eye-movement control during reading. Our approach included the investigation of eye movements in non-reading experiments as well as the evaluation and development of computational models. I present three main results: First, oculomotor phenomena during reading can also be observed in non-reading tasks (Chapters 2 & 4). Oculomotor processes determine the fixation position within an object; the fixation position, in turn, modulates both the next saccade target and the current fixation duration. Second, predictions of eye-movement models based on sequential attention shifts were falsified (Chapter 3). In fact, our results suggest that distributed processing of multiple objects forms the basis of eye-movement control. Third, fixation durations are under asymmetric control (Chapter 4). While increasing processing demands immediately prolong fixation durations, decreasing processing demands reduce fixation durations only with a temporal delay.
We propose a computational model ICAT to account for asymmetric control. In this model, an autonomous timer initiates saccades after random time intervals independent of ongoing processing. However, processing demands that are higher than expected inhibit the execution of the next saccade and, thereby, prolong the current fixation. On the other hand, lower processing demands will not affect the duration before the next saccade is executed. Since the autonomous timer adjusts to expected processing demands from fixation to fixation, a decrease in processing demands may lead to a temporally delayed reduction of fixation durations. In an extended version of ICAT, we evaluated its performance while simulating both temporal and spatial aspects of eye-movement control. The eye-movement phenomena investigated in this thesis have now been observed in a number of different tasks, which suggests that they represent general principles of eye guidance. I propose that distributed processing of the visual input forms the basis of eye-movement control, while fixation durations are controlled by the principles outlined in ICAT. In addition, oculomotor control contributes considerably to the variability observed in eye movements. Interpretations for the relation between eye movements and cognition strongly benefit from a precise understanding of this interplay.
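The asymmetric timing principle described above can be illustrated with a toy simulation. This is a minimal sketch, not the actual ICAT implementation: the function name, the exponential timer, the inhibition gain, and the adaptation rate are all assumptions chosen for illustration.

```python
import random

def simulate_icat(demands, mean_interval=0.25, gain=0.5, adapt=0.5, rng=None):
    """Toy sketch of asymmetric fixation-duration control.

    demands: per-fixation processing demands (arbitrary units).
    Returns a list of simulated fixation durations (seconds).
    """
    rng = rng or random.Random(0)
    expected = demands[0]  # the timer's current expectation of demand
    durations = []
    for d in demands:
        # autonomous timer: random interval independent of ongoing processing
        interval = rng.expovariate(1.0 / mean_interval)
        # higher-than-expected demand inhibits the next saccade and thereby
        # prolongs the fixation; lower-than-expected demand has no immediate
        # effect -- this is the asymmetry
        surprise = max(0.0, d - expected)
        durations.append(interval + gain * surprise)
        # the timer adapts toward the experienced demand from fixation to
        # fixation, so a drop in demand shortens durations only with a delay
        expected += adapt * (d - expected)
    return durations
```

With identical random seeds, a sequence containing one high-demand fixation yields a longer duration at that position than a constant-demand sequence, while earlier fixations are unaffected.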
With the liberalization of the electricity market, uncertain prospects in climate policy, and strongly fluctuating prices for fuels, emission allowances, and power plant components, risk management has gained importance in power plant investment, as reflected in the increased use of probabilistic methods. For regulatory risks in particular, however, the classical, frequency-based concept of probability offers no handle for quantifying risk. In this thesis, power plant investments and portfolios in Germany are valued using methods of Bayesian risk management. The Bayesian school conceives of probability as a personal measure of uncertainty; probabilities can be obtained from expert elicitation alone, without statistical data analysis. The interaction of uncertain value drivers was specified in a probabilistic DCF (discounted cash flow) model and implemented as an influence diagram with about 1,200 objects. Since the pass-through rate of fuel and CO2 costs, and hence the contribution margins earned by the plants, are determined in competition, a plant-by-plant analysis is not sufficient. Electricity prices and utilization rates are therefore determined heuristically from each plant's individual position in the merit order, i.e. the dispatch sequence ranked by short-run marginal cost. To this end, 113 large thermal power plants in Germany were combined into a single merit order. The model yields probability distributions for key quantities such as the net present values of existing portfolios as well as the levelized cost of electricity and net present values of individual investments (hard coal and lignite plants with and without CO2 capture, and combined-cycle gas plants). The value of the existing portfolios of RWE, E.ON, EnBW, and Vattenfall is determined primarily by the contributions of the lignite and nuclear plants.
Surprisingly, emissions trading does not translate into losses. This is due on the one hand to the extra profits of the nuclear plants and on the other to the emission allowances allocated free of charge until 2012, which generate high windfall profits. In its concrete design, emissions trading thus turns out to be a profitable business overall: over the remaining lifetime of the existing plants, its introduction yields a present-value advantage of 8.6 billion € in total from 2008 onward. The present-value advantages of the lifetime extension for nuclear plants envisaged by the Federal Government in 2009 are of a similar magnitude. An eight-year extension would yield, depending on the CO2 price level, present-value advantages of 8 to 15 billion €; with higher CO2 prices and extensions of up to 28 years, an additional 25 billion € or more would accrue. In the long run, it appears questionable whether the current market design still provides incentives for investment in fossil-fuel plants. Investments in lignite and combined-cycle gas plants that are still profitable at the beginning of the NAP 2 period become increasingly unprofitable as the free allocation of allowances is phased out. Profitability is further eroded by the electricity market effects of renewables and of retiring old gas- and oil-fired plants. Hard coal plants prove to be a risky investment even with initial free allocation. The identified incentive problems for new investment should not, however, be attributed to emissions trading; they result from electricity prices oriented toward marginal costs. The incentive problem is greatest at moderate CO2 prices, and it also applies to plants with CO2 capture: although the expected abatement costs of CCS plants relative to conventional coal plants in 2025 are estimated at 25 €/t CO2 (lignite) and 38.5 €/t CO2 (hard coal), their construction becomes profitable only at CO2 prices of 50 and 77 €/t CO2, respectively. Whether and which power plant investments pay off in the long run is ultimately decided politically and is hardly predictable even under strongly idealized conditions.
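The merit-order heuristic underlying the price determination can be sketched as follows. This is a minimal illustration, not the thesis's actual model: plants are dispatched in order of ascending short-run marginal cost until demand is met, and the marginal plant sets the uniform market price. The function name and data layout are assumptions.

```python
def merit_order_price(plants, demand_mw):
    """Market-clearing price under a simple merit-order heuristic.

    plants: list of (marginal_cost_eur_per_mwh, capacity_mw) tuples.
    demand_mw: total demand to be served.
    """
    remaining = demand_mw
    # dispatch in ascending order of short-run marginal cost
    for cost, capacity in sorted(plants):
        remaining -= capacity
        if remaining <= 0:
            return cost  # the marginal plant sets the uniform price
    raise ValueError("demand exceeds total installed capacity")
```

For example, with plants [(20, 500), (35, 300), (60, 200)] and a demand of 600 MW, the 20 €/MWh plant runs at full capacity, the 35 €/MWh plant covers the remaining 100 MW, and the price is 35 €/MWh.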
The potential increase in the frequency and magnitude of extreme floods is currently discussed in the context of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance for assessing future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental change extending back more than 10,000 years, at up to seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined an ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (northeastern European Alps), covering the last 7000 years. The record consists of calcite varves and intercalated detrital layers ranging in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning, and magnetic susceptibility. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on varve counting controlled by 14C age dates, the main goals of this thesis are (i) to identify the seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. The thesis proceeds through a series of time slices, presenting an integrative approach that links instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the Lake Mondsee sediments.
The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers. Two types of detrital layers are distinguished by grain size, geochemical composition, and distribution pattern within the lake basin. Detrital layers enriched in siliciclastic and dolomitic material record sediment supply into the lake basin from the Flysch sediments and the Northern Calcareous Alps; these layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components and forming graded layers (turbidites) indicate provenance from the Northern Calcareous Alps; these layers are generally thicker (0.65-32 mm) and are recorded solely within the southern lake basin. Comparison with instrumental data shows that the thicker graded layers result from local debris-flow events in summer, whereas the thin layers are deposited during regional flood events in spring and summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997, and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and during the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to cooler climate and summer flood recurrence in the northeastern Alps. In contrast, intermediate or decreased flood activity occurred during the MCA and the LIA itself. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climatic transitions in the northeastern Alps.
The 7000-year flood chronology comprises 47 debris flows and 269 floods, with shifts to increased flood activity around 3500 and 1500 varve yr BP (varve years before present; present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling documented by major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee may also have affected human life in the early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later ones were built on piles in the water, suggesting an early adaptation to flood risk and/or a general change in the Late Neolithic lake-dweller culture for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
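The core idea of evaluating under an instrumental sampling distribution can be sketched as an importance-weighted error estimate. This is a minimal illustration under the assumption of a uniform empirical test distribution, not the thesis's actual derivation; the function name `active_error_estimate` and the `q_weights` parameter are illustrative.

```python
import random

def active_error_estimate(pool, predict, q_weights, n, rng=None):
    """Importance-weighted estimate of a model's error rate.

    pool: list of (instance, true_label) pairs available for labelling.
    predict: the model under evaluation, mapping instance -> label.
    q_weights: unnormalized instrumental sampling weights, one per instance.
    n: number of label queries.

    Weighting each queried instance by p/q keeps the estimate unbiased
    even though instances are not drawn from the test distribution.
    """
    rng = rng or random.Random(0)
    total_q = sum(q_weights)
    m = len(pool)
    # draw n instances according to the instrumental distribution q
    idx = rng.choices(range(m), weights=q_weights, k=n)
    est = 0.0
    for i in idx:
        x, y = pool[i]
        loss = 1.0 if predict(x) != y else 0.0
        # p(x) = 1/m under the uniform empirical test distribution,
        # q(x) = q_weights[i] / total_q under the instrumental distribution
        w = (1.0 / m) / (q_weights[i] / total_q)
        est += w * loss
    return est / n
```

Choosing `q_weights` proportional to the expected contribution of each instance to the estimation error (rather than uniformly) is what reduces the variance of this estimator for a fixed labelling budget.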
In addition to the question of whether so-called "I can" checklists improve performance via students' metacognitive strategies, this work also examines which students use "I can" checklists, in what form, and under which contextual conditions they are most effective. These checklists are lists of defined subject-specific and cross-curricular competencies for one or more teaching units, written for students in the form of "I can" statements and including a prompt for self- and peer assessment. A look at the publications of recent years on this topic and at school practice shows a clear turn toward developing and working with "I can" checklists and competency grids. It is all the more surprising that virtually no empirical studies on them exist (cf. Bastian & Merziger, 2007; Merziger, 2007). These overarching questions were pursued over a period of two years in a quantitative study of 197 seventh-grade Gymnasium students in the subject of German. The results indicate that "I can" checklists are an effective pedagogical instrument of self-regulation, particularly for boys: working with them fosters not only the regulation of students' own learning processes but also their willingness to invest more effort in the subject. Self-assessment of one's level of performance by means of the checklists during the intervention additionally promotes their voluntary use outside of lessons.
MHC genes encode proteins that are responsible for the recognition of foreign antigens and for triggering a subsequent, adequate immune response of the organism; they thus hold a key position in the immune system of vertebrates. The extraordinary genetic diversity of MHC genes is believed to be shaped by adaptive selection processes in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies have been performed in a wide range of wildlife species, aiming to understand the role of immune gene diversity in parasite resistance under natural selection conditions. Methodologically, most of this work has, with very few exceptions, focused only on structural diversity, i.e. the sequence diversity of the regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation does indeed underlie adaptive processes and that an individual's allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected but potentially highly relevant component concerns transcriptional differences between MHC alleles: differences in the expression levels of MHC alleles and their potential functional importance have remained unstudied. The idea that transcriptional differences might also play an important role rests on the fact that lower MHC gene expression is tantamount to reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immunoregulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory.
As no information on the nucleotide sequences of potential reference genes was available for either species, PCR primer systems established in laboratory mice had to be tested and adapted for both non-model organisms. In the course of this work, sets of stable reference genes were found for both species, establishing the preconditions for reliable measurements of mRNA levels. For D. sublineatus it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response: mRNA levels of the interleukin Il4 increased with the intensity of infection by strongyle nematodes, whereas neither MHC nor cytokine expression otherwise played a significant role in D. sublineatus. For A. flavicollis I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As lower MHC expression entails a lower immune response, this could be evidence of an immune-evasive strategy of the nematode, as has been suggested for many micro-parasites. This implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected of being immunosuppressive, e.g. by inducing regulatory T helper cells that respond with increased production of the interleukin Il10 and the transforming growth factor Tgfb. Both cytokines in turn reduce MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent activation of the immune system. Indeed, I found a strong tendency for animals carrying the allele Apfl-DRB*23 to have an increased intensity of infection with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the MHC nucleotide sequences.
The latter was evident from an elevated rate of non-synonymous to synonymous substitutions in the MHC sequences of exon 2, which encodes the functionally important antigen binding sites, whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures for quantifying the expression of immune-relevant genes are feasible also in non-model wildlife organisms. In addition to structural MHC diversity, MHC gene expression should also be considered in order to obtain a more complete picture of host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression, in which case the advantageous or disadvantageous effects of allelic binding motifs are abated. The studies could not define the role of MHC gene expression in antagonistic coevolution as such, but the results suggest that it depends strongly on the specific parasite species involved.
In this work, spherical gold nanoparticles (NPs) with diameters above ~2 nm, gold quantum dots (QDs) with diameters below ~2 nm, and gold nanorods (NRs) of varying length were synthesized and optically characterized. In addition, two new synthetic routes for preparing thermosensitive gold QDs were developed. Spherical gold NPs exhibit a plasmon band at ~520 nm, which originates from the collective oscillation of electrons. Owing to their anisotropic shape, gold NRs show two plasmon bands: a transverse band at ~520 nm and a longitudinal band whose position depends on the length-to-diameter ratio of the NRs. Gold QDs have no plasmon band, since their electrons are subject to quantum confinement; instead, owing to discrete energy levels and a band gap, they exhibit photoluminescence (PL). The synthesized gold QDs show broadband luminescence in the range of ~500-800 nm, with the luminescence properties (emission peak, quantum yield, lifetimes) depending strongly on the preparation conditions and the surface ligands. PL in gold QDs is a highly complex phenomenon and presumably arises from singlet and triplet states. Gold NRs and gold QDs could be incorporated into various polymers such as cellulose triacetate. Polymer nanocomposites containing gold NRs were, for the first time, mechanically drawn under defined conditions to obtain films with optically anisotropic (direction-dependent) properties. The temperature behaviour of gold NRs and gold QDs was also investigated: it could be shown that the size and shape of gold NRs in polymer nanocomposites can be varied locally by raising the temperature to 225-250 °C.
Es zeigte sich, dass die PL der Gold QDs stark temperaturabhängig ist, wodurch die PL QY der Proben beim Abkühlen (-7 °C) auf knapp 30 % verdoppelt und beim Erhitzen auf 70 °C nahezu vollständig gelöscht werden konnte. Es konnte demonstriert werden, dass die Länge der Alkylkette des Oberflächenliganden einen Einfluss auf die Temperaturstabilität der Gold QDs hat. Zudem wurden verschiedene neuartige und optisch anisotrope Sicherheitslabels mit Gold NRs sowie thermosensitive Sicherheitslabel mit Gold QDs entwickelt. Ebenso scheinen Gold NRs und QDs für die und die Optoelektronik (bspw. Datenspeicherung) und die Medizin (bspw. Krebsdiagnostik bzw. -therapie) von großem Interesse zu sein.
This thesis deals with the synthesis and characterization of organo-soluble thiophene- and benzodithiophene-based materials and their application as active hole-transporting semiconductor layers in field-effect transistors. In the first part of the thesis, a targeted modification of the thiophene backbone yields a new comonomer unit for the synthesis of thiophene-based copolymers. The hydrophobic hexyl groups in the 3-position of the thiophene are partially replaced by hydrophilic 3,6-dioxaheptyl groups. Via Grignard metathesis according to McCullough, random copolymers with different molar ratios of hydrophobic hexyl to hydrophilic 3,6-dioxaheptyl groups, 1:1 (P-1), 1:2 (P-2), and 2:1 (P-3), are successfully prepared. The synthesis of a defined block copolymer BP-1 by sequential addition of the comonomers is also realized. The optical and electrochemical properties of the novel copolymers are comparable to those of P3HT. All copolymers show characteristic transistor behavior in a top-gate/bottom-contact configuration. With P-1 as the active semiconductor layer in the device, PMMA as the dielectric, and silver as the gate electrode, mobilities of up to 10^-2 cm^2/Vs are achieved. As a consequence of the optimized interface between dielectric and semiconductor, the air stability of the transistors is improved over several months. In the second part of the thesis, benzodithiophene-based organic materials are prepared. For the synthesis of the novel benzodithiophene derivatives, the key compound TIPS-BDT is obtained in good yield. Difunctionalization of TIPS-BDT in the 2,6-positions via electrophilic substitution provides the desired dibromo and distannyl monomers. First, alternating copolymers with alkylated fluorene and quinoxaline units are realized via the Stille reaction.
All copolymers are characterized by good solubility in common organic solvents, high thermal stability, and good film-forming properties. Furthermore, all copolymers, with HOMO levels higher than -6.3 eV compared to the thiophene-based copolymers (P-1 to P-3), are very stable against oxidation. These copolymers show amorphous behavior in the semiconductor layers of the OFETs, and mobilities of up to 10^-4 cm^2/Vs are reached. A dependence of the device performance on the residual tin content in the polymer is demonstrated. A tin content above 0.6 % can have an enormous influence on the mobility, since the functional SnMe3 groups can act as trap states. Alternatively, the alternating TIPS-BDT/fluorene copolymer P-5-Stille is polymerized by the Suzuki method. With P-5-Suzuki as the active organic semiconductor layer in the OFET, the highest mobility of 10^-2 cm^2/Vs is achieved. This mobility is thus two orders of magnitude higher than that of P-5-Stille, since the trap states are minimized in this case and the charge transport is consequently improved. Both the homopolymer P-12 and the copolymer P-9 with the aromatic acceptor benzothiadiazole lead to poorly soluble polymers. For this reason, terpolymers of TIPS-BDT/fluorene/BTD units (P-10 and P-11) are built up on the one hand, and on the other hand an attempt is made to attach the TIPS-BDT unit to the side chain of styrene. Introducing BTD into the polymer main chain affects in particular the absorption and the electrochemical properties. Compared to the TIPS-BDT/fluorene copolymer, the absorption extends into the visible range and the LUMO level is shifted to lower values. However, no improvement in device performance is observed.
The first successful synthesis of TIPS-BDT as a side-chain polymer on styrene (P-13) leads to a soluble, amorphous polymer with OFET mobilities comparable to those of styrene-based polymers (µ = 10^-5 cm^2/Vs). A further aim of this thesis is the synthesis of low-molecular-weight, organo-soluble benzodithiophene derivatives. Via Suzuki and Stille reactions it is possible for the first time to attach various aromatic units to TIPS-BDT in the 2,6-positions through a σ-bond. UV/VIS studies show that the absorption is shifted to longer wavelengths by extending the π-conjugation length. Moreover, it is possible to incorporate thermally crosslinkable groups such as allyloxy into the molecular framework. Introducing F atoms into the molecular framework results in an enhanced packing order of the fluorobenzene-functionalized TIPS-BDT (SM-4) in the solid state, with very good electronic properties in the OFET, where mobilities of up to 0.09 cm^2/Vs are reached.
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has occurred largely in the context of decentralization reforms, in which central governments transfer (share) political, administrative, fiscal, and economic powers and functions to sub-national units. Despite great international support for and advocacy of participatory governance, in which citizens' voice plays a key role in decision-making on decentralized service delivery, there is a notable dearth of empirical evidence on the effects of such participation. This is the question this study sought to answer, based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya. Such participation is formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure that citizens play a central role in the planning, budgeting, implementation, and monitoring of locally identified services, with the aim of improving livelihoods and reducing poverty. The influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. The study finds that the participation of citizens is minimal and its influence on decentralized service delivery negligible. It concludes that despite the dismal performance of citizen participation, LASDAP has played a key role in institutionalizing citizen participation, on which future structures will build.
It recommends that an effective framework for citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and gives citizens an opportunity for legal recourse; and one that obliges LA officials both to implement citizens' proposals that meet the set criteria and to account for their actions in the management of public resources.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, such as the SUMER instrument on board the SOHO spacecraft, reveal preferential heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider, e.g., protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, whereas the solar corona is far from this state. Furthermore, fluid models cannot describe processes such as the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not have these limitations and are therefore well suited to explaining the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e.
core, halo, and a "strahl" of finite width. The model is not only applicable to the quiet Sun: the propagation of energetic electrons from a solar flare is also studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves leads not only to scattering but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations: the quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating and can therefore be expected in any hot stellar corona. The second part of this thesis details how growth or damping rates of plasma waves can be calculated from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and of whistler waves during solar flares, are studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
Moving up from the middle class : upward social mobility of households between 1984 and 2010
(2012)
This dissertation examines the intragenerational upward mobility of households from the middle class into affluence. Intragenerational mobility research has so far been understood primarily as labor-market-related individual mobility. This dissertation extends the approach to the household level, based on the idea that an individual's social position is not determined by earned income alone. The household context is equally decisive: it determines how many persons can contribute to the household income and how many share in it. In couple households, the household level furthermore serves as the arena of negotiation, where decisions about family planning, the desire for children, and, connected with this, the labor-force participation of the partners are made. The dissertation investigates these assumptions using data from the German Socio-Economic Panel (SOEP) for the years 1984 to 2010. The focus is on the household's labor-force participation and educational level, its structure, and the occupation of the household head, on the assumption that these are the main factors determining a household's financial possibilities. A further emphasis of the work lies on the historical context, since the factors named above, and their influence on households' chances of upward mobility, can be assumed to have changed over time.
Climate is the principal driving force of hydrological extremes such as floods, and attributing their generating mechanisms is an essential prerequisite for understanding past, present, and future flood variability. Successively enhanced radiative forcing under global warming increases the atmosphere's water-holding capacity and is expected to increase the likelihood of strong floods. In addition, natural climate variability affects the frequency and magnitude of these events on annual to millennial time-scales. Particularly in the mid-latitudes of the Northern Hemisphere, correlations between meteorological variables and hydrological indices suggest significant effects of changing climate boundary conditions on floods. To date, however, understanding of flood responses to changing climate boundary conditions has been limited by the scarcity of hydrological data in space and time. Exploring paleoclimate archives such as annually laminated (varved) lake sediments makes it possible to fill this gap in knowledge, offering precisely dated time-series of flood variability spanning millennia. During river floods, detrital catchment material is eroded and transported in suspension by fluid turbulence into downstream lakes. In the water body, the transport capacity of the inflowing turbidity current successively diminishes, leading to the deposition of detrital layers on the lake floor. Intercalated into annual laminations, these detrital layers can be dated down to seasonal resolution. Microfacies analyses and X-ray fluorescence scanning (µ-XRF) at 200 µm resolution were conducted on the varved Mid- to Late Holocene interval of two sediment profiles from pre-alpine Lake Ammersee (southern Germany), located in a proximal (AS10prox) and a distal (AS10dist) position relative to the main tributary, the River Ammer.
To shed light on sediment distribution within the lake, particular emphasis was placed on (1) the detection of intercalated detrital layers and their micro-sedimentological features, and (2) the intra-basin correlation of these deposits. Detrital layers were dated to the season by microscopic varve counting and determination of their microstratigraphic position within a varve. The resulting chronology is verified by accelerator mass spectrometry (AMS) 14C dating of 14 terrestrial plant macrofossils. Since ~5500 varve years before present (vyr BP), a total of 1573 detrital layers were detected in one or both of the investigated sediment profiles. Based on their microfacies, geochemistry, and proximal-distal deposition pattern, the detrital layers were interpreted as River Ammer flood deposits. Calibration of the flood layer record against instrumental daily River Ammer runoff data from AD 1926 to 1999 shows that the flood layer succession represents a significant time-series of major River Ammer floods in spring and summer, the flood season in the Ammersee region. Flood layer frequency trends are in agreement with decadal variations of the East Atlantic-Western Russia (EA-WR) atmospheric pattern back to 200 yr BP (the limit of the atmospheric data used) and with solar activity back to 5500 vyr BP. Enhanced flood frequency corresponds to the negative EA-WR phase and to reduced solar activity. These common links point to a central role of varying large-scale atmospheric circulation over Europe for flood frequency in the Ammersee region and suggest that these atmospheric variations were, in turn, likely modified by solar variability during the past 5500 years. Furthermore, the flood layer record indicates three shifts in mean layer thickness and frequency, of different manifestation in the two sediment profiles, at ~5500, ~2800, and ~500 vyr BP. Combining information from both sediment profiles made it possible to interpret these shifts as stepwise increases in mean flood intensity.
Likely triggers of these shifts are the gradual reduction of Northern Hemisphere orbital summer forcing and long-term solar activity minima. The hypothesized atmospheric response to this forcing is hemispheric cooling, which enhances equator-to-pole temperature gradients and potential energy in the troposphere. This energy is transferred into stronger westerly cyclones, more extreme precipitation, and intensified floods at Lake Ammersee. Interpreting the flood layer frequency and thickness data in combination with reanalysis models and time-series analysis made it possible to reconstruct the flood history and to decipher the flood-triggering climate mechanisms in the Ammersee region throughout the past 5500 years. Flood frequency and intensity are not stationary but are influenced by multi-causal climate forcing of large-scale atmospheric modes on time-scales from years to millennia. These results challenge future projections that propose an increase in floods as the Earth warms based solely on the assumption of an enhanced hydrological cycle.
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling the operator and the lattice size, we analyze the tunnel effect between different wells, providing sharp asymptotics for the low-lying spectrum. Our proof, inspired by the work of B. Helffer, M. Klein, and F. Nier in the continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
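For orientation, the continuous-setting operator to which this abstract refers can be sketched as follows; this is the standard semiclassical form studied by Helffer, Klein, and Nier, with h the semiclassical parameter and f the potential (the discrete analogue replaces the gradient and Laplacian by finite-difference operators on the rescaled lattice):

```latex
% Semiclassical Witten Laplacian on 0-forms, built from the
% twisted differential d_{f,h} = h e^{-f/h} \circ d \circ e^{f/h}:
\Delta^{(0)}_{f,h} \;=\; d_{f,h}^{\,*}\, d_{f,h}
 \;=\; -h^{2}\Delta \;+\; |\nabla f|^{2} \;-\; h\,\Delta f .
```

Its low-lying eigenvalues are exponentially small in 1/h and attached to the local minima (wells) of f; the splitting between them quantifies the tunnel effect analyzed in the thesis.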
Sustainable management of semi-arid African savannas under environmental and political change
(2012)
Drylands cover about 40% of the earth's land surface and provide the basis for the livelihoods of 38% of the global human population. Worldwide, these ecosystems are prone to heavy degradation, and increasing levels of dryland degradation result in a strong decline of ecosystem services. In addition, in highly variable semi-arid environments, changing future environmental conditions will potentially have severe consequences for productivity and ecosystem dynamics. Hence, global efforts have to be made to understand the particular causes and consequences of dryland degradation and to promote sustainable management options for semi-arid and arid ecosystems in a changing world. Here I particularly address the problem of semi-arid savanna degradation, which mostly occurs in the form of woody plant encroachment. I aim to identify viable sustainable management strategies and to improve the general understanding of semi-arid savanna vegetation dynamics under conditions of extensive livestock production. Moreover, the influence of external forces, i.e. environmental change and land reform, on the use of savanna vegetation and on the ecosystem response to this land use is assessed. Based on this, I identify conditions and strategies that facilitate a sustainable use of semi-arid savanna rangelands in a changing world. I extended an eco-hydrological model to simulate rangeland vegetation dynamics for a typical semi-arid savanna in eastern Namibia. In particular, I identified the response of semi-arid savanna vegetation to different land use strategies (including fire management), also with regard to different predicted precipitation, temperature, and CO2 regimes. Not only environmental but also economic and political constraints, such as land reform programmes, shape rangeland management strategies. Hence, I also aimed to understand the effects of the ongoing process of land reform in southern Africa on land use and semi-arid savanna vegetation.
To this end, I developed and implemented an agent-based ecological-economic modelling tool for interactive role plays with land users. This tool was applied in an interdisciplinary empirical study to identify general patterns in the management decisions and between-farm cooperation of land reform beneficiaries in eastern Namibia. The eco-hydrological simulations revealed that the future dynamics of semi-arid savanna vegetation depend strongly on the respective climate change scenario. In particular, I found that the capacity of the system to sustain domestic livestock production will depend strongly on changes in the amount and temporal distribution of precipitation. In addition, my simulations revealed that shrub encroachment will become less likely under future climatic conditions, even though positive effects of CO2 on woody plant growth and transpiration were considered. While earlier studies predicted a further increase in shrub encroachment due to increased levels of atmospheric CO2, my contrary finding is based on the negative impacts of rising temperature on the drought-sensitive seedling germination and establishment of woody plant species. Further simulation experiments revealed that prescribed fires are an efficient tool for semi-arid rangeland management, since they suppress the establishment of woody plant seedlings. The tested strategies increased the long-term productivity of the savanna in terms of livestock production and decreased the risk of shrub encroachment (i.e. savanna degradation). This finding refutes the view promoted by existing studies that fires are of minor importance for the vegetation dynamics of semi-arid and arid savannas. Again, the difference in predictions is related to the bottleneck at the seedling establishment stage of woody plants, which has not been sufficiently considered in earlier studies.
The ecological-economic role plays with Namibian land reform beneficiaries showed that the farmers adjusted their herd sizes according to economic rather than environmental variables. Hence, they do not manage opportunistically by tracking grass biomass availability but rather apply conservative management strategies with low stocking rates. This implies that, under the given circumstances, the management of these farmers will not per se cause (or further worsen) the problem of savanna degradation and shrub encroachment through overgrazing. However, since my results indicate that this management strategy is driven by high financial pressure, it is not an indicator of successful rangeland management. Rather, farmers struggle hard to make any positive revenue from their farming business, and the success of the Namibian land reform is currently disputable. The role plays also revealed that cooperation between farmers is difficult, even though it is obligatory given the often small farm sizes. I therefore propose that cooperation be facilitated to improve the success of land reform beneficiaries.
The Indian summer monsoon (ISM) is one of the largest climate systems on Earth and impacts the livelihoods of nearly 40% of the world's population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long-term high-resolution records, the spatial inhomogeneity of monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections such as the El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long-term high-resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long-term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (the transitional westerlies and ISM domain in the Spiti valley) and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate "snapshots" of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only a first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the active tectonic orogen of the Himalayas, it was essential to understand the role of regional tectonics in order to make valid interpretations of catchment erosion and detrital influx into the lake.
My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations of the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on frequent and intense periods of detrital influx (ca. 6.8-6.1 cal ka BP) indicating extreme hydrological events in the past. Regional comparison for this time slice indicates a possible extended "break-monsoon-like" mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas, and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies, which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well-dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally drier conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations of these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
The development of new processes for returning palladium from scrap materials, such as spent automotive exhaust catalysts, to the materials cycle is desirable from both an ecological and an economic point of view. In this work, new liquid-liquid and solid-liquid extraction agents were developed with which palladium(II) can be recovered from an oxidizing hydrochloric leaching solution that contains, besides palladium, platinum and rhodium as well as numerous base metals. The new extraction agents, unsaturated monomeric 1,2-dithioethers and oligomeric ligand mixtures with vicinal dithioether units, are highly selective, in contrast to many extraction agents reported in the literature. Owing to their geometric and electronic preorganization, they form stable square-planar chelate complexes with palladium(II). For the development of the liquid-liquid extraction agent, a series of unsaturated 1,2-dithioether ligands was prepared, based on a rigid 1,2-dithioethene unit embedded in a varying electron-withdrawing backbone and bearing polar side chains. In addition to determining the crystal structures of the ligands and their palladium dichloride complexes, the electro- and photochemical properties, the complex stability, and the behavior in solution were investigated. Liquid-liquid extraction studies showed that some of the new ligands are superior to industrially used extraction agents owing to a faster attainment of the extraction equilibrium.
Based on criteria decisive for industrial applicability, such as good oxidation resistance, a high extraction yield (even at high hydrochloric acid concentrations of the feed solution), fast extraction kinetics, and a high selectivity for palladium(II), a suitable liquid-liquid extraction agent was selected from the series of six ligands: 1,2-bis(2-methoxyethylthio)benzene. With this compound, a practice-oriented liquid-liquid extraction system was developed. After stepwise adaptation of the aqueous phase from a model solution to the oxidizing hydrochloric leaching solution, a suitable solvent usable on an industrial scale (1,2-dichlorobenzene) and an efficient stripping agent (0.5 M thiourea in 0.1 M HCl) were selected. The high palladium(II) selectivity of this liquid-liquid extraction system was verified, and its reusability and practical suitability were demonstrated. It was further shown that, on contact with oxidizing media, small amounts of the thioether sulfoxide 1-(2-methoxyethylsulfinyl)-2-(2-methoxyethylthio)benzene are formed from the dithioether 1,2-bis(2-methoxyethylthio)benzene. This species is protonated in acidic media and accelerates the extraction like a phase-transfer catalyst, without reducing the palladium(II) selectivity. The crystal structure of the palladium dichloride complex of the thioether sulfoxide shows that the unprotonated ligand coordinates palladium(II) via the chelating sulfur atoms, analogously to the dithioether. Various mixtures of oligo(dithioether) ligands and the monomeric ligand 1,2-bis(2-methoxyethylthio)benzene served as extraction agents for solid-liquid extraction experiments with SIRs (solvent-impregnated resins) and were adsorbed for this purpose on hydrophilic silica gel and organophilic Amberlite® XAD 2.
The oligo(dithioether) ligands are based on 1,2-dithiobenzene or 1,2-dithiomaleonitrile units linked via tris(oxyethylene)ethylene or trimethylene bridges. Batch experiments showed that structural differences, such as the type of chelating unit, the type of bridging chain, and the support material, affect the extraction yields, the extraction kinetics, and the loading capacity. The silica-gel-based SIRs reach the extraction equilibrium much faster than those based on Amberlite® XAD 2. However, the extraction agents adhere permanently to Amberlite® XAD 2, in contrast to silica gel. In hydrochloric media, the 1,2-dithiobenzene derivatives are better suited as extraction agents than the 1,2-dithiomaleonitrile derivatives. Column experiments with the oxidizing hydrochloric leaching solution and reusable Amberlite® XAD 2-based SIRs impregnated with 1,2-dithiobenzene derivatives showed that very low pump rates are required to achieve high loading capacities. Nevertheless, the good palladium(II) selectivity of these solid-phase materials could be demonstrated. However, in contrast to the eluates resulting from liquid-liquid extraction, the eluates also contained small amounts of platinum, aluminium, iron, and lead in addition to the palladium.
The present thesis is to be brought into line with the current need for alternative and sustainable approaches toward energy management and materials design. In this context, carbon in particular has become the material of choice in many fields such as energy conversion and storage. Herein, three main topics are covered: 1)An alternative synthesis strategy toward highly porous functional carbons with tunable porosity using ordinary salts as porogen (denoted as “salt templating”) 2)The one-pot synthesis of porous metal nitride containing functional carbon composites 3)The combination of both approaches, enabling the generation of highly porous composites with finely tunable properties All approaches have in common that they are based on the utilization of ionic liquids, salts which are liquid below 100 °C, as precursors. Just recently, ionic liquids were shown to be versatile precursors for the generation of heteroatom-doped carbons since the liquid state and a negligible vapor pressure are highly advantageous properties. However, in most cases the products do not possess any porosity which is essential for many applications. In the first part, “salt templating”, the utilization of salts as diverse and sustainable porogens, is introduced. Exemplarily shown for ionic liquid derived nitrogen- and nitrogen-boron-co-doped carbons, the control of the porosity and morphology on the nanometer scale by salt templating is presented. The studies within this thesis were conducted with the ionic liquids 1-Butyl-3-methyl-pyridinium dicyanamide (Bmp-dca), 1-Ethyl-3-methyl-imidazolium dicyanamide (Emim-dca) and 1 Ethyl 3-methyl-imidazolium tetracyanoborate (Emim-tcb). The materials are generated through thermal treatment of precursor mixtures containing one of the ionic liquids and a porogen salt. 
By simple removal of the non-carbonizable template salt with water, functional graphitic carbons with pore sizes ranging from micro- to mesoporous and surface areas of up to 2000 m² g-1 are obtained. The carbon morphologies, which presumably originate from different onsets of demixing, depend mainly on the nature of the porogen salt, whereas the nature of the ionic liquid plays a minor role. A structural effect of the porogen salt, rather than activation, can therefore be assumed. This offers an alternative to conventional activation and templating methods, avoiding multi-step and energy-consuming synthesis pathways as well as the use of hazardous chemicals for template removal. The composition of the carbons can be altered via the heat-treatment procedure: at lower synthesis temperatures, rather polymeric carbonaceous materials with a high degree of functional groups and high surface areas become accessible. First results suggest the suitability of the materials for CO2 utilization. To further illustrate the potential of ionic liquids as carbon precursors and to expand the class of obtainable carbons, the ionic liquid 1-ethyl-3-methylimidazolium thiocyanate (Emim-scn) is introduced for the generation of nitrogen-sulfur-co-doped carbons, in combination with the already studied ionic liquids Bmp-dca and Emim-dca. The salt templating approach should be applicable here as well, further illustrating its scope. In the second part, a one-pot and template-free synthesis approach toward inherently porous, metal nitride nanoparticle containing nitrogen-doped carbon composites is presented. Since ionic liquids also offer outstanding solubility properties, the materials can be generated through the carbonization of homogeneous solutions of an ionic liquid, acting as both nitrogen and carbon source, and the respective metal precursor.
The metal content and surface area are easily tunable via the initial amount of metal precursor. Furthermore, it is also possible to synthesize composites with ternary nitride nanoparticles whose composition is adjustable by the metal ratio in the precursor solution. Finally, both approaches are combined into salt templating of the one-pot composites. This opens the way to the one-step synthesis of composites with tunable composition and particle size as well as precisely controllable porosity and morphology, and it avoids a drawback of common synthesis strategies, where the product composition is often negatively affected by the template removal procedure. The composites are further shown to be suitable as electrodes for supercapacitors. Different properties such as porosity, metal content and particle size are investigated and discussed with respect to their influence on the energy storage performance. Because a variety of ionic liquids, metal precursors and salts can be combined, and a simple closed-loop process including salt recycling is conceivable, the approaches present a promising platform for sustainable materials design.
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating methods for the preparation of organic thin films for various opto-electronic applications have been discovered and developed further. Among others, phthalocyanine molecules in photoactive layers have been intensively studied for the fabrication of solar cells. Because of their low or unknown solubility, phthalocyanine films have traditionally been prepared by vacuum deposition. Alternatively, the solubility has been increased by chemical synthesis, which, however, compromises the properties of the phthalocyanines. In this work, the solubility, optical absorption and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Owing to its sufficient solubility, stability and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigation. By spin coating CuPc from TFA solution, a thin film was deposited on the substrate from the evaporating solution. After the solvent has evaporated, CuPc nanoribbons cover the substrate. The nanoribbons are about 1 nm thick (the typical dimension of a CuPc molecule) and of varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin coating as well as by other wet-coating methods, such as dip coating. Similar fibril structures form upon wet coating of other metal phthalocyanines, such as iron and magnesium phthalocyanine, from TFA solution, and on other substrates, such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were investigated in detail by X-ray diffraction as well as spectroscopic and microscopic methods.
It is shown that the nanoribbons do not form in solution but through evaporation of the solvent and supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail, and the formation of CuPc nanoribbons from a supersaturated solution was discussed within nucleation and growth theory. The shape of the nanoribbons was discussed with regard to the interactions between the molecules and the substrate. The wet-processed CuPc thin film was used as the donor layer in organic bilayer solar cells with the C60 molecule as the acceptor. The power conversion efficiency of such cells was investigated as a function of the thickness of the CuPc layer.
Model-driven engineering (MDE) promises to ease software development by reducing the complexity inherent in classical software development. To deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g., model transformations or code generators). DSMs conform to domain-specific modeling languages (DSMLs), which raise the level of abstraction, and model operations are first-class entities of software development because they raise the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, caused essentially by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, conforming to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only describe a specific aspect of a software system, at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even harder in combination with the first dimension of complexity.
Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. Beyond that, the approach constitutes comprehensive model management: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis provides a method for specifying decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two kinds of compositions: data-flow compositions and context compositions. Data-flow composition defines a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. Moreover, the actual implementations of the model operations being composed do not need to address any composition concerns themselves.
The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
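A data-flow composition of this kind can be illustrated with a minimal, hypothetical sketch. This is not the thesis' megamodel formalism; the names `Operation` and `run_megamodel` and the toy operations are invented for illustration. The key idea it shows is that operations declare only which DSMs they read and write, and an executor applies them as soon as their inputs exist, with no operation knowing about any other.

```python
# Minimal sketch (hypothetical names, not the thesis' formalism) of a
# data-flow composition: model operations are coupled only by the names
# of the domain-specific models (DSMs) they read and write.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Operation:
    name: str
    reads: list        # names of input DSMs
    writes: list       # names of output DSMs
    apply: Callable    # transformation body, returns a tuple of outputs

def run_megamodel(operations, models):
    """Apply operations in dependency order: an operation runs once
    every DSM it reads is available in the 'models' store."""
    pending = list(operations)
    order = []
    while pending:
        ready = [op for op in pending if all(r in models for r in op.reads)]
        if not ready:
            raise ValueError("unsatisfiable dependencies: " +
                             ", ".join(op.name for op in pending))
        for op in ready:
            outputs = op.apply(*(models[r] for r in op.reads))
            for name, value in zip(op.writes, outputs):
                models[name] = value
            order.append(op.name)
            pending.remove(op)
    return order

# Two toy operations: a model-to-model transformation followed by a
# code-generation step that consumes its output.
m2m = Operation("uml2rdb", ["uml"], ["rdb"], lambda uml: (uml + "->tables",))
gen = Operation("rdb2sql", ["rdb"], ["sql"], lambda rdb: (rdb + "->ddl",))

store = {"uml": "ClassDiagram"}
print(run_megamodel([gen, m2m], store))   # operations may be given out of order
print(store["sql"])
```

Re-applying the network after a source DSM changes amounts to re-running the executor on the updated store, which is the automated (re-)application the thesis attributes to such compositions.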
Restenosis is a central problem in interventional cardiology and the most frequent complication after percutaneous angioplasty. The main cause of this re-narrowing of the vessel is the formation of a neointima through the proliferation of transdifferentiated vascular smooth muscle cells and the secretion of extracellular matrix. The generation of reactive oxygen species (ROS) and the inflammatory response after vascular injury are discussed as early processes inducing neointima formation. In this work, several projects were carried out to shed light on the processes taking place during neointima formation. Using an injury model of the murine femoral artery, the influence of inflammation and ROS formation on neointima formation in the mouse was investigated. Treatment with the mitochondrial superoxide dismutase mimetic MitoTEMPO reduced neointima formation more effectively than treatment with the global ROS scavenger N-acetylcysteine. The strongest inhibition of neointima formation, however, was achieved by immunosuppression with rapamycin. Interferon-γ (IFNγ) is an important cytokine of the Th1 immune response that is released after vascular injury and induces the proinflammatory chemokines CXCL9 (MIG, monokine induced by IFN), CXCL10 (IP-10, IFN-inducible protein of 10 kDa) and CXCL11 (I-TAC, interferon-inducible T cell chemoattractant). CXCL9, CXCL10 and CXCL11 are ligands of the CXC chemokine receptor 3 (CXCR3) and chemotactically attract CXCR3-positive inflammatory cells to the site of vascular injury. Therefore, the specific role of the chemokine CXCL10 in restenosis was investigated. To this end, CXCL10-deficient mice were subjected to the femoral artery injury model, and the vessels were examined morphometrically and immunohistologically after 14 days.
CXCL10 deficiency led to reduced neointima formation in mice, which correlated with reduced inflammation, apoptosis and proliferation in the injured vessel. Besides inflammation, however, re-endothelialization of the injured vessel wall also influences restenosis. Interestingly, re-endothelialization was also considerably improved in the CXCL10 knockout mice compared with wild-type mice. Evidently, the CXCR3 chemokine system is involved in entirely different biological processes and influences neointima formation not only by promoting inflammation but also by suppressing re-endothelialization of the injured vessel wall. Indeed, CXCR3 is expressed not only on inflammatory cells but also on endothelial cells. To investigate the roles of CXCR3 in inflammation and in re-endothelialization separately, the generation of conditional CXCR3 knockout mice, in which CXCR3 is inactivated either in inflammatory cells or in endothelial cells, was begun in this work. To better understand the molecular mechanisms by which CXCR3 mediates its functions, it was also investigated whether it interacts with other G-protein-coupled receptors (GPCRs). The analysis of co-immunoprecipitates points to homodimerization of the two CXCR3 splice variants CXCR3A and CXCR3B, to heterodimer formation between CXCR3A and CXCR3B, and to heterodimers of each with CCR2, CCR3, CCR5 and the opioid receptors MOR and KOR. The fluorescence resonance energy transfer (FRET) method tested here, however, proved unsuitable for studying CXCR3, since CXCR3 was not correctly transiently expressed in HEK293T cells. Overall, the results of this work indicate that the CXCR3 chemokine system plays a central role in different processes influencing neointima formation.
Thus, CXCR3 and in particular the chemokine CXCL10 could be interesting target molecules for the development of new, improved therapies to prevent restenosis.
This thesis investigates the gradient flow of Dirac-harmonic maps. Dirac-harmonic maps are critical points of an energy functional motivated by supersymmetric field theories; the critical points couple the equation for harmonic maps with spinor fields. At present, many analytical properties of Dirac-harmonic maps are known, but a general existence result is still missing. In this thesis the existence question is studied using the evolution equations for a regularized version of Dirac-harmonic maps. Since the energy functional for Dirac-harmonic maps is unbounded from below, the method of the gradient flow cannot be applied directly. Thus, we first consider a regularization prescription for Dirac-harmonic maps and then study the gradient flow. Chapter 1 provides background material on harmonic maps and harmonic spinors and summarizes the currently known results about Dirac-harmonic maps. Chapter 2 introduces the notion of Dirac-harmonic maps in detail and presents a regularization prescription. In Chapter 3 the evolution equations for regularized Dirac-harmonic maps are introduced, the evolution of certain energies is discussed, and the existence of a short-time solution to the evolution equations is established. Chapter 4 analyzes the evolution equations in the case that the domain manifold is a closed curve. Here, the existence of a smooth long-time solution is proven. Moreover, for a sufficiently large regularization parameter, it is shown that the evolution equations converge to a regularized Dirac-harmonic map. Finally, it is discussed in which sense the regularization can be removed. In Chapter 5 the evolution equations are studied when the domain manifold is a closed Riemannian spin surface. For a sufficiently large regularization parameter, the existence of a global weak solution, which is smooth away from finitely many singularities, is proven.
It is shown that the evolution equations converge weakly to a regularized Dirac-harmonic map. In addition, it is discussed whether the regularization can be removed in this case.
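For orientation, the energy functional in question can be written in its standard literature form (the thesis' own conventions and normalizations may differ): for a map φ from a closed Riemannian spin manifold M to a Riemannian manifold N and a spinor ψ along φ,

```latex
% Standard energy functional for Dirac-harmonic maps (literature form;
% the normalization used in the thesis may differ):
E(\phi,\psi) = \frac{1}{2} \int_M \left( |d\phi|^2
    + \langle \psi,\, D\psi \rangle \right) \mathrm{d}v_g
```

Here D denotes the Dirac operator twisted by φ, acting on sections of ΣM ⊗ φ*TN. Since the spectrum of D is unbounded in both directions, the second term, and hence E itself, is unbounded from below, which is exactly why a direct gradient flow is unavailable and a regularization is needed.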
Mycorrhiza (Greek: mýkēs, "fungus"; rhiza, "root") is a symbiosis between fungi and the majority of land plants. Through the symbiosis, the fungus improves the plant's supply of nutrients, while the plant supplies the fungus with carbohydrates. Arbuscular mycorrhiza (AM) is a special form of mycorrhiza: during the symbiosis, the AM fungus forms the eponymous arbuscules within the root cells as the site of primary nutrient exchange. The AM symbiosis (AMS) is the research focus of this work. Medicago truncatula and Glomus intraradices were used as model organisms. Transcription analyses were carried out to identify, among other things, AMS-regulated transcription factors (TFs). The activity of the promoters of three of the AMS-regulated TFs identified in this way (MtOFTN, MtNTS, MtDES) was visualized using a reporter gene. In one case, the region of greatest promoter activity was restricted to the arbuscule-containing cells (MtOFTN). In the second case, the promoter was also active in cells without arbuscules, but most strongly active in arbuscule-containing cells (MtNTS). A third promoter was equally active in arbuscule-containing cells and their neighboring cells (MtDES). In addition, further genes were identified as AMS-regulated, and promoter::reporter activity studies were also carried out for three of these genes (MtPPK, MtAmT, MtMDRL). The promoters of the kinase (MtPPK) and the ammonium transporter (MtAmt) were active exclusively in arbuscule-containing cells, whereas the activity of the ABC transporter (MtMDRL) could not be assigned to a specific cell type. For two further identified genes, a copper transporter (MtCoT) and a sugar/inositol transporter (MtSuT), RNA interference (RNAi) studies were carried out.
In both cases it turned out that, as soon as an RNAi effect was present in the transformed roots, these were colonized by G. intraradices to a markedly lesser extent than the control roots. For MtCoT, this could occur for the same reason as for MtPt4. Which role MtSuT, and inositol in general, plays in the establishment of the AMS would have to be clarified by further studies at the protein level. Further studies of the genes shown in this work to be specific for arbuscule-containing cells, MtAmT, MtPPK and MtOFTN, could likewise be informative for a further understanding of the AMS. The same applies to the TFs MtNTS and MtDES, which, although not transcribed exclusively in arbuscule-containing cells, also appear to play a role in the regulation of the AMS within M. truncatula roots.
In this work, the effects of synchronization of nonlinear acoustic oscillators are investigated using the example of two organ pipes. From existing experimental measurement data, the typical signatures of synchronization are extracted and presented. A detailed analysis follows of the transition regions into the synchronization plateau, of the phenomena during synchronization, and of the exit from the synchronization region of the two organ pipes, at different coupling strengths. The experimental findings raise questions about the coupling function. To address them, sound generation in an organ pipe is investigated. With the help of numerical simulations of sound generation, the question is pursued which fluid-dynamical and aero-acoustic mechanisms underlie sound generation in the organ pipe and to what extent these mechanisms can be mapped onto the model of a self-sustained acoustic oscillator. Using the method of coarse graining, a model ansatz is formulated.
Assuming that liquid iron alloy from the outer core interacts with the solid, silicate-rich lower mantle, the influence on the core-mantle reflected phase PcP is studied. If the core-mantle boundary is not a sharp discontinuity, this becomes apparent in the waveform and amplitude of PcP. Iron-silicate mixing would lead to regions of partial melting with higher density, which in turn reduces the velocity of seismic waves. On the basis of the calculation and interpretation of short-period synthetic seismograms, computed with the reflectivity and Gaussian beam methods, a model space for these ultra-low velocity zones (ULVZs) is evaluated. The aim of this thesis is to analyse the behaviour of PcP between 10° and 40° source distance for such models using different velocity and density configurations. Furthermore, the resolution limits of seismic data are discussed, and the influence of the assumed layer thickness, dominant source frequency and ULVZ topography is analysed. The Gräfenberg and NORSAR arrays are then used to investigate PcP from deep earthquakes and nuclear explosions. The seismic resolution of a ULVZ is limited both for velocity and density contrasts and for layer thicknesses. Even a very thin global core-mantle transition zone (CMTZ), rather than a discrete boundary and even with strong impedance contrasts, seems possible: if no precursor is observable but the PcP_model/PcP_smooth amplitude reduction amounts to more than 10%, a very thin ULVZ of 5 km with a first-order discontinuity may exist. Otherwise, if amplitude reductions of less than 10% are obtained, this could indicate either a moderate, thin ULVZ or a gradient mantle-side CMTZ. Synthetic computations reveal notable amplitude variations as a function of distance and impedance contrast, predicting a primary density effect in the very steep-angle range and a pronounced velocity dependency in the wide-angle region.
In view of the modelled findings, there is evidence for a 10 to 13.5 km thick ULVZ 600 km south-east of Moscow with a NW-SE extension of about 450 km. Here a single specific assumption about the velocity and density anomaly is not possible; this is in agreement with the synthetic results, in which several models create similar amplitude-waveform characteristics. For example, a ULVZ model with contrasts of -5% VP, -15% VS and +5% density explains the measured PcP amplitudes. Moreover, below SW Finland and NNW of the Caspian Sea a CMB topography can be assumed. The amplitude measurements indicate a topography with a wavelength of 200 km and a height of 1 km, as previously also shown in the study by Kampfmann and Müller (1989). Better constraints might be provided by a joint analysis of seismological data, mineralogical experiments and geodynamic modelling.
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems of comprehensibility: a large number of highly detailed, textured objects within a virtual 3D city model may create visual noise and overload users with information, and objects are subject to perspective foreshortening and may be occluded or displayed too small to be meaningful. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations, which have a reduced degree of detail while preserving essential characteristics. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The individual building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell.
For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets and additionally discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for the geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step, a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, a scaling factor is computed from the virtual camera distance and applied to each landmark, interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and to a generalized 3D city model. In addition, we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualizations of virtual 3D terrain models. The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, each primitive, i.e., triangle, and each fragment.
For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
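The per-vertex and per-triangle steps described above can be sketched in a few lines. This is a simplified CPU illustration with hypothetical names (`quantize_height`, `needs_subdivision`) assuming uniformly spaced isovalues; the actual technique runs in the vertex and primitive stages of the GPU pipeline.

```python
# Simplified sketch of the isocontour technique's classification steps
# (hypothetical names; assumes uniformly spaced isovalues 'iso_step' apart).

def quantize_height(h, iso_step):
    """Per-vertex step: snap height h to the nearest isovalue."""
    return round(h / iso_step) * iso_step

def needs_subdivision(heights, iso_step):
    """Per-triangle step: if the three vertices quantize to different
    isovalues, the triangle crosses a step boundary and must be
    subdivided to form the partial step geometry."""
    isovalues = {quantize_height(h, iso_step) for h in heights}
    return len(isovalues) > 1

print(quantize_height(123.4, 50))          # snaps to the nearest multiple of 50
print(needs_subdivision([10, 20, 60], 50)) # triangle spans two steps
```

The real implementation performs the subdivision in the geometry stage and resolves surface appearance (texture, shading, height-color mapping) per fragment, as described above.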
From the contents: - Focus topic: human rights and citizenship - Are there human rights without citizenship? - Human dignity and citizenship - The General Comments of the United Nations Human Rights Committee: a contribution to the development of international law - Political self-determination as a human right and in international law - Libya and externally supported regime change
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for the estimation of the self-similarity exponent as well as for the identification of long-range dependence (long memory). In this thesis I present a Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach yields the point estimator and confidence intervals at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where estimation of the Hurst exponent is possible. Since the Gaussian self-similar processes form one of the substantial classes of interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Applications to real-world data, such as the water level of the Nile River and fixational eye movements, are also discussed.
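To make the estimated quantity concrete, the following sketch recovers H not with the Bayesian method of the thesis but with a simple classical scaling argument: for a self-similar record, Var[X(t+k) - X(t)] grows like k^(2H), so the slope of log standard deviation versus log lag estimates H. Function and parameter names here are illustrative.

```python
# Illustrative (non-Bayesian) Hurst estimate from increment-variance
# scaling: std[X(t+k) - X(t)] ~ k^H for a self-similar process.
import numpy as np

def hurst_from_increments(x, lags=range(2, 20)):
    x = np.asarray(x, dtype=float)
    tau = [np.std(x[k:] - x[:-k]) for k in lags]
    # slope of log std vs. log lag equals H under exact self-similarity
    slope, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return slope

# Ordinary Brownian motion (cumulative white noise) has H = 0.5,
# so the estimate should land close to that value.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(10_000))
print(hurst_from_increments(bm))
```

Unlike the Bayesian approach described in the abstract, this point estimate comes without a confidence interval and degrades on short or unevenly sampled records, which is precisely the gap the thesis addresses.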
Microorganisms in geothermal aquifers: the influence of microbial processes on plant operation
(2012)
In fluid, filter and sediment samples from four geothermal plants of the North German Basin, different microbial communities were detected using molecular genetic methods. The microbial composition in the process waters was influenced by aquifer depth, salinity, temperature and the available electron donors and acceptors. The organisms identified in the anoxic process waters were characterized by a chemoheterotrophic or chemoautotrophic metabolism, with nitrate, sulfate, iron(III) or bicarbonate serving as terminal electron acceptors. Microorganisms negatively affected the operation of two plants. In the process water of the cold store at the Berlin Reichstag, iron oxidizers closely related to the genus Gallionella reduced the injectivity of the wells through iron hydroxide precipitates in the filter slots. Biofilms formed by sulfur-oxidizing bacteria of the genus Thiothrix in the filters of the surface facility also led to operational disturbances by hindering the injection of the fluid into the aquifer. At the heat store in Neubrandenburg, sulfate reducers were probably involved in the formation of iron sulfide precipitates in the surface filters and in the near-wellbore region, and they intensified corrosion processes at the pump in the well on the cold side of the aquifer. Organic acids in the fluids and mineral precipitates in the filters of the surface facilities were evidence of the activity of the microorganisms present in the various plants. It also became clear that, owing to the high flow rates in the plants, microorganisms indicate chemical changes in the process waters much more sensitively than chemical analysis methods.
Thus, changes in the composition of the microbial biocenoses, and in particular the identification of indicator organisms such as iron and sulfur oxidizers, fermentative bacteria and sulfate reducers, pointed to an increased availability of electron donors or acceptors in the process waters. In this way, the causes of the operational disturbances occurring at the geothermal plants could be identified.
Irrwege der Klimapolitik
(2012)
Contents: I. Introduction II. There is no normal climate III. Consequences of climate change IV. Consequences of climate policy V. Conclusions
Metals are often used in environments that are conducive to corrosion, which reduces their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only while intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction create an urgent need for environmentally friendly corrosion-preventing systems. A promising approach to replacing the toxic chromate coatings is to embed particles containing a nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers of different sizes (d ≈ 80 and 700 nm, respectively) were investigated. These robust containers exhibit a high surface area (≈ 1000 m² g⁻¹), a narrow pore size distribution (d_pore ≈ 3 nm), and a large pore volume (≈ 1 mL g⁻¹), as determined by N₂ sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor as well as its release in response to pH changes induced by the corrosion process.
The concentration, position and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating barrier properties. This study broadens the knowledge about the main factors influencing the coating anticorrosion efficiency and assists the development of optimum active anticorrosive coatings doped with inhibitor loaded containers.
Electrolytes for use in automotive batteries face particular requirements regarding energy and power density, for example to keep thermal losses low. Highly conductive electrolytes with conductivities in the millisiemens range are needed, as are safe materials, i.e. ones that are as non-flammable as possible and have a low vapor pressure. To meet these requirements, a polymeric separator must be developed that dispenses with flammable organic solvents, thereby providing a drastic increase in safety while still meeting the conductivity targets. To this end, a concept based on combining a polymeric oxygen-rich matrix with an ionic liquid was developed and verified. The following results were obtained: 1. Novel diacrylated oxygen-rich matrix components with many carbonyl functions, for good lithium conductivity, were synthesized. 2. Several new ionic liquids, based on both imidazole and ammonium, were synthesized and characterized. 3. The influences of the cation structure and of the counterions on melting points and conductivities were investigated. 4. Blend systems were prepared from the developed materials and studied by impedance spectroscopy: conductivities of 10⁻⁴ S/cm at room temperature are achievable. 5. The blend systems were examined for their thermal stability: stabilities up to 250 °C are attainable, and no crystalline structure is observed.
Growing populations, continued economic development, and limited natural resources are critical factors affecting sustainable development. These factors are particularly pertinent in developing countries in which large parts of the population live at a subsistence level and options for sustainable development are limited. Therefore, addressing sustainable land use strategies in such contexts requires that decision makers have access to evidence-based impact assessment tools that can help in policy design and implementation. Ex-ante impact assessment is an emerging field poised at the science-policy interface and is used to assess the potential impacts of policy while also exploring trade-offs between economic, social and environmental sustainability targets. The objective of this study was to operationalise the impact assessment of land use scenarios in the context of developing countries that are characterised by limited data availability and quality. The Framework for Participatory Impact Assessment (FoPIA) was selected for this study because it allows for the integration of various sustainability dimensions, the handling of complexity, and the incorporation of local stakeholder perceptions. FoPIA, which was originally developed for the European context, was adapted to the conditions of developing countries, and its implementation was demonstrated in five selected case studies. 
In each case study, different land use options were assessed, including (i) alternative spatial planning policies aimed at the controlled expansion of rural-urban development in the Yogyakarta region (Indonesia), (ii) the expansion of soil and water conservation measures in the Oum Zessar watershed (Tunisia), (iii) the use of land conversion and the afforestation of agricultural areas to reduce soil erosion in Guyuan district (China), (iv) agricultural intensification and the potential for organic agriculture in Bijapur district (India), and (v) land division and privatisation in Narok district (Kenya). The FoPIA method was effectively adapted by dividing the assessment into three conceptual steps: (i) scenario development; (ii) specification of the sustainability context; and (iii) scenario impact assessment. A new methodological approach was developed for communicating alternative land use scenarios to local stakeholders and experts and for identifying recommendations for future land use strategies. Stakeholder and expert knowledge served as the main source of information for the impact assessment and was complemented by available quantitative data. Based on the findings from the five case studies, FoPIA was found to be suitable for implementing the impact assessment at the case study level while ensuring a high level of transparency. FoPIA supports the identification of causal relationships underlying regional land use problems, facilitates communication among stakeholders, and illustrates the effects of alternative decision options with respect to all three dimensions of sustainable development. Overall, FoPIA is an appropriate tool for performing preliminary assessments but cannot replace a comprehensive quantitative impact assessment; whenever possible, it should be accompanied by evidence from monitoring data or analytical tools.
When using FoPIA for a policy oriented impact assessment, it is recommended that the process should follow an integrated, complementary approach that combines quantitative models, scenario techniques, and participatory methods.
Immune genes of the major histocompatibility complex (MHC) constitute a central component of the adaptive immune system and play an essential role in parasite resistance and associated life-history strategies. In addition to pathogen-mediated selection, sexual selection mechanisms have also been identified as main drivers of the typically observed high levels of polymorphism in functionally important parts of the MHC. Recognition of the individual MHC constitution is presumed to be mediated through olfactory cues. Indeed, MHC genes are in physical linkage with olfactory receptor genes and alter the individual body odour. Moreover, they are expressed on sperm and trophoblast cells. Thus, MHC-mediated sexual selection processes might act not only in direct mate choice decisions, but also through cryptic processes during reproduction. Bats (Chiroptera) represent the second largest mammalian order and have been identified as important vectors of newly emerging infectious diseases affecting humans and wildlife. In addition, they are interesting study subjects in evolutionary ecology in the context of olfactory communication, mate choice, and associated fitness benefits. It is therefore surprising that Chiroptera belong to the least studied mammalian taxa in terms of their MHC evolution. In my doctoral thesis I aimed to gain insights into the evolution and diversity patterns of functional MHC genes in some of the major New World bat families by establishing species-specific primers through genome walking into unknown flanking regions of familiar sites. Further, I took a free-ranging population of the lesser bulldog bat (Noctilio albiventris) in Panama as an example to investigate the functional importance of the individual MHC constitution in parasite resistance and reproduction, as well as the possible underlying selective forces shaping the observed diversity.
My studies indicated that the typical MHC characteristics observed in other mammalian orders, such as evidence for balancing and positive selection as well as recombination and gene conversion events, are also present in bats and shape their MHC diversity. I found a wide range of copy number variation of expressed DRB loci in the investigated species. In Saccopteryx bilineata, a species with a highly developed olfactory communication system, I found an exceptionally high number of MHC locus duplications generating high levels of variability at the individual level, which has not been described for any other mammalian species so far. My studies are the first to include phylogenetic relationships of MHC genes in bats. I found signs of a family-specific, independent mode of evolution of duplicated genes, regardless of whether the highly variable exon 2 (coding for the antigen-binding region of the molecule) or the more conserved exons 3 and 4 (encoding protein-stabilizing parts) were considered, indicating a monophyletic origin of duplicated loci within families. This result questions the generally assumed pattern of MHC evolution in mammals, in which duplicated genes of different families usually cluster together, suggesting that duplication occurred before speciation took place and implying a trans-species mode of evolution. However, within genera (Noctilio, Myotis) I did find a trans-species mode of evolution based on exon 2, signified by an intermingled clustering of DRB alleles. The knowledge gained on MHC sequence evolution in major New World bat families will facilitate future MHC investigations in this order. In the N. albiventris study population, the single expressed MHC class II DRB gene showed high sequence polymorphism, moderate allelic variability, and high levels of population-wide heterozygosity. Whereas demographic processes were of minor relevance in shaping the diversity pattern, I found clear evidence for parasite-mediated selection.
This was evident from historical positive Darwinian selection maintaining diversity in the functionally important antigen-binding sites, and from specific MHC alleles associated with low and high ectoparasite burden, in line with predictions of the ‘frequency-dependent selection hypothesis’. Parasite resistance has been suggested to play an important role in mediating costly life-history trade-offs, leading e.g. to MHC-mediated benefits in sexual selection. The ‘good genes model’ predicts that males with an immune system genetically well adapted to defending against harmful parasites can allocate more resources to reproductive effort. I found support for this prediction: non-reproductive adult N. albiventris males more often carried an allele associated with high parasite loads, which differentiated them genetically from reproductively active males as well as from subadults, indicating a reduced transmission of this allele to subsequent generations. In addition, they suffered from increased ectoparasite burden, which presumably reduced the resources available to invest in reproduction. Another sign of sexual selection was the observation of sex-specific differences in heterozygosity, with females showing lower levels of heterozygosity than males. This indicates that the sexes differ in their selection pressures, presumably through MHC-mediated molecular processes during reproduction resulting in a male-specific heterozygosity advantage. My data make clear that parasite-mediated selection and sexual selection interact and operate together to shape diversity at the MHC. Furthermore, my thesis is one of the few studies helping to close the gap between MHC-mediated effects on co-evolutionary processes in parasite-host interactions and aspects of life-history evolution.
Inland waters have traditionally been viewed as closed ecosystems, and the cycling of water and nutrients in the pelagic zone of lakes in particular is cited as an example of this. In the recent past, however, important linkages of the open water body have been demonstrated, on the one hand with the benthic zone and on the other with the littoral zone, the terrestrial shoreline, and the catchment area. In recent years, the horizontal and vertical connectivity of aquatic ecosystems has therefore attracted increased scientific interest, and with it the ecological functions of the bottom (benthic) and shore (littoral) zones. The newly described connectivity within and between these habitats has far-reaching consequences for our picture of how these waters function. Using running waters and lakes of the northeast German lowlands as examples, this habilitation thesis demonstrates a series of internal and external functional linkages in the horizontal and vertical spatial dimensions. The underlying investigations mostly covered both abiotic and biological variables and spanned a broad range of topics, methods, and study waters. Key ecological processes such as nutrient retention, carbon turnover, extracellular enzyme activity, and resource transfer in food webs (using stable isotope methods) were investigated in laboratory and field experiments as well as through quantitative field measurements.
With respect to running waters, this yielded substantial insights into the effect of a connectivity-shaped hydromorphology on aquatic biodiversity and on benthic-pelagic coupling, which in turn is a key process for the retention of substances transported in the flowing water and thus, ultimately, for the productivity of a river reach. The littoral zone of lakes was hardly studied in Central Europe for decades, so the investigations carried out on the community structure, habitat preferences, and food-web linkages of the eulittoral macrozoobenthos produced fundamentally new findings, which also feed directly into approaches for the ecological assessment of lakeshores under the EU Water Framework Directive. It could thus be shown that the intensity of both internal and external ecological connectivity is substantially influenced by the hydrology and morphology of the water bodies and by nutrient availability, which in this way often shape the ecological functionality of the waters. Vertical and horizontal connectivity contribute to stabilizing the ecosystems involved by enabling the exchange of plant nutrients, biomass, and migrating organisms, thereby bridging periods of resource scarcity. These results can be applied in water management in the sense that ensuring horizontal and vertical connectivity generally goes hand in hand with spatially more complex, more diverse, temporally and structurally more resilient, and more productive ecosystems, which can thus be used more intensively, more safely, and more sustainably.
Human use of a small selection of the ecosystem services of rivers and lakes has often led to a severe reduction in ecological connectivity and, as a consequence, to severe losses of other ecosystem services. The results of the research presented also show that the development and implementation of strategies for the integrated management of complex social-ecological systems can be substantially supported if horizontal and vertical connectivity is deliberately developed.
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP and within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single filament experiments, where abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however, namely that the actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate the unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. 
A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
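The predicted local-transition mechanism can be illustrated with a minimal Monte Carlo sketch: each subunit of a shrinking filament independently undergoes a transition (e.g. photo-induced dimer formation) at some rate, and depolymerization pauses when the shrinking end first reaches a transformed subunit. This is not the authors' model; the function name and all rate parameters are illustrative assumptions.

```python
import random

def first_interruption_time(n_subunits, shrink_rate, transition_rate, rng):
    """Simulate one filament of n_subunits: subunits are removed one by
    one at shrink_rate (subunits/s); each subunit independently undergoes
    a local transition at transition_rate (1/s). Depolymerization pauses
    at the first transformed subunit reached; returns the pause time, or
    None if the filament fully depolymerizes without pausing."""
    t = 0.0
    for _ in range(n_subunits):
        t += rng.expovariate(shrink_rate)         # time to remove next subunit
        if rng.expovariate(transition_rate) < t:  # subunit converted before removal
            return t                              # first interruption observed
    return None

rng = random.Random(42)
# hypothetical parameters, chosen only to make pauses likely
times = [first_interruption_time(2000, 100.0, 1e-3, rng) for _ in range(5000)]
observed = [t for t in times if t is not None]
print(f"fraction paused: {len(observed) / len(times):.2f}")
print(f"mean time to first pause: {sum(observed) / len(observed):.2f} s")
```

Collecting `observed` over many simulated filaments yields the distribution of the time until the first interruption, which is the quantity that, in the thesis, discriminates between the competing mechanisms.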
Alongside chemotherapy and surgical removal, radiotherapy is the most powerful weapon against malignant tumors in cancer medicine. After cardiovascular diseases, cancer is the second most frequent cause of death in the Western world, and prostate cancer is now the most common cancer in men. Despite technological advances in radiological procedures, a relapse can occur many years after radiotherapy, which can be attributed in part to the high resistance of individual transformed cells of the local tumor. Although modern radiation biology has shed light on many aspects of these resistance mechanisms, questions remain largely unanswered, especially regarding the temporal response of a tumor to ionizing radiation, because system-wide investigations are scarce. As cell models, four prostate cancer cell lines (PC3, DuCaP, DU-145, RWPE-1) with different radiation sensitivities were cultured and tested for their survival after ionizing irradiation using trypan blue and MTT viability assays. Proliferative capacity was determined with a colony formation assay. The PC3 cell line, as radiation-resistant, and the DuCaP cell line, as radiation-sensitive, showed the largest differences in radiation sensitivity. On the basis of these results, the two cell lines were selected to enable the identification of potential markers for predicting the efficiency of radiotherapy from their transcriptome-wide gene expression. Furthermore, a time-series experiment was performed with the PC3 cell line in which mRNA was quantified by high-throughput sequencing at 8 time points after irradiation with 1 Gy, in order to investigate the dynamically time-shifted gene expression behavior underlying resistance mechanisms.
By applying a fold-change threshold combined with a p-value < 0.01, 730 significantly differentially expressed genes were identified among 10,966 active genes, of which 305 were more strongly expressed in the PC3 and 425 in the DuCaP cell line. These 730 genes include many stress-associated genes, such as the two transmembrane protein genes CA9 and CA12. By computing a network score, interesting categories and networks were derived from the GO and KEGG databases; in particular, the GO category aldehyde dehydrogenase [NAD(P)+] activity (GO:0004030) and the KEGG pathway of O-glycan biosynthesis (hsa00512) stood out as relevant networks. A further interaction analysis identified two promising networks with the transcription factors JUN and FOS as central elements. To better understand the dynamically time-shifted response of the radiation-resistant PC3 cell line to ionizing radiation, interesting insights were obtained from the 10,840 expressed genes and their expression profiles across the 8 time points. While global gene expression is rapidly down-regulated within 30 min (00:00 - 00:30) after irradiation, the three subsequent time intervals (00:30 - 01:03; 01:03 - 02:12; 02:12 - 04:38) show specific increases in expression that trigger the activation of protective networks, such as the up-regulation of DNA repair systems or cell-cycle arrest. In the final three time intervals (04:38 - 09:43; 09:43 - 20:25; 20:25 - 42:35), induction and suppression are again balanced, while the absolute changes in gene expression increase.
Comparing gene expression shortly before irradiation with the last time point (00:00 - 42:53) yields the largest number of differentially expressed genes, 2,670, corresponding to a massive, system-wide change in gene expression. Signalling pathways such as the ATM regulation of the cell cycle and apoptosis, the NRF2 pathway following oxidative stress, and the DNA repair mechanisms of homologous recombination, non-homologous end joining, mismatch repair, base excision repair, and nucleotide excision repair play a central role in the cellular response. Also of great interest are the high activities of RNA-driven events, in particular of small nucleolar RNAs and pseudouridine processes. These RNA-modifying networks thus appear to have a previously unknown functional and protective influence on cell survival after ionizing irradiation. All these protective networks, with their time-specific interactions, are essential for cell survival after oxidative stress and reveal a complex but concerted interplay of many individual components within a system-wide programme.
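The differential-expression filter used in the study (fold-change threshold combined with p-value < 0.01) can be sketched in a few lines. This is a minimal illustration, not the thesis pipeline: the gene names, expression values, p-values, and the log2 fold-change cutoff of 1.0 are all hypothetical.

```python
import math

def differential_genes(expression, fc_cutoff=1.0, p_cutoff=0.01):
    """expression: {gene: (mean_a, mean_b, p_value)} with mean expression
    in two cell lines. A gene counts as significantly differentially
    expressed when |log2 fold change| >= fc_cutoff AND p < p_cutoff.
    Returns the passing genes, split by the line with higher expression."""
    up_a, up_b = [], []
    for gene, (mean_a, mean_b, p) in expression.items():
        log2_fc = math.log2(mean_a / mean_b)
        if abs(log2_fc) >= fc_cutoff and p < p_cutoff:
            (up_a if log2_fc > 0 else up_b).append(gene)
    return up_a, up_b

# hypothetical demo values; only the gene symbols appear in the abstract
demo = {
    "CA9":  (820.0, 95.0, 1e-6),   # strongly higher in cell line A
    "CA12": (40.0, 410.0, 3e-5),   # strongly higher in cell line B
    "ACTB": (1000.0, 980.0, 0.6),  # essentially unchanged
}
up_a, up_b = differential_genes(demo)
print(up_a, up_b)  # → ['CA9'] ['CA12']
```

Applied to the study's 10,966 active genes, such a filter would partition the significant hits into the 305 PC3-enriched and 425 DuCaP-enriched genes reported above.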