During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA), together with its extension, dynamic FBA, has proven instrumental for analyzing the robustness and dynamics of metabolic networks by employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
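At its core, FBA is a linear program: maximize an objective flux subject to the steady-state constraint S·v = 0 and capacity bounds on each flux. The sketch below is a minimal illustration with a hypothetical three-metabolite linear pathway; the stoichiometric matrix and bounds are invented for the example, not taken from the paper's models.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: uptake -> A -> B -> C -> export
# Rows: metabolites A, B, C; columns: reactions v1..v4
S = np.array([
    [1, -1,  0,  0],   # A: produced by v1, consumed by v2
    [0,  1, -1,  0],   # B: produced by v2, consumed by v3
    [0,  0,  1, -1],   # C: produced by v3, consumed by v4
])

# Steady state S v = 0; maximize the export flux v4
# (linprog minimizes, so the objective is negated).
c = np.array([0.0, 0.0, 0.0, -1.0])
bounds = [(0, 10)] * 4   # illustrative flux capacities
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print(res.x)   # optimal flux distribution, here [10, 10, 10, 10]
```

Dynamic FBA re-solves such a program at each time step with bounds updated from the current metabolite concentrations; the regulatory on/off minimization proposed here additionally penalizes abrupt changes between consecutive flux states.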
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that the application of multivariate statistical approaches is highly recommended to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve the model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, not single, but a complex pattern of metabolites propels the prediction and therefore reflects the complexity of the underlying molecular mechanisms. This result could only be captured by application of multivariate statistical approaches. Therefore, we highly recommend the usage of statistical methods that seize the complexity of the information given by high-throughput methods.
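The modelling step described above can be sketched with scikit-learn. Since the MESY-BEPO data are not public, the snippet below uses a synthetic stand-in with the study's dimensions (172 subjects, 286 metabolites, nine informative features) and tenfold cross-validated Random Forest regression; all data values are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic "metabolite" matrix: 172 subjects x 286 metabolites
X = rng.normal(size=(172, 286))
# Outcome driven by nine informative metabolites plus noise
y = X[:, :9].sum(axis=1) + rng.normal(scale=1.0, size=172)

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"mean tenfold R^2: {scores.mean():.2f}")
```

With real data, the nine-metabolite pattern would correspond to the features ranked highest by the forest's feature importances across folds.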
Portal alumni
(2012)
The past year at the University of Potsdam was also marked by the twentieth anniversary of the institution. It was founded on 15 July 1991, and during a festival week professors, staff and students duly celebrated the jubilee. Since the founding of Brandenburg's largest university, its scientific renown, standing and attractiveness have grown steadily. Especially in recent years it has sharpened its profile; the cognitive sciences, the geosciences and the life sciences deserve particular mention, but teacher education also holds a high priority. Internationally recognized research areas, science prizes, a successful third-party funding record and, not least, the building development at all three campuses are visible indicators of the successful course the University of Potsdam has taken over the past two decades. In this issue of Portal Alumni, the three former presidents as well as various other protagonists look at different aspects of the university's past development. The growing number of graduates leaving the university also testifies to its success. This issue of Portal Alumni therefore presents graduates and their academic and professional paths in greater detail, and in doing so lets twenty years of study at the University of Potsdam pass in review, kaleidoscope-fashion.
Portal Wissen = Raum
(2012)
With "Portal Wissen" we invite you to discover research at the University of Potsdam and to get to know it in all its diversity. The first issue revolves around "spaces": spaces in which research is carried out, spaces that remain to be explored, spaces that science makes accessible or opens up, but also spaces that science needs in order to unfold. Research surveys spaces: "Science is made by people," wrote the physicist Werner Heisenberg. Conversely, one can say that science shapes people, devotes itself to them, and influences them. "Portal Wissen" has traced this relationship. We met researchers, asked them how projects arise from their questions, and accompanied them along the often winding path to their goal. This issue pays special attention to the "spaces of cultural encounter" to which a dedicated profile area of research at the University of Potsdam is devoted.
Research has spaces: laboratories, libraries, greenhouses and archives are where science is at home. All these places are as unique as the researchers who work in them and the investigations that take place there. It is the vision of how a problem might be solved that turns plain rooms into "laboratories". We opened their doors to show what, and who, is behind them.
Research opens up spaces: when science succeeds, it moves us and carries us forward. On the way from the laboratory into everyday life, scientific insights sometimes face hurdles that are rarely apparent at first glance. In any case, their application is a primary point of departure for science, the drive and motivation of every researcher. "Portal Wissen" shows which "spaces of practice" emerge from the translation of research results: where we fully expect them, and where we perhaps do not.
Research makes spaces accessible: on expeditions, field experiments and excursions, almost any environment becomes a mobile laboratory. In this way, science opens up access even to places that otherwise seem closed or unreachable. We slipped into researchers' travel bags to join voyages of discovery leading far away, above all to Africa. At the same time, we observed how "spaces of development" can be opened up from Potsdam, or at least how their surveying can begin here.
Research needs spaces: science finally has two genders. Never before have so many women been active in research as today. Yet this is no reason to rest: across Germany, only one in five professorships is currently held by a woman. "Portal Wissen" looks at the "spaces of development" that women have created for themselves in science and beyond, and at where such spaces are still denied them. We wish you a stimulating read, and hope that you too will find a space that inspires you.
Prof. Dr. Robert Seckler
Vice President for Research and Early Career Researchers
Background: Detection of immunogenic proteins remains an important task for the life sciences, as it nourishes the understanding of pathogenicity, illuminates new potential vaccine candidates and broadens the spectrum of biomarkers applicable in diagnostic tools. Traditionally, immunoscreenings of expression libraries via polyclonal sera on nitrocellulose membranes or screenings of whole proteome lysates in 2-D gel electrophoresis are performed. However, these methods have considerable disadvantages. Screening of expression libraries to expose novel antigens from bacteria often leads to an abundance of false positive signals owing to the high cross-reactivity of polyclonal antibodies towards the proteins of the expression host. A method is presented that overcomes many disadvantages of these older procedures.
Results: Four proteins previously described as immunogenic were successfully confirmed as immunogenic with our method, and one protein with no previously reported immunogenic behaviour showed potential immunogenicity. We incorporated a fusion tag upstream of our genes of interest and attached the expressed fusion proteins covalently to microarrays. This enhances the specific binding of the proteins compared with nitrocellulose and thus helps to reduce the number of false positives significantly. It enables us to screen for immunogenic proteins in a shorter time, with more samples and greater statistical reliability. We validated our method using several known genes from Campylobacter jejuni NCTC 11168.
Conclusions: The method presented offers a new approach for screening of bacterial expression libraries to illuminate novel proteins with immunogenic features. It could provide a powerful and attractive alternative to existing methods and help to detect and identify vaccine candidates, biomarkers and potential virulence-associated factors with immunogenic behaviour furthering the knowledge of virulence and pathogenicity of studied bacteria.
The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.
The safe upper limit for inclusion of vitamin A in complete diets for growing dogs is uncertain, with the result that current recommendations range from 5.24 to 104.80 μmol retinol (5000 to 100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy (ME). The aim of the present study was to determine the effect of feeding four concentrations of vitamin A to puppies from weaning until 1 year of age. A total of forty-nine puppies, of two breeds, Labrador Retriever and Miniature Schnauzer, were randomly assigned to one of four treatment groups. Following weaning at 8 weeks of age, puppies were fed a complete food supplemented with retinyl acetate diluted in vegetable oil and fed at 1 ml oil/100 g diet to achieve an intake of 5.24, 13.10, 78.60 and 104.80 μmol retinol (5000, 12 500, 75 000 and 100 000 IU vitamin A)/4184 kJ (1000 kcal) ME. Fasted blood and urine samples were collected at 8, 10, 12, 14, 16, 20, 26, 36 and 52 weeks of age and analysed for markers of vitamin A metabolism and markers of safety including haematological and biochemical variables, bone-specific alkaline phosphatase, cross-linked carboxyterminal telopeptides of type I collagen and dual-energy X-ray absorptiometry. Clinical examinations were conducted every 4 weeks. Data were analysed by means of a mixed model analysis with Bonferroni corrections for multiple endpoints. There was no effect of vitamin A concentration on any of the parameters, with the exception of total serum retinyl esters, and no effect of dose on the number, type and duration of adverse events. We therefore propose that 104.80 μmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal) is a suitable safe upper limit for use in the formulation of diets designed for puppy growth.
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
Recent PIC simulations of relativistic electron-positron (electron-ion) jets injected into a stationary medium show that particle acceleration occurs in the shocked regions. Simulations show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields and for particle acceleration. These magnetic fields contribute to the electrons' transverse deflection behind the shock. The "jitter" radiation from deflected electrons in turbulent magnetic fields has properties different from synchrotron radiation calculated in a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and/or spectral structure of gamma-ray bursts, relativistic jets in general, and supernova remnants. In order to calculate radiation from first principles and go beyond the standard synchrotron model, we have used PIC simulations. We present synthetic spectra to compare with the spectra obtained from Fermi observations.
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
In the late Palaeozoic fore-arc system of north-central Chile at latitudes 31-32 degrees S (from the west to the east) three lithotectonic units are telescoped within a short distance by a Mesozoic strike-slip event (derived peak P-T conditions in brackets): (1) the basally accreted Choapa Metamorphic Complex (CMC; 350-430 degrees C, 6-9 kbar), (2) the frontally accreted Arrayan Formation (AF; 280-320 degrees C, 4-6 kbar) and (3) the retrowedge basin of the Huentelauquen Formation (HF; 280-320 degrees C, 3-4 kbar). In the CMC, Ar-Ar spot ages locally date white-mica formation at peak P-T conditions and during early exhumation at 279-242 Ma. In a local garnet mica-schist intercalation (570-585 degrees C, 11-13 kbar) Ar-Ar spot ages refer to the ascent from the subduction channel at 307-274 Ma. Portions of the CMC were isobarically heated to 510-580 degrees C at 6.6-8.5 kbar. The age of peak P-T conditions in the AF can only vaguely be approximated at >= 310 Ma by relict fission-track ages consistent with the observation that frontal accretion occurred prior to basal accretion. Zircon fission-track dating indicates cooling below similar to 280 degrees C at similar to 248 Ma in the CMC and the AF, when a regional unconformity also formed. Ar-Ar white-mica spot ages in parts of the CMC and within the entire AF and HF point to heterogeneous resetting during Mesozoic extensional and shortening events at similar to 245-240 Ma, similar to 210-200 Ma, similar to 174-159 Ma and similar to 142-127 Ma. The zircon fission-track ages are locally reset at 109-96 Ma. All resetting of Ar-Ar white-mica ages is proposed to have occurred by in situ dissolution/precipitation at low temperature in the presence of locally penetrating hydrous fluids. Hence syn- and post-accretionary events in the fore-arc system can still be distinguished and dated in spite of its complex heterogeneous post-accretional overprint.
This article investigates the nature of preposition copying and preposition pruning structures in present-day English. We begin by illustrating the two phenomena and consider how they might be accounted for in syntactic terms, and go on to explore the possibility that preposition copying and pruning arise for processing reasons. We then report on two acceptability judgement experiments examining the extent to which native speakers of English are sensitive to these types of 'error' in language comprehension. Our results indicate that preposition copying creates redundancy rather than ungrammaticality, whereas preposition pruning creates processing problems for comprehenders that may render it unacceptable in timed (but not necessarily in untimed) judgement tasks. Our findings furthermore illustrate the usefulness of combining corpus studies and experimentally elicited data for gaining a clearer picture of usage and acceptability, and the potential benefits of examining syntactic phenomena from both a theoretical and a processing perspective.
Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension.
SXP 1062 is an exceptional case of a young neutron star in a wind-fed high-mass X-ray binary associated with a supernova remnant. A unique combination of measured spin period, its derivative, luminosity and young age makes this source a key probe for the physics of accretion and neutron star evolution. Theoretical models proposed to explain the properties of SXP 1062 shall be tested with new data.
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high performance Boolean solving capacities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between ASP and CP solver through elaborated learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
Mineral chemistry and thermobarometry of the staurolite-chloritoid schists from Poshtuk, NW Iran
(2012)
The Poshtuk metapelitic rocks in northwestern Iran underwent two main phases of regional and contact metamorphism. Microstructures, textural features and field relations indicate that these rocks underwent a polymetamorphic history. The dominant metamorphic assemblage of the metapelites is garnet, staurolite, chloritoid, chlorite, muscovite and quartz, which grew mainly syntectonically during the later contact metamorphic event. Peak metamorphic conditions of this event took place at 580 °C and ~3-4 kbar, indicating that this event occurred under high-temperature and low-pressure conditions (HT/LP metamorphism), which reflects the high heat flow in this part of the crust. This event is mainly controlled by advective heat input through magmatic intrusions into all levels of the crust. These extensive Eocene metamorphic and magmatic activities can be associated with the early Alpine Orogeny, which in this area resulted from the convergence between the Arabian and Eurasian plates, and the Cenozoic closure of the Tethys oceanic tract(s).
Much of our knowledge about the solar dynamo is based on sunspot observations. It is thus desirable to extend the set of positional and morphological data of sunspots into the past. Gustav Spörer observed in Germany from Anklam (1861–1873) and Potsdam (1874–1894). He left detailed prints of sunspot groups, which we digitized and processed to mitigate artifacts left in the print by the passage of time. After careful geometrical correction, the sunspot data are now available as synoptic charts for almost 450 solar rotation periods. Individual sunspot positions can thus be precisely determined and spot areas can be accurately measured using morphological image processing techniques. These methods also allow us to determine tilt angles of active regions (Joy’s law) and to assess the complexity of an active region.
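The spot-area measurements described above can be illustrated with standard morphological image processing, for example via scikit-image. The binary chart below is entirely synthetic (the digitized Spörer charts themselves are not reproduced here); it shows the basic pipeline of removing small print artifacts and then labelling and measuring the remaining spots.

```python
import numpy as np
from skimage import measure, morphology

# Synthetic binarized synoptic chart: True pixels mark spot candidates
chart = np.zeros((100, 200), dtype=bool)
chart[40:50, 60:75] = True   # a 10 x 15 pixel "sunspot"
chart[20:22, 10:12] = True   # a small printing artifact

# Remove artifacts below a minimum area, then label and measure each spot
clean = morphology.remove_small_objects(chart, min_size=16)
labels = measure.label(clean)
for region in measure.regionprops(labels):
    print(region.area, region.centroid)   # area in pixels, centroid (row, col)
```

On real charts, the centroid positions would be converted to heliographic coordinates before deriving tilt angles and areas.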
The clumping of massive star winds is an established paradigm, which is confirmed by multiple lines of evidence and is supported by stellar wind theory. We use the results from time-dependent hydrodynamical models of the instability in the line-driven wind of a massive supergiant star to derive the time-dependent accretion rate onto a compact object in the Bondi-Hoyle-Lyttleton approximation. The strong density and velocity fluctuations in the wind result in strong variability of the synthetic X-ray light curves. Photoionization of inhomogeneous winds differs from that of smooth winds: the degree of ionization is affected by the wind clumping. The wind clumping must also be taken into account when comparing the observed and model spectra of the photoionized stellar wind.
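In the Bondi-Hoyle-Lyttleton approximation the accretion rate is Mdot = 4π G² M² ρ / (v_rel² + c_s²)^(3/2), so the wind's density fluctuations map linearly onto the accretion rate and hence onto the X-ray luminosity. A quick numerical check in cgs units, with illustrative (not fitted) wind values:

```python
import numpy as np

G = 6.674e-8      # gravitational constant, cgs
MSUN = 1.989e33   # solar mass, g

def mdot_bhl(rho, v_rel, c_s, m=1.4 * MSUN):
    """Bondi-Hoyle-Lyttleton accretion rate in g/s for a compact object of mass m."""
    return 4.0 * np.pi * G**2 * m**2 * rho / (v_rel**2 + c_s**2) ** 1.5

# A factor-10 density clump at fixed wind speed gives a factor-10 jump in Mdot,
# which is the origin of the strong variability in the synthetic light curves.
ratio = mdot_bhl(1e-14, 1e8, 1e6) / mdot_bhl(1e-15, 1e8, 1e6)
print(ratio)   # 10.0
```

The velocity dependence is even steeper (roughly v_rel to the minus third power for supersonic flow), so slow dense clumps dominate the accretion.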
Much previous experimental research on morphological processing has focused on surface and meaning-level properties of morphologically complex words, without paying much attention to the morphological differences between inflectional and derivational processes. Realization-based theories of morphology, for example, assume specific morpholexical representations for derived words that distinguish them from the products of inflectional or paradigmatic processes. The present study reports results from a series of masked priming experiments investigating the processing of inflectional and derivational phenomena in native (L1) and non-native (L2) speakers in a non-Indo-European language, Turkish. We specifically compared regular (Aorist) verb inflection with deadjectival nominalization, both of which are highly frequent, productive and transparent in Turkish. The experiments demonstrated different priming patterns for inflection and derivation, specifically within the L2 group. Implications of these findings are discussed both for accounts of L2 morphological processing and for the controversial linguistic distinction between inflection and derivation.
This study investigates phenomena that have been claimed to be indicative of Specific Language Impairment (SLI) in German, focusing on subject-verb agreement marking. Longitudinal data from fourteen German-speaking children with SLI, seven monolingual and seven Turkish-German successive bilingual children, were examined. We found similar patterns of impairment in the two participant groups. Both the monolingual and the bilingual children with SLI had correct (present vs. preterit) tense marking and produced syntactically complex sentences such as embedded clauses and wh-questions, but were limited in reliably producing correct agreement-marked verb forms. These contrasts indicate that agreement marking is impaired in German-speaking children with SLI, without any necessary concurrent deficits in either the CP-domain or in tense marking. Our results also show that it is possible to identify SLI from an early successive bilingual child's performance in one of her two languages.
Restrictions on addition
(2012)
Children up to school age have been reported to perform poorly when interpreting sentences containing restrictive and additive focus particles by treating sentences with a focus particle in the same way as sentences without it. Careful comparisons between results of previous studies indicate that this phenomenon is less pronounced for restrictive than for additive particles. We argue that this asymmetry is an effect of the presuppositional status of the proposition triggered by the additive particle. We tested this in two experiments with German-learning three-and four-year-olds using a method that made the exploitation of the information provided by the particles highly relevant for completing the task. Three-year-olds already performed remarkably well with sentences both with auch 'also' and with nur 'only'. Thus, children can consider the presuppositional contribution of the additive particle in their sentence interpretation and can exploit the restrictive particle as a marker of exhaustivity.
Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI: one group consistently produced significantly more errors of CLI on both tasks than the other (thereby replicating our findings from a bilingual picture naming task). This striking group difference in language control ability can only be explained by differences in cognitive control, not in language proficiency or language mode.
Background
Outcome quality management requires the consistent recording of defined variables. The aim was to identify relevant parameters in order to objectively assess the outcome of in-patient rehabilitation.
Methods
From February 2009 to June 2010, 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested in respect of their measurability, sensitivity to change and their propensity to be influenced by rehabilitation.
Results
The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale).
Conclusion
The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.
Early acquisition of a second language influences the development of language abilities and cognitive functions. In the present study, we used functional Magnetic Resonance Imaging (fMRI) to investigate the impact of early bilingualism on the organization of the cortical language network during sentence production. Two groups of adult multilinguals, proficient in three languages, were tested on a narrative task; early multilinguals acquired the second language before the age of three years, late multilinguals after the age of nine. All participants learned a third language after nine years of age. Comparison of the two groups revealed substantial differences in language-related brain activity for early as well as late acquired languages. Most importantly, early multilinguals preferentially activated a fronto-striatal network in the left hemisphere, whereas the left posterior superior temporal gyrus (pSTG) was activated to a lesser degree than in late multilinguals. The same brain regions were highlighted in previous studies when a non-target language had to be controlled. Hence the engagement of language control in adult early multilinguals appears to be influenced by the specific learning and acquisition conditions during early childhood. Remarkably, our results reveal that the functional control of early and subsequently later acquired languages is similarly affected, suggesting that language experience has a pervasive influence into adulthood. As such, our findings extend the current understanding of control functions in multilinguals.
The project of public-reason liberalism faces a basic problem: publicly justified principles are typically too abstract and vague to be directly applied to practical political disputes, whereas applicable specifications of these principles are not uniquely publicly justified. One solution could be a legislative procedure that selects one member from the eligible set of inconclusively justified proposals. Yet if liberal principles are too vague to select sufficiently specific legislative proposals, can they, nevertheless, select specific legislative procedures? Based on the work of Gerald Gaus, this article argues that the only candidate for a conclusively justified decision procedure is a majoritarian or otherwise ‘neutral’ democracy. If the justification of democracy requires an equality baseline in the design of political regimes and if justifications for departure from this baseline are subject to reasonable disagreement, a majoritarian design is justified by default. Gaus’s own preference for super-majoritarian procedures is based on disputable specifications of justified liberal principles. These procedures can only be defended as a sectarian preference if the equality baseline is rejected, but then it is not clear how the set of justifiable political regimes can be restricted to full democracies.
This thesis is focused on the electronic properties of the new material class of topological insulators. Spin- and angle-resolved photoelectron spectroscopy has been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the use of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time-reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators are reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with an introduction of the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are presented in close comparison with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is, with high probability, identified as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin-polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited, in terms of the finite angular resolution and the inability to detect the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, on a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided by the protection of time-reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed when Bi2Te3 is exposed to oxygen. But while the n-type shift induced by Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By further increasing the oxygen dose, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below. The effect is found to be reversible upon warming the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a behavior similar to the case of Ag on both Bi2Se3 and Bi2Te3. In this case, however, the robustness is unexpected, since magnetic impurities are capable of breaking time-reversal symmetry, which should open a gap in the surface state at the Dirac point and thereby remove the protection. We argue that the absence of a gap in the surface state must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but, nevertheless, higher than one might expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but that the relatively small size of the Fermi surface limits the number of phonon modes that can scatter electrons. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ that Fermi-liquid theory predicts for electron-electron interaction.
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, and that this influence is stronger regardless of the sign of the doping. We argue that this observation suggests only a minor contribution of the warping to the increased scattering rates, in contrast to current belief. This is additionally confirmed by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates despite the much stronger warping in Bi2Te3.
In the last chapter we report a strong circular dichroism in the angular distribution of the photoemission signal of the surface state of Bi2Te3. We show that the pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicities reflects the pattern expected for the spin polarization. However, we find a strong influence of the photon energy on the strength and even the sign of the effect. The sign change is qualitatively confirmed by one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Experiment and theory together unambiguously identify the dichroism in these systems as a final-state effect, and the question in the title of the chapter has to be answered in the negative: circular dichroism in the angular distribution is not a new spin-sensitive technique.
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country: what institutional arrangements exist at the outset, and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) affect good governance. Three characteristics central to good governance are studied in the research: transparency, participation and accountability.
A number of key findings emerged: governance in Hanoi and Berlin represents the two extremes of the scale; while governance in Berlin is almost at the top of the scale, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable, and people do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is formal and forced, elections in Berlin are fair and free. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout in local deputy elections in Hanoi is close to 90 percent, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include citizenry budget, citizen activity, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Lacking transparency and participation, the quality of public service in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" - in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear to see how transparency, participation and accountability are interconnected and influence each other. Without a free and fair election as well as participation of non-governmental organisations, civil organisations, and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key institutional differences (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, political pluralism vs. a monopoly party, and social market economy vs. market economy with socialist orientation.
In Berlin the logic of appropriateness and codes of conduct are respect for laws, respect of individual freedom and ideas and awareness of community development. People in Berlin take for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the mind of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with habits of the centrally planned economy (lying, dependence, passivity) and traditional values (hierarchy, harmony, family, collectivism) to influence the behaviour of those involved.
In Hanoi, “doing the right thing”, such as compliance with the law, has not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
Existing research has shown that managers' readiness for reform is an important precondition for the successful implementation of change projects. This article addresses the question of how to explain why some managers in public administration are more ready for reform than others. It draws on a 2010 survey of executives based on the assessments of 351 administrative managers from the federal and state ministerial bureaucracies. A statistical analysis of these data finds that the typical reform-minded manager is intrinsically motivated, relies on task-oriented leadership, has work experience outside public administration and has no legal training. He or she works at an upper hierarchical level but is occupied more with specialist than with leadership tasks. The article elaborates on these findings and on their implications for administrative practice.
This article deals with Spanish modal adverbs and verbs of cognitive attitude (Capelli 2007) and their epistemic and/or evidential use. The article is based on the hypothesis that the study of the use of these linguistic devices has to be highly context-sensitive, as it is not always (only) the sentence level that has to be examined to determine whether a certain adverb or verb of cognitive attitude is used evidentially or epistemically. In this article, therefore, the context is used to determine which meaning aspects of an element are encoded and which are contributed by the context. The data were retrieved from the daily newspaper El País; nevertheless, the present study is qualitative rather than quantitative. My corpus analysis indicates that it is not possible to differentiate between the linguistic categories of evidentiality and epistemic modality in every case, although it is indeed possible in the vast majority of cases. In verbs of cognitive attitude, evidentiality and epistemic modality seem to be two interwoven categories, while for modal adverbs it is usually possible to separate the categories and to distinguish between different subtypes of evidentiality such as visual evidence, hearsay and inference.
In older research literature, the prose epics emerging from the court of Elisabeth of Lorraine and Nassau-Saarbrücken have repeatedly been accused of lacking structure and literariness. By contrast, this article shows that narrative principles of seriality generate the complex structure of the voluminous ›Loher und Maller‹: literary strategies of repetition and variation organize the text on different levels. Recurring narrative structures, thematic constellations and motivations as well as lexical stereotypes are part of this comprehensive principle of seriality. Not triviality and insufficiency, but structural and narrative complexity and lexical accumulation of significance characterize ›Loher und Maller‹.
Contents: Editorial (Dr. Roswitha Lohwaßer); Reacting to change. Challenges for counselling and support systems in the context of the demands placed on schools (Dr. Götz Bieber, Bernd Jankofsky); Qualifying teachers, pooling forces. Reflections on the development of university-affiliated teacher training in Lower Saxony (Dr. Jens Winkel, Ulrike Heinrichs); Prepared not only in the subject but also didactically. When is a training course for teachers a successful one? (Elke Dengler); The teacher TÜV: a quality check from the students' perspective (Jorid Engler); Bringing together what belongs together: university research and teacher training (Dr. Charlotte Gemsa, Martina Rode); Making knowledge transfer exciting. Fascinating numerical experiments with polynomials: a training course for mathematics teachers (Dr. Wolfgang Schöbel); A wow effect that sparks enjoyment of maths. Expectations of a training course for mathematics teachers (Jörg Schulz); Joint learning by students and teachers. What Potsdam can learn from Oldenburg's team research (Dr. Benjamin Apelojg); Design thinking: an innovation method for project work. An experiment at the Helmholtz-Gymnasium (Andrea Scheer, Johannes Erdmann); Seeing developments as an opportunity. Learning cultures and portfolio work in teacher education (Dr. Mark-Oliver Carl); A review of the Teacher Education Days 2012
This introductory contribution to the thematic focus explains the background of the international climate negotiations and presents the results of the Copenhagen Accord. Given the failure of the Copenhagen conference, the timely conclusion of a legally binding global climate agreement must be considered unlikely. In future, climate policy will increasingly be made at the national and transnational level.
Abschied von KyotoPlus?
(2012)
The results of the Copenhagen climate summit are a bitter disappointment for the EU. It failed to live up to its leadership ambitions in global climate protection and to use the conference to set the course for a legally binding climate agreement beyond 2012. The Union thus faces fundamental strategic questions about the course of its climate policy.
Gescheiterte Klimapolitik?
(2012)
The 2009 Copenhagen climate summit was eagerly anticipated, yet only a minimal consensus was achieved. The author offers an actor-centred interpretation of the Copenhagen Accord and asks whether the negotiations set a precedent: was this a singular failure of multilateral diplomacy, or a foretaste of the world-political routine of the 21st century?
Civil society helped turn the Copenhagen climate conference into a media event. Far from the large demonstrations, non-governmental organizations (NGOs) have for years enjoyed good access to the international climate negotiations. Using the example of Chile, it is shown how NGOs feed their positions into political processes through professional lobbying. They operate in a field of tension between cooperation with, and instrumentalization by, political decision-makers.
Wie Klimaschutz finanzieren?
(2012)
To finance climate protection, public funds must be deployed in a targeted manner; this includes significantly improving the framework conditions for private financial flows. Based on a problem analysis, the authors determine key parameters for this leverage effect. Public start-up financing can thus form the basis for private investment. This is discussed using the example of the International Climate Initiative of the German Federal Environment Ministry.
China und Indien
(2012)
The article analyses the new role of rising emerging economies in the international climate negotiations, using China and India as examples. In Copenhagen, their rejection of binding greenhouse gas reduction targets was interpreted as obstruction by both countries. China and India can hold their position because their increased weight in the multipolar world order and the inaction of the leading industrialized countries strengthen their negotiating position. The author discusses possibilities for cooperation at the subnational level that can circumvent the blocking stance of national governments.
The author discusses the opportunities and risks of integrating the Global South into international climate policy. For a long time, the developing countries had contributed least to climate change while standing to be most strongly affected by it. By now, however, these countries themselves contribute substantially to climate change. Their governments, though, are playing for time: they expect resource transfers, which also reinforces old problems of rent-seeking.
Klimapolitik am Ende?
(2012)
Einleitung
(2012)
The exhibition "Die Geschichte des Standortes Potsdam-Golm 1935 bis 1991" traces the eventful history of what is now a university and science campus. Its origins lie in the General-Wever-Kaserne, built in 1935. From the end of the Second World War until German reunification, the site was used by both the Soviet Army and the Ministry for State Security. Topics include the military central region of Brandenburg, the development of the secret-service college from 1951 to 1990, teaching at this institution, student life and research activity, as well as the use of the site after 1990.
The exhibition consists of 13 panels illustrated with numerous photographs.
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on the unifiability of predicates and does not need statistical information such as symbol frequency. The aim of the technique is to reduce the set of axioms and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation and evaluation of ARDE. The selection method, based on a breadth-first graph search over the unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated against the results of the 2012 world championship of theorem provers (CASC-J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover.
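ARDE's actual selection uses specialised variants of unifiability on top of the connection calculus; purely as an illustration of the breadth-first relevance idea, the following sketch approximates unifiability by matching predicate name and arity. The clause encoding and all names here are hypothetical and are not ARDE's data structures:

```python
def predicates(formula):
    """Collect the (name, arity) pairs of a clause, where a clause is
    encoded as a list of literals (predicate_name, argument_tuple)."""
    return {(name, len(args)) for name, args in formula}

def select_axioms(axioms, conjecture, max_depth=2):
    """Breadth-first relevance selection: start from the predicates of
    the conjecture and, level by level, add every axiom that shares a
    name/arity-compatible predicate with anything reached so far."""
    reached = predicates(conjecture)   # predicates considered relevant
    frontier = set(reached)            # predicates added in the last level
    selected = set()                   # indices of chosen axioms
    for _ in range(max_depth):
        new_preds = set()
        for i, clause in enumerate(axioms):
            if i in selected:
                continue
            if predicates(clause) & frontier:
                selected.add(i)
                new_preds |= predicates(clause) - reached
        reached |= new_preds
        frontier = new_preds
        if not frontier:               # no new predicates: selection is stable
            break
    return [axioms[i] for i in sorted(selected)]
```

For a conjecture mentioning only p and r, an axiom containing only an unreachable predicate s is dropped, which is the point of the preprocessing step: the prover then searches a smaller axiom set.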
German parties usually avoid contested elections for the party chairmanship. Nevertheless, an unexpected open contest for the top office broke out at the SPD's 1995 Mannheim party congress. The unintended failure of the staging of the party's "unity" led to the eruption of the previously suppressed struggles over the chairmanship. The Mannheim congress exemplifies the connection between staging, discipline and the informal rules by which power is constructed within a party. Using this congress as an example, the present study shows how contested party leaders can assert themselves in office against resistance, and why this strategy can fail. From a figurational perspective, staging is understood as a necessity of media-mediated party competition for votes. Staging requires self-discipline and the coordinated action of party members; within the party, this generates mutual dependence, which is intensified by the media's concentration on a few leading politicians. The majority of office holders and functionaries depend on the leadership's media-effective performance; for media success, the leadership in turn needs the members' support. This mutual dependence produces both typical relevances and opportunities to put the respective other interest group under pressure to act. Image problems of the chairman, as violated expectations, trigger intra-party power struggles in which the party leadership can use the staging of "unity" in particular to prevent open debates about personnel. Since options for action and their limits are constantly recreated by the actors' own actions, intra-party disciplining always carries the risk of failure.
By tracing disciplining and the reasons for its contingency, the present study contributes to a theory of informal rules of power in organizations with weakly developed structures of authority. The first part of the study develops the connection between staging and power through the concepts of theatricality and figuration. The second part examines typical constellations of contemporary parliamentary democracy for typical relationship-mediated interpretations of situations, options for action and their limits. The third part traces the contingent process of the intra-party power struggle using the example of the 1995 Mannheim party congress.
"All children must be raised to be valuable people," demanded Margot Honecker, the GDR's Minister of Education from 1963 to 1989. While liberal sociologists of youth understand adolescence as a moratorium, granting young people latitude to question prevailing social norms and to try out self-determined life plans without having to answer for their actions in the same way as adults, young people in the GDR were judged by the extent to which they conformed to the ideal of the "all-round educated socialist personality". In Honecker's view, the free development of the individual would only become possible under communism; individual development had no value of its own for her. The political claim to education extended in principle to all spheres of young people's lives. Room for self-development in the GDR was narrowly confined, both materially and ideologically, a circumstance the West German sociologist of education Jürgen Zinnecker described as a "youth moratorium in barracked form". Children and adolescents were exposed to political pressure to conform to a particularly high degree. Although the SED's claim to education was in principle directed at all citizens, children and adolescents, unlike adults, had not yet found an independent position within the social fabric and therefore had fewer opportunities to evade political influence. The Youth Law of 1974 established the socialist personality as the goal of education, which parents too were obliged to follow. Educational opportunities were made dependent on conformity to prescribed norms from an early age; deviant behaviour could be punished rigidly and have grave consequences for one's further life.
Even though most young people appeared to fulfil the state's demands and attested their attachment to the SED's policies whenever required, they were in fact at best indifferent to these policies. The "contradiction between word and deed" was one of the serious problems the rulers faced in dealing with adolescents. There were, however, also young people who consciously accepted restrictions in order to realize their ideas of a self-determined life. Even minor deviation from explicit or unspoken expectations exposed them to considerable state interference in their personal existence. The most extreme forms of deviation were applications to emigrate and attempts to flee; young people were overrepresented among applicants and "Republikflüchtige" (those who fled the republic). The dissertation examines the tension between state-prescribed life paths and the self-willed shaping of various spheres of life by children and adolescents during the years of Honecker's rule, 1971 to 1989, in the Schwerin district.
It sometimes happens that we finish reading a passage of text just to realize that we have no idea what we just read. During these episodes of mindless reading our mind is elsewhere yet the eyes still move across the text. The phenomenon of mindless reading is common and seems to be widely recognized in lay psychology. However, the scientific investigation of mindless reading has long been underdeveloped. Recent progress in research on mindless reading has been based on self-report measures and on treating it as an all-or-none phenomenon (dichotomy-hypothesis). Here, we introduce the levels-of-inattention hypothesis proposing that mindless reading is graded and occurs at different levels of cognitive processing. Moreover, we introduce two new behavioral paradigms to study mindless reading at different levels in the eye-tracking laboratory. First (Chapter 2), we introduce shuffled text reading as a paradigm to approximate states of weak mindless reading experimentally and compare it to reading of normal text. Results from statistical analyses of eye movements that subjects perform in this task qualitatively support the ‘mindless’ hypothesis that cognitive influences on eye movements are reduced and the ‘foveal load’ hypothesis that the response of the zoom lens of attention to local text difficulty is enhanced when reading shuffled text. We introduce and validate an advanced version of the SWIFT model (SWIFT 3) incorporating the zoom lens of attention (Chapter 3) and use it to explain eye movements during shuffled text reading. Simulations of the SWIFT 3 model provide fully quantitative support for the ‘mindless’ and the ‘foveal load’ hypothesis. They moreover demonstrate that the zoom lens is an important concept to explain eye movements across reading and mindless reading tasks. 
Second (Chapter 4), we introduce the sustained attention to stimulus task (SAST) to catch episodes when external attention spontaneously lapses (i.e., attentional decoupling or mind wandering) via the overlooking of errors in the text and via signal detection analyses of error detection. Analyses of eye movements in the SAST revealed reduced influences of cognitive text processing during mindless reading. Based on these findings, we demonstrate that it is possible to predict states of mindless reading online from eye movement recordings. That cognition is not always needed to move the eyes supports autonomous mechanisms of saccade initiation. Results from analyses of error detection and eye movements support our levels-of-inattention hypothesis that errors at different levels of the text assess different levels of decoupling. Analyses of pupil size in the SAST (Chapter 5) provide further support for the levels-of-inattention hypothesis and for the decoupling hypothesis that off-line thought is a distinct mode of cognitive functioning that demands cognitive resources and is associated with deep levels of decoupling. The present work demonstrates that the elusive phenomenon of mindless reading can be rigorously investigated in the cognitive laboratory and further incorporated into the theoretical framework of cognitive science.
Tracing the Greek myths in the works of Anton Čechov's early creative period
(2012)
The poetics of the everyday in the work of the Russian writer Anton Čechov has fascinated readers worldwide for more than a century. This fascination rests not least on Greek myth, a cultural heritage that has profoundly shaped the thinking of our society. In Čechov's little-studied early work, ancient deities and heroes such as Apollo, Dionysus, Pythia and Narcissus become people of everyday life. This projection is a parodic, travesty-like modification of the elementary structures of myth. In this fusion of the mythical with the everyday, Čechov becomes a successor above all of the ancient dramatist Epicharmus. Methodologically, my analysis rests on the rhetorician Heinrich Lausberg's conceptual pair of "re-use speech" (Wiedergebrauchs-Rede) and "consumption speech" (Verbrauchs-Rede): Čechov retells the prominent myths in such a way that, while they lose their sublimity, they retain their underlying power and thus enrich the self-image of modern man.
This empirical study examines the interlingual transfer of French and German film titles over the past century. It is based on a corpus of 3,200 French original titles and their German retitlings, and closes a research gap in film-title translation for the language pair German-French. The theoretical part lays out the foundations in text linguistics and translation studies. Film titles form a text type of their own, which is specified with the help of de Beaugrande/Dressler's criteria of textuality. Using selected examples from the corpus, the principal functions of film titles, such as advertising, information, identification, contact and interpretation, are discussed. Based on E. Prunč's typology of translation, five strategies are identified that are employed when transferring French film titles into the German language and culture area: identity, analogy, variation, innovation, and hybrid forms. Translations proper are carefully distinguished from retitlings. The analysis of the corpus shows that title innovation is the most frequently applied strategy across the entire period under investigation, whereas title identity is employed most rarely. Examining shorter time spans reveals certain tendencies, for example the marked recent increase in hybrid titles. For the first time, this study also addresses the phenomenon of multiple titling across German-speaking countries, searching for the motives behind differing retitlings in Germany, the former GDR and Austria. The study concludes with a look at film titles from a legal and economic perspective: together with their films, titles are highly commercial texts and, like any economic good, are subject to precise legal regulation.
Eye movements are a powerful tool for examining cognitive processes. However, in most paradigms little is known about the dynamics present in sequences of saccades and fixations; in particular, the control of fixation durations has been widely neglected. As a notable exception, both spatial and temporal aspects of eye-movement control have been thoroughly investigated during reading, where the scientific discourse has been dominated by three controversies: (i) the role of oculomotor vs. cognitive processing in eye-movement control, (ii) the serial vs. parallel processing of words, and (iii) the control of fixation durations. The main purpose of this thesis was to investigate eye movements in tasks that require sequences of fixations and saccades. While reading phenomena served as a starting point, we examined eye guidance in non-reading tasks with the aim of identifying general principles of eye-movement control; in turn, the investigation of eye movements in non-reading tasks helped refine our knowledge about eye-movement control during reading. Our approach combined non-reading experiments with the evaluation and development of computational models. I present three main results: First, oculomotor phenomena observed during reading can also be observed in non-reading tasks (Chapters 2 & 4). Oculomotor processes determine the fixation position within an object; the fixation position, in turn, modulates both the next saccade target and the current fixation duration. Second, predictions of eye-movement models based on sequential attention shifts were falsified (Chapter 3); in fact, our results suggest that distributed processing of multiple objects forms the basis of eye-movement control. Third, fixation durations are under asymmetric control (Chapter 4): while increasing processing demands immediately prolong fixation durations, decreasing processing demands reduce fixation durations only with a temporal delay.
We propose a computational model ICAT to account for asymmetric control. In this model, an autonomous timer initiates saccades after random time intervals independent of ongoing processing. However, processing demands that are higher than expected inhibit the execution of the next saccade and, thereby, prolong the current fixation. On the other hand, lower processing demands will not affect the duration before the next saccade is executed. Since the autonomous timer adjusts to expected processing demands from fixation to fixation, a decrease in processing demands may lead to a temporally delayed reduction of fixation durations. In an extended version of ICAT, we evaluated its performance while simulating both temporal and spatial aspects of eye-movement control. The eye-movement phenomena investigated in this thesis have now been observed in a number of different tasks, which suggests that they represent general principles of eye guidance. I propose that distributed processing of the visual input forms the basis of eye-movement control, while fixation durations are controlled by the principles outlined in ICAT. In addition, oculomotor control contributes considerably to the variability observed in eye movements. Interpretations for the relation between eye movements and cognition strongly benefit from a precise understanding of this interplay.
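The asymmetric-control principle behind ICAT can be illustrated with a minimal simulation. This is only a sketch of the idea described above: an autonomous timer scaled by the *expected* processing demand, immediate inhibition by higher-than-expected demand, and gradual adaptation of the expectation. The parameter values and the update rule are illustrative assumptions, not ICAT's actual specification.

```python
import random

def simulate_fixations(demands, adaptation_rate=0.5, seed=1):
    """Minimal sketch of asymmetric fixation-duration control (ICAT-like).

    An autonomous timer plans each saccade after a random interval scaled
    by the expected processing demand.  Demand exceeding the expectation
    inhibits the saccade immediately (prolonging the current fixation);
    lower-than-expected demand only shortens later fixations, because the
    expectation adapts gradually from fixation to fixation.
    """
    rng = random.Random(seed)
    expected = demands[0]          # assumed initial expectation
    durations = []
    for demand in demands:
        base = rng.gauss(200.0, 20.0) * expected   # autonomous random timing (ms)
        surplus = max(0.0, demand - expected)      # only overshoot inhibits
        durations.append(base + 150.0 * surplus)   # immediate prolongation
        # expectation tracks demand with a lag -> delayed reduction
        expected += adaptation_rate * (demand - expected)
    return durations
```

Stepping the demand up lengthens the very next fixation, while stepping it down shortens fixations only after the expectation has adapted over several fixations, reproducing the asymmetry qualitatively.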
With the liberalization of the electricity market, uncertain prospects in climate policy, and strongly fluctuating prices for fuels, emission allowances and power plant components, risk management has gained importance in power plant investment. This is reflected in the increased use of probabilistic methods. For regulatory risks in particular, however, the classical frequentist concept of probability offers no handle for quantifying risk. In this thesis, power plant investments and portfolios in Germany are valued with methods of Bayesian risk management. The Bayesian school of thought understands probability as a personal measure of uncertainty; probabilities can thus be obtained from expert elicitation alone, without statistical data analysis. The interaction of uncertain value drivers was specified with a probabilistic DCF (discounted cash flow) model and implemented as an influence diagram with roughly 1,200 objects. Since the degree to which fuel and CO2 costs are passed through, and hence the contribution margins earned by the plants, are determined by competition, looking at individual plants in isolation is insufficient. Electricity prices and utilization rates are therefore determined heuristically from each plant's position in the merit order, i.e. the dispatch sequence ranked by short-run marginal cost; for this purpose, 113 large thermal power plants in Germany were combined into a merit order. The model yields probability distributions for key quantities such as the net present values of existing portfolios as well as levelized costs of electricity and net present values of individual investments (hard coal and lignite plants with and without CO2 capture, and combined-cycle gas plants). The value of the existing portfolios of RWE, E.ON, EnBW and Vattenfall is determined primarily by the contributions of the lignite and nuclear plants.
Surprisingly, emissions trading does not translate into losses. This is due on the one hand to the additional profits of the nuclear plants, and on the other to the emission allowances allocated free of charge until 2012, which generate large windfall profits. In its concrete design, emissions trading thus proves to be a profitable business overall: over the remaining lifetime of the existing plants, its introduction yields a present-value advantage of 8.6 billion EUR in total from 2008 onwards. Of a similar magnitude are the present-value advantages of the lifetime extension for nuclear plants held out by the federal government in 2009. An eight-year extension would yield, depending on the CO2 price level, advantages of 8 to 15 billion EUR; with higher CO2 prices and extensions of up to 28 years, an additional 25 billion EUR or more would accrue. In the long run it appears questionable whether the current market design still provides incentives for investment in fossil-fuel plants. Investments in lignite and combined-cycle plants that are still profitable at the beginning of the NAP 2 period become increasingly unprofitable as the free allocation of allowances is phased out, and their profitability is steadily eroded further by the electricity-market effects of renewable energies and the retirement of old gas- and oil-fired plants. Hard coal plants prove to be a risky investment even with initial free allocation. The incentive problems identified for new investment should not, however, be attributed to emissions trading; they result from electricity prices oriented towards marginal costs. The incentive problem is most severe at moderate CO2 prices, and it also applies to plants with CO2 capture: although the expected abatement costs of CCS plants relative to conventional coal plants in 2025 are estimated at 25 EUR/t CO2 (lignite) and 38.5 EUR/t CO2 (hard coal), their construction only becomes profitable at CO2 prices of 50 and 77 EUR/t CO2, respectively. Which power plant investments, if any, pay off in the long run is ultimately decided politically and is hardly predictable even under strongly idealized conditions.
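The core idea of such a probabilistic DCF valuation, propagating uncertain value drivers through discounted cash flows to obtain a distribution of net present values, can be sketched in a few lines. Every number below (CO2 price parameters, margin, output, lifetime, outlay) is an illustrative placeholder, not a figure from the thesis, and the single uncertain driver stands in for the ~1,200-object influence diagram.

```python
import random

def npv_distribution(n_draws=10_000, seed=7, discount=0.08):
    """Toy probabilistic DCF: propagate an uncertain CO2 price through a
    plant's cash flows to a distribution of net present values (EUR).
    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_draws):
        co2 = rng.lognormvariate(3.0, 0.4)     # uncertain CO2 price [EUR/t]
        margin = 55.0 - 0.9 * co2              # contribution margin [EUR/MWh]
        cash_flow = margin * 6.0e6             # annual output of 6 TWh -> EUR/a
        pv = sum(cash_flow / (1 + discount) ** t for t in range(1, 31))
        npvs.append(pv - 1.5e9)                # minus investment outlay
    return npvs
```

Quantiles of the resulting sample then answer risk questions directly, e.g. the probability of a negative NPV or a value-at-risk, which is exactly the kind of output a Bayesian risk model delivers instead of a single point estimate.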
Since the mid-1950s, federal governments had repeatedly insisted that the Federal Republic was "not a country of immigration". The red-green coalition's acknowledgement that Germany is a country of immigration, and its reforms of citizenship law (1999), labour law (2000) and immigration law (2004), therefore marked, for many experts, a paradigm shift in German immigration and integration policy. This shift has never been examined systematically. For the period from 1981 to 2005, the study therefore uses a keyword-based content analysis and a legislation index to ask (1) to what extent changes in the political debate on immigration in Germany can be traced in the German Bundestag (discourse level), (2) to what extent the legal steering and regulation of immigration and integration in this period was characterized by tendencies of liberalization (policy level), and (3) how discourse and policy relate to each other. Political, economic and social framework conditions are taken into account. Theoretically, the study builds on the assumptions of Punctuated Equilibrium Theory, which is presented in some detail and linked to the concepts of paradigm, frame and policy change.
The potential increase in the frequency and magnitude of extreme floods is currently discussed in the context of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance for assessing future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental change over more than 10,000 years, down to seasonal resolution. Since lake basins additionally act as natural sediment traps, riverine sediment supply, preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. In my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), covering the last 7000 years. The record consists of calcite varves and intercalated detrital layers ranging in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed with a combined method of microfacies analysis via thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on varve counting controlled by 14C age dates, the main goals of this thesis are (i) to identify the seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. The thesis proceeds through a series of time slices, presenting an integrative approach that links instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the lake sediments.
The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers, of which two types are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers enriched in siliciclastic and dolomitic material record sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin; these layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps; these layers are generally thicker (0.65-32 mm) and are recorded solely in the southern lake basin. Comparison with instrumental data shows that thicker graded layers result from local debris-flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and at the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to cooler climate and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or decreased flood activity appears during the MCA and the LIA themselves. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeastern Alps.
The 7000-year flood chronology comprises 47 debris flows and 269 floods, with shifts to increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present; present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee may also have influenced human life in early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, later dwellings were built on piles in the water, suggesting an early adaptation to flood risk and/or a general, socio-economically driven change of the Late Neolithic lake-dwelling culture. A direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is, however, not evidenced.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution instead affects the power of a statistical test; we derive a sampling procedure that maximizes test power when used to select instances and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models, where the objective of an evaluation could be to rank the models according to the risk they incur or to identify the model with the lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used, for example, in web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
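The basic mechanism of active evaluation, sampling test instances from an instrumental distribution and correcting for the mismatch with importance weights, can be sketched as follows. The uniform proposal and the toy model below are placeholders; the thesis derives the estimation-error-minimizing distributions, which this sketch does not attempt to reproduce.

```python
import random

def weighted_error_estimate(pool, model, true_label, proposal, test_prob, n, seed=0):
    """Sketch of active evaluation: sample instances from an instrumental
    (proposal) distribution, query their labels, and estimate the model's
    error rate under the test distribution via self-normalized importance
    weighting.
    """
    rng = random.Random(seed)
    pool = list(pool)
    q = [proposal(x) for x in pool]               # unnormalized proposal mass
    num = den = 0.0
    for _ in range(n):
        (x,) = rng.choices(pool, weights=q)       # draw from the proposal
        w = test_prob(x) / proposal(x)            # importance weight
        loss = float(model(x) != true_label(x))   # zero-one loss on queried label
        num += w * loss
        den += w
    return num / den                              # weighted error-rate estimate
```

With a proposal that concentrates labeling effort on informative instances, the same correction keeps the estimate unbiased for the test distribution; here, with a uniform proposal over ten instances and a model that errs on half of them, the estimate settles near 0.5.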
Besides the question of whether so-called "I can" checklists enhance students' performance via their metacognitive strategies, the study also asks which students use "I can" checklists, in what form, and under which contextual conditions they are most effective. Such checklists are lists of defined subject-specific and cross-curricular competencies of one or more teaching units, written for students as "I can" statements and including a prompt for self-assessment and assessment by others. A look at recent publications on the topic and at school practice shows a clear turn towards developing and working with "I can" checklists and competency grids. It is all the more astonishing that virtually no empirical studies on them exist (cf. Bastian & Merziger, 2007; Merziger, 2007). These overarching questions were pursued over a period of two years in a quantitative study of 197 seventh-grade Gymnasium students in the subject of German. The results indicate that "I can" checklists are an effective pedagogical instrument of self-regulation, especially for boys: working with them fosters not only the regulation of students' own learning processes but also their willingness to invest more effort in the subject. Self-assessment of one's level of achievement by means of the checklists during the intervention furthermore promotes their voluntary use outside the classroom.
MHC genes encode proteins responsible for the recognition of foreign antigens and the triggering of a subsequent, adequate immune response, and thus hold a key position in the immune system of vertebrates. The extraordinary genetic diversity of MHC genes is believed to be shaped by adaptive selection in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies have been performed in a wide range of wildlife species with the aim of understanding the role of immune gene diversity in parasite resistance under natural selection conditions. Methodologically, most of this work has, with very few exceptions, focused only on structural diversity, i.e. the sequence diversity of the regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation does indeed underlie adaptive processes and that an individual's allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected but potentially highly relevant component concerns transcriptional differences among MHC alleles: differences in the expression levels of MHC alleles, and their potential functional importance, have remained unstudied. The idea that transcriptional differences might also play an important role relies on the fact that lower MHC gene expression is tantamount to reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immunoregulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory.
As no information on the nucleotide sequences of potential reference genes was available for either species, PCR primer systems established in laboratory mice had to be tested and adapted for both non-model organisms. In due course, sets of stable reference genes were found for both species, establishing the preconditions for reliable measurements of mRNA levels. For D. sublineatus it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response: mRNA levels of the interleukin Il4 increased with the intensity of infection by strongyle nematodes, whereas neither MHC nor cytokine expression otherwise played a significant role in D. sublineatus. For A. flavicollis I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As lower MHC expression entails a lower immune response, this could be evidence of an immune-evasive strategy of the nematode, as has been suggested for many microparasites, and implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected of being immunosuppressive, e.g. by inducing regulatory T helper cells that respond with increased production of the interleukin Il10 and the transforming growth factor Tgfb; both cytokines in turn cause reduced MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent an activation of the immune system. Indeed, I found a strong tendency for animals carrying the allele Apfl-DRB*23 to show increased infection intensity with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the nucleotide sequences of the MHC.
The latter was evident from an elevated rate of non-synonymous relative to synonymous substitutions in the MHC sequences of exon 2, which encodes the functionally important antigen-binding sites, whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures to quantify the expression of immune-relevant genes are feasible in non-model wildlife organisms as well. In addition to structural MHC diversity, MHC gene expression should also be considered in order to obtain a more complete picture of host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression, in which case the advantageous or disadvantageous effects of allelic binding motifs are diminished. The studies could not define the role of MHC gene expression in antagonistic coevolution as such, but the results suggest that it depends strongly on the specific parasite species involved.
In this work, spherical gold nanoparticles (NPs) with diameters above ~2 nm, gold quantum dots (QDs) with diameters below ~2 nm, and gold nanorods (NRs) of different lengths were synthesized and characterized optically. In addition, two new synthetic routes to thermosensitive gold QDs were developed. Spherical gold NPs show a plasmon band at ~520 nm, which arises from the collective oscillation of electrons. Owing to their anisotropic shape, gold NRs exhibit two plasmon bands: a transverse band at ~520 nm and a longitudinal band whose position depends on the length-to-diameter ratio of the rods. Gold QDs possess no plasmon band, since their electrons are subject to quantum confinement; instead, their discrete energy levels and band gap give rise to photoluminescence (PL). The synthesized gold QDs show broadband luminescence in the range of ~500-800 nm, with the luminescence properties (emission peak, quantum yield, lifetimes) depending strongly on the preparation conditions and the surface ligands. PL in gold QDs is a very complex phenomenon and presumably originates from singlet and triplet states. Gold NRs and gold QDs could be incorporated into various polymers, for example cellulose triacetate. Polymer nanocomposites with gold NRs were, for the first time, mechanically drawn under defined conditions to obtain films with optically anisotropic (direction-dependent) properties. The temperature behaviour of gold NRs and gold QDs was also investigated: a local variation of the size and shape of gold NRs in polymer nanocomposites could be achieved by heating to 225-250 °C.
The PL of the gold QDs proved to be strongly temperature dependent: cooling to -7 °C roughly doubled the PL quantum yield of the samples to almost 30 %, while heating to 70 °C quenched the PL almost completely. The length of the surface ligand's alkyl chain was shown to influence the temperature stability of the gold QDs. Furthermore, several novel optically anisotropic security labels based on gold NRs and thermosensitive security labels based on gold QDs were developed. Gold NRs and QDs also appear to be of great interest for optoelectronics (e.g. data storage) and medicine (e.g. cancer diagnostics and therapy).
This thesis deals with the synthesis and characterization of organo-soluble thiophene- and benzodithiophene-based materials and their application as active hole-transporting semiconductor layers in field-effect transistors. In the first part of the thesis, a targeted modification of the thiophene backbone yields a new comonomer unit for the synthesis of thiophene-based copolymers. The hydrophobic hexyl groups in the 3-position of the thiophene are partially replaced by hydrophilic 3,6-dioxaheptyl groups. Via Grignard metathesis according to McCullough, statistical copolymers with different molar ratios of hydrophobic hexyl and hydrophilic 3,6-dioxaheptyl groups, 1:1 (P-1), 1:2 (P-2) and 2:1 (P-3), are successfully prepared. The synthesis of a defined block copolymer BP-1 by sequential addition of the comonomers is also realized. The optical and electrochemical properties of the novel copolymers are comparable to P3HT. All copolymers show characteristic transistor behavior in a top-gate/bottom-contact architecture. With P-1 as the active semiconductor layer in the device, PMMA as the dielectric and silver as the gate electrode, mobilities of up to 10^-2 cm^2/Vs are achieved. As a consequence of the optimized interface between dielectric and semiconductor, an improvement of the air stability of the transistors over several months is observed. In the second part of the thesis, benzodithiophene-based organic materials are prepared. For the synthesis of the novel benzodithiophene derivatives, the key compound TIPS-BDT is obtained in good yield. Difunctionalization of TIPS-BDT in the 2,6-positions via electrophilic substitution affords the desired dibromo and distannyl monomers. First, alternating copolymers with alkylated fluorene and quinoxaline units are realized via the Stille reaction.
All copolymers are characterized by good solubility in common organic solvents, high thermal stability and good film-forming properties. Furthermore, all copolymers, with HOMO levels higher than -6.3 eV compared with the thiophene-based copolymers (P-1 to P-3), are very stable against oxidation. These copolymers show amorphous behavior in the semiconductor layers of OFETs, and mobilities of up to 10^-4 cm^2/Vs are reached. A dependence of device performance on the residual tin content in the polymer is demonstrated. A tin content above 0.6% can have an enormous influence on the mobility, since the functional SnMe3 groups can act as trap states. Alternatively, the alternating TIPS-BDT/fluorene copolymer P-5-Stille is polymerized by the Suzuki method. With P-5-Suzuki as the active organic semiconductor layer in the OFET, the highest mobility of 10^-2 cm^2/Vs is achieved. This mobility is thus two orders of magnitude higher than for P-5-Stille, since the trap states are minimized in this case and charge transport is consequently improved. Both the homopolymer P-12 and the copolymer with the aromatic acceptor benzothiadiazole, P-9, lead to poorly soluble polymers. For this reason, terpolymers of TIPS-BDT/fluorene/BTD units, P-10 and P-11, are built up on the one hand, and on the other hand an attempt is made to introduce the TIPS-BDT unit into the side chain of styrene. With the introduction of BTD into the polymer main chain, the absorption and electrochemical properties in particular are affected. Compared with the TIPS-BDT/fluorene copolymer, the absorption extends into the visible range and the LUMO level is shifted to lower values. However, no improvement of device performance is observed.
The first successful synthesis of TIPS-BDT as a side-chain polymer on styrene, P-13, leads to a soluble and amorphous polymer with OFET mobilities comparable to those of styrene-based polymers (µ = 10^-5 cm^2/Vs). A further aim of this thesis is the synthesis of low-molecular-weight organo-soluble benzodithiophene derivatives. Via Suzuki and Stille reactions it is possible for the first time to attach various aromatics to TIPS-BDT in the 2,6-positions through a σ-bond. UV/Vis investigations show that the absorption is shifted to longer wavelengths by the extension of the π-conjugation length. In addition, it is possible to incorporate thermally crosslinkable groups such as allyloxy into the molecular scaffold. The introduction of F atoms into the molecular scaffold results in an enhanced packing order of the fluorobenzene-functionalized TIPS-BDT (SM-4) in the solid state, with very good electronic properties in the OFET, where mobilities of up to 0.09 cm^2/Vs are reached.
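Field-effect mobilities of the kind quoted above are conventionally extracted from the saturation regime of a transfer curve via the textbook square-law model I_D = (W C_i µ / 2L)(V_G - V_T)^2. The sketch below shows this standard extraction; the function name and the synthetic device parameters are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

def saturation_mobility(v_g, i_d, width, length, c_i):
    """Extract the saturation field-effect mobility from a transfer curve.

    Square-law model: I_D = (W * C_i * mu / 2L) * (V_G - V_T)^2, so
    mu = (2L / (W * C_i)) * slope^2, where slope = d(sqrt(I_D))/dV_G.
    All quantities in SI units (m, F/m^2, A, V); returns (mu, V_T).
    """
    sqrt_id = np.sqrt(np.abs(i_d))
    slope, intercept = np.polyfit(v_g, sqrt_id, 1)  # linear fit of sqrt(I_D) vs V_G
    mu = 2.0 * length / (width * c_i) * slope**2
    v_t = -intercept / slope  # threshold voltage from the x-intercept
    return mu, v_t
```

In practice the fit is restricted to the gate-voltage range well above threshold, where sqrt(I_D) is actually linear.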
Slovak schools
(2012)
The EVE curriculum framework
(2012)
Assignments, curriculum framework and background information as the base of developing lessons
(2012)
1. What are the general strengths of the assignments? 2. Structure of the assignment 3. Resources of the assignment 4. Fostering self-expression 5. How could you improve the assignment? 6. Lack of specific examples 7. Not relating the issue to the students 8. Language Problems 9. Infeasibility to adaptation 10. In what ways was the additional information useful? How could this be improved? 11. Was the framework useful for you and in what way? 12. In what ways did the assignments reflect the steps identified in the framework?
Developing Critical Thinking
(2012)
Deepening Understanding
(2012)
Teaching patterns and trends
(2012)
1. Outline 2. Definition 3. Why is it important (or not) to teach about patterns and trends? What are the strengths and weaknesses of teaching patterns and trends? 4. How were patterns and trends offered in the original assignments? 5. What did the student teacher change in practice? How did it go? 6. Suggestions for improving patterns and trends
The Dutch school system
(2012)
Developing critical thinking
(2012)
Deepening understanding
(2012)
1. What do we mean when we say 'deepening understanding'? 2. Which methods can be used to foster deepening understanding? 3. Examples for deepening understanding based on the assignments 4. Summary of methods and results 5. How did we train deepening understanding in school? 6. What did the pupils learn from it? 7. Our own experiences working on this chapter
Describing patterns
(2012)
1. What comes to your mind when you think of 'patterns'? 2. Does your assignment include patterns? 3. Did you decide to use some of the patterns? 4. If yes, what problem did you explain with the help of patterns? 5. Describe which patterns you used and how you used them 6. Did you explain the concept of a pattern to your pupils? 7. From your point of view, did patterns offer a helpful structure to prepare your lesson? 8. To what extent were patterns useful for the pupils to understand the main topic of the lesson? 9. How would you improve teaching patterns in your assignments? 10. If you didn't use any patterns, explain why. 11. What do you think about using the concept of patterns in general? 12. Will you use patterns in other lessons in the future? Describe why or why not. 13. Conclusion
Relating to students
(2012)
1. The Assignment 'Devotion to Religion and active Citizenship' 2. The Assignment 'How are religions spread across Europe' 3. The Assignment 'Is football as important as religion?' 4. The Assignment 'Why be religious?' 5. The Assignment 'Lucky charms' 6. The Assignment 'No Creo en el Jamas' (Life after death) 7. The Assignment 'Religion and its influence on politics and policies' 8. The Assignment 'Secularisation in Europe' 9. The Assignment 'The meaning of religious places' 10. The Assignment 'Unity in diversity' 11. Which conceptions did you find?
Religion
(2012)
Videos related to the maps
(2012)
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has largely taken place in the context of decentralization reforms, in which central governments transfer (share) political, administrative, fiscal and economic powers and functions to sub-national units. Despite the great international support and advocacy for participatory governance, in which citizens' voice plays a key role in decision making on decentralized service delivery, there is a notable dearth of empirical evidence as to the effect of such participation. This is the question this study sought to answer, based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya, as formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure that citizens play a central role in the planning and budgeting, implementation and monitoring of locally identified services towards improving livelihoods and reducing poverty. The influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. The study finds that the participation of citizens is minimal and its influence on decentralized service delivery negligible. It concludes that despite the dismal performance of citizen participation, LASDAP has played a key role in institutionalizing citizen participation that future structures will build on.
It recommends that an effective framework of citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and in which citizens have a legal recourse opportunity; and one that obliges LA officials both to implement citizens' proposals that meet the set criteria and to account for their actions in the management of public resources.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized, and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on board the SOHO spacecraft, reveal a preferred heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider e.g. protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, whereas the solar corona is far from equilibrium. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not have these limitations and are therefore well suited to explain the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e.
core, halo, and a "strahl" with finite width. But the model is not only applicable to the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves does not only lead to scattering, but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations: the quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. The second part of this thesis details how to calculate growth or damping rates of plasma waves from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and of whistler waves during solar flares, are studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
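The resonant electron-whistler coupling invoked here is conventionally expressed through the relativistic wave-particle resonance condition. The following is the standard textbook relation, not a formula quoted from the thesis itself:

```latex
% Resonance condition between an electron (parallel velocity v_parallel,
% Lorentz factor gamma) and a wave of frequency omega and parallel
% wavenumber k_parallel:
\[
  \omega - k_{\parallel} v_{\parallel} \;=\; \frac{n\,\Omega_e}{\gamma},
  \qquad \Omega_e = \frac{e B}{m_e}, \quad n \in \mathbb{Z},
\]
% n = 0 is the Landau resonance; n = +-1 are the cyclotron resonances
% relevant for whistler-mode scattering.
```

Electrons satisfying this condition exchange energy and pitch angle with the wave, which is the mechanism that counteracts the mirror force in the model.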
Moving up from the middle class: upward social mobility of households between 1984 and 2010
(2012)
This dissertation is devoted to intragenerational processes of upward mobility of households from the middle class into the well-to-do. Intragenerational mobility research has so far mainly been understood as labour-market-related individual mobility. This dissertation extends the approach to the household level. The underlying idea is that the social position of an individual is not determined by his or her earned income alone. Equally decisive is the context of the household, which determines how many persons can contribute to the income and how many share in it. Furthermore, in couple households the household level serves as the arena of negotiation: decisions about family planning, the desire for children and, related to this, the labour-force participation of the partners are made here. The present dissertation examines these assumptions using data from the German Socio-Economic Panel (SOEP) for the years 1984 to 2010. The focus is on the labour-force participation and educational level of the household, its structure, and the occupation of the household head, on the assumption that these are the main factors determining a household's financial possibilities. A further emphasis of the thesis lies on the historical context, since it can be assumed that the factors named above, and their influence on households' chances of upward mobility, have changed over the course of history.
Climate is the principal driving force of hydrological extremes like floods, and attributing generating mechanisms is an essential prerequisite for understanding past, present, and future flood variability. Successively enhanced radiative forcing under global warming increases the atmospheric water-holding capacity and is expected to increase the likelihood of strong floods. In addition, natural climate variability affects the frequency and magnitude of these events on annual to millennial time-scales. Particularly in the mid-latitudes of the Northern Hemisphere, correlations between meteorological variables and hydrological indices suggest significant effects of changing climate boundary conditions on floods. To date, however, understanding of flood responses to changing climate boundary conditions is limited due to the scarcity of hydrological data in space and time. Exploring paleoclimate archives like annually laminated (varved) lake sediments makes it possible to fill this gap in knowledge, offering precisely dated time-series of flood variability over millennia. During river floods, detrital catchment material is eroded and transported in suspension by fluid turbulence into downstream lakes. In the water body, the transport capacity of the inflowing turbidity current successively diminishes, leading to the deposition of detrital layers on the lake floor. Intercalated into annual laminations, these detrital layers can be dated down to seasonal resolution. Microfacies analyses and X-ray fluorescence scanning (µ-XRF) at 200 µm resolution were conducted on the Mid- to Late Holocene varved interval of two sediment profiles from pre-alpine Lake Ammersee (southern Germany), located in a proximal (AS10prox) and a distal (AS10dist) position relative to the main tributary, the River Ammer.
To shed light on sediment distribution within the lake, particular emphasis was placed on (1) the detection of intercalated detrital layers and their micro-sedimentological features, and (2) the intra-basin correlation of these deposits. Detrital layers were dated down to the season by microscopic varve counting and determination of their microstratigraphic position within a varve. The resulting chronology is verified by accelerator mass spectrometry (AMS) 14C dating of 14 terrestrial plant macrofossils. Since ~5500 varve years before present (vyr BP), in total 1573 detrital layers were detected in either one or both of the investigated sediment profiles. Based on their microfacies, geochemistry, and proximal-distal deposition pattern, the detrital layers were interpreted as River Ammer flood deposits. Calibration of the flood layer record against instrumental daily River Ammer runoff data from AD 1926 to 1999 proves that the flood layer succession represents a significant time-series of major River Ammer floods in spring and summer, the flood season in the Ammersee region. Flood layer frequency trends are in agreement with decadal variations of the East Atlantic-Western Russia (EA-WR) atmospheric pattern back to 200 yr BP (the end of the atmospheric data used) and with solar activity back to 5500 vyr BP. Enhanced flood frequency corresponds to the negative EA-WR phase and to reduced solar activity. These common links point to a central role of varying large-scale atmospheric circulation over Europe for flood frequency in the Ammersee region and suggest that these atmospheric variations, in turn, are likely modified by solar variability during the past 5500 years. Furthermore, the flood layer record indicates three shifts in mean layer thickness and frequency, of different manifestation in the two sediment profiles, at ~5500, ~2800, and ~500 vyr BP. Combining information from both sediment profiles made it possible to interpret these shifts in terms of stepwise increases in mean flood intensity.
Likely triggers of these shifts are the gradual reduction of Northern Hemisphere orbital summer forcing and long-term solar activity minima. The hypothesized atmospheric response to this forcing is hemispheric cooling that enhances equator-to-pole temperature gradients and the potential energy in the troposphere. This energy is transferred into stronger westerly cyclones, more extreme precipitation, and intensified floods at Lake Ammersee. Interpretation of flood layer frequency and thickness data in combination with reanalysis models and time-series analysis made it possible to reconstruct the flood history and to decipher the flood-triggering climate mechanisms in the Ammersee region throughout the past 5500 years. Flood frequency and intensity are not stationary, but are influenced by multi-causal climate forcing of large-scale atmospheric modes on time-scales from years to millennia. These results challenge future projections that propose an increase in floods as the Earth warms based only on the assumption of an enhanced hydrological cycle.
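Flood layer frequency series of the kind discussed above are typically obtained by counting dated layers within a running window of varve years. The following is a minimal illustrative sketch, not the authors' actual analysis code; the window length is an assumption:

```python
import numpy as np

def flood_frequency(layer_ages, window=31):
    """Running flood-layer frequency from a list of layer ages (vyr BP).

    For each year between the oldest and youngest layer, count how many
    dated detrital layers fall inside a centred window of `window` years.
    Returns (years, counts) as NumPy arrays.
    """
    layer_ages = np.asarray(layer_ages)
    years = np.arange(layer_ages.min(), layer_ages.max() + 1)
    half = window // 2
    counts = np.array([
        np.sum((layer_ages >= y - half) & (layer_ages <= y + half))
        for y in years
    ])
    return years, counts
```

Such a series can then be compared against atmospheric indices or solar activity reconstructions on the same time axis.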
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling of the operator and the lattice size, we analyze the tunnel effect between different wells, providing sharp asymptotics of the low-lying spectrum. Our proof, inspired by work of B. Helffer, M. Klein and F. Nier in the continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
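For orientation, the continuous-setting operator that the discrete construction mimics is the following standard definition (as used in the Helffer-Klein-Nier framework; the discrete analogue itself is given in the thesis):

```latex
% Semiclassical Witten deformation of the exterior derivative for a
% smooth Morse function f and parameter h > 0:
\[
  d_{f,h} \;=\; e^{-f/h}\,(h\,d)\,e^{f/h} \;=\; h\,d + df \wedge \,,
\]
% giving the Witten Laplacian on 0-forms (with \Delta = \sum_i \partial_i^2):
\[
  \Delta^{(0)}_{f,h} \;=\; d_{f,h}^{\,*}\, d_{f,h}
  \;=\; -h^{2}\Delta + |\nabla f|^{2} - h\,\Delta f .
\]
```

Its low-lying eigenvalues, one per local minimum of f, are exponentially small in 1/h, and their precise asymptotics quantify the tunnel effect between the wells.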
Sustainable management of semi-arid African savannas under environmental and political change
(2012)
Drylands cover about 40% of the earth's land surface and provide the basis for the livelihoods of 38% of the global human population. Worldwide, these ecosystems are prone to heavy degradation. Increasing levels of dryland degradation result in a strong decline of ecosystem services. In addition, in highly variable semi-arid environments, changing future environmental conditions will potentially have severe consequences for productivity and ecosystem dynamics. Hence, global efforts have to be made to understand the particular causes and consequences of dryland degradation and to promote sustainable management options for semi-arid and arid ecosystems in a changing world. Here I particularly address the problem of semi-arid savanna degradation, which mostly occurs in the form of woody plant encroachment. In doing so, I aim at finding viable sustainable management strategies and at improving the general understanding of semi-arid savanna vegetation dynamics under conditions of extensive livestock production. Moreover, the influence of external forces, i.e. environmental change and land reform, on the use of savanna vegetation and on the ecosystem response to this land use is assessed. Based on this, I identify conditions and strategies that facilitate a sustainable use of semi-arid savanna rangelands in a changing world. I extended an eco-hydrological model to simulate rangeland vegetation dynamics for a typical semi-arid savanna in eastern Namibia. In particular, I identified the response of semi-arid savanna vegetation to different land use strategies (including fire management), also with regard to different predicted precipitation, temperature and CO2 regimes. Not only environmental but also economic and political constraints, e.g. land reform programmes, are shaping rangeland management strategies. Hence, I aimed at understanding the effects of the ongoing process of land reform in southern Africa on land use and on the semi-arid savanna vegetation.
Therefore, I developed and implemented an agent-based ecological-economic modelling tool for interactive role plays with land users. This tool was applied in an interdisciplinary empirical study to identify general patterns of management decisions and the between-farm cooperation of land reform beneficiaries in eastern Namibia. The eco-hydrological simulations revealed that the future dynamics of semi-arid savanna vegetation strongly depend on the respective climate change scenario. In particular, I found that the capacity of the system to sustain domestic livestock production will strongly depend on changes in the amount and temporal distribution of precipitation. In addition, my simulations revealed that shrub encroachment will become less likely under future climatic conditions although positive effects of CO2 on woody plant growth and transpiration have been considered. While earlier studies predicted a further increase in shrub encroachment due to increased levels of atmospheric CO2, my contrary finding is based on the negative impacts of temperature increase on the drought sensitive seedling germination and establishment of woody plant species. Further simulation experiments revealed that prescribed fires are an efficient tool for semi-arid rangeland management, since they suppress woody plant seedling establishment. The strategies tested have increased the long term productivity of the savanna in terms of livestock production and decreased the risk for shrub encroachment (i.e. savanna degradation). This finding refutes the views promoted by existing studies, which state that fires are of minor importance for the vegetation dynamics of semi-arid and arid savannas. Again, the difference in predictions is related to the bottleneck at the seedling establishment stage of woody plants, which has not been sufficiently considered in earlier studies. 
The ecological-economic role plays with Namibian land reform beneficiaries showed that the farmers made their decisions with regard to herd size adjustments according to economic but not according to environmental variables. Hence, they do not manage opportunistically by tracking grass biomass availability but rather apply conservative management strategies with low stocking rates. This implies that under the given circumstances the management of these farmers will not per se cause (or further worsen) the problem of savanna degradation and shrub encroachment due to overgrazing. However, as my results indicate that this management strategy is rather based on high financial pressure, it is not an indicator for successful rangeland management. Rather, farmers struggle hard to make any positive revenue from their farming business and the success of the Namibian land reform is currently disputable. The role-plays also revealed that cooperation between farmers is difficult even though obligatory due to the often small farm sizes. I thus propose that cooperation needs to be facilitated to improve the success of land reform beneficiaries.
The Indian summer monsoon (ISM) is one of the largest climate systems on earth and impacts the livelihood of nearly 40% of the world's population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long term high resolution records, the spatial inhomogeneity of the monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections such as the El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long term high resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (transitional westerlies and ISM domain) in the Spiti valley and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate "snapshots" of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only as the first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the active tectonic orogen of the Himalayas, it was essential to understand the role of regional tectonics to make valid interpretations of catchment erosion and detrital influx into the lake.
My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations on the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on the occurrence of frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx, indicating extreme hydrological events in the past. A regional comparison for this time slice indicates a possible extended "break-monsoon like" mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally dry conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations on these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
The development of new processes for returning palladium from scrap materials, such as spent automotive exhaust catalysts, into the materials cycle is desirable from both an ecological and an economic point of view. In this work, new liquid-liquid and solid-liquid extraction agents were developed with which palladium(II) can be recovered from an oxidizing hydrochloric leaching solution that contains, besides palladium, also platinum and rhodium as well as numerous base metals. The new extraction agents, unsaturated monomeric 1,2-dithioethers and oligomeric ligand mixtures with vicinal dithioether units, are highly selective, in contrast to many extraction agents reported in the literature. Owing to their geometric and electronic preorganization, they form stable square-planar chelate complexes with palladium(II). For the development of the liquid-liquid extraction agent, a series of unsaturated 1,2-dithioether ligands was prepared, based on a rigid 1,2-dithioethene unit embedded in a varying electron-withdrawing backbone and bearing polar side chains. In addition to the determination of the crystal structures of the ligands and their palladium dichloride complexes, the electro- and photochemical properties, the complex stability and the behaviour in solution were investigated. Liquid-liquid extraction studies showed that some of the new ligands are superior to industrially used extraction agents through a faster attainment of the extraction equilibrium.
Based on criteria decisive for industrial applicability, such as good resistance to oxidation, a high extraction yield (even at high hydrochloric acid concentrations of the feed solution), fast extraction kinetics and a high selectivity for palladium(II), a suitable liquid-liquid extraction agent was selected from the series of six ligands: 1,2-bis(2-methoxyethylthio)benzene. With this ligand, a practice-oriented liquid-liquid extraction system was developed. After stepwise adaptation of the aqueous phase from a model solution to the oxidizing hydrochloric leaching solution, a suitable, industrially usable solvent (1,2-dichlorobenzene) and an efficient stripping agent (0.5 M thiourea in 0.1 M HCl) were selected. The high palladium(II) selectivity of this liquid-liquid extraction system was verified, and its reusability and practical suitability were demonstrated. Furthermore, it was shown that on contact with oxidizing media small amounts of the thioether sulfoxide 1-(2-methoxyethylsulfinyl)-2-(2-methoxyethylthio)benzene are formed from the dithioether 1,2-bis(2-methoxyethylthio)benzene. This sulfoxide is protonated in acidic media and accelerates the extraction like a phase-transfer catalyst without, however, reducing the palladium(II) selectivity. The crystal structure of the palladium dichloride complex of the thioether sulfoxide shows that the unprotonated ligand coordinates palladium(II), analogously to the dithioether, via the chelating sulfur atoms. Various mixtures of oligo(dithioether) ligands and the monomeric ligand 1,2-bis(2-methoxyethylthio)benzene served as extraction agents for solid-liquid extraction experiments with SIRs (solvent-impregnated resins) and were adsorbed for this purpose on hydrophilic silica gel and on organophilic Amberlite® XAD 2.
The oligo(dithioether) ligands are based on 1,2-dithiobenzene or 1,2-dithiomaleonitrile units linked via tris(oxyethylene)ethylene or trimethylene bridges. Batch experiments showed that structural differences, such as the type of chelating unit, the type of bridging chains and the support material, affect the extraction yields, the extraction kinetics and the loading capacity. The silica-gel-based SIRs reach the extraction equilibrium much faster than the Amberlite® XAD 2-based ones. However, in contrast to silica gel, the extraction agents adhere permanently to Amberlite® XAD 2. In hydrochloric media, the 1,2-dithiobenzene derivatives are better suited as extraction agents than the 1,2-dithiomaleonitrile derivatives. Column experiments with the oxidizing hydrochloric leaching solution and reusable Amberlite® XAD 2-based SIRs impregnated with 1,2-dithiobenzene derivatives showed that very low pump rates are required to realize high loading capacities. Nevertheless, the good palladium(II) selectivity of these solid-phase materials was demonstrated. However, in contrast to the eluates resulting from liquid-liquid extraction, these eluates contained, besides palladium, also small amounts of platinum, aluminium, iron and lead.
The present thesis responds to the current need for alternative and sustainable approaches toward energy management and materials design. In this context, carbon in particular has become the material of choice in many fields such as energy conversion and storage. Herein, three main topics are covered: 1) an alternative synthesis strategy toward highly porous functional carbons with tunable porosity using ordinary salts as porogen (denoted as "salt templating"); 2) the one-pot synthesis of porous metal nitride containing functional carbon composites; 3) the combination of both approaches, enabling the generation of highly porous composites with finely tunable properties. All approaches have in common that they are based on the utilization of ionic liquids, salts which are liquid below 100 °C, as precursors. Just recently, ionic liquids were shown to be versatile precursors for the generation of heteroatom-doped carbons, since the liquid state and a negligible vapor pressure are highly advantageous properties. However, in most cases the products do not possess any porosity, which is essential for many applications. In the first part, "salt templating", the utilization of salts as diverse and sustainable porogens, is introduced. Exemplarily shown for ionic-liquid-derived nitrogen- and nitrogen-boron-co-doped carbons, the control of porosity and morphology on the nanometer scale by salt templating is presented. The studies within this thesis were conducted with the ionic liquids 1-Butyl-3-methyl-pyridinium dicyanamide (Bmp-dca), 1-Ethyl-3-methyl-imidazolium dicyanamide (Emim-dca) and 1-Ethyl-3-methyl-imidazolium tetracyanoborate (Emim-tcb). The materials are generated through thermal treatment of precursor mixtures containing one of the ionic liquids and a porogen salt.
By simple removal of the non-carbonizable template salt with water, functional graphitic carbons with pore sizes ranging from micro- to mesoporous and surface areas of up to 2000 m^2 g^-1 are obtained. The carbon morphologies, which presumably originate from different onsets of demixing, depend mainly on the nature of the porogen salt, whereas the nature of the ionic liquid plays a minor role. Thus, a structural effect of the porogen salt rather than activation can be assumed. This offers an alternative to conventional activation and templating methods, making it possible to avoid multi-step and energy-consuming synthesis pathways as well as the employment of hazardous chemicals for template removal. The composition of the carbons can be altered via the heat-treatment procedure; thus, at lower synthesis temperatures, rather polymeric carbonaceous materials with a high degree of functional groups and high surface areas are accessible. First results suggest the suitability of the materials for CO2 utilization. In order to further illustrate the potential of ionic liquids as carbon precursors and to expand the class of carbons which can be obtained, the ionic liquid 1-Ethyl-3-methyl-imidazolium thiocyanate (Emim-scn) is introduced for the generation of nitrogen-sulfur-co-doped carbons, in combination with the already studied ionic liquids Bmp-dca and Emim-dca. Here, the salt templating approach should also be applicable, further illustrating its potential. In the second part, a one-pot and template-free synthesis approach toward inherently porous, metal nitride nanoparticle containing, nitrogen-doped carbon composites is presented. Since ionic liquids also offer outstanding solubility properties, the materials can be generated through the carbonization of homogeneous solutions of an ionic liquid, acting as both nitrogen and carbon source, and the respective metal precursor.
The metal content and surface area are easily tunable via the initial amount of metal precursor. Furthermore, it is also possible to synthesize composites with ternary nitride nanoparticles whose composition is adjustable through the metal ratio in the precursor solution. Finally, both approaches are combined into salt templating of the one-pot composites. This opens the way to a one-step synthesis of composites with tunable composition and particle size as well as precisely controllable porosity and morphology. Common synthesis strategies, in which the product composition is often degraded by the template removal procedure, can thereby be avoided. The composites are further shown to be suitable as electrodes for supercapacitors. Here, properties such as porosity, metal content and particle size are investigated and discussed with respect to their influence on energy storage performance. Because a variety of ionic liquids, metal precursors and salts can be combined, and a simple closed-loop process including salt recycling is conceivable, the approaches present a promising platform for sustainable materials design.
Especially over the last twenty years, the study of Linguistic Landscapes (LLs) has been gaining the status of an autonomous linguistic discipline. The LL of a (mostly) geographically limited area, consisting of e.g. billboards, posters, shop signs, material for election campaigns, etc., gives deep insights into the presence or absence of languages in that particular area. Thus, the LL allows one to infer not only the dominance of a language from its presence, but also the oppression of minorities from a language's absence, above all in areas where minority languages should, demographically speaking, be visible. The LLs of big cities are fruitful research areas due to the mass of linguistic data. The first part of this paper deals with the theoretical and practical research conducted in LL studies so far; a summary of the theory, methodologies and different approaches is given. In the second part I apply this theoretical basis to my own case study, for which the LLs of two shopping streets in different areas of Hong Kong were examined in 2010. It seems likely that competence in English is rather high in Hong Kong, due to the long-lasting influence of British culture and mentality and the official status of the language. The case study's results are based on empirical data showing the objectively visible presence of English in both examined areas, as well as on two surveys, conducted both openly and anonymously. The surveys serve as a cross-check of the level of competence in English in Hong Kong, a level first estimated through an analysis of the LL. Hence, this case study takes a new approach to LL analysis: it does not end with the description of the LL's material composition, as most previous studies have done, but includes its creators by asking in what way people's actual linguistic competence is reflected in Hong Kong's LL.
Agriculture is one of the most important human activities, providing food and other agricultural goods for seven billion people around the world, and is of special importance in sub-Saharan Africa. The majority of people there depend on the agricultural sector for their livelihoods and will suffer from negative climate change impacts on agriculture toward the middle and end of the 21st century, all the more if weak governments, economic crises or violent conflicts endanger the countries' food security. The impact of temperature increases and changing precipitation patterns on agricultural vegetation motivated this thesis in the first place. Analyzing the potential for reducing negative climate change impacts by adapting crop management to a changing climate is a second objective of the thesis. As a precondition for simulating climate change impacts on agricultural crops with a global crop model, the timing of sowing in the tropics was first improved and validated, as this is an important factor determining the length and timing of the crops' development phases, the occurrence of water stress and final crop yield. Crop yields are projected to decline in most regions, as is evident from the results of this thesis, but the uncertainties in climate projections and in the efficiency of adaptation options, arising from political, economic or institutional obstacles, have to be considered. The effects of temperature increases and of changing precipitation patterns on crop yields can be analyzed separately and vary in space across the continent. Southern Africa is clearly the region most susceptible to climate change, especially to precipitation changes.
The Sahel north of 13° N and parts of Eastern Africa, with short growing seasons below 120 days and limited wet-season precipitation of less than 500 mm, are also vulnerable to precipitation changes. In most other parts of East and Central Africa, in contrast, the effect of temperature increase on crops outweighs the precipitation effect and is most pronounced in a band stretching from Angola to Ethiopia in the 2060s. The results of this thesis confirm the findings of previous studies on the magnitude of climate change impacts on crops in sub-Saharan Africa but, beyond that, help to understand the drivers of these changes and the potential of certain management strategies for adaptation in more detail. Crop yield changes depend on the initial growing conditions, on the magnitude of climate change, and on the crop, the cropping system and the adaptive capacity of African farmers, which only this comprehensive study for sub-Saharan Africa makes evident. Furthermore, this study improves the representation of tropical cropping systems in a global crop model and considers the major food crops cultivated in sub-Saharan Africa and climate change impacts throughout the continent.
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating methods for producing organic thin films for various opto-electronic applications have been discovered and further developed. Among others, phthalocyanine molecules in photoactive layers have been intensively investigated for solar cell fabrication. Owing to their low or unknown solubility, phthalocyanine layers have typically been prepared by vacuum deposition. Alternatively, the solubility has been increased by chemical synthesis, which however compromises the properties of the phthalocyanines (Pc). In this work, the solubility, optical absorption and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Because of its sufficient solubility, stability and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigation. By spin coating CuPc from TFA solution, a thin film was deposited onto the substrate from the evaporating solution. After evaporation of the solvent, CuPc nanoribbons cover the substrate. The nanoribbons have a thickness of about 1 nm (the typical dimension of a CuPc molecule) and varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin coating or by other wet-coating techniques such as dip coating. Similar fibrillar structures form upon wet coating of other metal phthalocyanines, such as iron and magnesium phthalocyanine, from TFA solution, as well as on other substrates such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were investigated in detail by X-ray diffraction as well as spectroscopic and microscopic methods.
It is shown that the nanoribbons do not form in solution but rather upon evaporation of the solvent and supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail: the formation of the CuPc nanoribbons from a supersaturated solution is discussed in terms of nucleation and growth theory, and the shape of the nanoribbons is discussed with regard to the interaction between the molecules and the substrate. The wet-processed CuPc thin film was employed as the donor layer in organic bilayer solar cells with the C60 molecule as acceptor. The power conversion efficiency of such a cell was investigated as a function of the CuPc layer thickness.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. To deliver on this promise, MDE raises the level of abstraction and automation through domain-specific models (DSMs) and model operations (e.g. model transformations or code generators). DSMs conform to domain-specific modeling languages (DSMLs), which raise the level of abstraction, and model operations are first-class entities of software development because they raise the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, caused essentially by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity that comprises implementing or selecting DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when the first dimension of complexity is addressed as well.
Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs, and a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. Beyond that, it is a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis therefore provides a method for specifying decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of composition: data-flow composition and context composition. Data-flow composition defines a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as the context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations to be composed need not address any composition concerns.
The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
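The data-flow composition described above can be sketched in a few lines. The following is an illustrative toy, not the thesis's actual megamodel formalism: model operations are plain functions coupled only through the names of the models they read and write, and a minimal scheduler applies each operation once its inputs exist. All names (Megamodel, uml, er, sql) are invented for the example.

```python
# Illustrative sketch of data-flow composition: heterogeneous model
# operations are coupled solely by sharing input/output models.
from typing import Dict, List

class Megamodel:
    """Minimal registry that executes operations in data-flow order."""
    def __init__(self):
        self.models: Dict[str, object] = {}
        self.operations: List[tuple] = []  # (inputs, outputs, fn)

    def add_model(self, name, content):
        self.models[name] = content

    def add_operation(self, inputs, outputs, fn):
        self.operations.append((inputs, outputs, fn))

    def execute(self):
        pending = list(self.operations)
        while pending:
            # an operation is ready once all its input models exist
            ready = [op for op in pending
                     if all(i in self.models for i in op[0])]
            if not ready:
                raise RuntimeError("unsatisfiable dependencies")
            for inputs, outputs, fn in ready:
                results = fn(*(self.models[i] for i in inputs))
                for name, value in zip(outputs, results):
                    self.models[name] = value
                pending.remove((inputs, outputs, fn))

mm = Megamodel()
mm.add_model("uml", {"classes": ["Order", "Item"]})
# a model-to-model transformation followed by a code generation,
# coupled only through the shared intermediate model "er"
mm.add_operation(["uml"], ["er"],
                 lambda m: ({"tables": [c.lower() for c in m["classes"]]},))
mm.add_operation(["er"], ["sql"],
                 lambda m: ("\n".join(f"CREATE TABLE {t};" for t in m["tables"]),))
mm.execute()
print(mm.models["sql"])
```

A real megamodel would additionally record the resulting dependencies as traceability links and re-apply operations when their input models change; the scheduler above captures only the data-flow coupling itself.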
Restenosis is a central problem in interventional cardiology and the most frequent complication after percutaneous angioplasty procedures. The main cause of this re-narrowing of the vessel is the formation of a neointima through the proliferation of transdifferentiated vascular smooth muscle cells and the secretion of extracellular matrix. The generation of reactive oxygen species (ROS) and the inflammatory reaction following vascular injury are discussed as early processes that induce neointima formation. Within this work, several projects were carried out to shed light on the processes taking place during neointima formation. Using an injury model of the murine femoral artery, the influence of inflammation and ROS formation on neointima formation was investigated in the mouse. Treatment with the mitochondrial superoxide dismutase mimetic MitoTEMPO reduced neointima formation more effectively than treatment with the global ROS scavenger N-acetylcysteine. The strongest inhibition of neointima formation, however, was achieved by immunosuppression with rapamycin. Interferon-γ (IFN-γ) is an important cytokine of the Th1 immune response; it is released following vascular injury and induces the proinflammatory chemokines CXCL9 (MIG, monokine induced by IFN-γ), CXCL10 (IP-10, IFN-inducible protein of 10 kDa) and CXCL11 (I-TAC, interferon-inducible T-cell chemoattractant). CXCL9, CXCL10 and CXCL11 are ligands of the CXC chemokine receptor 3 (CXCR3) and chemotactically attract CXCR3-positive inflammatory cells to the site of vascular injury. Therefore, the specific role of the chemokine CXCL10 in restenosis was investigated. To this end, CXCL10-deficient mice were subjected to the femoral artery injury model, and the vessels were examined morphometrically and immunohistologically after 14 days.
CXCL10 deficiency led to reduced neointima formation in mice, which correlated with reduced inflammation, apoptosis and proliferation in the injured vessel. Besides inflammation, however, re-endothelialization of the injured vessel wall also influences restenosis. Interestingly, re-endothelialization was also considerably improved in the CXCL10 knockout mice compared with wild-type mice. Evidently, the CXCR3 chemokine system is involved in entirely different biological processes, influencing not only neointima formation by promoting inflammation, but also the suppression of re-endothelialization of the injured vessel wall. Indeed, CXCR3 is expressed not only on inflammatory cells but also on endothelial cells. To investigate the roles of CXCR3 in inflammation and in re-endothelialization separately, the generation of conditional CXCR3 knockout mice, in which CXCR3 is deleted either in inflammatory cells or in endothelial cells, was begun within this work. To better understand the molecular mechanisms by which CXCR3 mediates its functions, it was also investigated whether it interacts with other G protein-coupled receptors (GPCRs). The analysis of co-immunoprecipitates points to homodimerization of the two CXCR3 splice variants CXCR3A and CXCR3B, as well as heterodimer formation of CXCR3A and CXCR3B with each other and with CCR2, CCR3, CCR5 and the opioid receptors MOR and KOR. The tested method of fluorescence resonance energy transfer (FRET), however, proved unsuitable for investigating CXCR3, as the receptor was not correctly transiently expressed in HEK293T cells. Overall, the results of this work indicate that the CXCR3 chemokine system plays a central role in various processes influencing neointima formation.
Thus, CXCR3 and in particular the chemokine CXCL10 could represent interesting target molecules for the development of new and improved therapies to prevent restenosis.
This thesis investigates the gradient flow of Dirac-harmonic maps. Dirac-harmonic maps are critical points of an energy functional motivated by supersymmetric field theories; the critical points of this functional couple the equation for harmonic maps with spinor fields. At present, many analytical properties of Dirac-harmonic maps are known, but a general existence result is still missing. In this thesis the existence question is studied using the evolution equations for a regularized version of Dirac-harmonic maps. Since the energy functional for Dirac-harmonic maps is unbounded from below, the method of the gradient flow cannot be applied directly. Thus, we first consider a regularization prescription for Dirac-harmonic maps and then study the gradient flow. Chapter 1 gives background material on harmonic maps and harmonic spinors and summarizes the currently known results about Dirac-harmonic maps. Chapter 2 introduces the notion of Dirac-harmonic maps in detail and presents a regularization prescription for them. In Chapter 3 the evolution equations for regularized Dirac-harmonic maps are introduced; in addition, the evolution of certain energies is discussed, and the existence of a short-time solution to the evolution equations is established. Chapter 4 analyzes the evolution equations in the case that the domain manifold is a closed curve. Here, the existence of a smooth long-time solution is proven. Moreover, for a sufficiently large regularization parameter, it is shown that the evolution equations converge to a regularized Dirac-harmonic map. Finally, it is discussed in which sense the regularization can be removed. In Chapter 5 the evolution equations are studied when the domain manifold is a closed Riemannian spin surface. For a sufficiently large regularization parameter, the existence of a global weak solution, smooth away from finitely many singularities, is proven.
It is shown that the evolution equations converge weakly to a regularized Dirac-harmonic map. In addition, it is discussed whether the regularization can be removed in this case.
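For orientation, the analytic setting can be sketched as follows. This is the standard form of the Dirac-harmonic energy functional from the literature; the regularized functional studied in the thesis modifies it, and the exact regularization is not reproduced here.

```latex
% Standard Dirac-harmonic energy functional (literature form).
% \phi : M \to N a map between Riemannian manifolds, \psi a section of
% \Sigma M \otimes \phi^* TN, \slashed{D} the twisted Dirac operator
% along \phi (requires \usepackage{slashed}).
\[
  E(\phi, \psi) \;=\; \frac{1}{2} \int_M \Big( |d\phi|^2
    + \langle \psi, \slashed{D} \psi \rangle \Big) \, dv_g .
\]
% Its critical points couple the harmonic map equation with a Dirac
% equation, where \tau(\phi) is the tension field and \mathcal{R} a
% curvature term depending on \phi and \psi:
\[
  \tau(\phi) \;=\; \mathcal{R}(\phi, \psi), \qquad \slashed{D}\psi \;=\; 0 .
\]
% The term \langle \psi, \slashed{D}\psi \rangle is indefinite, which is
% why E is unbounded from below and the gradient flow needs a
% regularization before it can be applied.
```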
Mycorrhiza (from the Greek mýkēs, "fungus", and rhiza, "root") is a symbiosis between fungi and the majority of land plants. Through the symbiosis, the fungus improves the plant's supply of nutrients, while the plant supplies the fungus with carbohydrates. Arbuscular mycorrhiza (AM) is a particular form of mycorrhiza: during the symbiosis, the AM fungus forms the eponymous arbuscules within the root cells as the site of primary nutrient exchange. The AM symbiosis (AMS) is the research focus of this work. Medicago truncatula and Glomus intraradices were used as model organisms. Transcriptional analyses were performed in order to identify, among other things, AMS-regulated transcription factors (TFs). The activity of the promoters of three AMS-regulated TFs identified in this way (MtOFTN, MtNTS, MtDES) was visualized using a reporter gene. In one case, the region of greatest promoter activity was restricted to arbuscule-containing cells (MtOFTN). In the second case, the promoter was also active in non-arbuscule-containing cells, but most strongly in arbuscule-containing cells (MtNTS). A further promoter was equally active in arbuscule-containing cells and their neighboring cells (MtDES). In addition, further genes were identified as AMS-regulated, and promoter::reporter activity studies were likewise carried out for three of these genes (MtPPK, MtAmT, MtMDRL). The promoters of the kinase (MtPPK) and the ammonium transporter (MtAmT) were active exclusively in arbuscule-containing cells, whereas the activity of the ABC transporter (MtMDRL) could not be assigned to a particular cell type. For two further identified genes, a copper transporter (MtCoT) and a sugar/inositol transporter (MtSuT), RNA interference (RNAi) studies were performed.
In both cases it turned out that, as soon as an RNAi effect was present in the transformed roots, these were colonized by G. intraradices to a markedly lesser extent than the root controls. In the case of MtCoT, this could occur for the same reason as in the case of MtPt4. Which role MtSuT plays exactly in the establishment of the AMS, and which role inositol plays, would have to be clarified by further studies at the protein level. Further investigations of the genes shown in this work to be specific for arbuscule-containing cells, MtAmT, MtPPK and MtOFTN, could likewise be informative for a deeper understanding of the AMS. This also applies to the TFs MtNTS and MtDES, which, although not transcribed exclusively in an arbuscule-specific manner, also appear to play a role in the regulation of the AMS within M. truncatula roots.
In this work, the effects of synchronization of nonlinear acoustic oscillators are investigated using the example of two organ pipes. The typical signatures of synchronization are extracted from existing experimental measurement data and presented. A detailed analysis follows of the transition regions into the synchronization plateau, the phenomena during synchronization, and the exit from the synchronization region of the two organ pipes, at different coupling strengths. The experimental findings raise questions about the coupling function. To address these, sound generation in an organ pipe is investigated. With the help of numerical simulations of the sound generation, the question is pursued of which fluid-dynamic and aeroacoustic mechanisms underlie sound generation in the organ pipe, and to what extent these mechanisms can be mapped onto the model of a self-sustained acoustic oscillator. Using the method of coarse graining, a model ansatz is formulated.
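The core synchronization phenomenon, two detuned self-sustained oscillators locking onto a common frequency once the coupling is strong enough, can be illustrated numerically. The sketch below uses generic van der Pol oscillators with diffusive position coupling as an assumed stand-in model, not the aeroacoustic organ-pipe equations of the thesis; all parameter values are arbitrary.

```python
# Assumed stand-in model: two diffusively coupled van der Pol oscillators
# with slightly detuned natural frequencies w1, w2. Above a critical
# coupling eps they lock to a common frequency (synchronization plateau).
import numpy as np

def coupled_vdp(state, mu, w1, w2, eps):
    """Right-hand side of the coupled van der Pol system."""
    x1, v1, x2, v2 = state
    a1 = mu * (1 - x1**2) * v1 - w1**2 * x1 + eps * (x2 - x1)
    a2 = mu * (1 - x2**2) * v2 - w2**2 * x2 + eps * (x1 - x2)
    return np.array([v1, a1, v2, a2])

def observed_freqs(eps, mu=1.0, w1=1.0, w2=1.05, dt=0.02, n=50_000):
    """Integrate with classical RK4 and estimate each oscillator's
    angular frequency from zero crossings in the second half of the run."""
    s = np.array([1.0, 0.0, 0.5, 0.0])
    xs = np.empty((n, 2))
    for i in range(n):
        k1 = coupled_vdp(s, mu, w1, w2, eps)
        k2 = coupled_vdp(s + 0.5 * dt * k1, mu, w1, w2, eps)
        k3 = coupled_vdp(s + 0.5 * dt * k2, mu, w1, w2, eps)
        k4 = coupled_vdp(s + dt * k3, mu, w1, w2, eps)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0], s[2]
    freqs = []
    for x in xs[n // 2:].T:
        # upward zero crossings -> mean period -> angular frequency
        crossings = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
        periods = np.diff(crossings) * dt
        freqs.append(2 * np.pi / periods.mean())
    return freqs

f_weak = observed_freqs(eps=0.0)    # uncoupled: frequencies stay detuned
f_strong = observed_freqs(eps=0.3)  # coupled: frequencies lock
print(f_weak, f_strong)
```

With eps = 0 the two estimated frequencies differ by roughly the detuning, while at eps = 0.3 they coincide, analogous to the synchronization plateau observed in the organ-pipe measurements.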