The present text provides an inventory of the e-learning activities that have taken place at the University of Potsdam so far; at the same time, it serves to identify potentials and, in a next step, to derive from them ideas and proposals for a university-wide e-learning strategy. The aim of the survey is to present the relevant information, to situate the University of Potsdam within the e-learning landscape of higher education, and to assess the state of its development.
Klaus-Dirk Henke, professor emeritus of Public Finance and Health Economics at the Institut für Volkswirtschaftslehre und Wirtschaftsrecht of TU Berlin, discusses, starting from the current provision of services in the health sector, the opportunities and risks of service provision by cooperatives.
Matthias Klipp, head of the department for urban development and building of the Potsdam city administration, discusses the role of cooperatives as building blocks of municipal housing policy. Using the state capital Potsdam as an example, he highlights the potential of housing cooperatives for securing an attractive and sufficient housing supply for citizens. He concludes that the city's strategies for new construction and existing stock, also in cooperation with housing cooperatives, have proven effective, particularly in the provision of social housing.
Andreas Wieg, head of the executive staff division of the Deutscher Genossenschafts- und Raiffeisenverband e. V., addresses in his contribution the crisis-testedness and crisis resilience of the cooperative organizational model. He discusses the cooperative organizational structure and the various types of cooperatives, describes possibilities for founding cooperatives, and concludes with the sustainability of newly founded cooperatives.
Genossenschaften
(2014)
This reference paper describes the sampling and contents of the IZA Evaluation Dataset Survey and outlines its vast potential for research in labor economics. The data are part of a unique IZA project that links administrative data from the German Federal Employment Agency with innovative survey data to study individuals' transitions out of unemployment into work. This study makes the survey available to the research community as a Scientific Use File by explaining the development, structure, and access to the data. Furthermore, it summarizes previous findings obtained with the survey data.
Wolfgang George, honorary professor at TH Mittelhessen and board member of Andramedos eG, discusses, starting from his definition of regionality, the potential of cooperative solutions using the example of regional energy supply (REV). He concludes that the potential of cooperative solutions in this field has not been nearly exhausted to date.
Unstetige Galerkin-Diskretisierung niedriger Ordnung in einem atmosphärischen Multiskalenmodell
(2014)
The dynamics of the Earth's atmosphere span a range from microphysical turbulence through convective processes and cloud formation to planetary wave patterns. For weather forecasting and for studying the climate over decades and centuries, these dynamics are modeled with numerical methods. As computing technology advances, newly developed dynamical cores for climate models are needed that, at ever finer resolution, can also resolve the corresponding processes. The dynamical core of a model is the implementation (discretization) of the fundamental dynamical equations for the evolution of mass, energy, and momentum, such that they can be solved numerically on computers. This thesis investigates the suitability of a low-order discontinuous Galerkin method for atmospheric applications. For equations subject to external forces such as gravity and the Coriolis force, this suitability is not self-evident from theory. The thesis describes the modifications needed to stabilize the method without resorting to so-called "slope limiters". For the unmodified method, it is shown that it cannot represent atmospheric equilibria stably. The stabilized model reproduces a series of standard test cases of atmospheric dynamics with the Euler and shallow-water equations over a wide range of spatial and temporal scales. Solving the thermal wind equation along its characteristic curves, which coincide with the isobars, yields atmospheric equilibrium states whose susceptibility to (barotropic and baroclinic) instabilities, essential for the development of cyclones, can be tuned via a prescribed background flow.
In contrast to earlier work, these states are defined directly in the z-system (height in meters) and need not be transferred from pressure coordinates. With these states, used both as reference states, from which only the deviations are treated numerically, and in particular as initial states subjected to a small perturbation, several simulation studies of barotropic and baroclinic instability are carried out. A highlight is the simulation-based study, made possible by the formulation of background flows with adjustable baroclinicity, of the degree of baroclinic instability of different wavelengths as a function of static stability and vertical wind shear, corresponding to stability maps from theoretical analyses in the literature.
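For reference, the rotating shallow-water equations mentioned above are typically discretized by discontinuous Galerkin methods in conservative flux form. The following is the standard textbook form (with bottom topography b and Coriolis parameter f), not a formulation taken from the thesis itself:

```latex
\begin{aligned}
\partial_t h + \nabla\cdot(h\mathbf{u}) &= 0, \\
\partial_t(h\mathbf{u}) + \nabla\cdot\!\left(h\,\mathbf{u}\otimes\mathbf{u}
  + \tfrac{g h^2}{2}\,\mathbf{I}\right)
  &= -\,g\,h\,\nabla b \;-\; f\,\hat{\mathbf{z}}\times(h\mathbf{u}).
\end{aligned}
```

The source terms on the right-hand side (gravity and Coriolis force) are exactly the external forces for which, as the abstract notes, the stability of a low-order DG scheme is not guaranteed by theory.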
Stabilität und Dynamik der Verfassungsprinzipien des Grundgesetzes der Bundesrepublik Deutschland
(2014)
Today, it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting. It provides a wealth of information about in and outflowing gaseous material towards and away from galaxies and their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle in our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST /STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption line parameters including covering fractions of different weakly/intermediately/highly ionized metals with a particular focus on SiII and MgII. Due to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0. 
Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies contribute 34% of the cross section of strong MgII absorbers. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines using newly obtained ultraviolet spectra from HST/COS. We find clear evidence for a bimodal distribution of the HI column density in the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high-column-density end). With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977 we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities.
We find that the intervening absorption systems we studied have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find, for example, a typical SiIII halo size of ∼160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines and at redshifts similar to those of the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed lying at impact parameters ρ ≤ 160 kpc.
The European Parliament is without doubt the most powerful parliamentary assembly at the supranational level. This raises the question of how decisions are made in this parliament and how they can be justified. That is the main concern of this work, which draws on sociological approaches to explaining social action in order to answer this question, thereby opening a new way of observing parliamentary action. It shows how important it is, when analyzing political decision-making processes, to consider how political problems are interpreted by actors and presented to negotiating partners. Using the case studies of the decision-making processes on the Services Directive, the REACH chemicals regulation, and the TDIP (CIA) committee in the 2004–2009 legislative term, the social mechanism behind agreements in the European Parliament is laid out. Culture, as interpretation of the world, thus becomes the key to understanding political decisions at the supranational level.
Organizations try to gain competitive advantages, and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of required freedom of action. This situation is often encountered in hospitals, where most business processes are manually enacted. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation is prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur, but are not observed by the monitoring environment. These unobserved process events can serve as basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to manage timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "when will the case be finished?", or "when did we perform the activity that we forgot to document?") in this thesis. 
As main contribution, we introduce an advanced probabilistic model to business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to use the model effectively along the business process lifecycle. Therefore, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by missing data management. We propose mechanisms to optimize configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data of a hospital. Additionally, we show its more general applicability in other domains by applying the approach on process data from logistics and finance.
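The abstract above describes predicting the termination time of a running case from a stochastic model. As a hedged illustration only (the thesis uses stochastic Petri nets discovered from event logs; here we assume a toy linear process whose remaining transitions have exponentially distributed firing delays — all names and rates below are hypothetical):

```python
import random

def expected_remaining_time(rates, n_samples=10000, seed=42):
    """Monte Carlo estimate of the remaining case duration for a linear
    sequence of transitions with exponential firing delays.

    `rates` are the firing rates (1/mean-duration) of the transitions
    still to be executed in the running case. This is a toy stand-in
    for prediction on a full stochastic Petri net.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # One simulated continuation of the case: sum the sampled
        # durations of all remaining transitions.
        total += sum(rng.expovariate(r) for r in rates)
    return total / n_samples

# Hypothetical remaining activities with rates 0.5, 1.0, 2.0 per hour;
# analytically the mean remaining time is 1/0.5 + 1/1.0 + 1/2.0 = 3.5 h.
est = expected_remaining_time([0.5, 1.0, 2.0])
```

A process manager could compare such an estimate against a deadline to decide whether to invest additional resources in a case, which is exactly the kind of operational decision the abstract motivates.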
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary has been developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in Italian Sign Language as output. The dictionary contains 3082 signs, stored as avatar animations, each linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN).
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign-language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN makes it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS has been transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction. In another study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users and compared the effect of different help modes, including textual, vocal and virtual-character help, on the performance of semi-literate users.
In our newly designed interfaces the instructions were automatically translated into Swahili. All the interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters serve as additional support for barely literate or otherwise challenged users. Such components enhance the application's utility by offering a variety of services, such as translating content into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in this domain. Context sensitivity, literacy and mental abilities are the key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
The contractile vacuole (CV) is an osmoregulatory organelle found exclusively in algae and protists. In addition to expelling excessive water out of the cell, it also expels ions and other metabolites and thereby contributes to the cell's metabolic homeostasis. The interest in the CV reaches beyond its immediate cellular roles. The CV's function is tightly related to basic cellular processes such as membrane dynamics and vesicle budding and fusion; several physiological processes in animals, such as synaptic neurotransmission and blood filtration in the kidney, are related to the CV's function; and several pathogens, such as the causative agents of sleeping sickness, possess CVs, which may serve as pharmacological targets. The green alga Chlamydomonas reinhardtii has two CVs. They are the smallest known CVs in nature, and they remain relatively untouched in the CV-related literature. Many genes that have been shown to be related to the CV in other organisms have close homologues in C. reinhardtii. We attempted to silence some of these genes and observe the effect on the CV. One of our genes, VMP1, caused striking, severe phenotypes when silenced. Cells exhibited defective cytokinesis and aberrant morphologies. The CV, incidentally, remained unscathed. In addition, mutant cells showed some evidence of disrupted autophagy. Several important regulators of the cell cycle as well as autophagy were found to be underexpressed in the mutant. Lipidomic analysis revealed many meaningful changes between wild-type and mutant cells, reinforcing the compromised-autophagy observation. VMP1 is a singular protein, with homologues in numerous eukaryotic organisms (aside from fungi), but usually with no relatives in each particular genome. Since its first characterization in 2002 it has been associated with several cellular processes and functions, namely autophagy, programmed cell-death, secretion, cell adhesion, and organelle biogenesis. 
It has been implicated in several human diseases: pancreatitis, diabetes, and several types of cancer. Our results reiterate some of the observations in VMP1's six reported homologues, but, importantly, show for the first time an involvement of this protein in cell division. The mechanisms underlying this involvement in Chlamydomonas, as well as other key aspects, such as VMP1's subcellular localization and interaction partners, still await elucidation.
Formation of a Eu(III) borate solid species from a weak Eu(III) borate complex in aqueous solution
(2014)
In the presence of polyborates (detected by 11B-NMR), the formation of a weak Eu(III) borate complex (lg β11 ∼ 2, estimated) was observed by time-resolved laser-induced fluorescence spectroscopy (TRLFS). This complex is a precursor for the formation of a solid Eu(III) borate species. The formation of this solid in solution was investigated by TRLFS as a function of the total boron concentration: the lower the total boron concentration, the slower the solid formation. The solid Eu(III) borate was characterized by IR spectroscopy, powder XRD and solid-state TRLFS. The determination of the europium-to-boron ratio points to the existence of pentaborate units in the amorphous solid.
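The estimated stability constant quoted above (lg β11 ∼ 2) refers to a 1:1 complex, as the "11" subscript indicates. Written out with a generic monoborate ligand L⁻ (e.g. B(OH)₄⁻, assumed here purely for illustration):

```latex
\beta_{11} \;=\; \frac{[\mathrm{EuL}^{2+}]}{[\mathrm{Eu}^{3+}]\,[\mathrm{L}^{-}]},
\qquad
\lg \beta_{11} \approx 2 \;\;\Rightarrow\;\; \beta_{11} \approx 10^{2}\ \mathrm{L\,mol^{-1}},
```

i.e. a rather weak complex, consistent with its description as a mere precursor of the solid phase.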
Photoinduced excitation energy transfer and the accompanying charge separation are elucidated for a supramolecular system of a single fullerene covalently linked to six pyropheophorbide-a dye molecules. Molecular dynamics simulations are performed to gain an atomistic picture of the architecture and the surrounding solvent. Excitation energy transfer among the dye molecules and electron transfer from the excited dyes to the fullerene are described by a mixed quantum–classical version of the Förster rate and the semiclassical Marcus rate, respectively. The mean characteristic time of energy redistribution lies in the range of 10 ps, while electron transfer proceeds within 150 ps. In between, on a 20 to 50 ps time scale, conformational changes take place in the system. This temporal hierarchy of processes guarantees efficient charge separation if the structure is exposed to a solvent. The fast energy transfer allows the dye excitation to adapt to the actual conformation, so the probability of achieving charge separation remains high: no unfavorable conformation exhibiting a large dye–fullerene distance can dominate. The slow electron transfer, in turn, effectively averages over the different conformations. To confirm the reliability of our computations, ensemble measurements of the charge separation dynamics are simulated, and very good agreement with the experimental data is obtained.
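The two rate expressions named above have well-known textbook forms; the paper's mixed quantum–classical treatment evaluates them along MD trajectories, but the underlying structure is (standard notation, not taken from the paper):

```latex
k_{\mathrm{EET}} \;=\; \frac{1}{\tau_D}\left(\frac{R_0}{R_{DA}}\right)^{6},
\qquad
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert V_{DA}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_B T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_B T}\right],
```

where τ_D is the donor excited-state lifetime, R₀ the Förster radius, R_DA the donor–acceptor distance, V_DA the electronic coupling, λ the reorganization energy, and ΔG° the driving force. The strong R_DA⁻⁶ dependence of the Förster rate is what makes the conformational averaging discussed above so decisive.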
The synthesis of two novel types of π-expanded coumarins has been developed. A modified Knoevenagel bis-condensation afforded 3,9-dioxa-perylene-2,8-diones. Subsequent oxidative aromatic coupling or a light-driven electrocyclization reaction led to dibenzo-1,7-dioxacoronene-2,8-dione. Unparalleled synthetic simplicity, straightforward purification and superb optical properties have the potential to bring these perylene and coronene analogs into various applications.
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
Werner Mittenzwei’s article of 1967, the title of which coined the term “Brecht-Lukács-Debatte”, is widely considered as a milestone in the development of East German literary criticism towards an “emancipation” from party politics. By placing Mittenzwei’s contribution in the wider context of discussions about the literature of the GDR, within the SED and the writers’ union as well as at international conferences, this article attempts to trace the emergence of “Umfunktionierung” both as a key term and in its official approval by the party.
A new functional luminescent lanthanide complex (LLC) has been synthesized with terbium as a central lanthanide ion and biotin as a functional moiety. Unlike in typical lanthanide complexes assembled via carboxylic moieties, in the presented complex, four phosphate groups are chelating the central lanthanide ion. This special chemical assembly enhances the complex stability in phosphate buffers conventionally used in biochemistry. The complex synthesis strategy and photophysical properties are described as well as the performance in time-resolved Förster Resonance Energy Transfer (FRET) assays. In those assays, this biotin-LLC transferred energy either to acceptor organic dyes (Cy5 or AF680) labelled on streptavidin or to quantum dots (QD655 or QD705) surface-functionalised with streptavidins. The permanent spatial donor–acceptor proximity is assured through strong and stable biotin–streptavidin binding. The energy transfer is evidenced from the quenching observed in donor emission and from a decrease in donor luminescence decay, both associated with simultaneous increase in acceptor intensity and in the decay time. The dye-based assays are realised in TRIS and in PBS, whereas QD-based systems are studied in borate buffer. The delayed emission analysis allows for quantifying the recognition process and for auto-fluorescence-free detection, which is particularly relevant for application in bioanalysis. In accordance with Förster theory, Förster-radii (R0) were found to be around 60 Å for organic dyes and around 105 Å for QDs. The FRET efficiency (η) reached 80% and 25% for dye and QD acceptors, respectively. Physical donor–acceptor distances (r) have been determined in the range 45–60 Å for organic dye acceptors, while for acceptor QDs between 120 Å and 145 Å. 
This newly synthesised biotin-LLC extends the class of highly sensitive analytical tools to be applied in the bioanalytical methods such as time-resolved fluoroimmunoassays (TR-FIA), luminescent imaging and biosensing.
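The FRET efficiencies and Förster radii quoted above can be checked for consistency with the reported donor–acceptor distances via the standard relation η = R₀⁶ / (R₀⁶ + r⁶). A small sketch (the numerical inputs are the values quoted in the abstract, used here purely as illustration):

```python
def fret_distance(efficiency, r0):
    """Donor-acceptor distance r (same units as r0) from the FRET
    efficiency and Foerster radius, inverting
    E = r0**6 / (r0**6 + r**6)."""
    return r0 * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

# Values from the abstract: eta = 80% with R0 ~ 60 A for organic dyes,
# eta = 25% with R0 ~ 105 A for quantum dots.
r_dye = fret_distance(0.80, 60.0)   # ~47.6 A
r_qd = fret_distance(0.25, 105.0)   # ~126.1 A
```

Both results fall inside the distance ranges reported in the abstract (45–60 Å for dye acceptors, 120–145 Å for QD acceptors), which illustrates the internal consistency of the quoted numbers under Förster theory.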
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant share of this research concerns oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The scarcity of related publications does not reflect their potential but rather the difficulty of synthesizing them as dense and defect-free structures, a fundamental prerequisite for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier processing thereafter. While simple and reliable, the designed synthetic processes allow the production of functional materials with the desired size and morphology.
The goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm) and morphology (from mere spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures, etc.). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and also allowing dispersion in liquid media. Control over composition, size and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal-ion–N/C precursor network, which converts upon heat treatment into an intermediate “glass”. This glass stabilizes the nascent nanoparticles during their nucleation and hinders their uncontrolled growth during the heat treatment (scheme 1). In this way, one of the main problems in the synthesis of MN/MC, i.e. the need for very high temperatures, could also be overcome (from up to 2000°C for classical syntheses down to 700°C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields, e.g. in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%), as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material), as filters and for the degradation of organic dyes (outmatching the performance of carbon), and as catalysts (both as active phase and as active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Now that the synthesis of these materials has been made straightforward, any further modification, combination or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids, etc.).
The atmosphere over the Arctic Ocean is strongly influenced by the distribution of sea ice and open water. Leads in the sea ice produce strong convective fluxes of sensible and latent heat and release aerosol particles into the atmosphere. They increase the occurrence of clouds and modify the structure and characteristics of the atmospheric boundary layer (ABL) and thereby influence the Arctic climate.
In the course of this study, aircraft measurements were performed over the western Arctic Ocean as part of the campaign PAMARCMIP 2012 of the Alfred Wegener Institute for Polar and Marine Research (AWI). Backscatter from aerosols and clouds within the lower troposphere and the ABL was measured with the nadir-pointing Airborne Mobile Aerosol Lidar (AMALi), and dropsondes were launched to obtain profiles of meteorological variables. Furthermore, in situ measurements of aerosol properties, meteorological variables and turbulence were part of the campaign. The measurements covered a broad range of atmospheric and sea ice conditions.
In this thesis, properties of the ABL over Arctic sea ice, with a focus on the influence of open leads, are studied based on the data from the PAMARCMIP campaign. The height of the ABL is determined by different methods applied to dropsonde and AMALi backscatter profiles. ABL heights are compared for different flights representing different atmospheric and sea ice conditions and degrees of open water influence. The different ABL height criteria agree to widely varying degrees, depending on the characteristics of the ABL and its history. It is shown that ABL height determination from lidar backscatter by methods commonly used under mid-latitude conditions is applicable to the Arctic ABL only under certain conditions. Aerosol or clouds within the ABL are needed as a tracer for ABL height detection from backscatter. Hence an aerosol source close to the surface is necessary, which is typically present under the influence of open water and therefore under convective conditions. However, it is not always possible to distinguish residual layers from the actual ABL. Stable boundary layers are generally difficult to detect.
To illustrate the complexity of the Arctic ABL and the processes therein, four case studies are analyzed, each of which represents a snapshot of the interplay between the atmosphere and the underlying sea ice or water surface. The influences of leads and open water on the aerosol and clouds within the ABL are identified and discussed. Leads are observed to cause the formation of fog and cloud layers within the ABL through the emission of humidity. Furthermore, they decrease the stability and increase the height of the ABL and consequently facilitate entrainment of air and aerosol layers from the free troposphere.
Cyanobacteria produce about 40 percent of the world’s primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in one of the most common bloom-forming cyanobacteria, Microcystis aeruginosa.
In a first attempt, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme for primary carbon fixation, was a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. The underestimation of total microcystin content when neglecting the protein fraction was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of the 501 detected non-redundant metabolites, 85 (17 percent) accumulated to significantly different levels in the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between species of cyanobacteria. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high light treated cells and the quantification of storage compounds. While Microcystis accumulated mainly glycogen to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results showed that the characterization of species-specific metabolic features should gain more attention with regard to the biotechnological use of cyanobacteria.
Modern microscopic techniques following the stochastic motion of labelled tracer particles have uncovered significant deviations from the laws of Brownian motion in a variety of animate and inanimate systems. Such anomalous diffusion can have different physical origins, which can be identified from careful data analysis. In particular, single particle tracking provides the entire trajectory of the traced particle, which allows one to evaluate different observables to quantify the dynamics of the system under observation. Here we provide an extensive overview of different popular anomalous diffusion models and their properties. We pay special attention to their ergodic properties, highlighting the fact that in several of these models the long time averaged mean squared displacement shows a distinct disparity to the regular, ensemble averaged mean squared displacement. In these cases, data obtained from time averages cannot be interpreted by the standard theoretical results for the ensemble averages. We therefore provide a comparison of the main properties of the time averaged mean squared displacement and its statistical behaviour in terms of the scatter of the amplitudes between the time averages obtained from different trajectories. We especially demonstrate how anomalous dynamics may be identified for systems which, at first sight, appear to be Brownian. Moreover, we discuss the ergodicity breaking parameters for the different anomalous stochastic processes and showcase the physical origins of the various behaviours. This Perspective is intended as a guidebook for both experimentalists and theorists working on systems that exhibit anomalous diffusion.
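The two central observables compared above can be made concrete with a short numerical sketch (an illustration, not code from the Perspective itself): the time averaged MSD of a single trajectory, δ²(Δ) = ⟨[x(t+Δ)−x(t)]²⟩ₜ, is contrasted with the ensemble averaged MSD ⟨x²(Δ)⟩. For ordinary Brownian motion, as simulated here, the two coincide and grow as 2DΔ; the anomalous models reviewed in the text break this equivalence. All parameter values are arbitrary.

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time averaged MSD delta^2(Delta) = <[x(t+Delta)-x(t)]^2>_t along one trajectory."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(0)
# Ensemble of ordinary Brownian trajectories, for which time and ensemble averages agree
M, N, D, dt = 50, 10000, 0.5, 1.0
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(M, N))
trajs = np.cumsum(steps, axis=1)

lags = np.array([1, 10, 100])
tamsd = np.mean([time_averaged_msd(x, lags) for x in trajs], axis=0)  # trajectory-averaged TAMSD
eamsd = np.mean(trajs[:, lags - 1] ** 2, axis=0)                      # ensemble MSD at the same lags
```

For an ergodicity-breaking process such as a subdiffusive continuous time random walk, `tamsd` and `eamsd` computed this way would scale differently with the lag time, which is exactly the disparity the overview discusses.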
New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we used acetalisation reactions extensively to build up various molecular rods. Building on this approach, we investigated porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. By scrambling experiments with 1D acetals we could prove that exchange reactions occur between different building blocks (evidenced by MALDI-TOF mass spectrometry). Based on these results we synthesized porous 3D polyacetals under the same mild conditions.
Picosecond X-ray absorption spectroscopy (XAS) is used to investigate the electronic and structural dynamics initiated by plasmon excitation of 1.8 nm diameter Au nanoparticles (NPs) functionalised with 1-hexanethiol. We show that 100 ps after photoexcitation the transient XAS spectrum is consistent with an 8% expansion of the Au–Au bond length and a large increase in disorder associated with melting of the NPs. Recovery of the ground state occurs with a time constant of ∼1.8 ns, arising from thermalisation with the environment. Simulations reveal that the transient spectrum exhibits no signature of charge separation at 100 ps and allows us to estimate an upper limit for the quantum yield (QY) of this process to be <0.1.
We study the thermal Markovian diffusion of tracer particles in a 2D medium with spatially varying diffusivity D(r), mimicking recently measured, heterogeneous maps of the apparent diffusion coefficient in biological cells. For this heterogeneous diffusion process (HDP) we analyse the mean squared displacement (MSD) of the tracer particles, the time averaged MSD, the spatial probability density function, and the first passage time dynamics from the cell boundary to the nucleus. Moreover, we examine the non-ergodic properties of this process, which are important for the correct physical interpretation of time averages of observables obtained from single particle tracking experiments. From extensive computer simulations of the 2D stochastic Langevin equation we present an in-depth study of this HDP. In particular, we find that the MSDs along the radial and azimuthal directions in a circular domain obey anomalous and Brownian scaling, respectively. We demonstrate that the time averaged MSD stays linear as a function of the lag time and that the system thus reveals weak ergodicity breaking. Our results will enable one to rationalise the diffusive motion of larger tracer particles such as viruses or submicron beads in biological cells.
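A 2D Langevin equation with position-dependent diffusivity, dx = √(2D(r)) dW, can be integrated with a simple Euler scheme. The sketch below is a minimal toy version, not the authors' simulation code: the power-law form D(r) = D₀(r/r₀)^α, the Itô-type discretization, and all parameter values are assumptions for illustration.

```python
import numpy as np

def simulate_hdp_2d(n_steps, dt=1e-3, d0=1.0, alpha=0.5, r0=1.0, seed=1):
    """Euler scheme (Ito convention assumed) for dx = sqrt(2 D(r)) dW in 2D,
    with an illustrative power-law diffusivity D(r) = d0 * (r / r0)**alpha."""
    rng = np.random.default_rng(seed)
    pos = np.array([1.0, 0.0])          # start off-centre so that D(r) > 0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        r = np.linalg.norm(pos)
        d_local = d0 * (r / r0) ** alpha  # local diffusivity at the current radius
        pos = pos + np.sqrt(2.0 * d_local * dt) * rng.normal(size=2)
        traj[i] = pos
    return traj

traj = simulate_hdp_2d(5000)
```

Averaging the squared radial and azimuthal displacements over many such trajectories would reproduce the anomalous-versus-Brownian scaling contrast described in the abstract; note that the choice of stochastic convention (Itô vs. Stratonovich) changes the resulting statistics for HDPs.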
Arsenic-containing hydrocarbons are one group of fat-soluble organic arsenic compounds (arsenolipids) found in marine fish and other seafood. A risk assessment of arsenolipids is urgently needed, but has not been possible because of the total lack of toxicological data. In this study the cellular toxicity of three arsenic-containing hydrocarbons was investigated in cultured human bladder (UROtsa) and liver (HepG2) cells. Cytotoxicity of the arsenic-containing hydrocarbons was comparable to that of arsenite, which was applied as the toxic reference arsenical. A large cellular accumulation of arsenic, as measured by ICP-MS/MS, was observed after incubation of both cell lines with the arsenolipids. Moreover, the toxic mode of action shown by the three arsenic-containing hydrocarbons seemed to differ from that observed for arsenite. Evidence suggests that the high cytotoxic potential of the lipophilic arsenicals results from a decrease in the cellular energy level. This first in vitro based risk assessment cannot exclude a risk to human health related to the presence of arsenolipids in seafood, and indicates the urgent need for further toxicity studies in experimental animals to fully assess this possible risk.
This study aims to further the mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. To this end, long-term incubation studies in cultured cells were carried out to reveal chronically acquired changes that cannot be observed in the generally applied in vitro short-term incubation studies. In particular, the cytotoxic, genotoxic and epigenetic effects of an up to 21-day incubation of human urothelial (UROtsa) cells with pico- to nanomolar concentrations of iAsIII and its metabolite thio-DMAV were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAsIII, which might partly be due to glutathione depletion and genotoxic effects at the chromosomal level. These results are in strong contrast to cells exposed to thio-DMAV: cells seemed to be able to adapt to this arsenical, as indicated, among other things, by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAsIII and thio-DMAV caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects have been reported for thio-DMAV; iAsIII induced epigenetic effects at concentrations at least 8000-fold lower than reported in vitro before. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which certainly has to be further investigated in future studies.
Probably no other field of statistical physics at the borderline of soft matter and biological physics has caused such a flurry of papers as polymer translocation since the 1994 landmark paper by Bezrukov, Vodyanoy, and Parsegian and the study of Kasianowicz in 1996. Experiments, simulations, and theoretical approaches are still contributing novel insights to date, while no universal consensus on the statistical understanding of polymer translocation has been reached. Here we collect the published results, in particular the famous–infamous debate on the scaling exponents governing the translocation process. We put these results into perspective and discuss where the field is going. In particular, we argue that the phenomenon of polymer translocation is non-universal and highly sensitive to the exact specifications of the models and experiments used for its analysis.
Civil society is considered either a motor of democratization or a stabilizer of authoritarian rule. This dichotomy is partly due to the dominance of domain-based definitions of the concept that reduce civil society to a small range of formally organized, independent and democratically oriented NGOs. Additionally, research often treats civil society as a ‘black box’ without differentiating between the potentially varying impacts of different types of civil society actors on existing regime structures. In this thesis, I present an alternative conceptualization of civil society based on the interactions of societal actors to arrive at a more inclusive understanding of the term which is better suited for analysis in non-democratic settings. The operationalization of the action-based approach I develop allows for an empirical assessment of a large range of societal activities, which can accordingly be categorized from little to very civil-society-like depending on their specific modes of interaction within four dimensions. I employ this operationalization in a qualitative case study including different actors in the authoritarian monarchy of Jordan, which suggests that Jordanian societal actors mostly exhibit tolerant and democratically oriented modes of interaction and do not reproduce authoritarian patterns. However, even democratically oriented actors do not necessarily take an oppositional position vis-à-vis the authoritarian regime. Thus, Jordanian civil society might not feature a high potential to challenge existing power structures in the country.
In March 2010, the project CoCoCo (incipient COntinent-COntinent COllision) recorded a 650 km long amphibious N-S wide-angle seismic profile, extending from the Eratosthenes Seamount (ESM) across Cyprus and southern Turkey to the Anatolian plateau. The aim of the project is to reveal the impact of the transition from subduction to continent-continent collision of the African plate with the Cyprus-Anatolian plate. A visual quality check, frequency analysis and filtering were applied to the seismic data and reveal good data quality. Subsequent first-break picking, finite-difference ray tracing and inversion of the offshore wide-angle data lead to a first-arrival tomographic model. This model reveals (1) P-wave velocities lower than 6.5 km/s in the crust, (2) a variable crustal thickness of about 28 - 37 km and (3) an upper crustal reflection at 5 km depth beneath the ESM. Two land shots in Turkey, also recorded on Cyprus, airgun shots south of Cyprus, and geological and previous seismic investigations provide the information to derive a layered velocity model beneath the Anatolian plateau and for the ophiolite complex on Cyprus. The analysis of the reflections provides evidence for a north-dipping plate subducting beneath Cyprus. The main features of this layered velocity model are (1) an upper and lower crust with large lateral changes in velocity structure and thickness, (2) a Moho depth of about 38 - 45 km beneath the Anatolian plateau, (3) a shallow north-dipping subducting plate below Cyprus with an increasing dip and (4) a typical ophiolite sequence on Cyprus with a total thickness of about 12 km. The offshore-onshore seismic data complete and improve the information about the velocity structure beneath Cyprus and the deeper part of the offshore tomographic model. Thus, the wide-angle seismic data provide detailed insights into the 2-D geometry and velocity structures of the uplifted and overriding Cyprus-Anatolian plate.
Subsequent gravity modelling confirms and extends the crustal P-wave velocity model. The deeper part of the subducting plate is constrained by the gravity data and has a dip angle of ~ 28°. Finally, an integrated analysis of the geophysical and geological information allows a comprehensive interpretation of the crustal structure related to the collision process.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
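The recurrence idea behind the event detection can be illustrated with a toy implementation: two time points are "recurrent" when their values lie within a threshold ε of each other, and quantifiers such as the recurrence rate are read off the resulting binary matrix. This is a generic sketch of recurrence analysis under simplifying assumptions (a scalar, precisely dated record with a fixed threshold), not the probabilistic variant developed for uncertain proxy records in the dissertation.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps, else 0."""
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent off-diagonal pairs, a basic recurrence quantifier."""
    n = len(R)
    return (R.sum() - n) / (n * (n - 1))  # exclude the trivial diagonal

x = np.array([0.0, 0.1, 5.0])
R = recurrence_matrix(x, eps=0.5)
```

Dynamical events then show up as periods during which the recurrence quantifiers change significantly relative to a reference level.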
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of the influences between different climatic factors, as well as the limitations of trying to estimate such dependencies.
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. Data integration projects in particular suffer from poor initial data quality and as a consequence consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with DBMSs and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or to integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer can reorder operators, even across domains, to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. Evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be efficiently processed by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which results in better maintainability of the scripts.
We study the diffusion of a tracer particle which moves in continuous space through a lattice of immobile, non-inert obstacles of excluded volume. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of the partitioning of the tracer diffusion between trapping states, when bound to obstacles, and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding triggers transient anomalous diffusion. From the very narrow spread of recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
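Of the observables listed above, the non-Gaussianity parameter has a compact definition that can be sketched directly: in 1D, G(Δ) = ⟨δx⁴⟩ / (3⟨δx²⟩²) − 1 vanishes for Gaussian displacement statistics and becomes positive when, for example, trapping at obstacles produces heavy-tailed displacements. The example below uses arbitrary sample distributions as stand-ins, not the paper's simulation data.

```python
import numpy as np

def non_gaussianity_1d(displacements):
    """1D non-Gaussianity parameter G = <dx^4> / (3 <dx^2>^2) - 1.
    G = 0 for Gaussian displacements; G > 0 signals heavier tails (e.g. trapping)."""
    dx2 = np.mean(displacements ** 2)
    dx4 = np.mean(displacements ** 4)
    return dx4 / (3.0 * dx2 ** 2) - 1.0

rng = np.random.default_rng(3)
g_gauss = non_gaussianity_1d(rng.normal(size=200000))   # ~0 for Gaussian (Brownian-like) steps
g_heavy = non_gaussianity_1d(rng.laplace(size=200000))  # positive for heavier-tailed steps
```

In practice G is evaluated as a function of the lag time Δ, so that a transient trapping regime appears as a peak in G(Δ) that decays back towards zero at long times.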
An important contribution of the geosciences to the renewable energy production portfolio is the exploration and utilization of geothermal resources. For the development of a geothermal project at great depths, a detailed geological and geophysical exploration program is required in the first phase. With the help of active seismic methods, high-resolution images of the geothermal reservoir can be delivered. This allows potential transport routes for fluids to be identified as well as regions with high potential for heat extraction to be mapped, which indicates favorable conditions for geothermal exploitation. The presented work investigates the extent to which an improved characterization of geothermal reservoirs can be achieved with new methods of seismic data processing. The summation of traces (stacking) is a crucial step in the processing of seismic reflection data. The common-reflection-surface (CRS) stacking method can be applied as an alternative to the conventional normal moveout (NMO) or dip moveout (DMO) stack. The advantages of the CRS stack, besides an automatic determination of the stacking operator parameters, include adequate imaging of arbitrarily curved geological boundaries and a significant increase in signal-to-noise (S/N) ratio through stacking far more traces than used in a conventional stack. A major innovation shown in this work is that the quality of the signal attributes that characterize the seismic images can be significantly improved by this modified type of stacking in particular. Improved attribute analysis facilitates the interpretation of seismic images and plays a significant role in the characterization of reservoirs. Variations of lithological and petrophysical properties are reflected by fluctuations of specific signal attributes (e.g. frequency or amplitude characteristics).
Their further interpretation can provide a quality assessment of the geothermal reservoir with respect to the capacity of fluids within a hydrological system that can be extracted and utilized. The proposed methodological approach is demonstrated on the basis of two case studies. In the first example, I analyzed a series of 2D seismic profile sections through the Alberta sedimentary basin on the eastern edge of the Canadian Rocky Mountains. In the second application, a 3D seismic volume is characterized in the surroundings of a geothermal borehole located in the central part of the Polish basin. Both sites were investigated with the modified and improved stacking and attribute analyses. The results provide recommendations for the planning of future geothermal plants in both study areas.
It is generally agreed that stars typically form in open clusters and stellar associations, but little is known about the structure of the open cluster system. Do open clusters and stellar associations form in isolation, or do they prefer to form in groups and complexes? Open cluster groups and complexes could verify that star forming regions are larger than expected, which would explain the chemical homogeneity over large areas in the Galactic disk. They would also define an additional level in the hierarchy of star formation and could be used as tracers for the scales of fragmentation in giant molecular clouds. Furthermore, open cluster groups and complexes could affect Galactic dynamics and should be considered in investigations and simulations of dynamical processes, such as radial migration, disc heating, differential rotation, kinematic resonances, and spiral structure.
In the past decade there have been a few studies on open cluster pairs (de La Fuente Marcos & de La Fuente Marcos 2009a,b,c) and on open cluster groups and complexes (Piskunov et al. 2006). The former only considered spatial proximity for the identification of the pairs, while the latter also required the tangential velocities of the members to be similar. In this work I used the full set of 6D phase-space information to draw a more detailed picture of these structures. For this purpose I utilised the most homogeneous cluster catalogue available, namely the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005a,b), which contains parameters for 650 open clusters and compact associations, as well as for their uniformly selected members. Additional radial velocity (RV) and metallicity ([M/H]) information on the members was obtained from the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006; Kordopatis et al. 2013) for 110 and 81 clusters, respectively. The RAVE sample was cleaned considering quality parameters and flags provided by RAVE (Matijevič et al. 2012; Kordopatis et al. 2013). To ensure that only real members were included in the mean values, the cluster membership, as provided by Kharchenko et al. (2005a,b), was also considered for the stars cross-matched in RAVE.
6D phase-space information could be derived for 432 of the 650 COCD objects, and I used an adaptation of the Friends-of-Friends algorithm, as used in cosmology, to identify potential groupings. The vast majority of the 19 identified groupings were pairs, but I also found four groups of 4-5 members and one complex with 15 members. For the verification of the identified structures, I compared the results to a randomly selected subsample of the catalogue of the Milky Way global survey of Star Clusters (MWSC; Kharchenko et al. 2013), which recently became available and was used as a reference sample. Furthermore, I implemented Monte-Carlo simulations with randomised samples created from two distinct input distributions for the spatial and velocity parameters: on the one hand, a uniform distribution in the Galactic disc, and on the other hand, the COCD data distributions taken as representative of the whole open cluster population.
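The Friends-of-Friends idea can be sketched in a few lines: objects closer than a linking length are "friends", and groupings are the connected components of the resulting friendship graph. The toy version below links in plain Euclidean space only, whereas the thesis works in 6D phase space with separate spatial and velocity criteria; the function name and parameters are illustrative.

```python
import numpy as np
from collections import deque

def friends_of_friends(points, linking_length):
    """Group points into clusters: two points are 'friends' if their separation
    is below the linking length; groups are the connected components (via BFS)."""
    n = len(points)
    unvisited = set(range(n))
    groups = []
    while unvisited:
        seed = unvisited.pop()
        group, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            friends = [j for j in list(unvisited) if d[j] < linking_length]
            for j in friends:
                unvisited.remove(j)
                group.append(j)
                queue.append(j)
        groups.append(sorted(group))
    return groups

points = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 10.0], [10.4, 10.0]])
groups = friends_of_friends(points, linking_length=1.0)
```

Here the four points split into two pairs. In a phase-space adaptation one would replace the single distance test with a combined criterion, e.g. requiring both the spatial and the velocity separations to fall below their respective linking lengths.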
The results suggested that the majority of the identified pairs are rather chance alignments, but the groups and the complex seemed to be genuine. A comparison of my results with the pairs, groups and complexes proposed in the literature yielded a partial overlap, most likely because of selection effects and the different parameters considered. This is a further verification of the existence of such structures.
The characteristics of the groupings found favour the picture that members of an open cluster grouping originate from a common giant molecular cloud and formed in a single, though possibly sequential, star formation event. Moreover, the fact that the young open cluster population showed smaller spatial separations between nearest neighbours than the old cluster population indicates that the lifetime of open cluster groupings is most likely comparable to that of the Galactic open cluster population itself. Still, even among the old open clusters I could identify groupings, which suggests that the detected structures could in some cases be longer lived than one might think.
In this thesis I could only present a pilot study on structures in the Galactic open cluster population, since the data sample used was highly incomplete. For further investigations a far more complete sample would be required. One step in this direction would be to use data from large current surveys, like SDSS, RAVE, Gaia-ESO and VVV, as well as including results from studies on individual clusters. Later the sample can be completed by data from upcoming missions, like Gaia and 4MOST. Future studies using this more complete open cluster sample will reveal the effect of open cluster groupings on star formation theory and their significance for the kinematics, dynamics and evolution of the Milky Way, and thereby of spiral galaxies.
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase "Target of Rapamycin" (TOR) has emerged as a major signaling node that integrates the sensing of numerous growth signals into the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. The finding that Chlamydomonas reinhardtii is sensitive to rapamycin therefore establishes this unicellular green alga as a useful model system for investigating TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, in contrast to observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. To this end, a growth system allowing synchronous growth under largely unperturbed cultivation in a fermenter was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number, size distribution and starch content. Furthermore, we applied mass spectrometric analysis to profile primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% reduced cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong shifts in carbon and nitrogen partitioning towards rapid conversion into carbon and nitrogen stores through the accumulation of starch, triacylglycerol and arginine. Interestingly, after TOR inhibition the conversion of carbon into triacylglycerol appeared to occur faster than that into starch, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in that of starch biosynthesis.
This study presents, for the first time, a detailed picture of the dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.
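As a toy illustration of the looping probabilities underlying such an analysis, the sketch below estimates, by Monte Carlo, how often the two ends of an ideal (freely jointed) chain come within a capture radius. Crowders, chain stiffness, hydrodynamics and the contact-group attraction studied in the paper are all omitted; the chain lengths and capture radius are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def looping_probability(n_bonds, capture_radius, n_samples=20000):
    """Monte Carlo estimate of the end-to-end contact probability of an
    ideal freely jointed chain with unit bond length: the fraction of
    random conformations whose end-to-end distance is below the capture
    radius."""
    # draw random unit bond vectors for every sampled conformation
    v = rng.normal(size=(n_samples, n_bonds, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)
    end_to_end = np.linalg.norm(v.sum(axis=1), axis=1)
    return float(np.mean(end_to_end < capture_radius))

# Looping gets rarer for longer chains (ideal-chain scaling ~ N^{-3/2})
for n in (8, 16, 32):
    print(n, looping_probability(n, 1.0))
```

For the ideal chain the estimates follow the classical N^(-3/2) scaling of the return probability; crowding, as described above, modifies both this equilibrium probability and the looping kinetics.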
Animal and human faeces from agriculture and households contain numerous obligate and opportunistic pathogenic microorganisms, whose concentration varies, among other things, with the health status of the group in question. Besides pathogens, however, faeces also contain essential plant nutrients (276) and have served as fertiliser for field crops for millennia (63). Careless use of pathogen-laden faecal fertiliser, though, increases the risk of infecting humans and animals. This risk grows with the global interconnection of agriculture, e.g. through the import of contaminated feed and food (29).
This thesis presents the lactic acid fermentation of cattle slurry and sewage sludge as an alternative hygienisation method to pasteurisation in biogas plants and to conventional composting.
During fermentation, the Gram-negative bacterial flora as well as enterococci, moulds and yeasts drop below the detection limit of 3 log10 CFU/g, while the concentration of Lactobacillaceae increases a thousandfold. It is further shown that pathogenic bacteria such as Staphylococcus aureus, Salmonella spp., Listeria monocytogenes, EHEC O:157 and vegetative Clostridium perfringens cells are inactivated within 3 days. ECBO viruses and roundworm eggs are inactivated within 7 and 56 days, respectively. To clarify the cause of the observed hygienisation, the fermented material was analysed for volatile fatty acids and pH changes. The measured values turned out not to be the sole cause of pathogen die-off; an additional bactericidal effect through the presumed formation of bacteriocins is considered. The parasiticidal effect is attributed to the physical conditions of the fermentation.
Methodologically, the analyses rest on numerous classical culture-based techniques, such as viable cell counts. In addition, MALDI-TOF mass spectrometry and classical PCR combined with gradient gel electrophoresis are applied to describe the culturable bacterial flora and to sample the non-culturable flora, respectively.
Beyond hygienisation, the method's suitability for agricultural use is also considered. This is reflected in particular in the composition of the material to be fermented, which was optimised for increased humus accumulation in arable soil. Furthermore, the mass-loss balance during lactic acid fermentation is compared with those of composting and of processing in a biogas plant and judged favourable: at a total of 2.45 % it lies well below the existing alternatives (73, 138, 458). Smaller losses of organic material during hygienisation yield a larger usable amount of fertiliser which, owing to its organic origin, can help increase the humus content of arable soil (56, 132).
Modern motor vehicles are equipped with a multitude of sensors required for smooth technical operation. These include vehicle-specific sensors (e.g. engine speed and vehicle speed) as well as environment-specific sensors (e.g. air pressure and ambient temperature). Growing technical interconnectedness makes it possible to use these data from the vehicle electronics outside the vehicle for a wide range of purposes.
This thesis aims to make this new kind of mass data usable as geoinformation in the sense of the "Extended Floating Car Data" (XFCD) concept and to apply it to spatio-temporal visualisations (for visual analysis). The perspective of environmental and traffic monitoring is considered in particular, with requirements and potentials examined through expert interviews. The questions are which data the vehicle electronics can supply and how these can be captured, processed, visualised and made publicly available in a manner as automated as possible. Besides the theoretical and technical foundations of data acquisition and use, the focus lies on methods of cartographic visualisation. The thesis also investigates whether a technical implementation is possible using open source software exclusively. The goal is a two-part result: a visualisation for an exemplary application scenario, and a prototypical implementation spanning data acquisition in the vehicle via the legally mandated on-board diagnostics (OBD) interface and a smartphone-based workflow through to web-based visualisation.
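To illustrate the on-board diagnostics data source, the sketch below decodes two standard OBD-II mode-01 responses, engine RPM (PID 0x0C) and vehicle speed (PID 0x0D), using the well-known SAE J1979 scaling formulas. The raw response bytes are invented for the example; this is not code from the thesis.

```python
def decode_obd(pid, data):
    """Decode selected OBD-II mode-01 responses (SAE J1979 scaling).

    pid:  requested parameter ID
    data: list of raw data bytes from the response
    """
    if pid == 0x0C:
        # engine RPM: ((256 * A) + B) / 4
        return (256 * data[0] + data[1]) / 4.0
    if pid == 0x0D:
        # vehicle speed: A, directly in km/h
        return float(data[0])
    raise ValueError("PID not handled in this sketch")

print(decode_obd(0x0C, [0x1A, 0xF8]))  # → 1726.0 rpm
print(decode_obd(0x0D, [0x55]))        # → 85.0 km/h
```

A smartphone-based workflow as described above would read such byte frames from an OBD adapter, decode them this way, attach a GPS position and timestamp, and forward the result for web-based visualisation.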
Teaching in the format of research not only has the potential to tie in with the traditional Humboldtian ideal of uniting research and learning, thereby offering an alternative to the much-lamented "school-like" character of the Bologna era; it also supports students' professionalisation and subject-specific identity formation. The contribution is divided into three sections, each with its own questions. The first asks about a fundamental goal of higher education and describes this goal as science-based professionalism, which necessarily requires research competence; teaching in the format of research is presented as a suitable path to this goal. The second section works out the similarities and differences between research processes and learning processes, which can be regarded as the learning-theoretical justification for teaching in the format of research. Finally, different types of teaching in the format of research are presented, types that differ in the intended scope of research and the effort required.
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant in the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search and reduce their search through formal channels. Due to the higher productivity of search, unemployed individuals with a larger network are also expected to have a higher reservation wage than those with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures, which are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment, and the short- and medium-term effects on active labor market program (ALMP) participation. It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced.
Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed individuals who find a job through vacancy information, we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive measures of active labor market policy are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training are found to increase the probability of attending education and training significantly, whereas all other programs have either no effect or a negative effect on training participation. Effect heterogeneity with respect to pre-treatment education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels.
The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive, as they do not require parametric assumptions, their practical implementation can become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and practical guidelines for future applied research. In contrast to previous publications, this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, with practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
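The inverse probability weighting idea can be sketched on synthetic data: estimate the propensity score, then reweight treated and untreated outcomes by the inverse of their (estimated) treatment probability. The data-generating process, the single confounder, and the simple Newton-type logistic fit below are illustrative assumptions, not the specifications used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: one confounder x drives both treatment take-up and outcome
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-0.8 * x))            # true propensity score
d = rng.binomial(1, p_true)                    # treatment indicator
y = 2.0 * d + 1.5 * x + rng.normal(size=n)     # true treatment effect = 2.0

def logistic_fit(X, d, iters=25):
    """Fit a logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (d - p))
    return beta

# Step 1: estimate the propensity score from the confounder
X = np.column_stack([np.ones(n), x])
ps = 1 / (1 + np.exp(-X @ logistic_fit(X, d)))

# Step 2: IPW estimator of the average treatment effect
ate = np.mean(d * y / ps) - np.mean((1 - d) * y / (1 - ps))
naive = y[d == 1].mean() - y[d == 0].mean()
print(round(naive, 2), round(ate, 2))  # naive estimate is biased upward
```

With a correctly specified propensity model the reweighting removes the confounding bias visible in the naive mean comparison; the sensitivity to extreme weights (propensities near 0 or 1) is one of the data features Chapter 4 discusses.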
The following country study was written during an extended working stay in Cameroon within the framework of international cooperation. With it we try to process our personal impressions and daily observations in a country where all hope apparently rests on the ageing state president Paul Biya leaving the political stage, thereby bringing an autocratic, corrupt regime to its end. This hope seems tied to the expectation of Francis Fukuyama, who in 1992, after the collapse of the Soviet empire, declared the "end of history", convinced that the democratic model of society would soon take hold everywhere. As is well known, this expectation proved too optimistic. Our study tries to show why, despite years of effort by Western donors to strengthen civil society and decentralise state functions, the hope for a more just society is unlikely to be fulfilled in Cameroon either. An "end of history" cannot be predicted for the period after Paul Biya.
This dissertation analyses the application and effects of core elements of New Public Management (NPM), using the citizen services of six European capitals as examples: Berlin, Brussels, Copenhagen, Madrid, Prague and Warsaw. The focus is on comparing capitals of the Central and Eastern European (CEE) states with capitals of old EU member states. The following research hypothesis is examined: in the wake of the fundamental social and political upheavals of the 1990s, the administrations in the capitals of the EU's eastern member states introduced considerably more core elements of NPM when rebuilding their public administrations; through the consistent construction of customer-oriented, modern administrations and the strict application of NPM core elements, the citizen services in the capitals of eastern EU member states work more efficiently and effectively than comparable citizen services in the capitals of western EU member states. To test this hypothesis, the cities are first assigned to their respective legal and administrative traditions (continental European German, Napoleonic and Scandinavian) and categorised with respect to their starting position for building a modern administration (Western European administration, reunification administration and transformation administration). The institutional preconditions are then examined, covering a descriptive account of each city's municipal and administrative history, the organisational structures of the citizen services, the application of NPM instruments, and the internal and external perspectives of NPM. It is established whether, and in what form, the citizen services of the compared cities apply the core elements of NPM.
The cities are then compared with respect to their application of the core elements, focusing on the face-to-face service channel and customer orientation. The following part of the dissertation deals with the output of the citizen services, which is examined and compared in terms of operational results, particularly service volumes and the productivity of the output. The results of administrative processes are also examined, especially with regard to customer orientation. For this purpose, an efficiency comparison of the citizen services in the compared cities is carried out using relative efficiency measurement and the Free Disposal Hull (FDH) method following Bouckaert. A concentration on popular services from the citizen services' portfolio is necessary; the comparable services of residential registration, identity cards, driving licences and passports are therefore used, with full-time equivalents included to compute the efficiency of the citizen services. Data from 2009 to 2011 are used, partly drawn from internal administrative databases. An attempt is then made to incorporate the outcome into the efficiency analysis; in this context, the applicability of various extended best-practice methods as well as extensions of relative efficiency measurement and the FDH method are examined. The overall conclusion of the dissertation is that the citizen services in the examined CEE capitals do not apply more core elements of NPM than the capitals of the western EU member states. On the contrary, Prague applies markedly fewer NPM instruments than the other cities, while Warsaw applies many NPM instruments but is always surpassed by a western European city.
The hypothesis that the citizen services in the CEE capitals work more efficiently than comparable citizen services in the capitals of western EU member states is likewise refuted by the dissertation. The opposite is the case, as Prague and Warsaw show merely average or poor performances in the efficiency comparison. The research results thus refute the stated hypothesis; only Warsaw's good showing in the application analysis lends limited support to part of it.
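The Free Disposal Hull idea behind the efficiency comparison can be sketched in its simplest single-input, single-output form: a unit's input efficiency is the smallest input observed among units producing at least as much output, relative to the unit's own input. The office figures below are invented for illustration, not data from the study, which uses several service types and full-time equivalents.

```python
def fdh_input_efficiency(inputs, outputs):
    """Single-input, single-output FDH input-efficiency scores.

    A unit is dominated if some other unit produces at least as much
    output with less input; its score is the dominating unit's input
    divided by its own.  Undominated units score 1.0.
    """
    scores = []
    for xi, yi in zip(inputs, outputs):
        # inputs of all units producing at least as much output
        feasible = [xj for xj, yj in zip(inputs, outputs) if yj >= yi]
        scores.append(min(feasible) / xi)
    return scores

# Hypothetical offices: full-time equivalents (input) vs. cases handled (output)
fte   = [10.0, 8.0, 12.0, 8.0]
cases = [1000, 900, 1000, 600]
print(fdh_input_efficiency(fte, cases))  # → [1.0, 1.0, 0.8333..., 1.0]
```

Unlike parametric frontiers, FDH requires no functional-form assumption and no convexity of the production set, which is what makes it attractive for small samples of administrative units.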
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptual and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model and the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for its improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access to Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is suited both to processing and to provisioning the corresponding information.
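As a toy stand-in for structural query comparison (the thesis' actual algorithms are not reproduced here), one can extract the triple patterns of two SPARQL queries textually, canonicalise variable names, and compare the pattern sets with a Jaccard index:

```python
import re

def triple_patterns(query):
    """Crude textual extraction of triple patterns from a SPARQL query:
    take the WHERE block, split on '.', and replace variable names with
    a placeholder so that structurally equal patterns compare equal."""
    body = re.search(r"\{(.*)\}", query, re.S).group(1)
    patterns = [p.strip() for p in body.split(".") if p.strip()]
    return {re.sub(r"\?\w+", "?v", p) for p in patterns}

def similarity(q1, q2):
    """Jaccard index of the canonicalised triple-pattern sets."""
    a, b = triple_patterns(q1), triple_patterns(q2)
    return len(a & b) / len(a | b)

q1 = "SELECT ?s WHERE { ?s a <Person> . ?s <name> ?n }"
q2 = "SELECT ?x WHERE { ?x a <Person> }"
print(similarity(q1, q2))  # → 0.5
```

A real implementation would of course parse the queries into their algebra rather than split text, but the same set-based similarity idea can then drive clustering of requests and prefetching of results shared by structurally related queries.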
Anorexia nervosa and unipolar affective disorders are common and serious child and adolescent psychiatric conditions whose pathogenesis has not yet been fully deciphered. Several studies show severe impairments of cognitive function in adult patients, whereas adolescent patients appear to show only milder cognitive deficits. The prevalence of anorexia nervosa and unipolar affective disorders rises markedly with the onset of adolescence. It is reasonable to assume that cognitive dysfunctions already emerging at this age could substantially impair the further course of illness into adulthood, treatment outcomes and prognosis, and that a higher risk of chronification must be expected. The present work therefore examined cognitive functions in adolescent female patients with anorexia nervosa and in patients with unipolar affective disorders. Cognitive functions in patients with anorexia nervosa were assessed before and after weight gain, and underlying biological mechanisms were investigated. In addition, the specificity of cognitive dysfunctions for the two conditions was examined, and sex-related differences were explored in patients with unipolar affective disorders. In total, 47 female patients with anorexia nervosa (mean age 16.3 ± 1.6 years), 39 patients with unipolar affective disorders (mean age 15.5 ± 1.3 years) and 78 control participants (mean age 16.5 ± 1.3 years) entered the study. All participants completed a neuropsychological test battery comprising measures of cognitive flexibility and of visual and psychomotor processing speed.
In addition to an intelligence screening, the severity of depressive symptoms and general psychological distress were recorded. The results suggest that adolescent patients with anorexia nervosa show only mild impairments of cognitive function, both in the acutely underweight state and after weight gain. In the acutely underweight state, clear associations emerged between the appetite-regulating peptide agouti-related protein and cognitive flexibility, but not between agouti-related protein and visual or psychomotor processing speed. When comparing anorexia nervosa and unipolar affective disorders, membership of the anorexia nervosa patient group predicted a risk of cognitive dysfunction. Adolescent patients with unipolar affective disorders showed a tendency towards weaker performance than healthy controls only in psychomotor processing speed, while a general sex-related advantage for female participants emerged in visual and psychomotor processing speed. These findings underline the need to assess cognitive functions in adolescent patients with anorexia nervosa and unipolar affective disorders in routine clinical diagnostics. Patients could benefit from specific therapy programmes that alleviate, or preventively address, impairments of cognitive function.
The German education system is still far from implementing inclusion comprehensively in everyday school life and classroom teaching, an obligation Germany took on by acceding to the UN Convention on the Rights of Persons with Disabilities. Realising inclusive school development is proving difficult: schools that are successful at inclusion cannot absorb the necessary demand and only partially succeed in passing on their knowledge and practical experience of inclusion. At the same time, everyday school life shows the need to dismantle barriers and to improve the learning situation. Debates about recognising heterogeneous conditions, and thus about implementing an inclusive pedagogical approach, must not be conducted only in theory. This contribution therefore presents concrete options for foreign language teaching and good-practice examples. Even though more substantial financial resources would undoubtedly be a precondition for implementing inclusion, it becomes apparent that adequate action and corresponding willingness on the part of administrators, head teachers, teachers and pupils can already achieve a great deal. Problems and challenges that can arise in inclusive practice are also identified.
Planetary research is often user-based and requires considerable skill, time, and effort. Unfortunately, self-defined boundary conditions, definitions, and rules are often not documented or are hard to comprehend due to the complexity of the research. This makes comparisons with other studies, or extensions of existing research, complicated. Comparisons are often distorted, because results rely on different, poorly defined, or even unknown boundary conditions. The purpose of this research is to develop a standardized analysis method for planetary surfaces that is adaptable to several research topics and provides a consistent quality of results. This also includes achieving reliable and comparable results and reducing the time and effort of conducting such studies. The standardized analysis method is provided by automated analysis tools that focus on statistical parameters. Specific key parameters and boundary conditions are defined for the tool application. The analysis relies on a database in which all key parameters are stored. These databases can be easily updated and adapted to various research questions, which increases the flexibility, reproducibility, and comparability of the research. However, the quality of the database and the reliability of the definitions directly influence the results. To ensure high-quality results, the rules and definitions need to be well defined and based on previously conducted case studies. The tools then produce parameters obtained by defined geostatistical techniques (measurements, calculations, classifications). The idea of an automated statistical analysis is tested to probe both the benefits and the potential problems of this method. In this study, I adapt automated tools for floor-fractured craters (FFCs) on Mars. These impact craters show a variety of surface features, occur in different Martian environments, and have different fracturing origins.
They provide a complex morphological and geological field of application. 433 FFCs are classified by the analysis tools according to their fracturing process. Spatial data, environmental context, and crater interior data are analyzed to distinguish between the processes involved in floor fracturing. Related geologic processes, such as glacial and fluvial activity, are too similar to be classified separately by the automated tools; glacial and fluvial fracturing processes are therefore merged for the classification. The automated tools provide probability values for each origin model. To guarantee the quality and reliability of the results, the classification tools need to achieve an origin probability above 50 %. This analysis method shows that 15 % of the FFCs are fractured by intrusive volcanism, 20 % by tectonic activity, and 43 % by water- and ice-related processes. In total, 75 % of the FFCs are assigned to an origin type. The remaining cases can be explained by a combination of origin models, by superposition or erosion of key parameters, or by an unknown fracturing model; these features have to be analyzed manually in detail. Another possibility would be to improve the key parameters and classification rules. This research shows that it is possible to conduct an automated statistical analysis of morphologic and geologic features based on analysis tools. The analysis tools provide additional information to the user and are therefore considered assistance systems.
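The rule-and-probability scheme described above can be sketched as a toy classifier: each origin model is scored by weighted key-parameter rules, and the best-scoring origin is accepted only above a threshold. The feature names, rules, and weights below are hypothetical; only the 50 % acceptance threshold comes from the text.

```python
def classify(features, rules, threshold=0.5):
    """Score each origin model by the fraction of weighted rules its
    feature values satisfy; accept the best origin only if its score
    exceeds the threshold, otherwise report 'unclassified'."""
    scores = {}
    for origin, checks in rules.items():
        passed = sum(w for test, w in checks if test(features))
        scores[origin] = passed / sum(w for _, w in checks)
    best = max(scores, key=scores.get)
    return (best if scores[best] > threshold else "unclassified"), scores

# Hypothetical key-parameter rules, NOT the thesis' actual parameters
rules = {
    "volcanic": [(lambda f: f["dist_to_volcano_km"] < 500, 2.0),
                 (lambda f: f["radial_fractures"], 1.0)],
    "tectonic": [(lambda f: f["on_fault_zone"], 2.0),
                 (lambda f: not f["radial_fractures"], 1.0)],
}

crater = {"dist_to_volcano_km": 120, "radial_fractures": True, "on_fault_zone": False}
label, probs = classify(crater, rules)
print(label, probs)  # → volcanic {'volcanic': 1.0, 'tectonic': 0.0}
```

Keeping the rules in a database-like structure, as the thesis describes, means the classification can be re-run unchanged whenever key parameters or weights are refined.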
The H.E.S.S. array is a third-generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very high energy (VHE) gamma-rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high energy gamma-ray radiation. During its most recent flare, detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights; the results of these observations are discussed in detail in the course of this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but participated in the data taking for one night. To reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used for the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, developed by de Naurois and Rolland, is the template-based model analysis: it compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained from a semi-analytical model. To find the best-fitting image, and therefore the parameters that best describe the air shower, a pixel-wise log-likelihood fit is performed. The adaptation of this advanced shower reconstruction technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance on Monte Carlo simulations, and its application to real data are described.
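The pixel-wise likelihood idea can be illustrated with a heavily simplified toy: compare a noisy "camera image" against shifted copies of a model template and keep the shift that maximises the summed per-pixel Gaussian log-likelihood. A Gaussian blob stands in for the semi-analytical shower model, a grid search for the real minimiser, and Gaussian pixel noise for the actual camera statistics; none of these are the thesis' implementation choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def template(shape, x0, y0, width=3.0):
    """Toy 'shower template': a 2D Gaussian blob at (x0, y0)."""
    y, x = np.indices(shape)
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * width ** 2))

shape, sigma_noise = (32, 32), 0.1
truth = template(shape, 20.0, 12.0)                      # injected position
image = truth + rng.normal(scale=sigma_noise, size=shape)  # noisy camera image

def log_likelihood(img, model, sigma):
    """Summed per-pixel Gaussian log-likelihood (up to a constant)."""
    return -0.5 * np.sum((img - model) ** 2) / sigma ** 2

# Grid search over candidate positions instead of a real fit
best = max(((x0, y0) for x0 in range(32) for y0 in range(32)),
           key=lambda p: log_likelihood(image, template(shape, *p), sigma_noise))
print(best)  # recovers the injected position (20, 12)
```

In the actual model analysis the per-pixel statistics are non-Gaussian (photoelectron counting plus night-sky background), the template depends on shower energy, direction, and impact point, and the likelihood is maximised jointly over all telescopes of a stereo event.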
Biological materials have always been used by humans because of their remarkable properties. This is all the more remarkable since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces, and to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes are a striking example of such an accomplishment, where a very simple living organism controls the properties of inorganic material via organic molecules at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed biological and bio-inspired research programs based on these bacteria. My research, at the interface between chemistry, materials science, physics, and biology, focuses on how biological systems synthesize, organize, and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation, and transport. In this thesis, I thus first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched from magnetic to non-magnetic states. This enabled us to study the dynamics of magnetosome and magnetosome-chain formation. We found that the magnetosomes nucleate within minutes, whereas the chains assemble within hours. Magnetosome formation requires iron uptake as ferrous or ferric ions. The transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow to their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins, whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We have further shown by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction within the magnetosome chain. We then show how the magnetic properties of the nascent magnetosomes influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach which suggests that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic syntheses of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles proceed via the aggregation of iron colloids termed primary particles, as imaged by cryo high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives, such as MamJ, retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can “glue” magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We can use this theoretical framework to design experimental conditions for sorting the propellers by size, and we confirm this prediction experimentally. Thereby, we could image propellers with sizes down to 290 nm in their longer dimension, much smaller than anything achieved so far.
Ausências Brasil
(2014)
Murdered by the military dictatorship and disappeared without a trace – this exhibition draws on the family photo albums of Brazilians who fell victim to the systematic repression, torture, and abduction of the Brazilian military dictatorship (1964–1985): workers, urban guerrillas, students, academics, entire families.
A web of new bilateral trade agreements is currently being spun around our planet. The driving forces are the old economic powers, the EU and the USA, but new actors in the world economy of the 21st century, such as China and India, are involved as well. Such agreements exert strong pressure on competing economies in the respective regions. The EU–Korea and Korea–USA agreements, for instance, gave South Korean electronics and automobile manufacturers such a large cost advantage that the Japanese government was forced to the negotiating table with the EU (bilaterally) and with the USA (plurilaterally, within the Trans-Pacific Partnership, TPP).
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such tedious work, the literature recommends keeping away from unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
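The core idea behind such built-in recovery support — every change yields an implicit version, so any earlier state can be restored without explicit commits — can be sketched as follows. The class and method names are illustrative only, not CoExist's actual API.

```python
# Hedged sketch of implicit, fine-grained versioning for cheap recovery.
class ImplicitHistory:
    def __init__(self, initial_source):
        self._versions = [initial_source]

    def change(self, new_source):
        # Every edit is recorded automatically, like an implicit commit.
        self._versions.append(new_source)

    def recover(self, steps_back):
        # Recovery becomes a cheap lookup instead of tedious manual undoing.
        return self._versions[-1 - steps_back]

h = ImplicitHistory("def f(): return 1")
h.change("def f(): return 2")
h.change("def f(): return 3  # broken experiment")
print(h.recover(1))  # -> def f(): return 2
```

A production tool would of course store structured change objects rather than whole source strings, but the recovery operation stays a constant-time lookup either way.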
Wood is used for many applications because of its excellent mechanical properties, its relative abundance, and because it is a renewable resource. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence, and/or durability, thus increasing the potential service life of wood materials. However, current treatments are limited because it is difficult to introduce and fix such modifications deep inside the tissue and cell wall. Within the scope of this thesis, novel chemical modification methods for wood cell walls were developed to improve both the dimensional stability and the water repellence of wood material. These methods were partly inspired by heartwood formation in living trees, a process that, for some species, results in the insertion of hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell-wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment, to obtain “artificial heartwood”. Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellency, and increased hardness. This approach is quite different from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied. It is known that adhesion between hydrophobic polymers and hydrophilic cell wall components is weak.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and polymerized to form polystyrene in the cell walls, which increased the dimensional stability of the bulk wood material and considerably reduced the water uptake of the cell walls compared to controls. In the third part of the work, the grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), in the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. The results indicated that polycaprolactone attached to the cell walls and caused permanent swelling of the cell walls of up to 5 %. The dimensional stability of the bulk wood material increased by 40 % and water absorption was reduced by more than 35 %. A fully biodegradable and hydrophobized wood material was obtained with this method, which mitigates the disposal problem of modified wood materials and has improved properties that extend the material's service life. Starting from a bio-inspired approach, which showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and supported this with in-situ styrene and ɛ-caprolactone polymerization in the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
Störungen des Hörvermögens
(2014)
Hearing impairments are common in humans and can be congenital or acquired. A distinction is made between conductive hearing loss, in which sound reception and sound transmission into the inner ear are disturbed by foreign bodies, infections, injuries, middle-ear ventilation problems, and malformations, and sensorineural hearing loss, in which the sensory region of the inner ear, the neural conduction to the brainstem, or central processing in the brain is affected. Besides hereditary factors, possible causes include infections, injuries, noise, toxic substances, birth complications, metabolic disorders, and tumors. If hearing impairments remain untreated for a long time, children develop – depending on the severity – disturbances of brain, language, and emotional development, and all affected persons also experience communication problems and difficulties in participating in the life of the social community. Conductive hearing loss can be treated and improved medically, whereas a causal medical treatment of sensorineural hearing loss is currently not possible. In these cases, provision with hearing systems is required. Depending again on the extent of the hearing loss, these include in-the-ear (ITE) and behind-the-ear (BTE) hearing aids, implantable hearing aids, cochlear implants, and brainstem implants. For hearing loss caused by occupational noise, removal from the noisy occupation is a prerequisite for preventing further deterioration. Unilateral sensorineural hearing loss occupies a special position: when hearing in the other ear is normal, it attracts less attention, yet it can cause problems for those affected in daily life. Its recognition and consideration are important, for example at school, in the workplace, and in road traffic.
Under the German Social Code, people with a permanent hearing disability are entitled to assistance and benefits that partly compensate for the disadvantages caused by the hearing impairment.
Der Weg zum neuen Hören
(2014)
This thesis investigates nonlinear coupling mechanisms of acoustic oscillators that can lead to synchronization. Building on the questions raised in previous work, theoretical and experimental studies as well as numerical simulations are used to identify the elements of sound generation in the organ pipe and the mechanisms of mutual interaction between organ pipes. From this, a nonlinearly coupled model of self-excited oscillators, based entirely on the fundamental principles of aeroacoustics and fluid dynamics, is developed for the first time to describe the behaviour of two interacting organ pipes. The model calculations are compared with the experimental findings. It turns out that the developed oscillator model describes sound generation and the coupling mechanisms of organ pipes correctly in large part. In particular, it clarifies the cause of the nonlinear relationship between coupling strength and synchronization of the coupled two-pipe system, which manifests itself in a nonlinear shape of the Arnold tongue. With these insights, the influence of the room on sound generation in organ pipes is considered. For this purpose, numerical simulations of the interaction of an organ pipe with various room geometries – e.g. plane, convex, concave, and toothed geometries – are examined exemplarily. The influence of swell boxes on the sound generation and timbre of the organ pipe is also studied. In further, novel synchronization experiments with identically tuned organ pipes as well as with mixtures, synchronization is investigated for various horizontal and vertical pipe spacings in the plane of sound radiation.
The spatially isotropic discontinuities in the oscillation behaviour of the coupled pipe systems, observed here for the first time, point to distance-dependent switches between anti-phase and in-phase synchronization regimes. Finally, the possibility is documented of realistically reproducing the phenomenon of synchronization of two organ pipes by numerical simulation, i.e. by treating the compressible Navier-Stokes equations with appropriate boundary and initial conditions. This, too, is a novelty.
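The mutual locking of two detuned self-excited oscillators can be illustrated with a generic phase-oscillator sketch. This is not the thesis's aeroacoustic model — it is the standard Kuramoto-type toy system, with illustrative numbers, showing only the locking phenomenon itself.

```python
import math

# Two mutually coupled phase oscillators with natural frequencies w1, w2.
def phase_difference(w1, w2, coupling, dt=0.001, steps=20000):
    """Integrate the coupled system with the Euler method and return the
    final phase difference p2 - p1, wrapped to [0, 2*pi)."""
    p1 = p2 = 0.0
    for _ in range(steps):
        d1 = w1 + coupling * math.sin(p2 - p1)
        d2 = w2 + coupling * math.sin(p1 - p2)
        p1 += dt * d1
        p2 += dt * d2
    return (p2 - p1) % (2 * math.pi)

# Two slightly detuned "pipes" (440 Hz vs 441 Hz): with sufficiently strong
# coupling they lock to a constant phase difference of about 0.32 rad,
# i.e. they synchronize in frequency.
locked = phase_difference(2 * math.pi * 440.0, 2 * math.pi * 441.0, coupling=10.0)
```

The phase difference obeys dφ/dt = Δω − 2K sin φ, so locking requires 2K ≥ Δω; below that threshold the phases drift — the boundary of this region is the Arnold tongue.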
In an attitude study, this Master's thesis investigated how attitudes towards linguistic varieties and towards the perceived ethnic origin of speakers influence the grading of school essays. Following the debate on language ideologies, attitudes towards the linguistic varieties Kiezdeutsch and dominant German were compared, as were, building on studies on the perception of social information about speakers, attitudes towards Turkish-marked and German-marked first names. 157 teacher-training students at the University of Potsdam were each presented with a fictitious school essay containing the respective attitude objects, linguistic variety and ethnically marked first name. By comparing the individual grading of the essays, the study examined which differences could be found in the school context in the assessment of, and thus the attitude towards, particular speakers and their language use. The study found that Kiezdeutsch was sanctioned more strongly in the fictitious school essays than dominant German. This effect was amplified when the essay supposedly came from a writer with a Turkish-marked first name. The results suggest that the assessment of pupils depends on a notion of how close the pupil in question stands to the linguistic and social norm.
Ken Loach has been an important part of the British film scene for more than five decades. His work has long since gained international recognition and has received many prestigious awards. Some of his films were even quite successful at the box office, yet many people have still never heard of him. That is regrettable, for Loach is undoubtedly one of the greats of his craft. This thesis aims to show how his films differ from the works of other directors and why they are so valuable. Loach's career can be roughly divided into three phases, which are described in the first part of the thesis. Three examples were then selected to illustrate Loach's working method and the effect it achieves. One chapter each is devoted, in chronological order, to the films Kes (1969), Riff-Raff (1991), and My Name Is Joe (1998), so that a development across Loach's career can also be traced. The content and background of each film are first briefly outlined, before important aspects of Loach's work are discussed using these examples.
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives. A small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated fairly thoroughly, but certain theoretical deficiencies remain. Our aim is to present a general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to establish sharp two-sided a priori estimates for the operators of the boundary value problems which are uniform in the small parameter. Such estimates fail to hold in the function spaces used in classical elliptic theory. To circumvent this limitation we exploit norms depending on the small parameter for functions defined on a bounded domain. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic technique can be carried out correctly in these norms. The obtained results also allow one to extend the norms to compact manifolds with boundary. We complete this part of the investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates. In the second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. First, we want to compare the asymptotic patterns of the obtained series with expansions for similar differential problems; we therefore investigate the heat equation in a bounded domain with a small parameter multiplying the time derivative.
In this case the characteristics touch the boundary at a finite number of points. It is known that, in general, the solutions are not regular in a neighbourhood of such points. We moreover allow the boundary at such points to be non-smooth, with cuspidal singularities. We find a formal asymptotic expansion and show that, when a set of parameters passes through a threshold value, the expansions cease to be asymptotic. The last part of the work is devoted to the general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution we bring in the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes admit a priori estimates of elliptic type. As a further application we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
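A concrete instance of such parameter-dependent norms is the standard family used in singular perturbation theory; the thesis works with its own variant, so this is only an illustration of the general shape:

```latex
% A standard family of small-parameter Sobolev norms on a bounded domain
% \Omega: derivatives are weighted by powers of the parameter, so the norm
% degenerates to the L^2 norm as \varepsilon \to 0.
\| u \|_{H^{s}_{\varepsilon}(\Omega)}
  = \Big( \sum_{|\alpha| \le s} \varepsilon^{2|\alpha|}
      \, \| \partial^{\alpha} u \|_{L^{2}(\Omega)}^{2} \Big)^{1/2},
\qquad 0 < \varepsilon \le 1 .
```

Two-sided a priori estimates uniform in ε then take the form c‖u‖ ≤ ‖Au‖ ≤ C‖u‖ in such scales, with constants independent of ε.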
This thesis deals with shareholding management and the associated steering of public enterprises at the municipal level. The starting point of the study is the insight that a municipality cannot, and should not, exercise absolute control over its public enterprises. Instead, it is assumed that efficient steering focuses on relevant topics, areas, and activities of the public enterprises. Since the steering of public enterprises is difficult to study because of the multitude of actors involved, this analysis centres on the "Organisationseinheit Beteiligungsmanagement", the organizational unit set up specifically for municipal shareholding management. The research question is: Which factors explain the steering focus of such an organizational unit? In an explorative approach to the research question, four perspectives are derived from the literature of various research fields, in particular the agencification literature: a structural, a task-specific, a cultural-historical, and an environmental perspective. With the help of these perspectives, both administration-centred and enterprise-centred factors are identified, and their influence on the choice of steering focus is examined. The result of the explorative comparative case study of nine municipal shareholding-management units shows that the factors examined explain either an intensification or relocation of the steering focus, or its diversification. Diversification means that a multitude of different focuses is taken into account.
Indonesia is one of the world's leading countries in the use of geothermal energy. Its geothermal energy sources are essentially tied to the active volcanism caused by the processes at the Indonesian subduction zone. In addition, geotectonic structures such as the Sumatran Fault are important as factors enhancing the geothermal potential. Geophysical exploration of Indonesia's geothermal resources has so far concentrated mainly on magnetotellurics, whereas passive seismology has been used exclusively for monitoring geothermal plants already in operation. Recent studies, e.g. in Iceland and the USA, have shown, however, that seismological methods can provide important information on physical properties, on the stress field, and on possible fluid and heat transport paths as early as the exploration phase. In this doctoral thesis, various modern methods of passive seismology are used to explore, as an example, a new area in northern Sumatra (Indonesia) designated by the Indonesian government for future geothermal energy production. The specific goals of the investigations comprised (1) deriving 3D structural models of the P- and S-wave velocities (parameters Vp and Vs), (2) determining the absorption properties (parameter Qp), and (3) mapping and characterizing fault systems on the basis of the seismicity distribution and the focal mechanisms. For this purpose, together with colleagues, I set up a seismological network in Tarutung (Sumatra) and operated it over a period of 10 months (May 2011 – February 2012). In total, 42 stations (each equipped with an EDL data logger and a three-component 1 Hz seismometer) were distributed over an area of about 35 x 35 km.
The network recorded 2568 local earthquakes over the entire period. The integrated consideration of the results from the various sub-studies (tomography, earthquake distribution) allows new insights into the general geological structure as well as a delineation of areas with increased geothermal potential. The tomographic Vp model makes it possible to determine the geometry of sedimentary basins along the Sumatran Fault. Of particular interest for geothermal energy is the area northwest of the Tarutung basin. I interpreted the anomalies imaged there (elevated Vp/Vs, low Qp) as possible ascent paths of warm fluids. The apparently asymmetric distribution of the anomalies is discussed in connection with the seismicity distribution, the geometry of the earthquake rupture planes, and structural-geological models. This provides essential information for the planning of a future geothermal plant.
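A standard first step in such local-earthquake studies is estimating the bulk Vp/Vs ratio from arrival times via a Wadati diagram: the S-P time grows linearly with the P travel time, with slope Vp/Vs − 1. This sketch is only an illustration of that textbook relation, not necessarily the exact procedure used in the thesis, and all numbers are synthetic.

```python
# Wadati-diagram estimate of Vp/Vs from local-earthquake arrival times.
def vp_vs_ratio(origin_time, p_arrivals, s_arrivals):
    x = [tp - origin_time for tp in p_arrivals]               # P travel times
    y = [ts - tp for tp, ts in zip(p_arrivals, s_arrivals)]   # S-P times
    # Least-squares slope of a line through the origin: sum(xy) / sum(xx)
    slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return 1.0 + slope

# Synthetic example: S arrivals generated with a true Vp/Vs of 1.73.
t0 = 0.0
tp = [2.0, 3.5, 5.0, 7.0]
ts = [1.73 * t for t in tp]
print(round(vp_vs_ratio(t0, tp, ts), 2))  # -> 1.73
```

Locally elevated Vp/Vs with respect to such a background value is what the tomography maps as a possible fluid signature.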
Metabolic systems tend to exhibit steady states that can be measured in terms of their concentrations and fluxes. These measurements can be regarded as a phenotypic representation of all the complex interactions and regulatory mechanisms taking place in the underlying metabolic network. Such interactions determine the system's response to external perturbations and are responsible, for example, for its asymptotic stability or for oscillatory trajectories around the steady state. However, determining these perturbation responses in the absence of fully specified kinetic models remains an important challenge of computational systems biology. Structural kinetic modeling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a parameterised representation of the system's Jacobian matrix in which the model parameters encode information about the enzyme-metabolite interactions. Stability criteria can be derived by generating a large number of structural kinetic models (SK-models) with randomly sampled parameter sets and evaluating the resulting Jacobian matrices. The parameter space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Because the sampled parameters are equivalent to the elasticities used in metabolic control analysis (MCA), the results are easy to interpret biologically. In this project, the SKM framework was extended by several novel methodological improvements. These improvements were evaluated in a simulation study using a set of small example pathways with simple Michaelis-Menten rate laws. Afterwards, a detailed analysis of the dynamic properties of the neuronal TCA cycle was performed in order to demonstrate how the new insights obtained in this work could be used for the study of complex metabolic systems.
The first improvement was achieved by examining the biological feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, the findings showed that the majority of sampled SK-models would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion was formulated that eliminates such infeasible models, and the application of this criterion changed the conclusions of the SKM experiment. The second improvement of this work was the application of supervised machine-learning approaches to the analysis of SKM experiments. So far, SKM experiments have focused on the detection of individual enzymes, i.e. single reactions important for maintaining stability or oscillatory trajectories. In this work, this approach was extended by demonstrating how SKM enables the detection of ensembles of enzymes or metabolites that act together in an orchestrated manner to coordinate the pathway's response to perturbations. In doing so, stable and unstable states served as class labels, and classifiers were trained to detect elasticity regions associated with stability and instability. Classification was performed using decision trees and relevance vector machines (RVMs). The decision trees produced good classification accuracy in terms of model bias and generalizability. RVMs outperformed decision trees when applied to small models, but encountered severe problems when applied to larger systems because of their high runtime requirements. The decision tree rulesets were analysed statistically and individually in order to explore the role of individual enzymes or metabolites in controlling the system's trajectories around steady states. The third improvement of this work was the establishment of a relationship between the SKM framework and the related field of MCA.
In particular, it was shown how the sampled elasticities could be converted to flux control coefficients, which were then investigated for their predictive information content in classifier training. After evaluation on the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle with respect to the intrinsic mechanisms responsible for stability or instability. The findings showed that several elasticities were jointly coordinated to control stability, and that the main source of potential instability lay in mutations of the enzyme alpha-ketoglutarate dehydrogenase.
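The Monte Carlo core of the SKM procedure — sample elasticity-like parameters, build a Jacobian, label the steady state stable when all eigenvalues have negative real part — can be sketched for a 2×2 system, where that condition reduces to trace < 0 and determinant > 0. The matrix structure below is illustrative, not a specific metabolic network.

```python
import random

# Minimal sketch of SKM-style stability sampling for a 2x2 Jacobian.
def sample_jacobian(rng):
    """Draw four elasticity-like parameters from [0, 1]; the sign pattern
    (negative diagonal, positive off-diagonal) is an illustrative choice."""
    e1, e2, e3, e4 = (rng.random() for _ in range(4))
    return [[-e1, e2], [e3, -e4]]

def is_stable(j):
    """For a 2x2 matrix, all eigenvalues have negative real part
    iff trace < 0 and determinant > 0 (Routh-Hurwitz)."""
    trace = j[0][0] + j[1][1]
    det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
    return trace < 0 and det > 0

rng = random.Random(42)                       # fixed seed for reproducibility
models = [sample_jacobian(rng) for _ in range(1000)]
fraction_stable = sum(is_stable(j) for j in models) / len(models)
```

In the full framework the sampled parameter sets, labelled stable/unstable exactly as here, become the training data for the decision-tree and RVM classifiers.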
The tropical warm pool waters surrounding Indonesia are one of the equatorial heat and moisture sources that are considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia, with profound societal and economic impacts on the population of the world's fourth most populous country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific, with global effects in the 21st century, and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models on the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, such as tree rings or varved lake sediments, provides insights into the natural climate variability of the past and thus helps to improve and validate simulations of future climate changes. Centennial tree-ring stable isotope records | The main goal of this doctoral thesis was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with the tree-ring proxies (ring width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events.
Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques that facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure of tree-ring dissection from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The novel sampling techniques presented herein improve stable isotope analyses for tree-ring studies in terms of precision, efficiency and quality. UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision. II) A guideline for a modified method of cellulose extraction from whole-wood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to intra-annual resolution. III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. High-resolution stable oxygen values, furthermore, allow distinguishing between dry-season and rainy-season rainfall. IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics. The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) in the Indo-Pacific region.
Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
In February 1777, the Economic Society of Bern (Ökonomische Gesellschaft zu Bern) offered a prize of 100 Louis d'or for the best proposal for a comprehensive criminal code. The prize money came from the circle of the French Enlightenment. One half presumably came from the Parisian parliamentary advocate Elie de Beaumont, who had made a name for himself in the judicial affairs surrounding Jean Calas and Pierre Paul Sirven. The other half had been contributed by Voltaire, who had received the money from Frederick II of Prussia. The prize competition was a great success. Alongside numerous unknown jurists, a number of well-known figures took part, among them the later revolutionaries Marat and Brissot de Warville as well as the German professors of criminal law Quistorp and Gmelin. The historical significance of the Bern prize competition lies in the fact that it moved the hitherto predominantly programmatic debate on criminal law reform into a practical phase. It set off a wave of practical reform writings in which the demands of Thomasius, Montesquieu and Beccaria were put into practice. Decisive for this was that the competition succeeded in mobilising a large number of legal experts who possessed not only the will to reform but also the expertise required for the development of a new criminal law. Of the 46 prize essays submitted, nine survive in print. Twenty-six are held in manuscript form in the archive of the Economic Society of Bern. The present volume brings together transcriptions of seven prize essays preserved in manuscript. Four are written in French and three in German. One essay is by the Geneva Jacobin Julien Dentand, another by the German publicist Johann Wolfgang Brenk. The authors of the remaining five manuscripts are unknown.
The transcribed prize essays form part of the source base of a study of criminal-law thought in the late 18th century, forthcoming in the series Studien zur Europäischen Rechtsgeschichte (Christoph Luther: Aufgeklärt strafen. Menschengerechtigkeit im 18. Jahrhundert).
This thesis examines the motivation of employees at public research institutions. Based on an employee survey at the Leibniz-Institut für Agrartechnik Potsdam-Bornim e. V., several hypotheses grounded in self-determination theory are tested. The analysis shows that many respondents exhibit high autonomous motivation. In particular, the feeling of having choices and room for manoeuvre at work has a positive influence on motivation. While managers can strengthen this sense of autonomy, personality traits have no influence on it. Furthermore, a sense of social relatedness in the work context does not appear to play a significant role in academia.
These lecture notes are intended as a short introduction to diffusion processes on a domain with a reflecting boundary for graduate students, researchers in stochastic analysis and interested readers. Specific results on stochastic differential equations with reflecting boundaries such as existence and uniqueness, continuity and Markov properties, relation to partial differential equations and submartingale problems are given. An extensive list of references to current literature is included. This book has its origins in a mini-course the author gave at the University of Potsdam and at the Technical University of Berlin in Winter 2013.
This volume brings together contributions from Slavic literary and cultural studies on the theme of food. It reflects the interest in the topic articulated over recent years, without wishing to stylise it into a "culinary turn". Food, as an analysis of its presence in literature, film and other media of high culture shows, is a domain in and with which more general cultural and social processes can be illustrated with particular vividness.
The Adana Basin of southern Turkey, situated at the SE margin of the Central Anatolian Plateau, is ideally located to record Neogene topographic and tectonic changes in the easternmost Mediterranean realm. Using industry seismic reflection data, we correlate 34 seismic profiles with corresponding exposed units in the Adana Basin. The time-depth conversion of the interpreted seismic profiles allows us to reconstruct the subsidence curve of the Adana Basin and to identify a major increase in both subsidence and sedimentation rates at 5.45 – 5.33 Ma, leading to the deposition of almost 1500 km³ of conglomerates and marls. Our provenance analysis of the conglomerates reveals that most of the sediment is derived from and north of the SE margin of the Central Anatolian Plateau. A comparison of these results with the composition of recent conglomerates and the present drainage basins indicates major changes between late Messinian and present-day source areas. We suggest that these changes in source areas result from uplift and ensuing erosion of the SE margin of the plateau. This hypothesis is supported by the comparison of the Adana Basin subsidence curve with that of the Mut Basin, a mainly Neogene basin located on top of the southern margin of the Central Anatolian Plateau, showing that the Adana Basin subsidence event is coeval with an uplift episode of the plateau's southern margin. Several fault measurements collected in the Adana region show different deformation styles for the NW and SE margins of the Adana Basin. The weakly seismic NW portion of the basin is characterized by extensional and transtensional structures cutting Neogene deposits, likely accommodating the differential uplift occurring between the basin and the SE margin of the plateau.
We interpret the tectonic evolution of the southern flank of the Central Anatolian Plateau and the coeval subsidence and sedimentation in the Adana Basin to be related to deep lithospheric processes, particularly lithospheric delamination and slab break-off.
Kulturtransfer im Kochtopf
(2014)
Reisen über den Tellerrand
(2014)
The use of musical terminology is called for in most curricula for music lessons at lower secondary level. However, not only the curricula but also the literature on music didactics lack any substantive elaboration of this requirement. There is therefore no clarity about the content, scope and aim of the musical terminology to be used in school. Nor are there any empirical studies on the linguistic content of music lessons. In many other school subjects, too, research on linguistic content is sparse. Yet the use of language involves not only communication processes but also learning processes within language itself, from the expansion of vocabulary to the establishment of thematic connections. These learning processes are influenced by the word choices of learners and teachers. The word choice of learners in turn allows conclusions about the state of their knowledge and how it is interconnected. On this basis, the linguistic content of music lessons is the subject of the present thesis. The aim of the study was to find out to what extent learning processes can be made more effective and successful, and better aligned with learners' present and future needs, through the manner and extent of the use of technical terminology in music lessons.
The essay argues that the decisive point of Ortholph Fuchsperger's "Dialectica deutsch" is its demonstration that it is possible to argue in the German language. This is directed against the exclusive use of Latin as the language of scholarship. Fuchsperger thereby draws a consequence from the humanist redefinition of the concept of ars as a descriptive rather than normative procedure.
Ausprägungen räumlicher Identität in ehemaligen sudetendeutschen Gebieten der Tschechischen Republik
(2014)
The Czech border region is one of the regions in Europe most severely affected by upheavals in its pre-existing population structure in the aftermath of the Second World War. The forced expulsion of a large part of the resident population was followed by resettlement by a wide range of immigrant groups and, in some cases, long-lasting fluctuations in the population. The stabilisation of the population then took place under the socialist social and economic order, which left a lasting mark on the way of life and spatial perception of the new inhabitants. The opening of the border in 1989, the political transformation and the integration of the Czech Republic into the European Union brought new demographic and socio-economic developments. They also created the conditions for a fresh and open engagement with the specific history of the former Sudetenland and with the state of present-day society in this area.
Using two example regions, this thesis investigates which spatial conceptions and spatial attachments exist among the population now living in the former Sudeten German areas and how the differing spatial-structural conditions influence them. Particular attention is paid to the social component of the formation of spatial identity, that is, to the role of attributions of meaning to spatial elements within social communication and interaction. This appears especially relevant in an area characterised by a certain heterogeneity of its inhabitants with regard to their ethnic, cultural or biographical backgrounds. Finally, the thesis determines what impulses a pronounced spatial identity may give to the development of the area.
This thesis investigates the influence of ionic liquids both on the recombination of photolytically generated lophyl radicals and on photoinduced polymerisation. The focus was on pyrrolidinium-based ionic liquids and on polymerisable imidazolium-based ionic liquids. Using UV-Vis spectroscopy, the recombination kinetics of the lophyl radicals photolytically generated from o-Cl-HABI were followed at different temperatures in the ionic liquids, in comparison with selected organic solvents, and the rate constants of radical recombination were determined. The recombination process was characterised in particular by means of the activation parameters obtained from the Eyring equation. It could be shown that, in contrast to the organic solvents, recombination of the lophyl radicals in the ionic liquids occurs to a large extent within the solvent cage. Furthermore, several possible co-initiators for the use of o-Cl-HABI as a radical generator in photoinduced polymerisations were investigated by photocalorimetric measurements. A new aspect of chain transfer from the lophyl radical to the heterocyclic co-initiator was also presented. In addition, photoinduced polymerisations using an initiator system consisting of o-Cl-HABI as radical generator and a heterocyclic co-initiator were studied in the ionic liquids. These investigations comprise, on the one hand, photocalorimetric measurements of the photoinduced polymerisation of polymerisable imidazolium-based ionic liquids and, on the other hand, studies of the photoinduced polymerisation of methyl methacrylate in pyrrolidinium-based ionic liquids.
The influence of parameters such as time, temperature, viscosity, the solvent-cage effect and the alkyl chain length on the cation of the ionic liquids on the yields, molar masses and molar mass distributions of the polymers was examined.
A fruitful teaching cooperation has existed for many years between the law faculties of the University of Szeged and the University of Potsdam, and a scholarly collaboration is gradually developing from it. Joint conferences and publications bear witness to this. The present volume is the result of this cooperation. The book's title reflects the commitment of the Hungarian and German jurists as well as the shared values that underlie European legal development in the 21st century and link the doctrine of the various fields of law. The individual contributions testify to the full breadth of the interests of the Hungarian and German jurists.
Historically as well as today, violent social conflicts call existing social orders into question. In history as in sociology, riots, revolts and social uprisings have repeatedly been the subject of research. While the historical approach to these phenomena has usually sought to reconstruct historical sequences precisely through detailed description in order to understand them, sociological work mostly aims at a far more generalising and explanatory account. There have been repeated attempts to bridge this seemingly insurmountable difference between the disciplines, but all of them must be regarded as having more or less failed. It therefore remains the case that an exclusive focus on one's own disciplinary approach squanders considerable explanatory potential. For this reason, the present study makes a new proposal for bringing history and sociology together. The author attempts to unite the two supposedly opposed conceptions of scholarship through a shared methodological perspective and, on this basis, to sketch a joint explanatory approach of history and sociology that asks "how" an event unfolded while also seeking to explain "why" it came about. On this methodological basis and by means of a historical-sociological approach, the book examines social protest in the Vormärz period; it builds on work in historical sociology and social history and develops a rigorous historical-sociological explanatory framework.
Even after the conclusion of our Brandenburger Antike-Denkwerk (BrAnD) with its final round in 2010/11, the Potsdam Latin Day took place every year, on quite different topics and, with around 500 participants, always very well attended: 2011: ancient historiography; 2012: death and the afterlife; 2013: Roman religion. This volume collects the lectures from these past events by N. Holzberg, B. Labahn, Chr. Kunst, S. Büttner-von Stülpnagel, V. Rosenberger and D. Šterbenc Erker.
The thesis provides an overview of the historical development of strict liability under public law (öffentlich-rechtliche Gefährdungshaftung) in Germany from the 18th century to the present and analyses its practical significance. It distinguishes between the various pieces of legislation, the case law and the theoretical approaches to strict liability under public law, and problematises the latter in particular. It also addresses the relationship to fundamental-rights duties of protection, statutory social risk provisions, the social-law claim to restitution (sozialrechtlicher Herstellungsanspruch) and riot damage.
The work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. Modelled from a principal-agent perspective, the managing referee boards can be seen as the principal: they aim to facilitate a fair competition in accordance with the existing rules and regulations. To this end, the referees are assigned as impartial agents on the pitch. The coaches assume a non-legitimate, principal-like role by trying to influence the referees even though they have no formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and forms of coaches' influencing attempts. The referee questionnaire addressed the questions of whether referees notice possible influencing attempts and how they react to them.
The results were then related to official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and on the match result.
It is found that there is a slight effect on referees' decisions. However, this effect tends to be disadvantageous for the influencing coach, and there is no evidence of an impact on the match result itself.
In the field of disk-based parallel database management systems, a great variety of solutions exists, based on either a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this one-sided development will cease due to the combination of the following three trends: a) today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory within a server and that of a remote server to a single order of magnitude, and even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main-memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage and the self-designed and developed in-memory AnalyticsDB as the relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface that allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion of site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time, such as a scan or materialize operation, to discuss the site selection problem (data pull vs. operator push). Since a database query translates into the execution of several database operators, the optimal site selection may vary per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the permitted resource allocation.
We present an evaluation on a cluster with 60 nodes, all connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data-pull operator execution strategy (i.e. RAMCloud is used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (compared with operating only on main memory inside a server, without any network communication at all). The fast crash-recovery feature of RAMCloud can be leveraged to provide high availability; for example, a server crash during query execution delays the query response by only about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
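The site-selection trade-off (data pull vs. operator push) can be sketched as a back-of-the-envelope cost model in the spirit of the system model described above; the function, parameter values and selectivity figure below are hypothetical illustrations, not numbers from the thesis:

```python
# Rough wall-time estimate for executing a selective scan either by
# pulling the data to the query processor (data pull) or by pushing the
# operator into the storage system (operator push). All numbers are
# hypothetical; the model ignores latency, contention and parallelism.

def scan_cost(rows: int, row_bytes: int, selectivity: float,
              net_bw: float, scan_rate: float) -> tuple[float, float]:
    """net_bw in bytes/s, scan_rate in rows/s; returns (pull_s, push_s)."""
    data = rows * row_bytes
    pull = data / net_bw + rows / scan_rate                # ship all data, scan locally
    push = rows / scan_rate + selectivity * data / net_bw  # scan remotely, ship result
    return pull, push

pull, push = scan_cost(rows=10**8, row_bytes=8, selectivity=0.01,
                       net_bw=4e9, scan_rate=1e9)
print(f"pull: {pull:.3f}s  push: {push:.3f}s")  # pull: 0.300s  push: 0.102s
```

For a highly selective scan, pushing the operator to the storage node avoids shipping the full column over the network; as the selectivity approaches 1, the advantage vanishes, which is why the optimal site can differ per operator within a single query.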
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses the cell activates several stress-specific molecules such as sigma factors. They reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes binding between sigma factors and core RNAP at equilibrium, transcription, non-specific binding to DNA and the modulation of the availability of the molecular components.
Association of housekeeping sigma factor to RNAP is generally favored by its abundance and higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell modulates the transcriptional efficiency in a reversible manner through several strategies such as anti-sigma factors, 6S RNA and generally any kind of transcriptional regulators (e.g. activators or inhibitors). By shifting the outcome of sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant in vitro affinity measurements are in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of the increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible but also displays hypersensitivity based on sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during that global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
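The core idea of sigma factor competition, partitioning a limited pool of core RNAP among sigma species according to abundance and binding affinity, can be sketched as a minimal mass-action equilibrium calculation. This is a generic illustration, not the thesis's full thermodynamic model; the concentrations and dissociation constants are invented:

```python
# Equilibrium competition of sigma factors for a limited core RNAP pool:
# each holoenzyme obeys H_i = C_free * S_free_i / K_i, with conservation
# of core and of each sigma species. Eliminating S_free_i gives
# H_i = S_tot_i * C_free / (K_i + C_free); the free core concentration is
# then found by bisection on the monotone core-conservation relation.

def holoenzymes(core_tot, sigmas):
    """sigmas: list of (sigma_total, K_d) pairs; returns holoenzyme levels."""
    def bound(c_free):
        return sum(s_tot * c_free / (k + c_free) for s_tot, k in sigmas)
    lo, hi = 0.0, core_tot
    for _ in range(200):  # bisection: solve C_free + bound(C_free) = core_tot
        mid = 0.5 * (lo + hi)
        if mid + bound(mid) > core_tot:
            hi = mid
        else:
            lo = mid
    c = 0.5 * (lo + hi)
    return [s_tot * c / (k + c) for s_tot, k in sigmas]

# housekeeping sigma: abundant and tight-binding; alternative sigma: neither
h_house, h_alt = holoenzymes(core_tot=2.0, sigmas=[(1.0, 0.05), (0.3, 0.5)])
```

Even with core RNAP in excess, the housekeeping holoenzyme dominates; an anti-sigma factor or 6S RNA enters such a model simply as an extra reaction that sequesters one of the competing species, shifting the partition toward the alternative sigma.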
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission that is expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results of this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold that has ever been reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
During this work I built a four-wave mixing setup for time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements using ultra-short laser pulses, where the frequency shift remains obscured by the relatively broad bandwidth of those pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons, so the dynamics of these wavepackets could be studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.