Refine
Year of publication
- 2016 (322)
Document Type
- Doctoral Thesis (322)
Is part of the Bibliography
- yes (322)
Keywords
- Klimawandel (4)
- climate change (4)
- Blickbewegungen (3)
- Deutschland (3)
- earthquake (3)
- Aggression (2)
- Aphasie (2)
- Avantgarde (2)
- Berlin (2)
- Bodenfeuchte (2)
Institute
- Institut für Biochemie und Biologie (58)
- Institut für Geowissenschaften (37)
- Institut für Chemie (33)
- Institut für Physik und Astronomie (21)
- Institut für Ernährungswissenschaft (19)
- Wirtschaftswissenschaften (19)
- Institut für Informatik und Computational Science (16)
- Historisches Institut (14)
- Sozialwissenschaften (12)
- Department Linguistik (10)
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications, such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, photorealistic rendering does not automatically lead to high image quality with respect to effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective when attempting to optimally express, recognize, and communicate important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Here, the programmable, parallel computing architecture of graphics processing units establishes a promising technical foundation.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and the selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines, using the shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature types such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that makes it possible to integrate different graphic styles, photorealistic and non-photorealistic, and to transition between them seamlessly according to user task, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization, including topics such as spatial perception, cognition, and mapping. In addition, applications in illustrative and focus+context visualization have shown their potential for optimizing information transfer with respect to factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
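The abstraction-level control described above runs as GPU shaders in the thesis; as a language-neutral illustration of one underlying artistic-rendering principle (not the thesis's own code; function name and band count are illustrative), the following minimal sketch quantizes continuous diffuse shading into a few discrete tone bands, the basic cel/toon abstraction used in many non-photorealistic pipelines:

```python
import numpy as np

def toon_shade(intensity, levels=4):
    """Quantize continuous diffuse intensities (0..1) into a few
    discrete tone bands -- a basic cel/toon-shading abstraction.
    `levels` controls the degree of abstraction."""
    intensity = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    bands = np.floor(intensity * levels) / (levels - 1)
    return np.clip(bands, 0.0, 1.0)
```

In a real-time pipeline this quantization would run per fragment in a shader, with the number of bands tied to the desired abstraction level of the feature being rendered.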
The ever-increasing fat content of the Western diet, combined with decreased levels of physical activity, greatly enhances the incidence of metabolism-related diseases. Cancer cachexia (CC) and the metabolic syndrome (MetS) are both multifactorial, highly complex metabolism-related syndromes whose etiology is not fully understood, as the mechanisms underlying their development have not been completely unveiled. Nevertheless, despite being considered “opposite sides”, MetS and CC share several common features, such as insulin resistance and low-grade inflammation. In both scenarios, tissue macrophages act as key players owing to their capacity to produce and release inflammatory mediators. One of the main features of MetS is hyperinsulinemia, which is generally associated with an attempt of the β-cells to compensate for diminished insulin sensitivity (insulin resistance). There is growing evidence that hyperinsulinemia per se may contribute to the development of insulin resistance through the establishment of low-grade inflammation in insulin-responsive tissues, especially in the liver (as insulin is secreted by the pancreas into the portal circulation). The hypothesis of the present study was that insulin may itself provoke an inflammatory response culminating in diminished hepatic insulin sensitivity. To address this premise, macrophages differentiated from the human cell line U937 were first exposed to insulin, LPS, and PGE2. In these cells, insulin significantly augmented the gene expression of the pro-inflammatory mediators IL-1β, IL-8, CCL2, oncostatin M (OSM), and microsomal prostaglandin E2 synthase (mPGES1), and of the anti-inflammatory mediator IL-10. Moreover, insulin acted synergistically with LPS, enhancing the LPS-induced expression of the IL-1β, IL-8, IL-6, CCL2, and TNF-α genes.
When combined with PGE2, insulin enhanced the PGE2-induced expression of IL-1β, mPGES1, and COX2, and attenuated the PGE2-mediated inhibition of CCL2 and TNF-α gene expression, contributing to an enhanced inflammatory response by both mechanisms. Supernatants of insulin-treated U937 macrophages reduced the insulin-dependent induction of glucokinase in hepatocytes by 50%. Cytokines contained in the supernatant of insulin-treated U937 macrophages also activated ERK1/2 in hepatocytes, resulting in inhibitory serine phosphorylation of the insulin receptor substrate. Additionally, the transcription factor STAT3 was activated by phosphorylation, resulting in the induction of SOCS3, which is capable of interrupting the insulin receptor signalling chain. MicroRNAs, non-coding RNAs involved in the regulation of protein expression and nowadays recognized as active players in the generation of several inflammatory disorders such as cancer and type II diabetes, were also of interest. Considering that cancer cachexia patients are highly affected by insulin resistance and inflammation, control, non-cachectic, and cachectic cancer patients were selected, and their circulating levels of pro-inflammatory mediators and of microRNA-21-5p, a posttranscriptional regulator of STAT3 expression, were assessed and correlated. Circulating IL-6 and IL-8 levels in cachectic patients were significantly higher than those of non-cachectic patients and controls, and the expression of microRNA-21-5p was significantly lower. Additionally, the reduced microRNA-21-5p expression correlated negatively with IL-6 plasma levels. These results indicate that hyperinsulinemia per se might contribute to the low-grade inflammation prevailing in MetS patients and thereby promote the development of insulin resistance, particularly in the liver. Diminished microRNA-21-5p expression may enhance inflammation and STAT3 expression in cachectic patients, contributing to the development of insulin resistance.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system's intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as the environmental sciences, bioinformatics, and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, generic statistical features, namely central tendency and dispersion, are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized, high-dimensional partition models requiring prior specification, but as a robust, low-dimensional kernel-based approach employing least-informative prior distributions.
First, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series containing changes with different degrees of observational evidence, data loss, and outliers validates the performance, consistency, and sensitivity of the inference algorithm. To investigate time series systematically for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series with documented changes from the Nile River at Aswan and the weather station at Tuscaloosa, Alabama. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
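The thesis's kernel-based algorithm is more general than this, but the core idea of Bayesian inference on a single transition can be illustrated with a minimal textbook sketch (assumptions, all mine: one shift in the mean only, i.i.d. Gaussian noise with known sigma, flat priors with maximum-likelihood plug-in segment means; names are illustrative):

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location k of a single mean shift in y,
    assuming i.i.d. Gaussian noise with known sigma and flat priors.
    The change is placed between samples k-1 and k."""
    n = len(y)
    logp = np.full(n, -np.inf)
    for k in range(1, n - 1):
        left, right = y[:k], y[k:]
        # residual sum of squares under segment-wise mean fits
        rss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        logp[k] = -rss / (2 * sigma**2)
    logp -= logp.max()          # stabilize before exponentiation
    p = np.exp(logp)
    return p / p.sum()          # normalized posterior over locations
```

For a series with a clear mean shift, the posterior concentrates near the true change location; kernel-based multiple-change detection, as developed in the thesis, composes many such local inferences.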
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
With his book, Christoph Sebastian Widdau makes an innovative contribution to Cassirer scholarship, to Leibniz studies, and to the grounding of human rights. He casts new light, both philosophically and in terms of the history of ideas, on 'nature' in natural law, the cultural significance of the individual, and the pluralism of political orders. With 'Cassirers Leibniz', Widdau shows that human rights are not an arbitrary addition to culture but are, rather, constitutive of it.
This doctoral thesis seeks to elaborate how Wittgenstein’s very sparse writings on ethics and ethical thought, together with his later work on the more general problem of normativity and his approach to philosophical problems as a whole, can be applied to contemporary meta-ethical debates about the nature of moral thought and language and the sources of moral obligation. I begin with a discussion of Wittgenstein’s early “Lecture on Ethics”, distinguishing the thesis of a strict fact/value dichotomy that Wittgenstein defends there from the related thesis that all ethical discourse is essentially and intentionally nonsensical, an attempt to go beyond the limits of sense. The first chapter discusses and defends Wittgenstein’s argument that moral valuation always goes beyond any ascertaining of fact; the second chapter seeks to draw out the valuable insights from Wittgenstein’s (early) insistence that value discourse is nonsensical while also arguing that this thesis is ultimately untenable and also incompatible with later Wittgensteinian understanding of language. On the basis of this discussion I then take up the writings of the American philosopher Cora Diamond, who has worked out an ethical approach in a very closely Wittgensteinian spirit, and show how this approach shares many of the valuable insights of the moral expressivism and constructivism of contemporary authors such as Blackburn and Korsgaard while suggesting a way to avoid some of the problems and limitations of their approaches. Subsequently I turn to a criticism of the attempts by Lovibond and McDowell to enlist Wittgenstein in the support for a non-naturalist moral realism. A concluding chapter treats the ways that a broadly Wittgensteinian conception expands the subject of metaethics itself by questioning the primacy of discursive argument in moral thought and of moral propositions as the basic units of moral belief.
Mujeres de apocalipsis
(2016)
Mujeres del Apocalipsis proposes new gender readings of the pious women who inhabited New Spain in the eighteenth century. The study is based on a corpus drawn from the archives of their Inquisition trials, many of them still unpublished. These women gained freedom and attained a partial autonomy in the colonial world. Reading these records reveals the tactical strategies through which the beatas negotiated a new way of being a woman in that era.
The following study addresses the relationship between myth and modernity in the literary work (poems, short stories, novels, and chronicles) of the Chilean author Rosamel del Valle (Curacaví, 1901 – Santiago de Chile, 1965). Across his texts there is a tension between a poetic project grounded in a mythical vision of the world and a historical context that privileges more rationalist positions, relegating the poetic and the mythical. Already in the nineteenth century, modernity and the associated phenomena of modernization displaced poetry as discourse, and the poet as a person, to a deficient position within society, a situation that persisted into the twentieth century. Because of this conflict, Rosamel del Valle questions in his work the scope of his own postulates, both aesthetic and vital. This entails a vacillation between the reaffirmation of his poetic-vital project and an awareness of failure. For this reason the Chilean poet's work contains a Lebenswissen, a knowledge for living, that conceives of poetry as a privileged form of life. Given the difficult historical conditions, however, this Lebenswissen can also be understood as an ÜberLebenswissen, a knowledge for survival (Ette).
The first part of the study examines del Valle's mythical conception of poetry and analyzes the different levels at which myth appears in his literary work: as thought, as language, and as traditional narrative. The identification of poetry with myth brings out the principal features of del Valle's poetics: an ontological conception that distinguishes between a visible and an invisible dimension of reality; the mystical tendency of poetry; a cyclical conception of time that grounds a relationship with the discourses of memory and death, as well as the idea of a utopian past that poetry could revive in the present; and the figure of the woman as a symbol of love and of poetry.
The second part investigates the relationship and the consequences of this "mythical poetry" in the context of modernity. It focuses in particular on the effect on Rosamel del Valle's poetics of the Entzauberung der Welt, Weber's "disenchantment of the world", as a specific experience of the epoch. To this end, his impressions of New York are foregrounded. This city, where he lived and worked between 1946 and 1963, is transformed in his texts into a place where the "poetic dwelling of man" would be possible in modernity.
Homo academicus goes Pop
(2016)
Using methods from the sociology of knowledge, comparative history, and textual analysis, the author of this book first examines the genesis and structure of the popular-science field in international comparison. Starting from literary and text-theoretical considerations of different national genres of popular-science writing and of the theory of popularization in general, she then investigates, in the context of the historical disciplinary genesis of biology, the intellectual networks and their text production through selected actors in genetics, epigenetics, sociobiology, and evolutionary psychology. Finally, fictional narrative literature itself advances to the role of critic of life-science ideologies and theories.
Selected literary works of the twentieth and twenty-first centuries serve to examine the interrelations between fictional narrative literature and non-fictional popular-science books.
The author asks how everyday and scientific language interact within literature and initiate an immanent critique of the ideology of speculative theory-building in popular science writing.
This book addresses scholars in the humanities and social sciences who want to see the science-policy programme of "literary studies as a science of life" (O. Ette) tested in application-oriented research contexts. With its polemical-critical theses and a self-reflexively ironic style, it further stokes the conversation about popular science: it is the popular that cuts across the social fields and opens new paths for the "illusio" (P. Bourdieu) of the academic text game.
Leuchtkäfer & Orgelkoralle
(2016)
Glowing beetles and medusae, phosphorescent ocean waves, and corals hardening to stone fascinated the naturalist Adelbert von Chamisso (1781–1838), who has so far been portrayed primarily as a poet. Even more intensively than to zoological and geological phenomena, he devoted himself to the scientia amabilis, the lovable science of plants. This many-sided talent wrote his Reise um die Welt (A Voyage Around the World, 1836), which to this day counts among the most stylistically accomplished and readable travel accounts. The present study is dedicated explicitly to Chamisso's natural-historical research in the context of the three-year Rurik expedition and to the associated text production. Drawing on a comprehensive corpus of texts and materials, it puts literary, cultural, and history-of-science questions to the work and answers them productively. Previously unnoticed source material is brought to light for research on travel literature, common theses are refuted, and sources from other crew members are examined comparatively. The study places the naturalist Chamisso in focus without eclipsing the poet, and addresses questions of the generation, networking, and representation of natural-historical knowledge in the texts, illustrations, and materials of the expedition. It is as innovative for literary studies and history as it is for the interdisciplinary history of knowledge.
Investigation of novel proteins and polysaccharides associated with coccoliths of Emiliania huxleyi
(2016)
Effects of plant community diversity and composition on fungal pathogens in experimental grasslands
(2016)
From dark to light
(2016)
Light-triggered release of bioactive compounds from HA/PLL multilayer films for stimulation of cells
(2016)
The concept of targeting cells and tissues by controlled delivery of molecules is essential in the field of biomedicine. The layer-by-layer (LbL) technology for the fabrication of polymer multilayer films is widely implemented as a powerful tool for assembling tailor-made materials for controlled drug delivery. LbL films can also be engineered to mimic the natural cellular microenvironment. The myriad possibilities offered by LbL films, such as controlled cellular adhesion and drug delivery, thus make it readily achievable to direct the fate of cells by growing them on the films.
The aim of this work was to develop an approach for non-invasive and precise control of the presentation of bioactive molecules to cells. The strategy is based on LbL films that function as a support for cells and, at the same time, as reservoirs for bioactive molecules to be released in a controlled manner. UV light is used to trigger the release of the stored ATP with high spatio-temporal resolution. Both physico-chemical aspects (competitive intermolecular interactions in the film) and biological aspects (cellular response and viability) are addressed in this study.
The biopolymers hyaluronic acid (HA) and poly-L-lysine (PLL) were chosen as the building blocks for the LbL film assembly. Native HA/PLL films showed poor cellular adhesion and were significantly degraded by cells within a few days. However, coating the films with gold nanoparticles not only improved cellular adhesion and protected the films from degradation, but also formed a size-exclusion barrier with an adjustable cut-off in the range of a few tens of kDa.
The films were shown to have a high reservoir capacity for small charged molecules (reaching mM levels in the film). Furthermore, they were able to release the stored molecules in a sustained manner. The loading and release are explained by a mechanism based on interactions between the charges of the stored molecules and uncompensated charges of the biopolymers in the film, with charge balance and polymer dynamics in the film playing the pivotal role.
Finally, the concept of light-triggered release from the films was demonstrated using caged ATP loaded into the films, from which ATP was released on demand. ATP induces a fast cellular response, i.e., an increase in intracellular [Ca2+], which was monitored in real time. The limitations of cellular stimulation by the proposed approach are highlighted by studying the stimulation as a function of the irradiation parameters (time, distance, light power). Moreover, caging molecules bind to the film more strongly than ATP does, which opens new perspectives for the use of the most diverse chemical compounds as caging molecules.
The employment of HA/PLL films as a novel support for cellular growth and as a host for bioactive molecules, along with the possibility of stimulating individual cells using focused light, renders this approach highly efficient and, in terms of precision and spatio-temporal resolution, unique among those previously described. With its high potential, the concept presented herein provides the foundation for the design of new intelligent materials for single-cell studies, with a focus on tissue engineering, diagnostics, and other cell-based applications.
The Milky Way is only one of billions of galaxies in the universe. However, it is a special galaxy because it allows us to explore the main mechanisms involved in its evolution and formation history by unpicking the system star by star. In particular, the chemical fingerprints of its stars provide clues and evidence of past events in the Galaxy's lifetime. This information helps not only to decipher the current structure and building blocks of the Milky Way, but also to learn more about the general formation process of galaxies.
In the past decade, a multitude of stellar spectroscopic Galactic surveys have scanned millions of stars far beyond the rim of the solar neighbourhood. The obtained spectroscopic information provides unprecedented insights into the chemo-dynamics of the Milky Way. In addition, analytic models and numerical simulations of the Milky Way provide the descriptions and predictions needed for comparison with observations in order to decode the physical properties that underlie the complex system of the Galaxy.
In this thesis, various approaches are taken to connect modern theoretical modelling of galaxy formation and evolution with observations from Galactic stellar surveys. With its focus on the chemo-kinematics of the Galactic disk, this work aims to determine new observational constraints on the formation of the Milky Way, while also providing proper comparisons with two different models. These are the population synthesis model TRILEGAL, based on analytical distribution functions, which aims to simulate the number and distribution of stars in the Milky Way and its different components, and a hybrid model (MCM) that combines an N-body simulation of a Milky Way-like galaxy in the cosmological framework with a semi-analytic chemical evolution model for the Milky Way. The major observational data sets in use come from two surveys, namely the "Radial Velocity Experiment" (RAVE) and the "Sloan Extension for Galactic Understanding and Exploration" (SEGUE).
In the first approach, the chemo-kinematic properties of the thin and thick disk of the Galaxy, as traced by a selection of about 20,000 SEGUE G-dwarf stars, are directly compared to the predictions of the MCM model. As a necessary condition for this, SEGUE's selection function and survey volume are evaluated in detail to correct the spectroscopic observations for their survey-specific selection biases. In addition, based on a Bayesian method, spectro-photometric distances with uncertainties below 15% are computed for the selection of SEGUE G-dwarfs, which are studied out to a distance of 3 kpc from the Sun.
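The Bayesian distance pipeline itself is survey-specific, but the photometric backbone of any spectro-photometric distance estimate is the distance modulus, m − M = 5 log10(d / 10 pc). A minimal sketch (illustrative only, not the thesis code):

```python
def spectro_photometric_distance(m_app, m_abs):
    """Distance in parsec from the distance modulus
    m - M = 5 log10(d / 10 pc), given an apparent magnitude m_app
    and a (spectroscopically inferred) absolute magnitude m_abs."""
    return 10 ** ((m_app - m_abs + 5) / 5)
```

By the same relation, a 15% distance uncertainty corresponds to roughly 0.3 mag of uncertainty in the distance modulus.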
For the second approach, two synthetic versions of the SEGUE survey are generated based on the above models. The obtained synthetic stellar catalogues are then used to create mock samples that best resemble the compiled sample of observed SEGUE G-dwarfs. Mock samples are not only ideal for comparing predictions from various models; they also allow the models' quality to be validated and improved, as was achieved here especially for TRILEGAL. While TRILEGAL reproduces the statistical properties of the thin and thick disk as seen in the observations, the MCM model has proven more suitable for reproducing many of the chemo-kinematic correlations revealed by the SEGUE stars. However, evidence has been found that the MCM model may be missing a stellar component with the properties of the thick disk that the observations clearly show. While the SEGUE stars do indicate a thin-thick dichotomy of the stellar Galactic disk, in agreement with other spectroscopic stellar studies, no sign of a distinct metal-poor disk is seen in the MCM model.
Stellar spectroscopic surveys are usually limited to a certain volume around the Sun, covering different regions of the Galaxy's disk. This often prevents a global view of the chemo-dynamics of the Galactic disk. Hence, a suitable combination of stellar samples from independent surveys is not only useful for the verification of results but also helps to complete the picture of the Milky Way. The thesis therefore closes with a comparison of the SEGUE G-dwarfs and a sample of RAVE giants. The comparison reveals that the chemo-kinematic relations agree in disk regions where the samples of both surveys contain a similar number of stars. In those parts of the survey volumes where one of the surveys lacks statistics, they beautifully complement each other. This demonstrates that theoretical models on the one side, and combined observational data gathered by multiple surveys on the other, are key ingredients for understanding and disentangling the structure and formation history of the Milky Way.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally not sequence-controlled and are thus less complex than proteins. As a consequence, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble and form hierarchical or structural supramolecular assemblies in water, and thus often require rational design. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG / BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
Surface-enhanced Raman scattering (SERS) is a promising tool for obtaining rich chemical information about analytes at trace levels. However, in order to perform selective experiments on individual molecules, two fundamental requirements have to be fulfilled. On the one hand, areas with high local field enhancement, so-called "hot spots", have to be created by positioning the supporting metal surfaces in close proximity to each other. In most cases, hot spots are formed in the gap between adjacent metal nanoparticles (NPs). On the other hand, the analyte has to be positioned directly in the hot spot in order to profit from the highest signal amplification. The use of DNA origami substrates provides both the arrangement of AuNPs with nanometre precision and the ability to bind analyte molecules at predefined positions. Consequently, the present cumulative doctoral thesis aims at the development of a novel SERS substrate based on a DNA origami template. To this end, two DNA-functionalized gold nanoparticles (AuNPs) are attached to one DNA origami substrate, resulting in the formation of an AuNP dimer and thus in a hot spot within the corresponding gap. The obtained structures are characterized by correlated atomic force microscopy (AFM) and SERS imaging, which allows the combination of structural and chemical information.
Initially, the proof of principle is presented, demonstrating the potential of the novel approach. It is shown that the Raman signal of 15 nm AuNPs coated with dye-modified DNA (dye: carboxytetramethylrhodamine (TAMRA)) is significantly higher for AuNP dimers arranged on a DNA origami platform than for single AuNPs. Furthermore, by attaching single TAMRA molecules in the hot spot between two 5 nm AuNPs and optimizing the size of the AuNPs by electroless gold deposition, SERS experiments at the few-molecule level are presented. The initially used DNA origami-AuNP design is further optimized in several respects. On the one hand, larger AuNPs up to a diameter of 60 nm are used, which are additionally treated with a silver enhancement solution to obtain Au-Ag core-shell NPs. On the other hand, the arrangement of the two AuNPs is altered to improve the position of the dye molecule within the hot spot and to decrease the gap size between the two particles. With the optimized design, the detection of single dye molecules (TAMRA and cyanine 3 (Cy3)) by means of SERS is demonstrated. Quantitatively, enhancement factors of up to 10^10 are estimated, which is sufficiently high to detect single dye molecules.
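Enhancement factors of this kind are conventionally estimated from the standard substrate definition EF = (I_SERS / N_SERS) / (I_ref / N_ref), i.e. signal per molecule in the hot spot relative to signal per molecule in a normal (non-enhanced) Raman reference. A minimal sketch with purely illustrative numbers (not taken from the thesis):

```python
def sers_enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Standard SERS substrate enhancement factor:
    (signal per molecule on the SERS substrate) divided by
    (signal per molecule in a normal Raman reference)."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Illustrative: one molecule in a hot spot yielding 5000 counts,
# versus 1e8 reference molecules yielding 50 counts,
# gives an enhancement factor of 1e10.
ef = sers_enhancement_factor(5000.0, 1.0, 50.0, 1e8)
```

The single-molecule regime claimed in the thesis requires exactly this order of magnitude, since unenhanced Raman cross-sections are far too small to detect one molecule directly.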
In the second part, the influence of graphene as an additional component of the SERS substrate is investigated. Graphene is a two-dimensional material with an outstanding combination of electronic, mechanical, and optical properties. Here, it is demonstrated that single-layer graphene (SLG) replicates the shape of underlying non-modified DNA origami substrates very well, which enables the monitoring of structural alterations by AFM imaging. In this way, it is shown that graphene encapsulation significantly increases the structural stability of bare DNA origami substrates towards mechanical force and prolonged exposure to deionized water.
Furthermore, SLG is used to cover DNA origami substrates which are functionalized with a
40 nm AuNP dimer. In this way, a novel kind of hybrid material is created which exhibits
several advantages compared to the analogous uncovered SERS substrates. First, the fluorescence background of dye molecules located between the AuNP surface and the SLG is efficiently reduced. Second, the photobleaching rate of the incorporated dye molecules is decreased by up to one order of magnitude. Third, owing to the increased photostability of the investigated dye molecules, polarization-dependent series measurements on individual structures become feasible. This in turn reveals extensive information about the dye molecules in the hot spot as well as about the strain induced within the graphene lattice.
Although SLG can significantly influence the SERS substrate in the aforementioned ways, all
those effects are strongly related to the extent of contact with the underlying AuNP dimer.
Ionothermal carbon materials
(2016)
Alternative concepts for energy storage and conversion have to be developed, optimized, and employed to fulfill the dream of a fossil-independent energy economy. Porous carbon materials play a major role in many energy-related devices. Among their characteristics, distinct porosity features, e.g., specific surface area (SSA), total pore volume (TPV), and pore size distribution (PSD), are important to maximize the performance of the final device. To approach the aim of synthesizing carbon materials with tailor-made porosity in a sustainable fashion, the present thesis focused on biomass-derived precursors, employing and further developing ionothermal carbonization.
During ionothermal carbonization, a salt melt serves simultaneously as solvent and porogen. Typically, eutectic mixtures containing zinc chloride are employed as the salt phase. The first topic of the present thesis addressed the possibility of precisely tailoring the porosity of ionothermal carbon materials by an experimentally simple variation of the molar composition of the binary salt mixture. The developed pore-tuning tool allowed the synthesis of glucose-derived carbon materials with predictable SSAs in the range of ~900 to ~2100 m² g⁻¹. Moreover, the nucleobase adenine was employed as a precursor, introducing nitrogen functionalities into the final material. Thereby, the chemical properties of the carbon materials are varied, opening up new fields of application. Nitrogen-doped carbons (NDCs) are able to catalyze the oxygen reduction reaction (ORR), which takes place on the cathode side of a fuel cell. The porosity tailoring developed herein allowed the synthesis of adenine-derived NDCs with outstanding SSAs of up to 2900 m² g⁻¹ and a very large TPV of 5.19 cm³ g⁻¹. Furthermore, the influence of porosity on the ORR could be investigated directly, enabling precise optimization of the porosity characteristics of NDCs for this application.

The second topic addressed the development of a new method to investigate the not yet fully unraveled mechanism of the oxygen reduction reaction using a rotating disc electrode setup, with a focus on noble-metal-free catalysts. The results showed that the reaction pathway of the investigated catalysts is pH-dependent, indicating different active species at different pH values.

The third topic addressed the expansion of the salts used for the ionothermal approach towards hydrated calcium and magnesium chloride. It was shown that hydrated salt phases allow the introduction of a secondary templating effect, which was connected to the coexistence of liquid and solid salt phases.
The method enabled the synthesis of fibrous NDCs with SSAs of up to 2780 m² g⁻¹ and a very large TPV of 3.86 cm³ g⁻¹. Moreover, the concept of active-site implementation by a facile low-temperature metalation, employing the obtained NDCs as solid ligands, could be shown for the first time in the context of the ORR.
Overall, the thesis may pave the way towards highly porous carbon materials with tailor-made porosity, prepared by an inexpensive and sustainable route, which can be applied in energy-related fields, thereby supporting the needed expansion of the renewable energy sector.
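The rotating-disc-electrode study of the ORR pathway mentioned in the second topic is conventionally quantified via a Koutecky–Levich analysis, which extracts the number of transferred electrons n from the rotation-rate dependence of the current density. A minimal sketch with illustrative (non-thesis) parameter values:

```python
import math

# Koutecky-Levich analysis for the ORR at a rotating disc electrode (RDE):
#   1/j = 1/j_k + 1/(B * sqrt(omega)),  B = 0.62 * n * F * C_O2 * D_O2**(2/3) * nu**(-1/6)
# All numbers below are illustrative literature-style values, not thesis data.

F = 96485.0     # C mol^-1, Faraday constant
C_O2 = 1.2e-6   # mol cm^-3, approximate O2 solubility in aqueous electrolyte
D_O2 = 1.9e-5   # cm^2 s^-1, approximate O2 diffusion coefficient
nu = 0.01       # cm^2 s^-1, kinematic viscosity of water

def levich_B(n):
    return 0.62 * n * F * C_O2 * D_O2 ** (2.0 / 3.0) * nu ** (-1.0 / 6.0)

def inverse_j(omega, n, j_k):
    """Reciprocal current density as a function of rotation rate omega (rad/s)."""
    return 1.0 / j_k + 1.0 / (levich_B(n) * math.sqrt(omega))

# Synthetic "measurement" generated with a 4-electron pathway, then n is
# recovered from the slope of 1/j vs omega^{-1/2}.
n_true, j_k = 4, 5.0e-3                   # j_k in A cm^-2
omegas = [400.0, 900.0, 1600.0, 2500.0]   # rad s^-1
xs = [1.0 / math.sqrt(w) for w in omegas]
ys = [inverse_j(w, n_true, j_k) for w in omegas]

n_pts = len(xs)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
n_fit = 1.0 / (slope * 0.62 * F * C_O2 * D_O2 ** (2.0 / 3.0) * nu ** (-1.0 / 6.0))
print(round(n_fit, 2))  # -> 4.0, the electron number put into the synthetic data
```

A fitted n close to 4 indicates direct reduction of O2, while n close to 2 indicates the peroxide pathway; the pH dependence of this number is what reveals different active species.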
In the interest of producing functional catalysts from sustainable building blocks, 1,3-dicarboxylate imidazolium salts derived from amino acids were successfully modified to be suitable as N-heterocyclic carbene (NHC) ligands within metal complexes. Complexes of Ag(I), Pd(II), and Ir(I) were prepared by known procedures using ligands derived from glycine, alanine, β-alanine, and phenylalanine. The complexes were characterized in the solid state by X-ray crystallography, which allowed the steric and electronic comparison of these ligands with well-known NHC ligands in analogous metal complexes.
The palladium complexes were tested as catalysts for aqueous-phase Suzuki–Miyaura cross-coupling. Water solubility could be induced via ester hydrolysis of the N-bound groups in the presence of base. The mono-NHC–Pd complexes proved highly active in the coupling of aryl bromides with phenylboronic acid, with the active catalyst determined to be mostly Pd(0) nanoparticles. Kinetic studies showed that the reaction proceeds quickly in the coupling of bromoacetophenone, whether the catalyst was dissolved after pre-hydrolysis or hydrolyzed in situ. The catalyst could also be recycled for an extra run by simply re-using the aqueous layer.
The imidazolium salts were also used to produce organosilica hybrid materials. This was attempted via two methods: post-grafting onto a commercial organosilica, and co-condensation of the corresponding organosilane. The co-condensation technique harbours potential for the production of solid-supported catalysts.
Boom in der Krise
(2016)
On the relationship between consumption and individuality.
Consumption, tourism, driving: are the 1970s adequately described by these keywords, or were they rather a decade of crises? On the one hand, everyday life in West Germany and Britain was shaped by a continuous expansion of consumption opportunities; on the other hand, the consequences of the oil shock are epitomized by the image of empty motorways. Sina Fabian takes up both narratives and discusses the 1970s in the tension between crisis and boom. She examines the influence of the oil-price and economic crises on tourism and car use, two expensive consumer goods that gained importance in both countries precisely during the period under study. She also asks to what extent rising consumption can really be understood as an expression of advancing individualization: do package holidays and assembly-line products not rather call the individualization thesis into question? Drawing on a variety of sources, ranging from statistical surveys to diaries and travel reports from the population, the author qualifies conventional readings of the by now much-historicized 1970s.
To explain complex financing decisions of multinational companies, with particular reference to the euro area, existing approaches in finance are extended and empirically substantiated. The analysis is not limited to the capital structure (equity and debt) but also covers the structural relationship between internal and external financing. This both continues the corporate finance research strand and extends it with a cash-flow-based approach. Following the theoretical model of the optimal financing structure (internal and external financing), period-specific developments in corporate financing are analyzed in an international context on the basis of a purpose-built dataset.
The particular effects of the global financial crisis on listed companies from Germany, France, and Italy are examined empirically with a new approach. It becomes apparent that, judged by the capital structure alone, the crisis seemingly caused only minor changes in how multinational companies raised capital, whereas consideration of the financing structure reveals far-reaching dynamics. Financing volumes and the corresponding financing ratios were subject to enormous fluctuations over the course of the crisis, highlighting multi-layered challenges for international financial management.
The book addresses the question of whether and how internationally active companies can claim European foreign losses for domestic tax purposes. At the center of the legal debate is the development of the case law of the European Court of Justice (ECJ) since the Marks & Spencer case. With a growing number of court decisions, cross-border loss relief in the context of 'final losses' has become the subject of intense and controversial legal discussion. The present work takes critical stock of national tax law. The author examines the consequences of the ECJ case law and which approaches could pave a viable way toward the European idea of a common internal market.
The importance of infrastructure projects has increasingly come into the legislator's focus in the recent past. In recent years, the legislator has repeatedly made fundamental changes in the field of investment and investment tax law. In particular, the latest amendments through the AIFM-StAnpG, which adapts tax provisions to the German implementation of the AIFM Directive, again affect infrastructure investments. The study first deals with the concept of infrastructure in connection with infrastructure investments and then discusses the tax treatment of investors in such investments, with particular attention to the InvStG as amended by the AIFM-StAnpG. The analysis is conducted using infrastructure funds as examples.
The subject of this book is the VAT group (Umsatzsteuerorganschaft), a relic of the gross all-phase turnover tax that today is supposed to serve only administrative simplification. The rising number of supreme-court decisions on its application, however, already suggests that it holds more than mere administrative simplification for those affected. Based on its systematic position and historical development, the author analyzes the current application of the VAT group in national law. The aim of the analysis is to give its individual criteria a consistent definitional content, also against the background of the EU VAT Directive (MwStSystRL). From this, its implications can finally be derived, in particular within insolvency proceedings.
Since the introduction of antibiotics into the medical treatment of bacterial infectious diseases, there has been a race between the evolution of bacterial resistance and the development of effective antibiotics. While research into new antibiotics intensified up to the 1980s, multi-resistant pathogens are increasingly gaining the upper hand today. To successfully detect and combat individual pathogens, fundamental knowledge about the pathogen is indispensable. Bacterial proteins that are preferentially processed and presented by the immune system during an infection could be useful for the development of vaccines or targeted therapeutics. These immunodominant proteins would also be of interest for diagnostics. However, there is a lack of knowledge about specific antigens of many pathogenic bacteria that would allow unambiguous diagnosis of a single pathogen.
Therefore, in this work four different human pathogens were investigated by phage display: Neisseria gonorrhoeae, Neisseria meningitidis, Borrelia burgdorferi, and Clostridium difficile. Libraries were constructed from the genomic DNA of the four pathogens, and immunogenic proteins were isolated by repeated rounds of selection and amplification, so-called panning. For all pathogens except C. difficile, immunogenic proteins were isolated from the respective libraries. The identified proteins of N. meningitidis and B. burgdorferi were largely known but could be verified by phage display in this work. For N. gonorrhoeae, 21 potentially immunogenic oligopeptides were isolated, six of which were identified as new, previously undescribed proteins with immunogenic character. With the phage-displayed oligopeptides of the 21 immunogenic proteins, epitope mappings with different polyclonal antibodies were carried out to identify and characterize immunogenic regions in more detail. For ten proteins, linear epitopes were unambiguously identified with three polyclonal antibodies; for five further proteins, epitopes were detectable with at least one antibody. For further characterization of the identified epitopes, alanine scans were performed, which provide detailed information on the amino acids critical for antibody binding to the epitope.
Starting from the newly identified immunogenic protein NGO1634, 26 further proteins were selected on the basis of their functional similarity and analyzed by bioinformatic methods for their suitability for the development of a diagnostic application. After most proteins were excluded because of their localization, membrane topology, or unspecific protein sequence, scFv antibodies against eight proteins were generated by phage display and subsequently produced and characterized as scFv-Fc fusion antibodies.
The proteins and linear epitopes identified here could provide a starting point for the development of diagnostic or therapeutic applications. Linear epitope sequences are frequently used in vaccine development, so the epitopes of membrane proteins determined in this work in particular are interesting candidates for further investigations in this direction. Further studies might reveal as-yet-unknown virulence factors whose inhibition could have a decisive influence on infections.
En route towards advanced catalyst materials for the electrocatalytic water splitting reaction
(2016)
This thesis deals with the development of new types of catalysts based on pristine metals and ceramic materials and their application in the electrocatalytic water splitting reaction. To breathe life into this technology, cost-efficient, stable, and efficient catalysts are urgently needed. To this end, the preparation of Mn-, N-, S-, P-, and C-containing nickel materials has been investigated, together with the theoretical and electrochemical elucidation of their activity towards the hydrogen (and oxygen) evolution reaction. The Sabatier principle served as the principal guideline for the successful tuning of catalytic sites. Furthermore, two pathways were chosen to improve the electrocatalytic performance: first, the direct improvement of intrinsic properties through appropriate material selection, and second, the increase of the surface area of the catalytic material and thus of the number of active sites. By bringing materials with optimized hydrogen adsorption free energy onto high-surface-area supports, catalytic performances approaching the gold standard of noble metals became feasible. Across the different synthesis strategies applied (wet chemistry in organic solvents, ionothermal reaction, gas-phase reaction), one goal has been systematically pursued: to understand the driving mechanism of the growth. Moreover, a deeper understanding of the inherent properties and kinetic parameters of the catalytic materials has been gained.
Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing concerns the roles of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection, respectively). Yet rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene perception research has mainly focused on the factors determining fixation locations. The present thesis aimed to advance our understanding of how central and peripheral vision contribute to spatial and, in particular, to temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model of how foveal and peripheral processing might be coordinated to control fixation duration. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adopted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration.
Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias toward avoiding spatial-frequency-filtered scene regions as saccade targets. We demonstrate that the observed saccade-amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision in gaze control during scene viewing, particularly in the control of fixation duration. Our results entail new implications for computational models and for experimental research on scene perception.
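The core modeling idea described above — an autonomous saccade timer running in parallel with foveal analysis, with an interaction that can prolong a fixation — can be illustrated with a deliberately simple toy simulation. This is not the thesis model itself; all distributions and parameters below are invented for illustration:

```python
import random

# Toy sketch (not the thesis model) of parallel foveal processing and an
# autonomous saccade timer that interact to set fixation duration: unfinished
# foveal analysis inhibits (delays) the saccade, but only up to a ceiling,
# beyond which timing becomes stimulus-independent. Parameters are invented.

random.seed(1)

def mean_fixation_duration(foveal_difficulty, n=20000):
    total = 0.0
    for _ in range(n):
        timer = random.gammavariate(9, 25)                       # saccade timer (ms), mean 225
        foveal = random.gammavariate(4, 30) * foveal_difficulty  # foveal completion time (ms)
        delay = min(max(foveal - timer, 0.0), 150.0)             # bounded inhibition of the timer
        total += timer + delay
    return total / n

easy = mean_fixation_duration(1.0)
hard = mean_fixation_duration(2.0)
print(round(easy), round(hard))  # harder foveal processing -> longer mean fixation
```

The ceiling on the delay captures, in cartoon form, the first finding above: beyond a certain difficulty, fixation durations stop tracking the stimulus and fall back on stimulus-independent timing.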
The Elbe and its catchment are affected by climate change. Integrated environmental model systems can be used to analyze the chain of effects from projected climate changes to the water balance and the resulting nutrient inputs and loads for large catchments such as that of the Elbe. Case studies performed ad hoc with these model systems represent the current state of model development and model uncertainty and are thus static.
This work describes the first steps toward dynamizing climate impact analyses in the Elbe basin. This comprises, on the one hand, a plausibility check of impact simulations carried out with scenarios from the statistical scenario generator STARS, by comparing them with the impacts of more recent climate scenarios from the ISI-MIP project, which reflect the latest state of climate modeling. For this purpose, an integrated model system with a "frozen" development state is used; the climate impact models themselves remain unchanged. On the other hand, one component of the integrated model system — the ecohydrological model SWIM — is further developed into a "live" version, which is validated and improved through targeted tests against long-term measurement series from a lysimeter site and against current discharge records.
The following research questions are addressed: (i) What effects do different climate scenarios have on the water balance in the Elbe basin, and is a reassessment of the impact of climate change on the water balance necessary? (ii) What are the effects of climate change on nutrient inputs and loads in the Elbe basin and on the effectiveness of measures to reduce nutrient inputs? (iii) Using the available daily weather data (even from a very small number of stations), is a valid assessment of the current ecohydrological situation of the strongly heterogeneous Elbe catchment possible?
The recent scenarios confirm the direction, but not the magnitude, of the climate impacts: the decreases in mean annual total runoff and in monthly discharges at the gauges up to the middle of the century amount to about 30 % for the STARS scenario, whereas the decreases in the model studies based on the ISI-MIP scenario are only about 10 %. The main causes of this divergence are the differences in the precipitation projections and in the seasonal distribution of warming. In the STARS scenario, precipitation decreases for methodological reasons and winters warm more strongly than summers; in the ISI-MIP scenario, precipitation remains nearly stable and warming differs only slightly between summer and winter.
In general, nutrient inputs and loads decrease less than proportionally with discharge in both scenarios, with loads decreasing more strongly than inputs. The concrete effects of the discharge changes are small, in the single-digit percentage range, as are the differences between the scenarios. The effect of two selected measures for reducing nutrient inputs and loads likewise differs only slightly between different discharge conditions, represented by climate scenarios of varying wetness.
The answers to the first two research questions show that updating climate scenarios within an otherwise "frozen" ensemble of ecohydrological data and models is an important check for the plausibility of climate impact analyses. It forms the methodological basis for the conclusion that a reassessment of the climate impacts is necessary for water quantity, but not for nutrient inputs and loads.
The validation studies carried out with SWIM-live to answer the third research question reveal discrepancies at the lysimeter site and in the discharges from the Saale and Spree sub-catchments. These can partly be explained by the necessary interpolation distance of the weather data and by the influence of water management measures. Overall, the validation results show that even the pilot version of SWIM-live can be used for an ecohydrological assessment of the water balance in the Elbe catchment. SWIM-live allows simulated data to be viewed and assessed immediately, so modeling uncertainties are directly exposed and can consequently be reduced. On the one hand, densifying the meteorological input data by using about 700 instead of 19 climate and precipitation stations improved the results; on the other hand, SWIM-live was used exemplarily for a cycle of targeted model improvement and area-wide verification of the simulation results.
The individual studies each contribute to the dynamization of climate impact analyses in the Elbe basin. The immediate motivation was the flawed methodological foundations of STARS, but the usefulness of dynamization is not tied to this particular occasion: it rests on the fundamental insight that ad hoc scenario analyses always involve pragmatic simplifications that must be continuously re-examined.
The aim of this thesis was the elucidation of different ionization methods (resonance-enhanced multiphoton ionization, REMPI; electrospray ionization, ESI; atmospheric pressure chemical ionization, APCI) in ion mobility (IM) spectrometry. To gain a better understanding of the ionization processes, several spectroscopic, mass spectrometric, and theoretical methods were also used. Another focus was the development of experimental techniques, including a high-resolution spectrograph and various combinations of IM and mass spectrometry.
The novel high-resolution 2D spectrograph achieves spectroscopic resolutions in the range of commercial echelle spectrographs; the lowest peak full width at half maximum achieved was 25 pm. The 2D spectrograph is based on the wavelength separation of light by the combination of a prism and a grating in one dimension and an etalon in the second dimension. This instrument was successfully employed for the acquisition of Raman and laser-induced breakdown spectra.
Different spectroscopic methods (light scattering and fluorescence spectroscopy) permitting spatial as well as spectral resolution were used to investigate the release of ions in the electrospray. The investigation is based on the 50 nm shift of the fluorescence band of rhodamine 6G ions during their transfer from the electrospray droplets to the gas phase.
A newly developed ionization chamber operating at reduced pressure (0.5 mbar) was coupled to a time-of-flight mass spectrometer. After REMPI of H2S, an ionization chemistry analogous to that of H2O was observed with this instrument. Besides H2S+ and its fragments, H3S+ and protonated analyte ions could be observed as a result of proton-transfer reactions.
For the elucidation of the peaks in IM spectra, a combination of IM spectrometer and linear quadrupole ion trap mass spectrometer was developed. The instrument can be equipped with various ionization sources (ESI, REMPI, APCI) and was used for the characterization of the peptide bradykinin and the neuroleptic promazine.
The ionization of explosive compounds in an APCI source based on soft X-radiation was investigated in a newly developed ionization chamber attached to the ion trap mass spectrometer. The major primary and secondary reactions could be characterized, and ions of the explosive compounds could be identified and assigned to the peaks in IM spectra. The assignment is based on the comparison of experimentally determined and calculated ion mobilities. The calculation methods currently available exhibit large deviations, especially for anions. Therefore, based on an assessment of the available methods, a novel hybrid method was developed and characterized.
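The bridge between a measured mobility and a calculated one is the Mason–Schamp relation, which converts a reduced mobility K0 into a collision cross section (and vice versa). A minimal sketch with illustrative values (not thesis data):

```python
import math

# Mason-Schamp relation linking reduced ion mobility K0 to the collision
# cross section (CCS) Omega:
#   Omega = (3 z e / (16 N0 K0)) * sqrt(2 pi / (mu kB T))
# with mu the reduced mass of ion and drift gas. Values are illustrative.

kB = 1.380649e-23        # J K^-1, Boltzmann constant
e = 1.602176634e-19      # C, elementary charge
N0 = 2.6867811e25        # m^-3, Loschmidt constant (gas number density at STP)
amu = 1.66053906660e-27  # kg, atomic mass unit

def ccs_from_K0(K0_cm2, m_ion_amu, m_gas_amu, T=298.0, z=1):
    """CCS in m^2 from a reduced mobility K0 given in cm^2 V^-1 s^-1."""
    K0 = K0_cm2 * 1e-4                                           # -> m^2 V^-1 s^-1
    mu = m_ion_amu * m_gas_amu / (m_ion_amu + m_gas_amu) * amu   # reduced mass (kg)
    return (3.0 * z * e / (16.0 * N0 * K0)) * math.sqrt(2.0 * math.pi / (mu * kB * T))

# Example: a singly charged 120 amu ion drifting in N2 (28 amu)
omega = ccs_from_K0(K0_cm2=1.9, m_ion_amu=120.0, m_gas_amu=28.0)
print(round(omega * 1e20, 1))  # CCS in square Angstroms, ~118
```

Measured mobilities fix the left-hand side; trajectory- or projection-based methods compute Omega from candidate structures, and the comparison of the two is exactly where the deviations for anions become visible.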
For years, political disaffection and the number of people who feel excluded from society have been rising. Can grassroots activation through neighborhood management (Quartiersmanagement) and community organizing counteract these trends? Does enabling the social participation of disadvantaged population groups foster the formation of social capital in Putnam's sense? To answer these questions, the existing literature was analyzed and numerous expert interviews were conducted.
Concessions play a decisive role in buyer-supplier negotiations, because the parties involved usually reach a mutually accepted outcome only through concessions, that is, through a sequence of accommodating offers. Since negotiators give away part of their individual bargaining pie by making concessions and can gain part of it through the other side's concessions, concessions substantially influence negotiators' bargaining performance, which in turn demonstrably affects companies' profitability.
Against this background, it is of interest to both negotiation research and negotiation practice to examine when and how negotiators should make concessions in order to optimize their own bargaining performance. In the present study, the author addresses this question by (1) examining whether negotiators should make the first concession in a negotiation, (2) analyzing which concession pattern negotiators should follow, and (3) determining the advantageousness of making concessions in the form of package offers in negotiations over multiple issues (e.g., price, quantity, and delivery terms). By working through these sub-questions, the author closes gaps in negotiation research and derives relevant implications for systematic concession management in negotiation practice.
Buyer-seller negotiations have a significant impact on a company’s profitability, which makes practitioners aim at maximizing their performance. One lever for increasing bargaining performance is to pursue a clearly defined aspiration, i.e. one’s most desired outcome. In this context, the author explores the role of such aspirations in the three negotiation phases: preparation, bargaining, and striking a deal. She investigates determinants of aspirations, unintended consequences such as unethical bargaining behavior, and the consequences of overly ambitious aspirations. As a result, she not only closes existing gaps in negotiation research but also derives valuable implications for practitioners.
Foam fractionation of surfactant and protein solutions is a process for separating surface-active molecules from each other on the basis of differences in their surface activity. The process is based on forming bubbles in a mixed solution, followed by the detachment and rise of the bubbles through a volume of this solution, and finally the formation of a foam layer on top of the solution column. A systematic analysis of the whole process therefore comprises, first, investigations of the formation and growth of single bubbles in solution, which corresponds to the principle of the well-known bubble pressure tensiometry. The second stage of the fractionation process is the detachment of a single bubble from a pore or capillary tip and its rise through the aqueous solution. The third and final stage is the formation and stabilization of the foam created by these bubbles: the adsorption layer formed at the growing bubble surface is carried upward, modified during bubble rise, and finally ends up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH, and ionic strength in order to describe the accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow the adsorption mechanism to be understood: it is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface, the same mechanism as generally discussed for surfactant molecules. However, interesting peculiarities have been observed in the protein adsorption kinetics at sufficiently short adsorption times. First of all, at short adsorption times the surface tension remains constant for a while before it decreases, as expected, due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions, the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to a few mN/m, was instead explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of the entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
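For diffusion-controlled adsorption, the short-time limit of the Ward–Tordai equation gives the surface excess as Gamma(t) = 2c·sqrt(Dt/π). Reading the induction time as the time needed to reach some critical coverage then directly explains why it shortens with concentration. A sketch with illustrative parameter values (not fitted thesis values):

```python
import math

# Short-time (Ward-Tordai) limit of diffusion-controlled adsorption:
#   Gamma(t) = 2 * c * sqrt(D * t / pi)
# All parameter values below are illustrative, not thesis data.

def gamma_short_time(c, D, t):
    """Surface excess (mol m^-2) after time t for bulk concentration c (mol m^-3)."""
    return 2.0 * c * math.sqrt(D * t / math.pi)

c = 1e-3           # mol m^-3 (~1e-6 mol/L, a dilute protein solution)
D = 1e-10          # m^2 s^-1, typical protein diffusion coefficient
gamma_star = 1e-7  # mol m^-2, hypothetical coverage where surface tension starts to drop

# Inverting Gamma(t_ind) = gamma_star gives a simple induction-time estimate:
t_ind = math.pi * (gamma_star / (2.0 * c)) ** 2 / D
print(round(t_ind, 1))  # -> 78.5 seconds

# Doubling the bulk concentration shortens this induction time fourfold,
# consistent with shorter induction times at higher protein concentration.
t_ind_2c = math.pi * (gamma_star / (2.0 * 2.0 * c)) ** 2 / D
```

The quadratic 1/c² dependence of t_ind is the signature of the diffusion-controlled mechanism discussed above.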
The experiments dedicated to the local velocity of rising air bubbles were performed over a broad range of BLG concentrations, pH values and ionic strengths. In addition, rising bubble experiments were done with surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is far more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on a time scale of seconds, while dynamic surface tensions show no measurable changes on this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present, owing to the complexity of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum of the profile as a characteristic, more quantitative parameter of the dynamic adsorption layer at the bubble surface.
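The retardation of a rising bubble by adsorbed molecules can be bracketed by two classical low-Reynolds-number limits: the Hadamard-Rybczynski velocity of a clean bubble with a fully mobile surface, and the Stokes velocity of a rigid sphere, which a surfactant-covered bubble approaches as Marangoni stresses immobilize its interface. This sketch covers only these limiting cases (real rising-bubble experiments are at higher Reynolds numbers, and the bubble radius chosen is hypothetical):

```python
def terminal_velocity_rigid(radius, d_rho, mu, g=9.81):
    """Stokes terminal velocity of a rigid sphere, Re << 1:
    the limit of a bubble whose surface is fully immobilized
    by adsorbed surfactant or protein."""
    return 2.0 * d_rho * g * radius**2 / (9.0 * mu)

def terminal_velocity_clean(radius, d_rho, mu, g=9.81):
    """Hadamard-Rybczynski terminal velocity of a clean gas
    bubble (fully mobile surface, inner viscosity neglected),
    Re << 1."""
    return d_rho * g * radius**2 / (3.0 * mu)

# Hypothetical 0.1 mm bubble in water (density difference ~998 kg/m^3,
# viscosity 1 mPa s): a clean bubble rises 1.5x faster than the
# same bubble with a rigid, surfactant-covered surface.
a, d_rho, mu = 1e-4, 998.0, 1.0e-3
print(terminal_velocity_clean(a, d_rho, mu) / terminal_velocity_rigid(a, d_rho, mu))  # 1.5
```

The measured local velocity falling from near the clean-bubble limit towards the rigid-sphere limit within seconds is exactly the sensitivity to trace adsorption that the surface tension measurements cannot resolve.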
The studies with protein-surfactant mixtures demonstrate clearly that the complexes formed by the two compounds have a surface activity different from that of the original native protein molecules and therefore lead to a completely different retardation of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of an increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules; these effects can, however, differ between the rising bubble velocity and the equilibrium adsorption isotherms. The differences are not yet fully understood, but they prompt a discussion of the structure of the protein adsorption layer under dynamic conditions as compared to the equilibrium state.
The third main stage of the fractionation process is the formation and characterization of protein foams from BLG solutions at different pH values and ionic strengths. Naturally, a minimum BLG concentration is required to form foams; this minimum concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although the hydrophobicity, and hence the surface activity, should be highest at the isoelectric point (about pH 5 for BLG), neither the concentration and ionic strength effects on the rising velocity profile nor those on foamability and foam stability show a maximum there. This is another strong argument that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt their respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. Since foam consists to a large extent of foam lamellae (two adsorption layers enclosing a liquid core), the foamate collected from the foaming experiments should be enriched in the stabilizing molecules. To determine the foamate concentration, the very sensitive bubble rising velocity profile method was again applied; it works for any type of surface-active material, including technical surfactants or protein isolates whose exact composition is unknown.