"Wortabruf im Handumdrehen"?
(2017)
Shape change is a fundamental process occurring in biological tissues during embryonic development and regeneration of tissues and organs. This process is regulated by cells that are constrained within a complex environment of biochemical and physical cues. The spatial constraint due to geometry has a determining role on tissue mechanics and the spatial distribution of force patterns that, in turn, influences the organization of the tissue structure. An understanding of the underlying principles of tissue organization may have wide consequences for the understanding of healing processes and the development of organs and, as such, is of fundamental interest for the tissue engineering community.
This thesis aims to further our understanding of how the collective behaviour of cells is influenced by the 3D geometry of their environment. Previous research on the role of geometry in tissue growth has focused mainly on flat surfaces or on substrates where at least one of the principal curvatures is zero. In the present work, tissue growth from MC3T3-E1 pre-osteoblasts was investigated on surfaces of controlled mean curvature.
One key aspect of this thesis was the development of substrates of controlled mean curvature and their visualization in 3D. It was demonstrated that substrates of controlled mean curvature suitable for cell culture can be fabricated using liquid polymers and surface tension effects.
Using these substrates, it was shown that the mean surface curvature has a strong impact on the rate of tissue growth and on the organization of the tissue structure. Not only does the amount of tissue produced by the cells (i.e. the growth rate) depend on the mean curvature of the substrate, but the tissue surface also behaves like a viscous fluid with an equilibrium shape governed by the Young-Laplace law. More tissue was formed on highly concave surfaces than on flat or convex surfaces.
Motivated by these observations, an analytical model was developed in which the rate of tissue growth is a function of the mean curvature; this model successfully describes the observed growth kinetics. It was also able to reproduce the growth kinetics of previous experiments in which tissues were cultured in straight-sided prismatic pores.
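A curvature-controlled growth law of this kind can be pictured with a minimal numerical sketch (an illustration under assumed parameters, not the model or code from the thesis), in which the local growth velocity of the tissue surface is taken to be proportional to its mean curvature, here for tissue filling a cylindrical pore:

```python
# Toy model: tissue lining a cylindrical pore of radius r grows inward
# with velocity proportional to the mean curvature H of its concave free
# surface (hypothetical rate constant k; H taken positive for concavity).

def grow_in_cylindrical_pore(r0, k=1.0, dt=1e-3, t_end=5.0):
    """Euler integration of dr/dt = -k * H with H = 1 / (2 r) for a
    cylindrical surface of radius r (a flat surface, H = 0, does not grow)."""
    r, t = r0, 0.0
    radii = [r]
    while t < t_end and r > 0.05:
        H = 1.0 / (2.0 * r)   # mean curvature of the free tissue surface
        r -= k * H * dt       # the concave surface advances inward
        t += dt
        radii.append(r)
    return radii

narrow = grow_in_cylindrical_pore(r0=0.5)  # strongly curved pore
wide = grow_in_cylindrical_pore(r0=1.0)    # weakly curved pore
```

In this toy setting the more strongly curved (narrower) pore fills in faster, mirroring the observation that more tissue forms on highly concave surfaces.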
A second part of this thesis focuses on the tissue structure, which influences the mechanical properties of the mature bone tissue. Since the extracellular matrix is produced by the cells, the cell orientation has a strong impact on the direction of the tissue fibres. In addition, it was recently shown that some cell types exhibit collective alignment similar to liquid crystals.
Based on this observation, a computational model of self-propelled active particles was developed to explore in an abstract manner how the collective behaviour of cells is influenced by 3D curvature. It was demonstrated that the 3D curvature has a strong impact on the self-organization of active particles and gives, therefore, first insights into the principles of self-organization of cells on curved surfaces.
Dark matter (DM) has not yet been directly observed, but it has a very solid theoretical basis. There are observations that provide indirect evidence, such as galactic rotation curves showing that galaxies rotate too fast to be held together by their visible constituents, and galaxy clusters that bend the light from background galaxies more strongly than expected from the mass inferred from what is visible. These observations, among many others, can be explained by theories that include DM. The missing piece is to detect something that can be explained exclusively by DM. Direct observation in a particle accelerator is one way; indirect detection using telescopes is another. This thesis focuses on the latter method.
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is a telescope array that detects Cherenkov radiation. Theory predicts that DM particles annihilate into, e.g., a γγ pair and create a distinctive energy spectrum when detected by such telescopes, i.e., a monoenergetic line at the energy corresponding to the particle mass. This so-called "smoking-gun" signature is sought with a sliding-window line search within the sub-range ∼0.3–10 TeV of the VERITAS energy range, ∼0.01–30 TeV.
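The idea of a sliding-window line search can be conveyed with a deliberately simplified toy (not the VERITAS analysis chain): slide a window across binned counts, estimate the background from the flanking sidebands, and keep the window with the largest excess significance.

```python
# Simplified sliding-window line search over a binned energy spectrum.
# The simple (N_on - N_bg) / sqrt(N_bg) significance stands in for the
# proper likelihood-based treatment used in real analyses.
import math

def sliding_window_line_search(counts, window=3):
    """Return (start_index, significance) of the most significant window.
    Background per window is estimated from the bins just outside it."""
    best = (None, 0.0)
    for i in range(1, len(counts) - window - 1):
        n_on = sum(counts[i:i + window])
        # crude sideband background: mean of the two flanking bins
        side = (counts[i - 1] + counts[i + window]) / 2.0
        n_bg = side * window
        if n_bg > 0:
            sig = (n_on - n_bg) / math.sqrt(n_bg)  # excess significance
            if sig > best[1]:
                best = (i, sig)
    return best

# Flat background of 100 counts/bin with an injected "line" at bins 10-12.
counts = [100] * 20
for j in (10, 11, 12):
    counts[j] += 40
idx, sig = sliding_window_line_search(counts)
```

The scan flags the window starting at bin 10, where the injected line sits; in a real search the resulting significance would additionally be corrected for the number of trials, as described below.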
Standard analysis within the VERITAS collaboration uses Hillas analysis and look-up tables, obtained by analysing particle simulations, to calculate the energy of the particle causing the Cherenkov shower. In this thesis, an improved analysis method was used: modelling each shower as a 3D Gaussian should improve the quality of the energy reconstruction. Five dwarf spheroidal galaxies were chosen as targets, with a total of ∼224 hours of data. The targets were analysed both individually and stacked. Particle simulations were based on two simulation packages, CARE and GrISU.
Improvements of up to a few percent each have been made to the energy resolution and bias correction in comparison to the standard analysis. Nevertheless, no line with a relevant significance has been detected. The most promising line is at an energy of ∼422 GeV with an upper-limit cross section of 8.10 · 10^−24 cm^3 s^−1 and a significance of ∼2.73 σ before trials correction and ∼1.56 σ after. Upper-limit cross sections have also been calculated for the γγ annihilation process and four other channels. The limits, ranging from ∼8.56 · 10^−26 to 6.61 · 10^−23 cm^3 s^−1, are in line with current limits obtained using other methods. Future larger telescope arrays, such as the upcoming Cherenkov Telescope Array (CTA), will provide better results with the help of this analysis method.
Nowadays, graph data models are employed when relationships between entities have to be stored and queried. For each entity, such a graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries use graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures; these subgraphs are called graph pattern matches. However, graph pattern matching based on subgraph isomorphism is NP-complete. Thus, graph queries can suffer from long response times as the number of entities and relationships in the graph data, or the size of the graph patterns, increases.
One way to improve graph query performance is to employ graph views that store precomputed graph pattern matches of complex graph queries for later retrieval. However, when the graph data changes, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived. This maintenance adds to the graph views subgraphs that now satisfy a graph pattern and removes subgraphs that no longer do.
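As a toy illustration of such incremental maintenance (an assumed example, not the algorithm contributed by the thesis), the matches of a two-hop pattern a → b → c can be kept consistent by updating the stored match set whenever a single edge is added or removed:

```python
# Incremental maintenance of a tiny graph view: all triples (a, b, c)
# such that edges a->b and b->c exist. Each edge change updates only
# the matches it can affect, rather than recomputing from scratch.

class TwoHopView:
    def __init__(self):
        self.out = {}          # adjacency: node -> set of successors
        self.inn = {}          # reverse adjacency: node -> set of predecessors
        self.matches = set()   # all (a, b, c) with edges a->b and b->c

    def add_edge(self, u, v):
        self.out.setdefault(u, set()).add(v)
        self.inn.setdefault(v, set()).add(u)
        # the new edge can play the role of a->b or of b->c in a match
        for c in self.out.get(v, ()):
            self.matches.add((u, v, c))
        for a in self.inn.get(u, ()):
            self.matches.add((a, u, v))

    def remove_edge(self, u, v):
        self.out.get(u, set()).discard(v)
        self.inn.get(v, set()).discard(u)
        self.matches = {m for m in self.matches
                        if (m[0], m[1]) != (u, v) and (m[1], m[2]) != (u, v)}

view = TwoHopView()
view.add_edge("a", "b")
view.add_edge("b", "c")   # completes the match ("a", "b", "c")
```

Each edge change touches only the matches it can affect, instead of re-running the full pattern matching over the whole graph; the memory cost is the stored match set itself, which is exactly the trade-off discussed below.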
Current approaches to incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries via a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Rete networks therefore suffer from high memory consumption, because they store a large number of partial graph pattern matches. However, it is precisely these partial matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new ones. Other kinds of discrimination networks exist that can outperform Rete networks in both time and space, but they are currently not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
Recognizing, understanding, and responding to quantities are essential skills for human beings. We can easily communicate quantities, and we are extremely efficient at adapting our behavior in number-related tasks. One common task is to compare quantities. We also use symbols such as digits in number-related tasks; to solve tasks involving digits, we must rely on previously learned internal number representations.
This thesis elaborates on the process of number comparison with the use of noisy mental representations of numbers, the interaction of number and size representations and how we use mental number representations strategically. For this, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger and to respond with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and variance of response time across the full matrix of 72 digit pairs. The random walk model also predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, departing from standard assumptions often made in random walk models, this account required asymmetric step-size distributions of the induced random walks to capture this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework for investigating the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. Comparing the fits of models with linear and logarithmic mappings favors the logarithmic mapping.
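The flavor of such a model can be conveyed with a small simulation (hypothetical parameters, not the fitted model from the thesis): evidence drifts toward a decision boundary at a rate proportional to the difference of the digits' logarithmic internal magnitudes, so numerically closer pairs produce slower decisions.

```python
# Toy random walk model of two-digit comparison with a logarithmic
# internal magnitude scale (all parameter values are illustrative).
import math, random

def compare_digits(d1, d2, rng, drift_scale=0.3, noise=1.0, threshold=10.0):
    """Accumulate noisy evidence until a decision boundary is hit.
    Returns (choice, n_steps); choice is 1 if d1 is judged larger."""
    drift = drift_scale * (math.log(d1) - math.log(d2))  # log-scale mapping
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, noise)
        steps += 1
    return (1 if x > 0 else 2), steps

def mean_rt(d1, d2, n=300, seed=1):
    rng = random.Random(seed)
    return sum(compare_digits(d1, d2, rng)[1] for _ in range(n)) / n

rt_far = mean_rt(9, 1)   # large log-distance: fast decisions
rt_near = mean_rt(5, 4)  # small log-distance: slow decisions
```

Averaging the step counts over many simulated trials reproduces a numerical distance effect: mean decision time grows as the logarithmic distance between the digits shrinks.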
Finally, we discuss how our findings can help interpret complex results (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, the saccadic overshoot effect: the amplitude of the saccadic responses decreases with numerical distance.
For the second study, an experimental design was developed that allows signal detection theory to be applied to a task in which participants had to decide whether a presented digit was physically smaller or larger. An open question is whether the benefit in congruent conditions (numerical magnitude and physical size) reflects better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases induced by number magnitude. Signal detection theory is well suited to distinguishing between these two alternatives: it provides two parameters, sensitivity and response bias. Changes in sensitivity reflect actual differences in perceptual processing, whereas changes in response bias reflect strategic effects such as stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the effect.
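The two parameters can be computed from hit and false-alarm rates in the standard equal-variance Gaussian model (the rates below are illustrative numbers, not data from the study):

```python
# Standard signal detection computation: sensitivity d' and criterion c
# from hit and false-alarm rates via the inverse normal CDF.
from statistics import NormalDist

def sdt_parameters(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA); response bias c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2.0

# Hypothetical rates for a congruent and an incongruent condition:
d_congruent, c_congruent = sdt_parameters(0.85, 0.15)
d_incongruent, c_incongruent = sdt_parameters(0.75, 0.25)
```

A genuine perceptual advantage in congruent trials shows up as a larger sensitivity d' at an unchanged criterion c, whereas a pure response bias would shift c while leaving d' untouched.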
Third, participants performed a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied while the local transition probability of the relevant attribute (parity) and the global occurrence probability of each stimulus were kept constant. Participants were quite sensitive to the underlying local transition probability of the irrelevant attribute: a performance gain was observed for repetitions of the irrelevant attribute, relative to changes of it, in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the benefits and costs typical of standard cueing studies.
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
This cumulative dissertation consists of five chapters. In terms of research content, the thesis can be divided into two parts. Part one examines local interactions and spillover effects between small regional governments using spatial econometric methods. Part two focuses on patterns within municipalities and examines how two institutions of citizen participation, elections and local petitions, influence local housing policies.
Analyse der Funktion der dualen Lokalisation der 3-Mercaptopyruvat Sulfurtransferase im Menschen
(2017)
Rubisco catalyses the first step of CO2 assimilation into plant biomass. Despite its crucial role, it is notorious for its low catalytic rate and its tendency to fix O2 instead of CO2, giving rise to a toxic product that needs to be recycled in a process known as photorespiration. Since almost all our food supply relies on Rubisco, even small improvements in its specificity for CO2 could lead to an improvement of photosynthesis and ultimately, crop yield. In this work, we attempted to improve photosynthesis by decreasing photorespiration with an artificial CCM based on a fusion between Rubisco and a carbonic anhydrase (CA).
A preliminary set of plants contained fusions between one of two CAs, bCA1 and CAH3, and the N- or C-terminus of RbcL connected by a small flexible linker of 5 amino acids. Subsequently, further fusion proteins were created between RbcL C-terminus and bCA1/CAH3 with linkers of 14, 23, 32, and 41 amino acids. The transplastomic tobacco plants carrying fusions with bCA1 were able to grow autotrophically even with the shortest linkers, albeit at a low rate, and accumulated very low levels of the fusion protein. On the other hand, plants carrying fusions with CAH3 were autotrophic only with the longer linkers. The longest linker permitted nearly wild-type like growth of the plants carrying fusions with CAH3 and increased the levels of fusion protein, but also of smaller degradation products.
The fusion of catalytically inactive CAs to RbcL did not cause a phenotype different from the fusions with catalytically active CAs, suggesting that the selected CAs were either not active in the fusion with RbcL or that their activity did not affect CO2 assimilation. However, the fusions did not abolish RbcL catalytic activity, as shown by autotrophic growth, gas exchange and in vitro activity measurements. Furthermore, the Rubisco carboxylation rate and specificity for CO2 were not altered in some of the fusion proteins, suggesting that, despite the defect in RbcL folding or assembly caused by the fusions, the addition of 60-150 amino acids to RbcL does not affect its catalytic properties. Instead, most growth defects of the plants carrying RbcL-CA fusions are related to their reduced Rubisco content, likely caused by impaired RbcL folding or assembly. Finally, we found that fusions to the RbcL C-terminus were better tolerated than fusions to the N-terminus, and that increasing the length of the linker relieved the growth impairment imposed by the fusion. Together, the results of this work provide relevant findings for future Rubisco engineering.
The central aim of this thesis is to demonstrate the benefits of innovative frequency-based methods to better explain the variability observed in lake ecosystems. Freshwater ecosystems may be the most threatened part of the hydrosphere. Lake ecosystems are particularly sensitive to changes in climate and land use because they integrate disturbances across their entire catchment. This makes understanding the dynamics of lake ecosystems an intriguing and important research priority. This thesis adds new findings to the baseline knowledge regarding variability in lake ecosystems. It provides a literature-based, data-driven and methodological framework for the investigation of variability and patterns in environmental parameters in the time frequency domain.
Observational data often show considerable variability in the environmental parameters of lake ecosystems. This variability is mostly driven by a plethora of periodic and stochastic processes inside and outside the ecosystems. These run in parallel and may operate at vastly different time scales, ranging from seconds to decades. In measured data, all of these signals are superimposed, and dominant processes may obscure the signals of other processes, particularly when analyzing mean values over long time scales. Dominant signals are often caused by phenomena at long time scales like seasonal cycles, and most of these are well understood in the limnological literature. The variability injected by biological, chemical and physical processes operating at smaller time scales is less well understood. However, variability affects the state and health of lake ecosystems at all time scales. Besides measuring time series at sufficiently high temporal resolution, the investigation of the full spectrum of variability requires innovative methods of analysis.
Analyzing observational data in the time frequency domain makes it possible to identify variability at different time scales and facilitates its attribution to specific processes. The merit of this approach is demonstrated in three case studies. The first uses a conceptual analysis to demonstrate the importance of time scales for detecting ecosystem responses to climate change. These responses often occur during critical time windows in the year, may exhibit a time lag, and can be driven by the exceedance of thresholds in their drivers; this can only be detected if the temporal resolution of the data is high enough. The second study applies Fast Fourier Transform spectral analysis to two decades of daily water temperature measurements to show how temporal and spatial scales of water temperature variability can serve as an indicator of mixing in a shallow, polymictic lake. The final study uses wavelet coherence as a diagnostic tool for limnology on a multivariate high-frequency data set recorded in a polymictic lake in 2009, between the onset of ice cover and a cyanobacterial summer bloom. Synchronicities among limnological and meteorological time series in narrow frequency bands were used to identify and disentangle the prevailing limnological processes.
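As a minimal illustration of the spectral-analysis step (synthetic data, not the thesis data set), a discrete Fourier power spectrum recovers the dominant period of a daily water temperature record with an annual cycle:

```python
# Recover the dominant period of a synthetic daily water temperature
# series via a naive DFT power spectrum (illustrative only).
import math, cmath

def power_spectrum(x, kmax):
    """Naive DFT power at the first kmax nonzero frequencies
    (fine for a short illustrative series)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]          # remove the mean (zero-frequency peak)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, kmax + 1)]

n_days = 2 * 365                       # two years of daily values
temps = [10 + 8 * math.sin(2 * math.pi * t / 365) for t in range(n_days)]
spec = power_spectrum(temps, kmax=10)
k_peak = spec.index(max(spec)) + 1     # frequency index with most power
dominant_period = n_days / k_peak      # recovered cycle length in days
```

In a real analysis one would use an FFT library and window the data; the naive DFT here just keeps the example self-contained.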
Beyond the novel empirical findings reported in the three case studies, this thesis aims more generally to be of interest to researchers dealing with the increasingly available time series data at high temporal resolution. A set of innovative methods for attributing patterns to processes, their drivers and constraints is provided to help make more efficient use of this kind of data.
This dissertation explores whether the processing of ellipsis is affected by changes in the complexity of the antecedent, either due to added linguistic material or to the presence of a temporary ambiguity. Murphy (1985) hypothesized that ellipsis is resolved via a string copying procedure when the antecedent is within the same sentence, and that copying longer strings takes more time. Such an account also implies that the antecedent is copied without its structure, which in turn implies that recomputing its syntax and semantics may be necessary at the ellipsis gap. Alternatively, several accounts predict null effects of antecedent complexity, as well as no reparsing. These either involve a structure copying mechanism that is cost-free and whose finishing time is thus independent of the form of the antecedent (Frazier & Clifton, 2001), treat ellipsis as a pointer into content-addressable memory with direct access (Martin & McElree, 2008, 2009), or assume that one structure is ‘shared’ between antecedent and gap (Frazier & Clifton, 2005).
In a self-paced reading study on German sluicing, temporarily ambiguous garden-path clauses were used as antecedents, but no evidence of reparsing in the form of a slowdown at the ellipsis site was found. Instead, the results suggest that antecedents which had been reanalyzed from an initially incorrect structure were easier to retrieve at the gap. This finding can be explained within the framework of cue-based retrieval parsing (Lewis & Vasishth, 2005), where additional syntactic operations on a structure yield memory reactivation effects.
Two further self-paced reading studies, on German bare argument ellipsis and English verb phrase ellipsis, investigated whether adding linguistic content to the antecedent increases processing times for the ellipsis, and whether insufficiently demanding comprehension tasks may have been responsible for earlier null results (Frazier & Clifton, 2000; Martin & McElree, 2008). It has also been suggested that increased antecedent complexity should shorten rather than lengthen retrieval times by providing more unique memory features (Hofmeister, 2011). Both experiments failed to yield reliable evidence that antecedent complexity affects ellipsis processing times in either direction, irrespective of task demands.
Finally, two eye-tracking studies probed more deeply into the proposed reactivation-induced speedup found in the first experiment. The first study used three different kinds of French garden-path sentences as antecedents, with two of them failing to yield evidence for reactivation. Moreover, the third sentence type showed evidence suggesting that having failed to assign a structure to the antecedent leads to a slowdown at the ellipsis site, as well as regressions towards the ambiguous part of the sentence. The second eye-tracking study used the same materials as the initial self-paced reading study on German, with results showing a pattern similar to the one originally observed, with some notable differences.
Overall, the experimental results are compatible with the view that adding linguistic material to the antecedent has no or very little effect on the ease with which ellipsis is resolved, which is consistent with the predictions of cost-free copying, pointer-based approaches and structure sharing. Additionally, effects of the antecedent’s parsing history on ellipsis processing may be due to reactivation, the availability of multiple representations in memory, or complete failure to retrieve a matching target.
Arbeit vor Rente
(2017)
Even before the founding of the state, the SED laid the groundwork for a new system of social security and transformed the traditional welfare state into a "workfare state". Carolin Wiethoff examines the effects of this policy on people who, owing to reduced earning capacity, could no longer work or could work only to a limited extent. Her study investigates, over a period of 40 years, social security in cases of invalidity and socio-political initiatives for occupational rehabilitation. The two areas were closely linked, because the political leadership in the GDR always sought to integrate as many citizens as possible into the labor process and to avoid permanent invalidity. Alongside the at times conflict-ridden interplay of the individual actors in the party and state apparatus, the study focuses on practice within enterprises, since social policy in the GDR was strongly centered on the workplace. The example of the Eisenhüttenkombinat Ost, one of the GDR's key enterprises, illustrates the organization of company health and social services and the difficulties encountered in implementing state directives.
Researchers have taken many approaches to studying the complexities of the mammalian taste system; however, the molecular mechanisms of taste processing in the early structures of the central taste pathway remain unclear. Recently, the Arc catFISH (cellular compartment analysis of temporal activity by fluorescent in situ hybridisation) method has been used in our lab to study neural activation following taste stimulation in the first central structure of the taste pathway, the nucleus of the solitary tract (NTS). This method uses the immediate early gene Arc as a neural activity marker to identify taste-responsive neurons. Arc plays a critical role in memory formation and is necessary for conditioned taste aversion memory formation. In the NTS, only bitter taste stimulation resulted in increased Arc expression; stimulation with tastants of other taste qualities did not. The primary target of gustatory NTS neurons is the parabrachial nucleus (PbN), and, like Arc, the PbN plays an important role in conditioned taste aversion learning.
The aim of this thesis is to investigate Arc expression in the PbN following taste stimulation to elucidate the molecular identity and function of Arc expressing, taste- responsive neurons. Naïve and taste-conditioned mice were stimulated with tastants from each of the five basic taste qualities (sweet, salty, sour, umami, and bitter), with additional bitter compounds included for comparison. The expression patterns of Arc and marker genes were analysed using in situ hybridisation (ISH). The Arc catFISH method was used to observe taste-responsive neurons following each taste stimulation. A double fluorescent in situ hybridisation protocol was then established to investigate possible neuropeptide genes involved in neural responses to taste stimulation.
The results showed that bitter taste stimulation induces increased Arc expression in the PbN in naïve mice. This was not true for other taste qualities. In mice conditioned to find an umami tastant aversive, subsequent umami taste stimulation resulted in an increase in Arc expression similar to that seen in bitter-stimulated mice. Taste-responsive Arc expression was denser in the lateral PbN than the medial PbN. In mice that received two temporally separated taste stimulations, each stimulation time-point showed a distinct population of Arc-expressing neurons, with only a small population (10 – 18 %) of neurons responding to both stimulations. This suggests that either each stimulation event activates a different population of neurons, or that Arc is marking something other than simple cellular activation, such as long-term cellular changes that do not occur twice within a 25 minute time frame. Investigation using the newly established double-FISH protocol revealed that, of the bitter-responsive Arc expressing neuron population: 16 % co-expressed calcitonin RNA; 17 % co-expressed glucagon-like peptide 1 receptor RNA; 17 % co-expressed hypocretin receptor 1 RNA; 9 % co-expressed gastrin-releasing peptide RNA; and 20 % co-expressed neurotensin RNA. This co-expression with multiple different neuropeptides suggests that bitter-activated Arc expression mediates multiple neural responses to the taste event, such as taste aversion learning, suppression of food intake, increased heart rate, and involves multiple brain structures such as the lateral hypothalamus, amygdala, bed nucleus of the stria terminalis, and the thalamus.
The increase in Arc-expression suggests that bitter taste stimulation, and umami taste stimulation in umami-averse animals, may result in an enhanced state of Arc- dependent synaptic plasticity in the PbN, allowing animals to form taste-relevant memories to these aversive compounds more readily. The results investigating neuropeptide RNA co- expression suggest the amygdala, bed nucleus of the stria terminalis, and thalamus as possible targets for bitter-responsive Arc-expressing PbN neurons.
Bobrowski hatte nach dem Abitur die Absicht geäußert, Kunstgeschichte zu studieren, doch Krieg und Kriegsgefangenschaft vereitelten seinen Plan: Der Wehrmachtsangehörige wurde einzig im Winter 1941/1942 für ein Studiensemester an der Universität Berlin vom Kriegsdienst freigestellt. Nachhaltig beeindruckt war Bobrowski insbesondere von der Vorlesung „Deutsche Kunst der Goethezeit“ des Lehrstuhlinhabers Wilhelm Pinder. Trotz eines grundlegenden Einflusses ist indessen zu keinem Zeitpunkt Pinders ideologischer Hintergrund in Bobrowskis Gedichten manifest geworden. Nach der Rückkehr aus sowjetischer Gefangenschaft an Weihnachten 1949 war für den mittlerweile Zweiunddreißigjährigen an ein Studium nicht mehr zu denken. Die lebenslange intermediale Auseinandersetzung mit Werken der bildenden Kunst in seinem Œuvre kann indessen als Ausdruck seiner vielfältigen kulturgeschichtlichen Interessen und Neigungen interpretiert werden. Die Lebensphasen des Dichters korrelieren mit einer motivischen Entwicklung seiner Bildgedichte: Insbesondere half ihm die unantastbare Ästhetik bedeutender Kunstwerke, das Grauen der letzten Kriegsjahre und die Entbehrungen in sowjetischer Kriegsgefangenschaft zu überwinden. Didaktisch-moralische Zielsetzungen prägten zunächst die in den Jahren nach seiner Heimkehr entstandenen Gedichte, bevor sich Bobrowski inhaltlich und formal von diesem Gedichttypus zu lösen vermochte und vermehrt Gedichte zu schreiben begann, die kulturgeschichtliche Dimensionen annahmen und historische, mythologische, biblische und religionsphilosophische Themen in epochenübergreifende Zusammenhänge stellten. Die Gedichte über die Künstler Jawlensky und Calder berühren gleichzeitig kulturlandschaftliche Aspekte. Im letzten Lebensjahrzehnt interessierte sich Bobrowski zunehmend für die Kunst des 20. Jahrhunderts, während die moderne Architektur aus seinem Werk ausgeklammert blieb.
Architecture forms a leitmotif in Bobrowski's poetic work. The figurative level of meaning of the individual sacred and secular buildings named in the poems, but also of the urban and village ensembles and of individual building elements, changes several times over the years. Starting from traditional early poems in iambic metre with rhyming couplets, in which architectural elements form part of a perception that excludes everything extra-aesthetic, the meaning of the sacred and secular buildings in Bobrowski's poetry changes for the first time during the war years in Russia, which he spent as a Wehrmacht soldier at Lake Ilmen. In the odes written at that time, the architectural relics bear witness to suffering, death and destruction. Still absent, however, is the later so central idea of guilt, which was addressed only in retrospect, in the poems written between his return from captivity and his early death.
Towards the end of the war and in the years of captivity, Bobrowski returned to themes of his homeland, and the architecture in his poems became an aesthetically exalted focal point of his longing for East Prussia and the Memel region. During his captivity, the aspect of the sublime first appeared in his poems, with reference both to painting and to architecture. After his return to Berlin, this idea was spun further in the poems on the architecture of Gothic cathedrals and on the built heritage of classicism; in the poems written at that time, however, Europe's cultural heritage also stands for historical injustice and a heavy guilt reaching far back in time.
In the following years, Bobrowski turned away from this criticism of the continent as a whole and concentrated on the guilt of the Germans towards the peoples of Eastern Europe. With this, architecture, too, acquired a new meaning in his poems. The relics of the castles of the Teutonic Order bear witness to the rule of the medieval conquerors and merge with nature: the emblematic quality of the architecture becomes part of the landscape. In the last decade of his life, he increasingly wrote poems relating to parks and urban green spaces.
The poet relied not only on personal experience but at times also on pictorial sources, without ever having seen the original. The poems on Chagall and Gauguin are difficult to access without the insight that they refer to reproductions in slim popular-science books that Bobrowski had acquired shortly before writing the respective poems. The situation is different with the Russian churches that found their way into his poetry: Bobrowski saw all of them himself during the war, and most of them appear to exist to this day and can be identified with some certainty, to which the poet's letters from that period also contribute.
Natural hazards can have serious societal and economic impacts. Worldwide, around one third of economic losses due to natural hazards are attributable to floods. The majority of natural hazards are triggered by weather-related extremes such as heavy precipitation, rapid snow melt, or extreme temperatures. Some of them, and in particular floods, are expected to increase further in frequency and/or intensity in the coming decades as a result of climate change. In this context, the European Alpine region is consistently identified as being particularly sensitive.
In order to enhance the resilience of societies to natural hazards, risk assessments are essential, as they deliver comprehensive risk information that can serve as a basis for effective and sustainable decision-making in natural hazards management. So far, assessment approaches have mostly focused on single societal or economic sectors – e.g. flood damage models largely concentrate on private-sector housing – while other important sectors, such as transport infrastructure, are widely neglected. However, transport infrastructure contributes considerably to economic and societal welfare, e.g. by ensuring the mobility of people and goods. In Austria, for example, the national railway network is essential for the European transit of passengers and freight as well as for opening up the complex Alpine topography. Moreover, a number of recent events have shown that railway infrastructure and transportation are highly vulnerable to natural hazards. As a consequence, the Austrian Federal Railways have had to cope with economic losses on the scale of several million euros as a result of flooding and other Alpine hazards.
The motivation of this thesis is to help fill the gap in knowledge about damage to railway infrastructure caused by natural hazards by providing new risk information for actors and stakeholders involved in the risk management of railway transportation. Hence, in order to support decision-making towards more effective and sustainable risk management, the following two shortcomings in natural hazard risk research are addressed: i) the lack of dedicated models to estimate flood damage to railway infrastructure, and ii) the scarcity of insights into possible climate change impacts on the frequency of extreme weather events, with a focus on future implications for railway transportation in Austria.
With regard to flood impacts on railway infrastructure, the empirically derived damage model Railway Infrastructure Loss (RAIL) proved expedient for reliably estimating both structural flood damage at exposed track sections of the Northern Railway and the resulting repair costs. The results show that the RAIL model is capable of identifying flood risk hot spots along the railway network and thus facilitates the targeted planning and implementation of (technical) risk reduction measures. However, the findings of this study also show that the development and validation of flood damage models for railway infrastructure is generally constrained by the continuing lack of detailed event and damage data.
In order to provide flood risk information at a larger scale to support strategic flood risk management, the RAIL model was applied to the Austrian Mur River catchment using three different hydraulic scenarios as input, as well as considering an increased risk aversion of the railway operator. The results indicate that the model is able to deliver comprehensive risk information at the catchment level as well. It is furthermore demonstrated that risk aversion can have a marked influence on flood damage estimates for the study area and should therefore be considered in the development of risk management strategies.
Looking at the results of the investigation into future frequencies of extreme weather events jeopardizing railway infrastructure and transportation in Austria, an increase in intense rainfall events and heat waves has to be expected, whereas heavy snowfall and cold days are likely to decrease. Furthermore, the results indicate that the frequencies of extremes are rather sensitive to changes in the underlying thresholds. This emphasizes the importance of carefully defining, validating, and, if needed, adapting the thresholds that are used to detect and forecast meteorological extremes. For this, continuous and standardized documentation of damaging events and near-misses is a prerequisite.
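The threshold sensitivity described above can be illustrated with a minimal sketch: counting how often a hypothetical daily rainfall series exceeds two different candidate thresholds. The rainfall values and thresholds below are invented for illustration only and are not taken from the thesis.

```python
# Hypothetical daily rainfall series (mm); values are illustrative.
daily_rainfall_mm = [2, 0, 31, 5, 48, 0, 12, 55, 3, 29, 0, 61, 7, 33, 1]

def count_exceedances(series, threshold):
    """Number of days on which the series exceeds the threshold."""
    return sum(1 for value in series if value > threshold)

# A moderate change of the threshold (30 mm -> 45 mm) already changes
# the detected frequency of "intense rainfall events" substantially.
events_30 = count_exceedances(daily_rainfall_mm, 30)  # 5 events
events_45 = count_exceedances(daily_rainfall_mm, 45)  # 3 events
print(events_30, events_45)
```

In this toy sample, raising the threshold by 15 mm removes two of five detected events, which mirrors the thesis's point that detected extreme-event frequencies depend strongly on how the thresholds are defined.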
Overall, the findings of the research presented in this thesis agree on the necessity to improve event and damage documentation procedures in order to enable the acquisition of comprehensive and reliable risk information via risk assessments and, thus, support strategic natural hazards management of railway infrastructure and transportation.
Ecosystem services (ESs) are defined as the contributions that ecosystems make to human wellbeing and are increasingly being used as an approach to explore the importance of ecosystems for humans through their valuation. Although value plurality was recognised long before the mainstreaming of ESs research, socio-cultural valuation is still underrepresented in ESs assessments. It is the central goal of this PhD dissertation to explore the potential of socio-cultural valuation methods for operationalising ESs research in land management. To address this, I formulated three research objectives that are briefly outlined below and relate to the three studies conducted during this dissertation.
The first objective relates to the assessment of the current role of socio-cultural valuation in ESs research. Human values are central to ESs research, yet non-monetary socio-cultural valuation methods have been found to be underrepresented in the field of ESs science. Given the unbalanced consideration of value domains and the associated conceptual uncertainties, I performed a systematic literature review aiming to answer the research question: To what extent have socio-cultural values been addressed in ESs assessments?
The second objective aims to test socio-cultural valuation methods of ESs and their relevance for land use preferences by exploring their methodological opportunities and limitations. Socio-cultural valuation methods have only recently become a focus in ESs research and therefore bear various uncertainties in regard to their methodological implications. To overcome these uncertainties, I analysed responses to a visitor survey. The research questions related to the second objective were: What are the implications of different valuation methods for ESs values? To what extent are land use preferences explained by socio-cultural values of ESs?
The third objective addressed in this dissertation is the implementation of ESs research into land management through socio-cultural valuation. Though it is emphasised that the ESs approach can assist decision making, there is little empirical evidence of the effect of ESs knowledge on land management. I proposed a way to implement transdisciplinary, spatially explicit research on ESs by answering the following research questions: Which landscape features underpinning ESs supply are considered in land management? How can participatory approaches accounting for ESs be operationalised in land management?
The empirical research resulted in five main findings that provide answers to the research questions. First, this dissertation provides evidence that socio-cultural values are an integral part of ESs research. I found that they can be assessed for provisioning, regulating, and cultural services though they are linked to cultural services to a greater degree. Socio-cultural values have been assessed by monetary and non-monetary methods and their assessment is effectively facilitated by stakeholder participation. Second, I found that different methods of socio-cultural valuation revealed different information. Whereas rating revealed a general value of ESs, weighting was found more suitable to identify priorities across ESs. Value intentions likewise differed in the distribution of values, generally implying a higher value for others than for respondents themselves. Third, I showed that ESs values were distributed similarly across groups with differing land use preferences. Thus, I provided empirical evidence that ESs values and landscape values should not be used interchangeably. Fourth, I showed which landscape features important for ESs supply in a Scottish regional park are not sufficiently accounted for in the current management strategy. This knowledge is useful for the identification of priority sites for land management. Finally, I provide an approach to explore how ESs knowledge elicited by participatory mapping can be operationalised in land management. I demonstrate how stakeholder knowledge and values can be used for the identification of ESs hotspots and how these hotspots can be compared to current management priorities.
This dissertation helps to bridge current gaps in ESs science by advancing the understanding of the current role of socio-cultural values in ESs research, testing different methods and their relevance for land use preferences, and implementing ESs knowledge into land management. Whether and to what extent ESs and their values are implemented in ecosystem management is mainly a choice made by the managers. An advanced understanding of socio-cultural valuation methods contributes to the normative basis of this management, while the proposal for the implementation of ESs in land management presents a practical approach for transferring this type of knowledge into practice. The proposed methods for socio-cultural valuation can help guide land management towards a balanced consideration of ESs and conservation goals.
The aim of this thesis is to develop approaches to automatically recognise the structure of argumentation in short monological texts. This amounts to identifying the central claim of the text, supporting premises, possible objections, and counter-objections to these objections, and connecting them correspondingly to a structure that adequately describes the argumentation presented in the text.
The first step towards such an automatic analysis of the structure of argumentation is to know how to represent it. We systematically review the literature on theories of discourse, as well as on theories of the structure of argumentation against a set of requirements and desiderata, and identify the theory of J. B. Freeman (1991, 2011) as a suitable candidate to represent argumentation structure. Based on this, a scheme is derived that is able to represent complex argumentative structures and can cope with various segmentation issues typically occurring in authentic text.
In order to empirically test our scheme for reliability of annotation, we conduct several annotation experiments, the most important of which assesses the agreement in reconstructing argumentation structure. The results show that expert annotators produce very reliable annotations, while the results of non-expert annotators highly depend on their training in and commitment to the task.
We then introduce the 'microtext' corpus, a collection of short argumentative texts. We report on the creation, translation, and annotation of it and provide a variety of statistics. It is the first parallel corpus (with a German and English version) annotated with argumentation structure, and -- thanks to the work of our colleagues -- also the first annotated according to multiple theories of (global) discourse structure.
The corpus is then used to develop and evaluate approaches to automatically predict argumentation structures in a series of six studies: The first two of them focus on learning local models for different aspects of argumentation structure. In the third study, we develop the main approach proposed in this thesis for predicting globally optimal argumentation structures: the 'evidence graph' model. This model is then systematically compared to other approaches in the fourth study, and achieves state-of-the-art results on the microtext corpus. The remaining two studies aim to demonstrate the versatility and elegance of the proposed approach by predicting argumentation structures of different granularity from text, and finally by using it to translate rhetorical structure representations into argumentation structures.
Bewaffnete Intellektuelle
(2017)
In his search for the Nazis' secret doctrine of rule, Michael Zantke undertakes a deep and comprehensive examination of the intellectual roots of National Socialism. He illuminates the German debates around Machiavelli and examines the texts for their relation to the National Socialist present. In doing so, he succeeds in working out the political role of intellectuals in the "Third Reich" and the differences between National Socialism, fascism and the Conservative Revolution. These nuances are not only historically significant; they are also useful for today's discussion of right-wing nationalism, right-wing radicalism and the New Right.
Borehole instabilities are frequently encountered when drilling through finely laminated, organic rich shales (Økland and Cook, 1998; Ottesen, 2010; etc.); such instabilities should be avoided to assure a successful exploitation and safe production of the contained unconventional hydrocarbons. Borehole instabilities, such as borehole breakouts or drilling induced tensile fractures, may lead to poor cementing of the borehole annulus, difficulties with recording and interpretation of geophysical logs, low directional control and in the worst case the loss of the well. If these problems are not recognized and expertly remedied, pollution of the groundwater or the emission of gases into the atmosphere can occur since the migration paths of the hydrocarbons in the subsurface are not yet fully understood (e.g., Davies et al., 2014; Zoback et al., 2010). In addition, it is often mentioned that the drilling problems encountered and the resulting downtimes of the wellbore system in finely laminated shales significantly increase drilling costs (Fjaer et al., 2008; Aadnoy and Ong, 2003).
In order to understand and reduce the borehole instabilities during drilling in unconventional shales, we investigate stress-induced irregular extensions of the borehole diameter, which are also referred to as borehole breakouts. For this purpose, experiments with different borehole diameters, bedding plane angles and stress boundary conditions were performed on finely laminated Posidonia shales. The Lower Jurassic Posidonia shale is one of the most productive source rocks for conventional reservoirs in Europe and has the greatest potential for unconventional oil and gas in Europe (Littke et al., 2011).
In this work, Posidonia shale specimens from the North (PN) and South (PS) German basins were selected and characterized petrophysically and mechanically. The composition of the two shales is dominated by calcite (47-56%), followed by clays (23-28%) and quartz (16-17%); the remaining components are mainly pyrite and organic matter. The porosity of the shales varies considerably, reaching up to 10% for PS and 1% for PN, which reflects the greater deposition depth of PN. Both shales show pronounced elastic and strength anisotropy, which can be attributed to the macroscopic distribution and orientation of soft and hard minerals. Under load, the hard minerals form a load-bearing, supporting structure, while the soft minerals accommodate the deformation. Loaded parallel to the bedding, the Posidonia shale is therefore more brittle than when loaded normal to the bedding. The resulting elastic anisotropy, defined as the ratio of the modulus of elasticity parallel and normal to the bedding, is about 50%, while the strength anisotropy (i.e., the ratio of uniaxial compressive strength normal and parallel to the bedding) is up to 66%. Based on the petrophysical characterization of the two rocks, a transverse isotropy (TVI) was derived. In general, PS is softer and weaker than PN, reflecting the stronger compaction of PN at its greater burial depth.
Conventional triaxial borehole breakout experiments on specimens with different borehole diameters showed that, when the diameter of the borehole is increased, the stress required to initiate borehole breakout decreases to a constant value. This value can be expressed as the ratio of the tangential stress to the uniaxial compressive strength of the rock. The ratio increases exponentially with decreasing borehole diameter, from about 2.5 for a 10 mm hole to ~7 for a 1 mm borehole (an increase of the initiation stress by 280%), and can be described by a fracture-mechanics-based criterion. Reducing the borehole diameter is therefore an important consideration for reducing the risk of breakouts. New drilling techniques with significantly reduced borehole diameters, such as "fish-bone" holes, are currently being developed and tested (e.g., Xing et al., 2012).
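The size effect described above can be sketched as a simple exponential decay of the initiation stress ratio towards its large-diameter limit. The functional form and all coefficients below are illustrative assumptions chosen to reproduce the reported endpoints (~7 at 1 mm, ~2.5 at 10 mm); they are not the fitted fracture-mechanics criterion from the thesis.

```python
import math

# Assumed coefficients (illustrative, not from the thesis):
R_INF = 2.5   # asymptotic stress ratio for large borehole diameters
A = 7.42      # amplitude of the size effect
D0 = 2.0      # decay length in mm

def initiation_ratio(diameter_mm):
    """Tangential stress / UCS required to initiate a breakout,
    modeled as an exponential decay towards the constant R_INF."""
    return R_INF + A * math.exp(-diameter_mm / D0)

print(round(initiation_ratio(1.0), 2))   # ~7.0 for a 1 mm borehole
print(round(initiation_ratio(10.0), 2))  # ~2.55 for a 10 mm borehole
```

With these assumed constants, the sketch reproduces the reported trend: small boreholes tolerate a far higher tangential stress (relative to the UCS) before breakouts initiate, which is why diameter reduction lowers breakout risk.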
The observed strength anisotropy and the TVI material behavior are also reflected in the breakout processes observed at the borehole wall. Boreholes drilled normal to the bedding develop breakouts in a plane of isotropy and are not affected by the strength or elasticity anisotropy. The observed breakouts are point-symmetric and form compressive shear failure planes, which can be predicted by a Mohr-Coulomb failure approach. Where the conjugate shear failure planes intersect, the resulting breakouts can be described as "dog-eared" breakouts.
While the initiation of breakouts in wells oriented normal to the stratification was triggered by random local defects, reduced strength parallel to the bedding planes is the starting point for breakouts in wells parallel to the bedding. With a deflected borehole trajectory, the observed failure type therefore changes from shear-induced failure surfaces to buckling failure of individual layer packages. In addition, the breakout depths and widths increased, resulting in a stress-induced enlargement of the borehole cross-section and an increased output of rock material into the borehole. With the transition from shear to buckling failure and a changing bedding plane angle with respect to the borehole axis, the stress required to induce wellbore breakouts drops by 65%.
These observations under conventional triaxial stress boundary conditions were also confirmed under true triaxial stress conditions. Here, too, breakouts grew into the rock as a result of buckling failure. In this process, the broken layer packs rotate into the pressure-free drill hole and detach from the surrounding rock by tensile cracking. The final breakout shape in Posidonia shale can be described as trapezoidal when the bedding planes are parallel to the greatest horizontal stress and to the borehole axis. In the event that the largest horizontal stress is normal to the stratification, breakouts were formed entirely by shear fractures between the bedding planes and required higher stresses to initiate, similar to breakouts in conventional triaxial experiments with boreholes oriented normal to the bedding.
In the context of this work, a fracture-mechanics-based failure criterion for conventional triaxial loading conditions in isotropic rocks (Dresen et al., 2010) was successfully extended to true triaxial loading conditions in transversely isotropic rock in order to predict the initiation of borehole breakouts. The criterion was successfully verified against the experiments carried out.
The extended failure criterion and the conclusions from the laboratory and numerical work may help to reduce the risk of borehole breakouts in unconventional shales.
Calcium carbonate formation
(2017)
Mathematical models of bacterial growth have been successfully applied to study the relationship between antibiotic drug exposure and the antibacterial effect. Since these models typically lack a representation of cellular processes and cell physiology, drug action cannot be integrated mechanistically at the cellular level. The cellular mechanisms of drug action, however, are particularly relevant for the prediction, analysis and understanding of interactions between antibiotics. Interactions are also studied experimentally; however, a lack of consensus on the experimental protocol hinders the direct comparison of results. As a consequence, contradictory classifications as additive, synergistic or antagonistic are reported in the literature.
In the present thesis we developed a novel mathematical model of bacterial growth that integrates cell-level processes into population-level growth. The purpose of the model is to predict bacterial growth under antimicrobial perturbation by multiple antibiotics in vitro.
To this end, we combined cell-level data from literature with population growth data for Bacillus subtilis, Escherichia coli and Staphylococcus aureus. The cell-level data described growth-determining characteristics of a reference cell, including the ribosomal concentration and efficiency. The population growth data comprised extensive time-kill curves for clinically relevant antibiotics (tetracycline, chloramphenicol, vancomycin, meropenem, linezolid, including dual combinations).
The new cell-level approach allowed for the first time to simultaneously describe single and combined effects of the aforementioned antibiotics for different experimental protocols, in particular different growth phases (lag and exponential phase). Consideration of ribosomal dynamics and persisting sub-populations explained the decreased potency of linezolid on cultures in the lag phase compared to exponential phase cultures. The model captured growth rate dependent killing and auto-inhibition of meropenem and - also for vancomycin exposure - regrowth of the bacterial cultures due to adaptive resistance development. Stochastic interaction surface analysis demonstrated the pronounced antagonism between meropenem and linezolid to be robust against variation in the growth phase and pharmacodynamic endpoint definition, but sensitive to a change in the experimental duration.
Furthermore, the developed approach included a detailed representation of the bacterial cell-cycle. We used this representation to describe septation dynamics during the transition of a bacterial culture from the exponential to stationary growth phase. Resulting from a new mechanistic understanding of transition processes, we explained the lag time between the increase in cell number and bacterial biomass during the transition from the lag to exponential growth phase. Furthermore, our model reproduces the increased intracellular RNA mass fraction during long term exposure of bacteria to chloramphenicol.
In summary, we contribute a new approach to disentangle the impact of drug effects, assay readout and experimental protocol on antibiotic interactions. In the absence of a consensus on the corresponding experimental protocols, this disentanglement is key to translate information between heterogeneous experiments and also ultimately to the clinical setting.
Chloroplast membranes have a unique composition characterized by very high contents of the galactolipids MGDG and DGDG. Many studies on constitutive, galactolipid-deficient mutants have yielded conflicting results about potential functions of galactolipids in photosynthetic membranes. This was likely caused by pleiotropic effects such as starvation artefacts resulting from impaired photosynthesis from early developmental stages of the plants onward. Therefore, an ethanol-inducible RNAi approach was taken to suppress two key enzymes of galactolipid biosynthesis in the chloroplast, MGD1 and DGD1. Plants were allowed to develop fully functional source leaves prior to induction, which could then support plant growth. After the ethanol induction, both young and mature leaves were investigated over time.
Our studies revealed similar changes in both MGDG- and DGDG-deficient lines; however, young and mature leaves of the transgenic lines responded differently to galactolipid deficiency. While no changes in photosynthetic parameters and only minor changes in lipid content were observed in mature leaves of the transgenic lines, young leaves showed strong reductions in total chlorophyll content and in the accumulation of all photosynthetic complexes, as well as significant changes in the contents of various lipid groups. Microscopy studies revealed the appearance of lipid droplets in the cytosol of young leaves of all transgenic lines, which correlates with significantly higher levels of TAGs. Since the production of membrane lipids is lowered in young leaves, the excess fatty acids are used for storage lipid production, resulting in the accumulation of TAGs.
Our data indicate that both investigated galactolipids serve as structural lipids since changes in photosynthetic parameters were mainly the result of reduced amounts of all photosynthetic constituents. In response to restricted galactolipid synthesis, thylakoid biogenesis is precisely readjusted to keep the proper stoichiometry and functionality of the photosynthetic apparatus. Ultimately, the data revealed that downregulation of one galactolipid triggers changes not only in chloroplasts but also in the nucleus as shown by downregulation of nuclear encoded subunits of the photosynthetic complexes.
In this work, human AOX1 was characterized, and detailed aspects regarding its expression, enzyme kinetics and production of reactive oxygen species (ROS) were investigated. hAOX1 is a cytosolic enzyme belonging to the molybdenum hydroxylase family. Its catalytically active form is a homodimer with a molecular weight of 300 kDa. Each monomer (150 kDa) consists of three domains: an N-terminal domain (20 kDa) containing two [2Fe-2S] clusters, a 40 kDa intermediate domain containing a flavin adenine dinucleotide (FAD), and a C-terminal domain (85 kDa) containing the substrate binding pocket and the molybdenum cofactor (Moco). hAOX1 has an emerging role in the metabolism and pharmacokinetics of many drugs, especially aldehydes and N-heterocyclic compounds.
In this study, hAOX1 was heterologously expressed in E. coli TP1000 cells using a new codon-optimized gene sequence, which improved the yield of expressed protein by around 10-fold compared to previous expression systems for this enzyme. To increase the catalytic activity of hAOX1, an in vitro chemical sulfuration was performed to favor the insertion of the equatorial sulfido ligand at the Moco, with a consequent increase in enzymatic activity of around 10-fold. Steady-state kinetics and inhibition studies were performed using several substrates, electron acceptors and inhibitors. The recombinant hAOX1 showed higher catalytic activity when molecular oxygen was used as electron acceptor. The highest turnover values were obtained with phenanthridine as substrate. Inhibition studies using thioridazine (phenothiazine family), in combination with structural studies performed in the group of Prof. M. J. Romão, Universidade Nova de Lisboa, revealed a new inhibition site located in proximity to the dimerization site of hAOX1. Thioridazine acted as a noncompetitive inhibitor. Further inhibition studies with loxapine, a thioridazine-related molecule, showed the same type of inhibition. Additional inhibition studies using DCPIP and raloxifene were carried out.
Extensive studies of the FAD active site of hAOX1 were performed. Twenty new hAOX1 variants were produced and characterized. The hAOX1 variants generated in this work were divided into three groups: I) hAOX1 single nucleotide polymorphism (SNP) variants; II) XOR-FAD loop hAOX1 variants; III) additional single-point hAOX1 variants. The hAOX1 SNP variants G46E, G50D, G346R, R433P, A439E and K1231N showed clear alterations in their catalytic activity, indicating a crucial role of these residues in the FAD active site and for the overall reactivity of hAOX1.
Furthermore, residues of the flexible FAD loop of bovine XOR (Q423ASRREDDIAK433) were introduced into hAOX1. FAD loop hAOX1 variants were produced and characterized with respect to their stability and catalytic activity. In particular, the variants hAOX1 N436D/A437D/L438I, N436D/A437D/L438I/I440K and Q434R/N436D/A437D/L438I/I440K showed decreased catalytic activity and stability. hAOX1 wild type and variants were tested for reactivity toward NADH, but no reaction was observed.
Additionally, the hAOX1 wild type and variants were tested for the generation of reactive oxygen species (ROS). Interestingly, one of the SNP variants, hAOX1 L438V, showed a high rate of superoxide production. This result indicates a critical role of residue Leu438 in the mechanism of oxygen radical formation by hAOX1. Subsequently, further hAOX1 variants with a mutated Leu438 residue were produced. The variants hAOX1 L438A, L438F and L438K showed superoxide production amounting to around 85%, 65% and 35%, respectively, of the total reducing equivalents obtained from substrate oxidation.
The results of this work present for the first time a characterization of the FAD active site of hAOX1, revealing the importance of specific residues involved in the generation of ROS and affecting the overall enzymatic activity of hAOX1. The hAOX1 SNP variants presented here indicate that these allelic variations in humans might cause alterations in ROS balancing and in the clearance of drugs.
In the spirit of the refinement of animal experiments, all conditions during the breeding, housing and transport of animals kept for experimental purposes, and all methods used during experiments, should be improved so that the animals experience a minimum of potential distress, pain or suffering. In addition, their well-being should be promoted as far as possible by allowing them to express species-specific behaviors and by applying gentle handling procedures. To establish refinement principles, fundamental knowledge of the physiological needs and behavioral requirements of the respective species is indispensable. Experimenters should know the animals' normal behavior in order to understand and interpret potential behavioral deviations such as stereotypies. Standardized housing conditions for laboratory mice deviate from the natural environment in various respects and require a certain degree of adaptation. If an animal is unable to adapt to the given circumstances over a longer period, abnormal behaviors such as stereotypies can develop. Stereotypies are defined as deviations from normal behavior that are performed repetitively and invariantly, appear to serve no function, and do not always correspond to the immediate environmental situation.
Until now it was unclear to what extent stereotypic behavior influences the metabolic phenotype of an individual. The aim of this work was therefore to characterize the stereotypic behavior of the FVB/NJ mouse in detail for the first time, and to systematically compile which metabolic consequences this behavior entails and how these affect the animals' well-being and the use of stereotypic animals in animal-based studies.
The experiment began with the characterization of maternal care in the parental generation. In total, 35 F1 offspring were housed individually from weaning over a period of 11 weeks and observed continuously; fecal samples were collected and body weight was determined weekly until the end of the experiment. Accompanying investigations included behavioral tests and the assessment of physical activity and metabolic parameters. Subsequently, cerebral serotonin and dopamine contents, fecal glucocorticoid levels, hepatic glycogen, and muscular glycogen and triglyceride levels, among others, were determined.
Almost independently of maternal origin, more than half of the 35 F1 offspring developed stereotypic behavior. These data indicate that there is no evidence for learning or for direct genetic transmission of stereotypic behavior in the FVB/NJ mouse. Over the entire observation period, stereotypic FVB/NJ mice were characterized by a restricted behavioral repertoire. In favor of increased activity and the performance of stereotypic behavior, they displayed fewer other behaviors overall (climbing, digging, gnawing). Moreover, stereotypies were associated with increased activity and motility both in the 24-hour open field test and in the indirect animal calorimetry setup, while circadian rhythmicity did not diverge. This increased physical activity was reflected in the lower body weight development of the stereotypic animals. In addition, body fat and muscle proportions differed.
In summary, the performance of stereotypic behavior leads to differences in the metabolic phenotype between non-stereotypic and stereotypic FVB/NJ mice. In the spirit of good scientific practice, the central aim of every scientist should be to produce meaningful and reproducible data. However, no valid results can be obtained from animals that vary in aspects not accounted for in the intended purpose of the study. Therefore, non-stereotypic and stereotypic individuals should not be randomized within one experimental group. Excluding stereotypic animals from planned studies, however, would contradict the second R, reduction. To guarantee refinement, the focus should lie on the maximum achievable prevention of stereotypic behavior. Several studies have already shown that environmental enrichment reduces the prevalence of stereotypies in mice, yet they still occur. In the future, environmental enrichment should therefore be less of an option and more of an obligation, or rather the gold standard. Furthermore, a profound phenotypic characterization would help to recognize mouse strains prone to stereotypies and to identify the strain best suited for a specific purpose before an experiment is planned.
Tremendous progress in thin-film solar cell technology has been made over the last decade. The field of organic solar cells is constantly developing, new material classes such as perovskite solar cells are emerging, and various hybrid organic/inorganic material combinations are being investigated for their physical properties and their applicability in thin-film electronics. Besides typical single-junction solar cell architectures, multi-junction concepts are also being investigated, since they make it possible to overcome the theoretical limitations of a single junction. In a multi-junction device each sub-cell operates in a different wavelength regime and should exhibit an optimized band-gap energy. It is exactly this tunability of the band-gap energy that renders organic solar cell materials interesting candidates for multi-junction applications. Nevertheless, only few attempts have been made to combine inorganic and organic solar cells in series-connected multi-junction architectures. Even though a great diversity of organic solar cells exists nowadays, their open circuit voltage is usually low compared to the band-gap of the active layer. Organic low band-gap solar cells in particular show low open circuit voltages, and the key factors that determine these voltage losses are not yet fully understood. Besides open circuit voltage losses, the recombination of charges in organic solar cells is also a prevailing research topic, especially with respect to the influence of trap states.
The exploratory focus of this work is therefore twofold: the development of hybrid organic/inorganic multi-junctions on the one hand, and a deeper understanding of the open circuit voltage and the recombination processes of organic solar cells on the other.
In the first part of this thesis, the development of a hybrid organic/inorganic triple-junction is discussed which, at the time (January 2015), showed a record power conversion efficiency of 11.7%. The inorganic sub-cells of these devices consist of hydrogenated amorphous silicon and were provided by the Competence Center Thin-Film and Nanotechnology for Photovoltaics in Berlin. Different recombination contacts and organic sub-cells were tested in conjunction with these inorganic sub-cells, guided by optical modeling of the optimal layer thicknesses, to finally reach record efficiencies for this type of solar cell.
In the second part, organic model systems are investigated to gain a better understanding of the fundamental loss mechanisms that limit the open circuit voltage of organic solar cells. First, bilayer systems with different orientations of the donor and acceptor molecules were investigated to study the influence of the donor/acceptor orientation on non-radiative voltage losses. Second, three different bulk heterojunction solar cells, all comprising the same amount of fluorination and the same polymer backbone in the donor component, were examined to study the influence of long-range electrostatics on the open circuit voltage. Third, the device performance of two bulk heterojunction solar cells was compared which consisted of the same donor polymer but used different fullerene acceptor molecules. By this means, the influence of changing the energetics of the acceptor component on the open circuit voltage was investigated, and a full analysis of the charge carrier dynamics was presented to unravel the reasons for the worse performance of the solar cell with the higher open circuit voltage.

In the third part, a new recombination model for organic solar cells is introduced and its applicability demonstrated for a typical low band-gap cell. This model sheds new light on the recombination process in organic solar cells in a broader context, as it re-evaluates the recombination pathway of charge carriers in devices in which trap states are present. Thereby it addresses a current research topic and helps to resolve alleged discrepancies that can arise from the interpretation of data derived by different measurement techniques.
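As background to why trap states complicate the interpretation of recombination measurements (a generic textbook sketch, not the thesis's new model), the total rate of bimolecular plus trap-assisted recombination, R = k2·n² + kt·n, yields a carrier-density-dependent apparent recombination order, so techniques probing different density regimes can appear to disagree. Rate constants below are illustrative assumptions.

```python
import numpy as np

# Apparent recombination order for R = k2*n^2 + kt*n:
# first order where trap-assisted recombination dominates (low n),
# second order where bimolecular recombination dominates (high n).

def apparent_order(n, k2, kt):
    """Slope of log R vs log n, i.e. the apparent recombination order."""
    R = k2 * n**2 + kt * n
    return np.gradient(np.log(R), np.log(n))

n = np.logspace(20, 26, 200)            # carrier density, m^-3 (illustrative)
order = apparent_order(n, k2=1e-17, kt=1e6)
print(round(order[0], 1), round(order[-1], 1))  # ~1.0 at low n, ~2.0 at high n
```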
We present electrical impedance measurements of amoeboid cells on microelectrodes. Under starvation conditions, the model organism Dictyostelium discoideum shows a transition to collective behavior in which chemotactic cells collect in multicellular aggregates. We show how impedance recordings give a precise picture of the stages of aggregation by tracing the dynamics of cell-substrate adhesion. Furthermore, we present for the first time systematic single-cell measurements of wild-type cells and four mutant strains that differ in their substrate adhesion strength. We recorded the projected cell area by time-lapse microscopy and found a correlation between quasi-periodic oscillations in the kinetics of the projected area - the cell shape oscillation - and the long-term trend in the impedance signal. Typically, amoeboid motility advances via a cycle of membrane protrusion, substrate adhesion, traction of the cell body and tail retraction. This motility cycle results in the quasi-periodic oscillations of the projected cell area and the impedance. Similar periods were observed for this cycle in all cell lines measured, despite the differences in attachment strength. We observed that cell-substrate attachment strength strongly affects the impedance, in that the deviations from the mean (the magnitude of fluctuations) are enhanced in cells that effectively transmit forces generated by the cytoskeleton to the substrate. In talA- cells, for example, which lack the actin-anchoring protein talin, the fluctuations are strongly reduced. Single-cell force spectroscopy and results from a detachment assay, in which adhesion is measured by exposing cells to shear stress, confirm that the magnitude of impedance fluctuations is a valid measure of substrate adhesion strength. Finally, we also worked on the integration of cell-substrate impedance sensors into microfluidic devices.
A chip-based electrical chemotaxis assay was designed that measures the speed of chemotactic cells migrating over microelectrodes along a chemical concentration gradient.
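The idea of quantifying impedance fluctuations as deviations from a slowly varying trend can be sketched as follows; the moving-average detrending, window size and synthetic signals are illustrative assumptions, not the thesis's actual data or procedure.

```python
import numpy as np

# Fluctuation magnitude: standard deviation of the impedance signal
# after removing a moving-average trend (edges trimmed to avoid
# convolution artifacts).

def fluctuation_magnitude(signal, window=50):
    trend = np.convolve(signal, np.ones(window) / window, mode="same")
    return np.std((signal - trend)[window:-window])

t = np.linspace(0, 10, 1000)
trend = 1 + 0.05 * t                           # long-term adhesion trend
strong = trend + 0.2 * np.sin(2 * np.pi * t)   # pronounced shape oscillation
weak = trend + 0.02 * np.sin(2 * np.pi * t)    # reduced, talA- -like signal

print(fluctuation_magnitude(strong) > fluctuation_magnitude(weak))  # True
```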
At the dawn of the 21st century, humanity witnessed a phenomenal rise of urban agglomerations as powerhouses of innovation and socioeconomic growth. Driving much of national (and in a few instances even global) economic activity, this gargantuan rise of cities is accompanied by increases in energy and resource consumption and in waste generation. Much of the anthropogenic transformation of Earth's environment, from environmental pollution at the local level to planetary-scale climate change, currently takes place in cities. With cities projected to house most of humanity by the end of this century, our ultimate fate lies predominantly in technological innovation, urbanites' attitudes towards energy and resource consumption, and the development pathways undertaken by current and future cities. Considering the unparalleled energy and resource consumption and the emissions currently attributed to global cities, this thesis addresses these issues from an efficiency point of view. More specifically, it examines the influence of population size, density, economic geography and technology on urban greenhouse gas (GHG) emission efficiency and identifies the factors leading to improved eco-efficiency in cities. To investigate the influence of these factors on emission and resource efficiency in cities, a multitude of freely available datasets was coupled with novel methodologies and analytical approaches.
Merging the well-established Kaya Identity with the recently developed urban scaling laws, an Urban Kaya Relation is derived to identify whether large cities are more emission efficient and which intrinsic factors lead to such (in)efficiency. Applying the Urban Kaya Relation to a global dataset of 61 cities in 12 countries, this thesis identified that large cities in developed regions of the world bring emission efficiency gains because of the better technologies implemented in these cities to produce and use energy, while the opposite is the case for cities in developing regions. Large cities in developing countries are less efficient mainly because of their affluence and a lack of efficient technologies. Apart from the influence of population size on emission efficiency, this thesis identified the crucial role played by population density in improving the emission efficiency of the building and on-road transport sectors. This is achieved by applying the City Clustering Algorithm (CCA) to two different gridded land use datasets and a standard emission inventory in order to attribute these sectoral emissions to all inhabited settlements in the USA. The results show that doubling the population density would entail a reduction in total CO2 emissions from the building and on-road sectors of typically at least 42%. Irrespective of their population size and density, cities are often blamed for intensive resource consumption that threatens not only local but also global sustainability. This thesis merged the concept of urban metabolism with benchmarking to identify eco-efficient cities: cities that provide better socioeconomic conditions while placing less burden on the environment. Three environmental burden indicators (annual average NO2 concentration, per capita waste generation and water consumption) and two socioeconomic indicators (GDP per capita and employment ratio) for the 88 most populous European cities are considered in this study.
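The logic behind an Urban Kaya Relation can be sketched numerically: the Kaya Identity factorizes emissions as C = P · (G/P) · (E/G) · (C/E), and if each factor scales as a power law of population, the emission scaling exponent is the sum of the factor exponents. The exponents below are illustrative assumptions, not the thesis's fitted values.

```python
import numpy as np

# Synthetic cities obeying exact power-law Kaya factors.
P = np.logspace(4, 7, 61)   # city populations
G = P**1.10                 # GDP scales superlinearly, G ~ P^1.10
E = G * P**-0.05            # energy intensity E/G ~ P^-0.05
C = E * P**0.02             # carbon intensity C/E ~ P^0.02

# Fitted emission exponent equals the sum: 1.10 - 0.05 + 0.02 = 1.07.
beta_C = np.polyfit(np.log(P), np.log(C), 1)[0]
print(round(beta_C, 2))  # → 1.07
```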
Using two different non-parametric ranking methods, namely regression residual ranking and Data Envelopment Analysis (DEA), eco-efficient cities and their determining factors are identified. This in-depth analysis revealed that mature cities with well-established economic structures, such as Munich, Stockholm and Oslo, are eco-efficient. Furthermore, the correlations of the objective eco-efficiency ranking with each of the indicator rankings and with a ranking of urbanites' subjective perception of quality of life are analyzed. This analysis revealed that urbanites' perception of quality of life is not merely confined to socioeconomic well-being but rather reflects its combination with a lower environmental burden.
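A regression-residual ranking of the kind mentioned above can be sketched as follows: regress an environmental burden indicator on a socioeconomic indicator across cities and rank by residual, so that a city with much less burden than its wealth predicts ranks as eco-efficient. The city labels and values are invented for illustration.

```python
import numpy as np

# Illustrative data: GDP per capita (kEUR) vs annual mean NO2 (ug/m3).
cities = ["A", "B", "C", "D"]
gdp_pc = np.array([30.0, 45.0, 60.0, 80.0])
no2 = np.array([28.0, 30.0, 26.0, 40.0])

# Fit the expected burden given wealth; rank cities by residual
# (most negative residual = least burden relative to expectation).
slope, intercept = np.polyfit(gdp_pc, no2, 1)
residuals = no2 - (slope * gdp_pc + intercept)
ranking = [cities[i] for i in np.argsort(residuals)]
print(ranking[0])  # the most eco-efficient city in this toy dataset
```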
In summary, the findings of this dissertation lead to three general conclusions for improving emission and ecological efficiency in cities. Firstly, large cities in emerging nations face a huge challenge in improving their emission efficiency. The task before these cities is threefold: (1) deploying efficient technologies for electricity generation and improving public transportation to unlock their leapfrogging potential, (2) addressing energy poverty, and (3) ensuring that these cities do not develop the energy consumption patterns and infrastructure lock-in behavior of cities in developed regions. Secondly, ongoing urban sprawl as a global phenomenon will decrease emission efficiency in the building and transportation sectors. Local policy makers should therefore identify adequate fiscal and land use policies to curb urban sprawl. Lastly, since mature cities with well-established economic structures are more eco-efficient, and urbanites' perception reflects the combination of socioeconomic well-being with a lower environmental burden, strategies need to be adopted and implemented that enable socioeconomic growth in cities while decreasing their environmental burden.
The Yukon Coast in Canada is an ice-rich permafrost coast and highly sensitive to changing environmental conditions. Retrogressive thaw slumps are a common thermoerosion feature along this coast, and develop through the thawing of exposed ice-rich permafrost on slopes and removal of accumulating debris. They contribute large amounts of sediment, including organic carbon and nitrogen, to the nearshore zone.
The objective of this study was to 1) identify the climatic and geomorphological drivers of sediment-meltwater release, 2) quantify the amount of released meltwater, sediment, organic carbon and nitrogen, and 3) project the evolution of sediment-meltwater release of retrogressive thaw slumps in a changing future climate.
The analysis is based on data collected over 18 days in July 2013 and 18 days in August 2012. A cut-throat flume was set up in the main sediment-meltwater channel of the largest retrogressive thaw slump on Herschel Island. In addition, two weather stations, one on top of the undisturbed tundra and one on the slump floor, measured incoming solar radiation, air temperature, wind speed and precipitation. The discharge volume eroding from the ice-rich permafrost and retreating snowbanks was measured and compared to the meteorological data collected in real time with a resolution of one minute.
The results show that the release of sediment-meltwater from the thawing ice-rich permafrost headwall is strongly related to snowmelt, incoming solar radiation and air temperature. Snowmelt caused seasonal differences: the additional water it contributed to the sediment-meltwater eroding from the headwall diluted its composition. Incoming solar radiation and air temperature were the main drivers of diurnal and inter-diurnal fluctuations. In July (2013), the retrogressive thaw slump released about 25 000 m³ of sediment-meltwater, containing 225 kg of dissolved organic carbon and 2050 t of sediment, which in turn included 33 t of organic carbon and 4 t of total nitrogen. In August (2012), only 15 600 m³ of sediment-meltwater was released, since there was no additional contribution from snowmelt. However, even without the additional dilution, 281 kg of dissolved organic carbon was released. The sediment concentration was twice as high as in July, with sediment contents of up to 457 g l⁻¹; 3058 t of sediment were released, including 53 t of organic carbon and 5 t of nitrogen.
In addition, the data from the 36 days of observations at Slump D were upscaled to cover the main summer season from 1 July to 31 August (62 days) and to include all 229 active retrogressive thaw slumps along the Yukon Coast. In total, the retrogressive thaw slumps along the Yukon Coast contribute a minimum of 1.4 million m³ of sediment-meltwater each thawing season, containing a minimum of 172 000 t of sediment with 3119 t of organic carbon, 327 t of nitrogen and 17 t of dissolved organic carbon. Thus, in addition to the coastal erosion input to the Beaufort Sea, retrogressive thaw slumps release a further 3% of sediment and 8% of organic carbon into the ocean. Finally, under a warming scenario with summer air temperatures increasing by 2-3 °C by 2081-2100, the release of sediment-meltwater from retrogressive thaw slumps would increase by 109-114%.
It can be concluded that retrogressive thaw slumps are sensitive to climatic conditions and under projected future Arctic warming will contribute larger amounts of thawed permafrost material (including organic carbon and nitrogen) into the environment.
Via their powerful radiation, stellar winds, and supernova explosions, massive stars (Mini ≳ 8 M☉) have a tremendous impact on galactic evolution. It has become clear in recent decades that the majority of massive stars reside in binary systems. This thesis sets out to quantify the impact of binarity (i.e., the presence of a companion star) on massive stars. For this purpose, massive binary systems in the Local Group, including OB-type binaries, high-mass X-ray binaries (HMXBs), and Wolf-Rayet (WR) binaries, were investigated by means of spectral, orbital, and evolutionary analyses.
The spectral analyses were performed with the non-local thermodynamic equilibrium (non-LTE) Potsdam Wolf-Rayet (PoWR) model atmosphere code. Thanks to critical updates in the calculation of the hydrostatic layers, the code became a state-of-the-art tool applicable to all types of hot massive stars (Chapter 2). The eclipsing OB-type triple system δ Ori served as an intriguing test case for the new version of the PoWR code and provided key insights into the formation of X-rays in massive stars (Chapter 3). We further analyzed two prototypical HMXBs, Vela X-1 and IGR J17544-2619, and reached fundamental conclusions regarding the dichotomy of the two basic classes of HMXBs (Chapter 4). We performed an exhaustive analysis of the binary R 145 in the Large Magellanic Cloud (LMC), which had been claimed to host the most massive stars known. We were able to disentangle the spectrum of the system and performed an orbital, polarimetric, and spectral analysis, as well as an analysis of the wind-wind collision region. The true masses of the binary components turned out to be significantly lower than suggested, impacting our understanding of the initial mass function and stellar evolution at low metallicity (Chapter 5). Finally, all known WR binaries in the Small Magellanic Cloud (SMC) were analyzed. Although it was theoretically predicted that virtually all WR stars in the SMC should form via mass transfer in binaries, we find that binarity was not important for the formation of the known WR stars in the SMC, implying a strong discrepancy between theory and observations (Chapter 6).
Conformational transition of peptide-functionalized cryogels enabling shape-memory capability
(2017)
Observational and computational extragalactic astrophysics are two fields of research that study a similar subject from different perspectives. Observational extragalactic astrophysics aims, by recovering the spectral energy distribution of galaxies at different wavelengths, to reliably measure their properties at different cosmic times and in a large variety of environments. Analyzing the light collected by their instruments, observers try to disentangle the different processes occurring in galaxies at the scales of galactic physics, as well as the effects of larger-scale processes such as mergers and accretion, in order to obtain a consistent picture of galaxy formation and evolution. Hydrodynamical simulations of galaxy formation in a cosmological context, on the other hand, are able to follow the evolution of a galaxy across cosmic time, taking into account both external processes such as mergers, interactions and accretion, and internal mechanisms such as feedback from supernovae and active galactic nuclei. Thanks to great advances in both fields, spectral and photometric information is nowadays available for a large number of galaxies at different cosmic times, which has provided important knowledge about the evolution of the Universe; at the same time, we are able to realistically simulate galaxy formation and evolution in large volumes of the Universe, taking into account the most relevant physical processes occurring in galaxies.
As these two approaches differ intrinsically in their methodology and in the information they provide, the connection between simulations and observations is still not fully established, although simulations are often used in galaxy studies to interpret observations and to assess how the different processes acting on galaxies affect their observable properties, and simulators usually test the physical recipes implemented in their hydrodynamical codes through comparison with observations. In this dissertation we aim to better connect the observational and computational approaches to the study of galaxy formation and evolution, using the methods and results of one field to test and validate those of the other.
In a first work we study the biases and systematics in the derivation of galaxy properties in observations. We post-process hydrodynamical cosmological simulations of galaxy formation to calculate the galaxies' spectral energy distributions (SEDs) using different approaches, including radiative transfer techniques. Comparing the direct results of the simulations with the quantities obtained by applying observational techniques to these synthetic SEDs, we analyze the biases intrinsic to the observational algorithms, quantify their accuracy in recovering galaxy properties, and estimate the uncertainties affecting a comparison between simulations and observations when different approaches to obtaining the observables are followed. Our results show that for some quantities, such as stellar ages, metallicities and gas oxygen abundances, large differences can appear depending on the technique applied in the derivation.
In a second work we compare a set of fifteen galaxies, similar in mass to the Milky Way and with a quiet merger history in the recent past (hence expected to have properties close to those of spiral galaxies), simulated in a cosmological context, with data from the Sloan Digital Sky Survey (SDSS). We derive the observables with techniques as similar as possible to those applied in SDSS, with the aim of making an unbiased comparison between our set of hydrodynamical simulations and SDSS observations. We quantify the differences in the physical properties when these are obtained directly from the simulations without post-processing or by mimicking the SDSS observational techniques. We fit linear relations between the values derived directly from the simulations and those following SDSS observational procedures; in most cases these relations show relatively high correlation and can easily be used to compare simulations with SDSS data more reliably. When SDSS techniques are mimicked, these simulated galaxies are photometrically similar to galaxies in the SDSS blue sequence/green valley, but generally have older ages, lower SFRs and lower metallicities than the majority of the spirals in the observational dataset.
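The calibration idea behind such linear relations can be sketched as follows: fit a line between a property measured directly from the simulation and the same property recovered by mimicking observational techniques, then use it to place observed values on the simulation scale. All numbers below are invented for illustration.

```python
import numpy as np

# "Direct" simulation values and the biased values an observational
# pipeline would recover for the same galaxies (toy linear bias).
direct = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # e.g. stellar ages, Gyr
mimicked = 0.8 * direct + 1.0                   # biased "observed" ages

# Fit the inverse relation and apply it to a new observed value.
slope, intercept = np.polyfit(mimicked, direct, 1)
observed_age = 5.0
corrected = slope * observed_age + intercept
print(round(corrected, 1))  # the age placed back on the simulation scale
```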
In a third work, we post-process hydrodynamical simulations of galaxies with radiative transfer techniques, to generate synthetic data that mimic the properties of the CALIFA Integral Field Spectroscopy (IFS) survey. We reproduce the main characteristics of the CALIFA observations in terms of field of view and spaxel physical size, data format, point spread functions and detector noise. This 3-dimensional dataset is suited to be analyzed by the same algorithms applied to the CALIFA dataset, and can be used as a tool to test the ability of the observational algorithms in recovering the properties of the CALIFA galaxies. To this purpose, we also generate the resolved maps of the simulations' properties, calculated directly from the hydrodynamical snapshots, or from the simulated spectra prior to the addition of the noise.
Our work shows that a reliable connection between models and data is of crucial importance, both to judge the output of galaxy formation codes and to accurately test the observational algorithms used in the analysis of galaxy survey data. A correct interpretation of observations will be particularly important in the future, in light of the several ongoing and planned large galaxy surveys that will provide the community with large datasets of galaxy properties (often spatially resolved) at different cosmic times, making it possible to study galaxy formation physics at a higher level of detail than ever before. We have shown that neglecting observational biases in the comparison between simulations and an observational dataset may move the simulations to different regions in the planes of the observables, strongly affecting both the assessment of the correctness of the sub-resolution physical models implemented in galaxy formation codes and the interpretation of observational results using simulations.
All life-sustaining processes are ultimately driven by the thousands of biochemical reactions occurring in cells: the metabolism. These reactions form an intricate network that produces all required chemical compounds, i.e., metabolites, from a set of input molecules. Cells regulate the activity of metabolic reactions in a context-specific way; only reactions required in a given cellular context, e.g., cell type, developmental stage or environmental condition, are usually active, while the rest remain inactive. The context-specificity of metabolism can be captured by several kinds of experimental data, such as gene and protein expression or metabolite profiles. In addition, these context-specific data can be assimilated into computational models of metabolism, which then provide context-specific metabolic predictions.
This thesis comprises three individual studies focusing on the integration of context-specific experimental data into computational models of metabolism. The first study presents an optimization-based method to obtain context-specific metabolic predictions, which offers the advantage of being fully automated, i.e., free of user-defined parameters. The second study explores the effects of alternative optimal solutions arising during the generation of context-specific metabolic predictions. These alternative optimal solutions are metabolic model predictions that represent the integrated data equally well but can markedly differ. This study proposes algorithms to analyze the space of alternative solutions, as well as ways to cope with their impact on the predictions.
Finally, the third study investigates the metabolic specialization of the guard cells of the plant Arabidopsis thaliana and compares it with that of a different cell type, the mesophyll cells. To this end, the computational methods developed in this thesis are applied to obtain metabolic predictions specific to guard cells and mesophyll cells. These cell-specific predictions are then compared to explore the differences in metabolic activity between the two cell types. In addition, the effects of alternative optima are taken into consideration when comparing the two cell types. The computational results indicate a major reorganization of primary metabolism in guard cells. These results are supported by an independent 13C labelling experiment.
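The kind of constraint-based optimization underlying such metabolic predictions can be illustrated with a minimal flux balance analysis sketch on a toy three-reaction network; the thesis's method additionally integrates expression data and handles alternative optima, which this generic sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports metabolite A, R2 converts A -> B,
# R3 exports B (the "objective" flux). At steady state S v = 0.
S = np.array([[1, -1,  0],    # balance of metabolite A
              [0,  1, -1]])   # balance of metabolite B
c = np.array([0, 0, -1])      # linprog minimizes, so -v3 maximizes v3
bounds = [(0, 10), (0, 5), (0, None)]  # R2's capacity is the bottleneck

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print([round(v, 1) for v in res.x])  # steady-state flux distribution
```

At steady state every reaction in this chain carries the same flux, so the optimum is set by the tightest capacity bound (here R2).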
The thesis focuses on the inter-departmental coordination of adaptation and mitigation of demographic change in East Germany. All eastern German states (Länder) have set up inter-departmental committees (IDCs) that are expected to deliver joint strategies to tackle demographic change. IDCs provide an organizational setting for potential positive coordination, i.e., a joint approach to problem solving that pools and utilizes the expertise of many departments constructively from the very beginning. Whether they actually achieve positive coordination is contested in the academic debate. This motivates the first research question of this thesis: Do IDCs achieve positive coordination?
Inter-departmental committees and their role in horizontal coordination within the core executive attracted scholarly interest more than fifty years ago. However, little is known about their actual importance for the inter-departmental preparation of cross-cutting policies. To date, few studies analyze inter-departmental committees comparatively in order to identify whether they achieve positive coordination and which factors shape the coordination process and output of IDCs.
Each IDC has a chair organization that is responsible for managing the interactions within the IDC. The chair organization is important because it organizes and structures the overall process of coordination in the IDC. The chair of an IDC thus serves as the main boundary-spanner and has considerable influence through arranging meetings and the work schedule or distributing internal roles. Interestingly, in the German context we find two organizational approaches: while some states decided to put a line department (e.g. Department of Infrastructure) in charge of managing the IDC, others rely on the State Chancelleries, i.e. the center of government.
This situation allows for a comparative research design that can address the role of the State Chancellery in the inter-departmental coordination of cross-cutting policies. This is relevant because the role of the center is crucial when studying coordination within central government. The academic debate on the center of government in the German politico-administrative system is essentially divided into two camps. One camp claims that the center can improve horizontal coordination and steer cross-cutting policy-making more effectively, while the other camp points to the limits of central coordination due to departmental autonomy. This debate motivates the second research question of this thesis: Does the State Chancellery as chair organization achieve positive coordination in IDCs?
The center of government and its role in the German politico-administrative system attracted academic attention as early as the 1960s and 1970s. Yet there is a research desideratum regarding the center's role in the inter-departmental coordination process. Only few studies explicitly analyze centers of government and their role in the coordination of cross-cutting policies, although some single case studies have been published. The answer to the second research question will address this gap in the academic debate.
The independent variable of this study is the chair organization of IDCs. The value of this variable is dichotomous: an IDC is chaired either by a line department or by a State Chancellery. We are interested in whether this variable has an effect on two dependent variables. First, we will analyze the coordination process, i.e. the interaction among bureaucrats within the IDC. Second, the focus of this thesis will be on the coordination result, i.e. the demography strategies produced by the respective IDCs.
In terms of the methodological approach, this thesis applies a comparative case study design based on a most-similar-systems logic. German federalism is quite suitable for such designs: since the institutional framework is largely the same across all states, individual variables and their effects can be isolated and plausibly analyzed. To further control for potential intervening variables, we limit our case selection to states located in East Germany, because the demographic situation is most problematic in the eastern part of Germany, i.e. there is comparable problem pressure. Consequently, we will analyze five cases: Thuringia and Saxony-Anhalt (line department) as well as Brandenburg, Mecklenburg-Vorpommern, and Saxony (State Chancellery).
There is no grand coordination theory ready to be applied to our case studies, so we need to tailor our own approach. Our assumption is that the individual chair organization has an effect on the coordination process and output of IDCs, although all cases are embedded in the same institutional setting, i.e. the German politico-administrative system. We therefore need an analytical approach that incorporates both institutionalist and agency-based arguments, which is why this thesis will utilize Actor-Centered Institutionalism (ACI). Broadly speaking, ACI conceptualizes actors' behavior as influenced - but not fully determined - by institutions. Since ACI is rather abstract, we need to adapt it for the purpose of this thesis. Line departments and State Chancelleries will be modeled as distinct actors with different action orientations and capabilities to steer the coordination process. However, their action is embedded within the institutional context of governments, which we conceptualize as comprising regulative (formal rules) and normative (social norms) elements.
Culture-driven innovation
(2017)
This cumulative dissertation deals with the potential of underexplored cultural sources for innovation.
Nowadays, firms recognize an increasing demand for innovation to keep pace with ever-growing, dynamic worldwide competition. Knowledge is one of the most crucial sources and resources, while until now innovation has been driven foremost by technology. In recent years, however, we have been witnessing a change in technology's role from a driver of innovation to an enabler of innovation. Innovative products and services increasingly differentiate themselves through emotional qualities and user experience. These experiences are hard to grasp and require alignment in innovation management theory and practice.
This work considers culture in a broader sense as a source of innovation. It investigates the requirements and fundamentals for "culture-driven innovation" by studying where and how to unlock cultural sources. The research questions are the following: What are cultural sources of knowledge and innovation? Where can one find cultural sources, and how can one tap into them?
The dissertation starts with an overview of its central terms and introduces cultural theories as an overarching frame for studying cultural sources of innovation systematically. Here, knowledge is not understood as something an organization owns like a material resource, but as something created and taking place in practices. Such a practice-theoretical lens entails the rejection of the traditional economic depiction of the rational Homo Oeconomicus. Nevertheless, it also rejects the idea of the Homo Sociologicus with its strong emphasis on the impact of society and its values on individual actions. Practice theory approaches take account of both concepts by underscoring the dualism of individual (agency, micro-level) and structure (society, macro-level). It follows that organizations are not enclosed entities but are embedded within their socio-cultural environment, which shapes them and is also shaped by them.
Then, the first article of this dissertation takes a methodological stance on this dualism by discussing how mixed methods support an integrated approach to studying the micro- and macro-levels. The article focuses on networks (and thus communities) as a central research unit within studies of entrepreneurship and innovation.
The second article contains a network analysis and depicts communities as central loci of cultural sources and knowledge. With data from the platform Meetup.com about events etc., the study explores which overarching communities and themes have evolved in Berlin's startup and tech scene.
While the latter study asked where to find new cultural sources, the last article addresses how to unlock such knowledge sources. It develops the concept of a cultural absorptive capacity, i.e. the capability of organizations to open up towards cultural sources. Furthermore, the article points to the role of knowledge intermediaries in the early phases of knowledge acquisition. Two case studies on companies working with artists illustrate the roles of such intermediaries and how they support firms in gaining knowledge from cultural sources.
Overall, this dissertation contributes to a better understanding of culture as a source of innovation from a theoretical, methodological, and practitioner's point of view. It provides basic research to unlock the potential of such new knowledge sources for companies - sources that have so far been neglected in innovation management.
This thesis addresses three topics related to the spectroscopic properties of coumarin (Cou) and DBD dyes ([1,3]dioxolo[4,5-f][1,3]benzodioxole). The first part presents the basic spectroscopic characterization of 7-aminocoumarins and their potential application as fluorescence probes for fluorescence immunoassays. In the second part, the photophysical properties of the coumarins are exploited to investigate Cou- and DBD-functionalized oligo-spiro-ketal rods (OSTK) and their properties as membrane probes. The last part deals with the synthesis and characterization of Cou- and DBD-functionalized polyprolines as reference systems for sulfur-functionalized OSTK rods, and with their coupling to gold nanoparticles.
Immunochemical analytical methods are very successful in clinical diagnostics and are nowadays also employed in food control and environmental monitoring, which makes them of great interest for further research. Among the various immunoassays, luminescence-based formats stand out for their outstanding sensitivity, making this format particularly attractive for future applications. The need for multiparameter detection calls for a toolbox of dyes to convert the biochemical reaction into an optically detectable signal. In such a multiparameter approach, each analyte is detected by a different dye with a unique emission color, covering the blue to red spectral range, or a unique decay time. In a competitive immunoassay format, an individual antibody would be required for each of the different dyes. The present work presents a slightly modified approach using a coumarin unit, against which highly specific monoclonal antibodies (mAb) were raised, as the basic antigen. By modifying the parent coumarin unit at a position of the molecule that is not relevant for recognition by the antibody, the full spectral range from blue to deep red can be accessed. This work presents the photophysical characterization of the various coumarin derivatives and their corresponding immune complexes with two different, yet highly specific, antibodies. The coumarin dyes and their immune complexes were characterized by steady-state and time-resolved absorption as well as fluorescence emission spectroscopy. In addition, fluorescence depolarization measurements were carried out to complete the data set, which highlighted the different binding modes of the two antibodies.
In contrast to commonly used detection systems, a massive fluorescence enhancement by a factor of up to 50 was found upon formation of the antibody-dye complex. Since the emission color can easily be changed by adjusting the coumarin substitution at the position of the parent molecule that is not relevant for antigen binding, a dye toolbox is available that can be used in the design of competitive multiparameter fluorescence-enhancement immunoassays.
Oligo-spiro-thio-ketal rods are readily incorporated into lipid bilayers owing to their hydrophobic backbone and are therefore used as optical membrane probes. Because of their small diameter, they cause only minimal perturbation of the lipid bilayer. By labelling them with fluorescent dyes, novel Förster resonance energy transfer probes with highly defined relative orientations of the transition dipole moments of the donor and acceptor dyes become accessible, making the class of OSTK probes a powerful, flexible toolbox for optical biosensing applications. Using steady-state and time-resolved fluorescence experiments, the incorporation of coumarin- and DBD-labelled OSTK rods into large unilamellar vesicles was investigated, and the results were corroborated by fluorescence depolarization measurements.
The last part of this work deals with the synthesis and characterization of Cou- and DBD-functionalized polyprolines and their coupling to gold nanoparticles. The dye-labelled polyprolines were successfully prepared, and binding to the polyproline helix clearly influenced the spectroscopic properties of the dyes. Coupling to the 5 nm AuNPs was carried out successfully. The experience gained from coupling the polyprolines to the AuNPs forms the basis for single-molecule AFM-FRET nanoscopy with OSTK rods.
Among modern functional materials, the class of nitrogen-containing carbons combines non-toxicity and sustainability with outstanding properties. The versatility of this materials class is based on the opportunity to tune electronic and catalytic properties via the nitrogen content and motifs: this ranges from the electronically conducting N-doped carbon, where a few carbon atoms in the graphitic lattice are substituted by nitrogen, to the organic semiconductor graphitic carbon nitride (g-C₃N₄), with a structure based on tri-s-triazine units.
In general, composites can reveal outstanding catalytic properties due to synergistic behavior, e.g. the formation of electronic heterojunctions. In this thesis, the formation of an “all-carbon” heterojunction was targeted, i.e. differences in the electronic properties of the single components were achieved by introducing different nitrogen motifs into the carbon lattice. Such composites are promising as metal-free catalysts for photocatalytic water splitting, in which hydrogen is generated from water by light irradiation with the use of a photocatalyst. As the first part of the heterojunction, the organic semiconductor g-C₃N₄ was employed because of its suitable band structure for photocatalytic water splitting, high stability, and non-toxicity. As the second part, C₂N, a recently discovered semiconductor, was chosen. Compared to g-C₃N₄, the less nitrogen-rich C₂N has a smaller band gap and a higher absorption coefficient in the visible light range, which is expected to increase the optical absorption in the composite, eventually leading to enhanced charge carrier separation due to the formation of an electronic heterojunction.
The aim of preparing an “all-carbon” composite included research into appropriate precursors for the respective components g-C₃N₄ and C₂N, as well as strategies for appropriate structuring. This was pursued by applying precursors that can form supramolecular, pre-organized structures, which allows for more control over morphology and atom patterns during the carbonization process.
In the first part of this thesis, it was demonstrated how the photocatalytic activity of g-C₃N₄ can be increased by the targeted introduction of defects or surface terminations. This was achieved by using caffeine as a “growth stopping” additive during the formation of the hydrogen-bonded supramolecular precursor complexes. The increased photocatalytic activity of the obtained materials was demonstrated with dye degradation experiments.
The second part of this thesis was focused on the synthesis of the second component, C₂N. Here, a deep eutectic mixture of hexaketocyclohexane and urea was structured using the biopolymer chitosan. This scaffolding resulted in mesoporous nitrogen-doped carbon monoliths and beads. CO₂- and dye-adsorption experiments with the obtained monolith material revealed a high isosteric heat of CO₂ adsorption and showed the accessibility of the monolithic pore system to larger dye molecules. Furthermore, a novel precursor system for C₂N was explored, based on organic crystals of squaric acid and urea. The respective C₂N carbon with an unusual sheet-like morphology could be synthesized by carbonization of the crystals at 550 °C. With this precursor system, microporous C₂N carbon with a BET surface area of 865 m²/g was also obtained by “salt-templating” with ZnCl₂.
Finally, the preparation of a g-C₃N₄/C₂N “all-carbon” composite heterojunction was attempted by the self-assembly of g-C₃N₄ and C₂N nanosheets and tested for photocatalytic water splitting. Indeed, the composites revealed high rates of hydrogen evolution when compared to bulk g-C₃N₄. However, the increased catalytic activity was mainly attributed to the high surface area of the nanocomposites rather than to the composition. With regard to alternative composite synthesis routes, first experiments indicated N-methyl-2-pyrrolidone to be suitable for more highly concentrated dispersions of C₂N nanosheets. Eventually, the results obtained in this thesis provide valuable synthetic contributions towards the preparation and processing of carbon/nitrogen compounds for energy applications.
With the Act on Tax Accompanying Measures for the Introduction of the European Company and for the Amendment of Further Tax Provisions (SEStEG), the so-called exit taxation rules (Entstrickungsregeln) of § 4 Abs. 1 Satz 3 EStG and § 12 Abs. 1 HS 1 KStG were incorporated into German tax law. These provisions aim to safeguard Germany's tax base. To this end, the legislator introduced, as a statutory element, the Federal Republic of Germany's right to tax with respect to the sale or use of an asset. The meaning of this term and the scope of application of the exit taxation rules have since been widely discussed in the literature. Further uncertainty arises from the rulings of the Federal Fiscal Court (BFH) abandoning the final withdrawal theory (finale Entnahmetheorie). Although these rulings do not concern the Federal Republic's right to tax as a statutory element, since they were handed down on the legal situation prior to the SEStEG, they likewise address Germany's ability to enforce its right to tax vis-à-vis other states.
The subject of this study is the interpretation of the Federal Republic of Germany's right to tax with respect to the sale or use of an asset, in the form in which it has entered the tax statutes as a statutory element. The amendments of the SEStEG using the Federal Republic's right to tax as a normative statutory element affected not only the EStG and KStG but also the AStG and the UmwStG. Moreover, it could hold potential for further future tax provisions aimed at safeguarding German tax revenue.
The philosophy of nature and political philosophy are commonly regarded as disciplines concerned with fundamentally different sets of problems. Overcoming the aporias that result from this is the point of Plessner's Philosophical Anthropology.
This study shows how Plessner, by appropriating elementary topoi of classical ontology, develops a structurally novel "ontology of the organic". This expression, used only in passing by Plessner, is developed systematically here. What is elaborated is a complex natural-philosophical "ontology of compensation". This ontology explicates elementary natural-philosophical acts of compensation which as such carry a double meaning: what is explicated structurally on the organismic level as the compensation of the organic body with itself and its environment takes, in the human sphere, the form of an ontology of persons, where the structurally identical act of compensation is to be performed as the play of personalization.
Plessner's approach is explicated here as a novel answer to these questions, equally an ontological and a social-philosophical approach.
The present work addresses questions that recur frequently at German universities concerning the design and application of examination procedures. With reference to the existing constitutional and administrative law requirements, which apply to final university examinations as well as to course-accompanying assessments, it comprehensively discusses
- for which examination performances a duty to state reasons exists,
- in which cases a collegial examination must be conducted,
- which examinees are comparable with one another for the purpose of establishing uniform examination conditions,
- what administrative-law quality attaches to university examination performances,
- whether there is a general entitlement to an internal administrative review procedure against a contested assessment of a university examination performance and, if so, to what extent and with which legal instruments a challenge to an assessment is to be addressed,
- and whether the legal institution of reformatio in peius may be applied when deciding on a challenge to an assessment.
Subsequently, taking the current legal situation into account, it is shown to what extent the parliamentary legislator meets the constitutionally required need for codification in university examination law and where improvements are needed.
The work addresses the question of the existence and scope of a prohibition of harm (Schädigungsverbot) in international law. It is based on the understanding that, owing to increasing interdependence, even lawful acts of states can lead to impairments of, and even harm to, other states. The reference fields were chosen in view of the fact that international environmental law features a prohibition of harm anchored in customary law protecting territorial sovereignty; that in world trade law and international monetary law the prohibition of harm takes the form of treaty provisions; and that in tax law one may consider which fundamental considerations lead to the acceptance of a prohibition of harm in a field that, at least at the multilateral level, has not yet been penetrated by treaty law.
In 2008, the simplified juvenile procedure, which had already been introduced and is now codified in §§ 76 ff. JGG, was revived when a particularly expedited form of procedure, the "Neukölln Model", was presented. The procedure was named after the Berlin district of Neukölln, where it began as a trial run on 1 January 2008. The model was based on a working agreement between individual juvenile judges, representatives of the juvenile public prosecutor's office, the juvenile court assistance service, and investigating officers in Berlin.
Inès Ben Miled examines the simplified juvenile procedure and the particularly expedited procedure under the Neukölln Model and, among other things, attempts to identify commonalities and differences between the two forms of procedure.
Under Art. 20 Abs. 1 of the Grundgesetz (Basic Law), the Federal Republic of Germany, as a democratic and social federal state, is committed in particular to federalism. Alongside democracy, it is one of the pillars of our polity. Against the background of this fundamental constitutional decision, Art. 115f Abs. 1 Nr. 2 GG stands out as a provision that appears to permit a far-reaching exception in the state of defense. Under certain conditions, the Federal Government is then to be able to issue directives not only to the federal administration but also to the state governments and state authorities. In view of such an exceptional power, the question arises how this power of direction of the Federal Government fits into our legal system.
The author first approaches this question via the historical background that led to the insertion of the provision. He examines in detail the prerequisites of this power of direction of the Federal Government and places it in its systematic context. In addition to presenting the concept of a directive as a means of influencing the federal administration and the Länder, the addressees of such directives are examined more closely. The questions of what subject matter directives under this provision may have, how they are to be issued, and what effects result from them are also explored in detail. The author further deals with the ensuing questions of what legal protection exists against such directives, who has to finance the associated tasks, and who is liable for any resulting damage. The work closes with a discussion of whether this provision constitutes an unconstitutional constitutional norm, and with a look at international provisions that could influence the power of direction.
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumptions and/or execution times exceed effective limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. They are, in particular, now able to process datasets of real-world, i.e., multi-gigabyte, size with reasonable memory and time consumption.
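To make the discovery task concrete, a naive functional-dependency search over a relational instance can be sketched as follows. This is a brute-force illustration of the problem statement, not of the thesis's optimized algorithms; the relation and column names are hypothetical:

```python
from itertools import combinations

def holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in the given rows."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = row[rhs]
        if key in seen and seen[key] != val:
            return False  # two tuples agree on lhs but differ on rhs
        seen[key] = val
    return True

def discover_fds(rows, attrs, max_lhs=2):
    """Naively enumerate all FDs with a left-hand side of up to max_lhs
    attributes. The candidate space is exponential in the number of
    attributes -- exactly the blow-up that pruning and hybrid search
    strategies are designed to avoid."""
    fds = []
    for k in range(1, max_lhs + 1):
        for lhs in combinations(attrs, k):
            for rhs in attrs:
                if rhs not in lhs and holds(rows, lhs, rhs):
                    fds.append((lhs, rhs))
    return fds

# Hypothetical example relation
rows = [
    {"zip": "10115", "city": "Berlin",  "name": "Ann"},
    {"zip": "10115", "city": "Berlin",  "name": "Bob"},
    {"zip": "14467", "city": "Potsdam", "name": "Ann"},
]
fds = discover_fds(rows, ["zip", "city", "name"])
print(fds)
```

Even on this three-column toy relation the search visits every candidate; on wide, multi-gigabyte tables this strategy is hopeless, which motivates the specialized discovery algorithms described above.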
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics and they are also able to validate individual dependencies, but they lack real discovery features, even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the dataset itself) and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
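The schema normalization use case builds on a standard building block: the closure of an attribute set under a set of FDs, which determines candidate keys and normal-form violations. A textbook closure computation (not the thesis's implementation; the FDs are hypothetical) looks like this:

```python
def closure(attrs, fds):
    """Compute the closure of an attribute set under a set of FDs.
    Each FD is a pair (lhs_tuple, rhs_attribute)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # apply the FD if its whole left-hand side is already derived
            if set(lhs) <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

# Hypothetical FDs: zip -> city, (city, street) -> zip
fds = [(("zip",), "city"), (("city", "street"), "zip")]
print(sorted(closure({"zip"}, fds)))            # zip alone does not reach street
print(sorted(closure({"city", "street"}, fds))) # city+street determine zip as well
```

With complete discovered FD sets as input, exactly such closures drive the decomposition into normalized relations, which is why the completeness of the profiling results matters for this use case.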
During the course of millions of years, evolutionary forces have shaped the current distribution of species and their genetic variability, by influencing their phylogeny, adaptability, and probability of survival. Southeast Asia is an extraordinarily biodiverse region, where past climate events have resulted in dramatic changes in land availability and distribution of vegetation, resulting likewise in periodic connections between isolated islands and the mainland. These events have influenced the way species are distributed throughout this region but, more importantly, they influenced the genesis of genetic diversity. Despite the observation that a shared paleo-history resulted in very diverse species phylogeographic patterns, the mechanisms behind these patterns are still poorly understood.
In this thesis, I investigated and contrasted the phylogeography of three groups of ungulate species distributed within South and Southeast Asia, aiming to understand what mechanisms have shaped speciation and geographical distribution of genetic variability. For that purpose, I analysed the mitogenomes of historical samples, in order to account for populations from the entire range of species distributions – including populations that no longer exist. This thesis is organized in three manuscripts, which correspond to the three investigated groups: red muntjacs, Rusa deer and Asian rhinoceros.
Red muntjacs are a widely distributed species and occur in very different habitats. We found evidence for gene flow among populations of different islands, indicative of their ability to utilize the available land corridors. However, we also described the existence of at least two dispersal barriers that created population differentiation within this group: one isolated Sundaic and mainland populations, and the second separated individuals from Sri Lanka.
Second, the two Rusa species investigated here revealed another consequence of the historical land connections. While the two species were monophyletic, we found evidence of hybridisation in Java, facilitated by the expansion of the widespread sambar, Rusa unicolor. Consequently, all individuals of the Javan deer, R. timorensis, that were transported to the east of Sundaland by humans were found to be of hybrid descent.
In the last manuscript, we were able to include samples from the extinct mainland populations of both Sumatran and Javan rhinoceros. The results revealed a much higher genetic diversity in the historical populations than ever reported for the contemporary survivors. Their evolutionary histories revealed a close relationship to climatic events of the Pleistocene but, more importantly, point out the vast extent of genetic erosion within these two endangered species.
The specific phylogeographic histories of the species showed some common patterns of genetic differentiation that could be directly linked to the climatic and geological changes on the Sunda Shelf during the Pleistocene. However, by contrasting these results I show that the same geological events did not always result in similar histories. One obvious example was the different permeability of the land corridors of Sundaland, as the ability of each species to utilize this newly available land was directly related to its specific ecological requirements. Taken together, these results make an important contribution to the general understanding of evolution in this biodiversity hotspot and of the main drivers shaping the distribution of genetic diversity, but they could also have important consequences for the taxonomy and conservation of the three investigated groups.
Der Gefangene als Phantom
(2017)
The "hooded man" of the Abu Ghraib torture scandal was omnipresent in the media in 2004 and has fuelled debates ever since about the representation of captivity as a form of self-description and description of others. Stephanie Siewert shows that such stagings of community in depictions of captivity are not new. Her transnationally oriented study traces how, since the mid-nineteenth century, literature and media have participated in the construction and deconstruction of a phantom position that manifests itself in modernity through various structures of banishment. Particular emphasis is placed on the interplay of ethnic, social and gender-specific attributions in the aesthetic arrangements and procedures of making persons disappear.
Der konsularische Schutz
(2017)
Prompted by the increase in abductions of German nationals abroad and by the 2009 ruling of the Federal Administrative Court (Bundesverwaltungsgericht) on this subject, this thesis presents a detailed and comprehensive analysis of the legal bases for the granting of consular protection by the diplomatic missions of the Federal Republic of Germany.
The first chapter provides a detailed account of the state's duties to act arising from international, European and constitutional law as well as from the Consular Act (Konsulargesetz), together with possible accompanying individual claims to the exercise of consular protection in specific cases.
The second chapter sets out the conditions for granting consular protection under the Consular Act. The focus lies on determining the scope of application of § 5(1) sentence 1 of the Consular Act in light of the Federal Administrative Court's judgment of 28 May 2009, which, in the author's view, misconstrues the scope of this provision. § 5(1) sentence 1 of the Consular Act is a special social-welfare provision outside Book XII of the Social Code (SGB XII) that governs consular assistance solely in situations of economic hardship.
The third chapter analyses the existing rules on the reimbursement of costs incurred in the course of granting consular protection and explains their systematics. It also offers an outlook on the future rules on cost reimbursement under the Federal Fees Act (Bundesgebührengesetz) and the associated statutory ordinance.
Finally, the findings are summarised by means of a historical case, and a legislative proposal is presented that could remedy the ambiguities and inconsistencies identified in the Consular Act.
Carbohydrate-protein interactions are ubiquitous in nature. They provide the initial molecular contacts in many cell-cell processes, for example immune responses, signal transduction, egg fertilization, and the infection processes of pathogenic viruses and bacteria. Bacteria, in turn, are themselves infected by bacteriophages, viruses that can cause bacterial lysis but do not affect other hosts. The infection process of a bacteriophage involves the specific detection and binding of the bacterium, which can be based on a carbohydrate-protein interaction. This mechanism of specific detection of pathogenic bacteria can be useful for the development of bacteria sensors for the food industry or of diagnostic tools.
Bacteriophages of the family Podoviridae use tailspike proteins (TSPs) for the specific detection of enteritis-causing bacteria such as Escherichia coli, Salmonella spp. or Shigella flexneri. The tailspike protein provides the first contact by binding to the carbohydrate-containing O-antigen part of the lipopolysaccharide in the Gram-negative cell wall. After binding to O-antigen repeat units, the enzymatic activity of the tailspike proteins cleaves the carbohydrate chains, which enables the bacteriophage to approach the bacterial surface for DNA injection. Because of this necessary binding, cleavage and release cycle, tailspike proteins exhibit a relatively low affinity for the oligosaccharide structures of the O-antigen compared, for example, to antibodies. This work aimed to study the determinants that influence carbohydrate affinity in the extended TSP binding grooves, a prerequisite for designing a high-affinity tailspike-protein-based bacteria sensor.
For this purpose, the tailspike protein of bacteriophage Sf6 (Sf6 TSP) was used, which specifically binds Shigella flexneri Y O-antigen comprising two tetrasaccharide repeat units at the subunit interfaces of the trimeric β-helix protein. The Sf6 TSP endorhamnosidase cleaves the O-antigen, yielding an octasaccharide as the main product. The binding affinity of inactive Sf6 TSP towards the polysaccharide was characterized by fluorescence titration experiments and surface plasmon resonance (SPR).
Moreover, cysteine mutations were introduced into the Sf6 TSP binding site for the covalent thiol coupling of an environment-sensitive fluorescent label, in order to obtain a sensor for Shigella flexneri Y based on TSP-O-antigen recognition. This sensor showed an increase of more than 100 % in visible-light fluorescence amplitude upon binding of a polysaccharide test solution. Improvements of the TSP sensor can be achieved by increasing the tailspike affinity towards the O-antigen. Therefore, molecular dynamics simulations evaluating ligand flexibility, hydrogen-bond occupancies and water-network distributions were used for affinity prediction on the available cysteine mutants of Sf6 TSP. The binding affinities were analyzed experimentally by SPR. This combined computational and experimental set-up for the design of a high-affinity carbohydrate-binding protein could successfully distinguish strongly increased from strongly decreased affinities of single amino acid mutants.
A thermodynamically and structurally well-characterized set of high-affinity mutants of another tailspike protein, HK620 TSP, was used to evaluate the influence of water molecules on binding affinity. The free enthalpy of HK620 TSP-oligosaccharide complex formation derived either from the replacement of a conserved water molecule or from the immobilization of two water molecules upon ligand binding. Furthermore, the enthalpic and entropic contributions of water molecules in a hydrophobic binding pocket could be assigned by free-energy calculations. These findings may help to improve carbohydrate docking and algorithms for engineering carbohydrate-binding proteins in the future.
Detection and Kirchhoff-type migration of seismic events by use of a new characteristic function
(2017)
The classical method of seismic event localization is based on picking body-wave arrivals, ray tracing and inversion of travel-time data. Travel-time picks with small uncertainties are required to produce reliable and accurate results with this kind of source localization. Hence, recordings with a low signal-to-noise ratio (SNR) cannot be used in a travel-time-based inversion. Low SNR can be associated with weak signals from distant and/or low-magnitude sources as well as with a high level of ambient noise. Diffraction stacking is an alternative seismic event localization method that also enables the processing of low-SNR recordings by stacking the amplitudes of seismograms along a travel-time function. The location of a seismic event and its origin time are determined from the highest stacked amplitudes (coherency) of the image function. The method lends itself to automatic processing since it does not need travel-time picks as input data.
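The core of such a grid-search diffraction stack can be sketched as follows. This is a minimal illustrative implementation, not the thesis' parallelized code; the function and array names are hypothetical, and travel-time tables are assumed to be precomputed for every station and candidate grid point:

```python
import numpy as np

def diffraction_stack(traces, dt, tt_tables, n_origin_shifts):
    """Brute-force diffraction stacking.

    traces:     (n_stations, n_samples) waveform (or characteristic function) array
    dt:         sampling interval in seconds
    tt_tables:  (n_stations, n_grid) precomputed travel times in seconds
    Returns the image function over (grid point, origin-time shift) and the
    index pair of its maximum, i.e. the estimated source location and origin time.
    """
    n_sta, n_samp = traces.shape
    n_grid = tt_tables.shape[1]
    image = np.zeros((n_grid, n_origin_shifts))
    for g in range(n_grid):                    # candidate source position
        for t0 in range(n_origin_shifts):      # candidate origin-time sample
            s = 0.0
            for i in range(n_sta):
                # shift each trace by the predicted travel time and stack
                idx = t0 + int(round(tt_tables[i, g] / dt))
                if idx < n_samp:
                    s += traces[i, idx]
            image[g, t0] = abs(s)
    best = np.unravel_index(np.argmax(image), image.shape)
    return image, best
```

Amplitudes stacked along the correct moveout add coherently, so the image function peaks at the true source position and origin time even when individual traces are noisy.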
However, diffraction stacking may require long computation times if only limited computer resources are available. Furthermore, a simple diffraction stack of recorded amplitudes can fail to locate seismic sources if the focal mechanism leads to complex radiation patterns, which is typical for both natural and induced seismicity.
In my PhD project, I developed a new workflow for the localization of seismic events based on a diffraction stacking approach. A parallelized code was implemented for the calculation of travel-time tables and for the determination of an image function, reducing computation time. To address the effects of complex source radiation patterns, I also propose computing the diffraction stack from a characteristic function (CF) instead of stacking the original waveform data. A new CF, called mAIC (modified Akaike Information Criterion) in the following, is proposed. I demonstrate that the performance of the mAIC does not depend on the chosen length of the analyzed time window and that both P- and S-wave onsets can be detected accurately. To avoid cross-talk between P- and S-waves due to inaccurate velocity models, I separate the P- and S-waves from the mAIC function using polarization attributes. The final image function is then represented by the largest eigenvalue from the covariance analysis between the P- and S-image functions. Before applying diffraction stacking, I also denoise the seismograms using Otsu thresholding in the time-frequency domain.
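For orientation, the classical AIC picker that the mAIC modifies (the modification itself is detailed in the thesis) measures, for every candidate sample k, the variance contrast between the windows before and after k; the minimum of the resulting curve marks the phase onset. A minimal sketch of that baseline:

```python
import numpy as np

def aic_cf(x):
    """Classical two-window AIC characteristic function of a 1-D trace x.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))
    The minimum of AIC(k) is the estimated phase onset. This is the standard
    formulation, not the thesis' mAIC variant.
    """
    N = len(x)
    aic = np.full(N, np.nan)
    for k in range(1, N - 1):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        # guard against log(0) in perfectly constant windows
        aic[k] = k * np.log(max(v1, 1e-20)) + (N - k - 1) * np.log(max(v2, 1e-20))
    return aic
```

Usage: for a trace consisting of weak noise followed by a strong arrival, `np.nanargmin(aic_cf(x))` returns a sample index close to the onset of the arrival.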
Results from synthetic experiments show that the proposed diffraction stacking provides reliable results even from seismograms with an SNR as low as 1. Tests with different representations of the synthetic seismograms (displacement, velocity and acceleration) showed that acceleration seismograms deliver better results at high SNR, whereas displacement seismograms provide more accurate results for low-SNR recordings. In another test, different measures (maximum amplitude, other statistical parameters) were used to determine the source location in the final image function. I found that the statistical approach is preferable, particularly at low SNR.
The workflow of my diffraction stacking method was finally applied to local earthquake data from Sumatra, Indonesia. Recordings from a temporary network of 42 stations deployed for 9 months around the Tarutung pull-apart basin were analyzed. The seismic event locations resulting from the diffraction stacking method align along a segment of the Sumatran Fault. A more complex distribution of seismicity is imaged within and around the Tarutung Basin. Two N-S-striking lineaments were found in the middle of the Tarutung Basin, supporting independent results from structural geology. These features are interpreted as fractures opening due to local extension. A cluster of seismic events occurred repeatedly within a short time span, which might be related to fluid drainage, since two hot springs are observed at the surface near this cluster.
It is not only the Greek crisis and the country's threatened insolvency that have occupied the public for years. The topic of sovereign insolvency is a permanent fixture; most recently, Venezuela ran into massive financial difficulties in 2017. Nevertheless, there is still no orderly insolvency procedure for states.
Besides the domestic population, foreign bondholders are frequently affected by a sovereign insolvency and must forgo a substantial part of the bond principal in the course of restructuring. Only large creditors (often hedge funds) can frequently obtain full repayment of the bond principal, by otherwise disrupting the restructuring through complicated enforcement actions. This possibility has given rise to a secondary market on which hedge funds pursuing this business model buy up bonds from small investors.
The author addresses these circumstances. He examines to what extent applying the principles of frustration of contract (Wegfall der Geschäftsgrundlage) could compensate for the lack of an orderly insolvency procedure for states. In doing so, the requirements of § 313 of the German Civil Code (BGB) are subjected to critical scrutiny.
Moreover, against the backdrop of the euro crisis and the upheavals in the EU in 2016 (the so-called Brexit), a break-up of the eurozone again appears more likely. An economically struggling member state could leave the common currency area in order to support its domestic economy through devaluation measures. Protectionism is also experiencing a renaissance under Donald Trump, the US president elected in 2016. Insolvent states might therefore now also feel tempted to adopt protectionist measures in order to create jobs and give domestic companies an advantage. These current and potential developments and their effects on cross-border contractual relationships are addressed in the final part.
With Saccharomyces cerevisiae being a commonly used host organism for synthetic biology and biotechnology, the work presented here aims at developing novel tools to improve and facilitate pathway engineering and heterologous protein production in yeast. Initially, the multi-part assembly strategy AssemblX was established, which allows the fast, user-friendly and highly efficient assembly of up to 25 units, e.g. genes, into a single DNA construct. To speed up complex assembly projects, starting from sub-gene fragments and resulting in mini-chromosome-sized constructs, AssemblX follows a level-based approach: Level 0 stands for the assembly of genes from multiple sub-gene fragments; Level 1 for the combination of up to five Level 0 units into one Level 1 module; Level 2 for the linkage of up to five Level 1 modules into one Level 2 module. This way, all Level 0 and subsequently all Level 1 assemblies can be carried out simultaneously. Individually planned, overlap-based Level 0 assemblies enable scar-free and sequence-independent assembly of transcriptional units, without limitations on fragment number, size or content. Level 1 and Level 2 assemblies, which are carried out via predefined, computationally optimized homology regions, follow a standardized, highly efficient and PCR-free scheme. AssemblX is thus virtually sequence-independent, with no need for time-consuming domestication of assembly parts. To minimize the risk of human error and to facilitate the planning of assembly projects, especially for individually designed Level 0 constructs, the whole AssemblX process is accompanied by a user-friendly webtool. This webtool provides the user with an easy-to-use interface and returns a bench protocol including all cloning steps. The efficiency of the assembly process is further boosted through the implementation of different features, e.g. ccdB counter-selection and marker switching/reconstitution.
Due to the design of the homology regions and vector backbones, the user can flexibly choose between various overlap-based cloning methods, enabling cost-efficient assemblies that can be carried out either in E. coli or in yeast. Protein production in yeast is additionally supported by a characterized library of 40 constitutive promoters, fully integrated into the AssemblX toolbox. This provides the user with a starting point for protein balancing and pathway engineering. Furthermore, the final assembly cassette can be subcloned into any vector, giving the user the flexibility to transfer the individual construct into any host organism other than yeast.
As successful production of heterologous compounds generally requires precise adjustment of protein levels or even manipulation of the host genome, e.g. to inhibit unwanted feedback regulation, the optogenetic transcriptional regulation tool PhiReX was designed. In recent years, light induction has been reported to enable easy, reversible, fast, non-toxic and nearly gratuitous regulation, providing manifold advantages over conventional chemical inducers. The optogenetic interface established in this study is based on the photoreceptor PhyB and its interacting protein PIF3. Both proteins, derived from Arabidopsis thaliana, dimerize in a red/far-red light-responsive manner. This interaction depends on a chromophore that is naturally not available in yeast. By fusing split proteins to the two components of the optical dimerizer, active enzymes can be reconstituted in a light-dependent manner. For the construction of the red/far-red light-sensing gene expression system PhiReX, a customizable synTALE DNA-binding domain was fused to PhyB, and a VP64 activation domain to PIF3. The synTALE-based transcription factor allows programmable targeting of any desired promoter region. The first, plasmid-based PhiReX version mediates chromophore- and light-dependent expression of the reporter gene but required further optimization regarding robustness, basal expression and maximum output. This was achieved by genome integration of the optical regulator pair, by cloning the reporter cassette onto a high-copy plasmid, and by additional molecular modifications of the fusion proteins regarding their cellular localization. In combination, this results in robust and efficient activation of cells over an incubation time of at least 48 h. Finally, to boost the potential of PhiReX for biotechnological applications, yeast was engineered to produce the chromophore itself, overcoming the need to supply this expensive and photo-labile compound exogenously.
The expression output mediated by PhiReX is comparable to that of the strong constitutive yeast TDH3 promoter and, in the experiments described here, clearly exceeds that of the commonly used galactose-inducible GAL1 promoter.
The fast-developing field of synthetic biology enables the construction of complete synthetic genomes. The Synthetic Yeast Genome Project (Sc2.0) is currently underway to redesign and synthesize the S. cerevisiae genome. As a prerequisite for the so-called "SCRaMbLE" system, all Sc2.0 chromosomes incorporate symmetrical target sites for Cre recombinase (loxPsym sites), enabling rearrangement of the yeast genome after induction of Cre with the toxic hormonal substance beta-estradiol. To address the safety concern linked to the use of beta-estradiol, a red-light-inducible Cre recombinase, dubbed L-SCRaMbLE, was established in this study. L-SCRaMbLE was shown to allow time- and chromophore-dependent recombination with reliable off-states when applied to a plasmid containing four genes of the beta-carotene pathway, each flanked by loxPsym sites. When directly compared to the original induction system, L-SCRaMbLE generated a larger variety of recombination events and exhibited lower basal activity. In conclusion, L-SCRaMbLE provides a promising and powerful tool for genome rearrangement.
The three tools developed in this study provide so far unmatched possibilities to tackle complex synthetic biology projects in yeast by addressing three different stages: fast and reliable biosynthetic pathway assembly; highly specific, orthogonal gene regulation; and tightly controlled synthetic evolution of loxPsym-containing DNA constructs.
Development of a reliable and environmentally friendly synthesis for fluorescence carbon nanodots
(2017)
Carbon nanodots (CNDs) have generated considerable attention due to their promising properties, e.g. high water solubility, chemical inertness, resistance to photobleaching, high biocompatibility and ease of functionalization. These properties render them ideal for a wide range of applications, e.g. electrochemistry, waste-water treatment, (photo)catalysis, bio-imaging and biotechnology, as well as chemical sensing and optoelectronic devices like LEDs. In particular, the ability to prepare CNDs from a wide range of accessible organic materials makes them a potential alternative to conventional organic dyes and semiconductor quantum dots (QDs) in various applications. However, current synthesis methods are typically expensive and depend on complex and time-consuming processes or harsh synthesis conditions and toxic chemicals. One way to reduce overall preparation costs is the use of biological waste as starting material. Hence, natural carbon sources such as pomelo peel, egg white and egg yolk, orange juice, and even eggshells, to name a few, have been used for the preparation of CNDs. While the use of waste is desirable, especially to avoid competition with essential food production, most starting materials lack the purity and structural homogeneity required to obtain homogeneous carbon dots. Furthermore, most synthesis approaches reported to date require extensive purification steps and have resulted in carbon dots with heterogeneous photoluminescent properties and indefinite composition. For this reason, among others, the relationship between CND structure (e.g. size, edge shape, functional groups and overall composition) and photophysical properties is not yet fully understood. This is particularly true for carbon dots displaying selective luminescence, one of their most intriguing properties: their PL emission wavelength can be tuned by varying the excitation wavelength.
In this work, a new reliable, economic and environmentally friendly one-step synthesis is established to obtain CNDs with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and carboxylic acids as carbon sources and Tris-EDTA (TE) buffer as nitrogen source. The presented microwave-assisted hydrothermal precursor carbonization (MW-hPC) is characterized by its cost-efficiency, simplicity, short reaction times, low environmental footprint, and high yields of approx. 80% (w/w). Furthermore, only a single synthesis step is necessary to obtain homogeneous water-soluble CNDs, with no need for further purification.
Depending on the starting materials and reaction conditions, different types of CNDs were prepared. The as-prepared CNDs exhibit reproducible, highly homogeneous and favourable PL properties with narrow emission bands (approx. 70 nm FWHM), are non-blinking, and are ready to use without further purification, modification or surface-passivation agents. Furthermore, the CNDs are comparatively small (approx. 2.0 nm to 2.4 nm) with narrow size distributions; are stable over a long period of time (at least one year), either in solution or as a dried solid; and maintain their PL properties when re-dispersed in solution. Depending on the CND type, the PL quantum yield (PLQY) can be adjusted from as low as 1% to as high as 90%, one of the highest PLQY values reported for CNDs so far.
An essential part of this work was the use of a microwave synthesis reactor, allowing various batch sizes and precise control over reaction temperature and time, pressure, and heating and cooling rates, while also being safe to operate under elevated reaction conditions (e.g. 230 °C and 30 bar). The high sample throughput achieved hereby allowed, for the first time, a thorough investigation of a wide range of synthesis parameters, providing valuable insight into CND formation. The influence of the carbon and nitrogen sources, precursor concentration and combination, reaction time and temperature, batch size, and post-synthesis purification steps was carefully investigated with regard to the optical properties of the as-synthesized CNDs. In addition, the change in photophysical properties resulting from converting a CND solution into a solid and back into solution was investigated. Remarkably, upon freeze-drying, the initially brown CND solution turns into a non-fluorescent white, slightly yellow or brown solid which recovers its PL in aqueous solution. Selected CND samples were also subjected to EDX, FTIR, NMR, PL lifetime (TCSPC), particle size (TEM), TGA and XRD analysis. Besides structural characterization, the pH- and excitation-dependent PL characteristics (i.e. selective luminescence) were examined, giving insight into the origin of the photophysical properties and the excitation-dependent behaviour of CNDs. The obtained results support the notion that for CNDs the nature of the surface states determines the PL properties, and that the excitation-dependent behaviour is caused by the "Giant Red-Edge Excitation Shift" (GREES).
In this work, a sensor system based on thermoresponsive materials is developed using a modular approach. Three key monomers bearing either a carboxyl, alkene or alkyne end group, connected via a spacer to the methacrylic polymerizable unit, were synthesized and used in a flexible copolymerization strategy with oligo(ethylene glycol) methacrylates. This allows the lower critical solution temperature (LCST) of the polymers in aqueous media to be tuned. The molar masses are variable thanks to an excursion into polymerization in ionic liquids, extending the accessible molar masses from 25 to over 1000 kDa. The systems shown to be effective in aqueous solution could be immobilized on surfaces by copolymerizing photo-crosslinkable units. The immobilized systems were formulated to give different layer thicknesses, swelling ratios and mesh sizes, depending on the demands of the coupling reaction.
The coupling of detector units or model molecules is approached via reactions from the click-chemistry pool, and the reactions are also evaluated for their efficiency under these aspects. The coupling reactions are followed by surface plasmon resonance spectroscopy (SPR) to judge their efficiency. With these tools at hand, Salmonella saccharides could be selectively detected by SPR. Influenza viruses were detected in solution by turbidimetry as well as via a copolymerized solvatochromic dye that tracks binding through the change in the polymers' fluorescence upon the binding event. This effect could also be achieved by exploiting the thermoresponsive behavior. Another demonstrator consists of the detection system bound to a quartz surface, allowing virus detection on a solid carrier.
The experiments show the great potential of combining the concepts of thermoresponsive materials and click chemistry to develop technically simple sensors for large biomolecules and viruses.
Plant cells host two important organelles: mitochondria, known as the cell's 'powerhouses', which convert oxygen and nutrients into ATP, and plastids, which perform photosynthesis. These organelles contain their own genomes, which encode proteins required for gene expression and energy metabolism. Transformation technologies offer great potential for investigating all aspects of the physiology and gene expression of these organelles in vivo. In addition, organelle transformation can be a valuable tool for biotechnology and molecular plant breeding. Plastid transformation systems are well developed for a few higher plants; mitochondrial transformation, however, has so far only been reported for Saccharomyces cerevisiae and the unicellular alga Chlamydomonas reinhardtii.
The development of an efficient new selection marker for plastid transformation is important for several reasons, including facilitating supertransformation of the plastid genome for metabolic engineering purposes and producing multiple knock-outs or site-directed mutagenesis of two unlinked genes. In this work, we developed a novel selection system for Nicotiana tabacum (tobacco) chloroplast transformation with an alternative marker. The marker gene, aac(6′)-Ie/aph(2′′)-Ia, was cloned into different plastid transformation vectors, and several candidate aminoglycoside antibiotics were investigated as selection agents. Overall, the efficiency of selection and the transformation efficiency with aac(6′)-Ie/aph(2′′)-Ia as selectable marker in combination with the aminoglycoside antibiotic tobramycin were similar to those with the standard marker gene aadA and spectinomycin selection. Furthermore, our new selection system may be useful for developing plastid transformation in new species, including cereals, the world's most important food crops, and could also be helpful for establishing a selection system for mitochondrial transformation.
To date, all attempts to achieve mitochondrial transformation in higher plants have been unsuccessful. A mitochondrial transformation system for higher plants would not only provide the potential to study mitochondrial physiology but could also provide a method to introduce cytoplasmic male sterility into crops to produce hybrid seeds. Establishing a stable mitochondrial transformation system in higher plants requires several steps: delivery of foreign DNA, stable integration of the foreign sequences into the mitochondrial genome, efficient expression of the transgene, a highly regenerable tissue culture system that allows regeneration of the transformed cells into plants, and, finally, a suitable selection system to identify cells with transformed mitochondrial genomes. Among all these requirements, finding a good selection system is perhaps the greatest obstacle to the development of a mitochondrial transformation system for higher plants. In this work, two selection systems were tested for mitochondrial transformation: kanamycin in combination with the antibiotic-inactivating marker gene nptII, and sulfadiazine, a selection agent that inhibits the folic acid biosynthesis pathway residing in plant mitochondria, in combination with the sul gene, which encodes an enzyme insensitive to inhibition by sulfadiazine. Nuclear transformation experiments served as proof of the specificity of the sulfadiazine selection system for mitochondria. We showed that an optimized sulfadiazine selection system, with the Sul protein targeted to mitochondria, is much more efficient than the previous sulfadiazine selection system, in which the Sul protein was targeted to the chloroplast. We also showed in systematic experiments that the efficiency of selection and nuclear transformation with the optimized sulfadiazine selection was higher than with the standard kanamycin selection system.
Finally, we also investigated the suitability of this selection system for nuclear transformation of the model alga Chlamydomonas reinhardtii, obtaining promising results. Although we designed several mitochondrial transformation vectors based on the sulfadiazine system, with different expression elements and integration sites in the mitochondrial genome, and also tested different tissue culture conditions, we were not able to obtain mitochondrial transformation with this system. Nonetheless, establishing the sul gene as an efficient and specific selection marker for mitochondria addresses one of the major bottlenecks and may pave the way towards mitochondrial transformation in higher plants.
This dissertation addresses the difference between attitudes surveyed at the personal level in opinion-poll interviews and aggregated into an "opinion picture", and public opinion, i.e. the perceived climate of opinion on a topic. For many years, data from the long-running population survey of the Bundeswehr Centre for Military History and Social Sciences (ZMSBw) have consistently indicated, regarding German citizens' personal attitudes towards the armed forces, that the majority of citizens view the Bundeswehr positively. At the same time, parts of the population perceive a climate of opinion in which the Bundeswehr is seen rather critically at the societal level. The media-centred approach to this phenomenon developed for the first time in this thesis, which is theoretically derived as a manifestation of pluralistic ignorance, focuses, besides the influence of a dual climate of opinion, on the effect of media-specific perception phenomena (the hostile-media phenomenon and third-person perception) in order to explain the observed discrepancy between personal attitude and perceived opinion climate regarding the Bundeswehr's reputation.
In a quantitative population survey, indicators were developed to subject the hypotheses to empirical testing. The descriptive analyses of the direction and magnitude of the perceived discrepancy showed that citizens tend to err in the direction of judging the opinion climate on the Bundeswehr's reputation more negatively than the regard in which they personally hold the armed forces (negative discrepancy perception). It also emerged that the perceived discrepancy declined when the topic under study was attributed emotional potential. In such a case, respondents tend to position their own opinion close to the anticipated majority opinion in order to avoid pressure to justify themselves or, at worst, social isolation.
The analyses of the four central explanatory variables confirmed all of the hypotheses on the direction of the perceived discrepancy. Heavier media use, a negative perception of general media coverage of the Bundeswehr, a personally positive attitude towards the Bundeswehr, and the perception that the media affect others more strongly than oneself each contributed to an increase in the negative discrepancy perception regarding the standing of the Bundeswehr. People exhibiting these characteristics judged the climate of opinion on the standing of the Bundeswehr to be more negative than the esteem in which they personally held the armed forces. The analysis of the respective effect sizes made clear, however, that each of the explanatory approaches employed could account for only a small or moderate share of the perceived discrepancy.
This result can be explained by the fact that the topic under study proved, above all, insufficiently conflict-laden, in addition to the lack of continuous media coverage of and broad public discourse on the standing of the Bundeswehr and the absence of personal ties to the armed forces. Whether the Bundeswehr enjoys societal esteem is of only minor personal relevance to the majority of the population. For these reasons, the topic appears ill-suited to giving rise to the media-specific perception phenomena drawn on as explanatory factors in this dissertation. This finding implies that the discrepancy between personal attitude and perceived climate of opinion regarding the standing of the Bundeswehr is influenced by a range of further factors that future research will need to identify and examine.
A criminal act described by the word "verleiten" (to induce) appears in 15 different offence definitions of German criminal law. The author successfully examines the possibility, expediency, and necessity of establishing a uniform concept of inducement. By means of a systematic comparative analysis of all inducement offences, some of which have been in force for more than 140 years, he identifies a uniform content of inducement and of its nature. This makes it the first comprehensive and generalizing treatise on the concept of inducement in legal scholarship. Finally, the author addresses individual questions of criminal-law doctrine that arise from the use of the element "verleiten".
The tax treatment of expenses with mixed private and business or professional causes is a perennial subject of dispute at both the academic and the practical level. The decision of the Grand Senate of the Federal Fiscal Court (BFH) of 21 September 2009 (GrS 1/06) initiated a turnaround in foundational case law that had stood for almost four decades, permitting a broader tax recognition of mixed expenses. A comprehensive apportionment requirement, however, still does not apply. This foundational work develops and examines the principles of the tax treatment of mixed expenses comprehensively, taking into account historical and current case law as well as, among other things, procedural aspects. The work provides a fruitful basis for academic debate and practical work alike.
The author is a tax advisor in Berlin and a lecturer in taxation at the University of Potsdam.
Hybrid financial instruments are generally understood as hybrid forms between equity and debt capital. Owing to their flexible design, hybrid financial instruments offer an alternative to classical equity and debt instruments that is advantageous in many respects. When structuring such instruments in practice, particular attention must be paid to how they are to be accounted for in the commercial and tax balance sheets of the issuer and of the holder. At the issuer's level, the question is whether the capital raised is to be recognized as equity or as debt. At the holder's level, the question is whether structured hybrid financial instruments are to be accounted for as a whole or separated into their individual components. It must be borne in mind that both the distinction between equity and debt and the delimitation of the unit of account can entail substantial divergences in legal consequences. A central concern of this study is therefore to formulate clear and unambiguous delimitation criteria for both commercial and tax accounting law. The study addresses scholars who expect a well-founded and critical treatment of the subject, as well as practitioners seeking concrete solutions and structuring options in connection with hybrid financial instruments.
The realization of hidden losses ("Hebung stiller Lasten") is an issue that has been controversially debated in case law and the literature in recent years and, owing to feared tax shortfalls running into the billions, led to the new provisions in Sec. 4f and Sec. 5 (7) of the German Income Tax Act (EStG).
The author takes these new provisions as an occasion to work out the fundamentals of tax accounting, to analyse the old legal situation, and to assess the new legal situation in the light of these findings.
In presenting the fundamentals of tax accounting, the author addresses the constitutional justification of hidden losses and engages with the case law of the Federal Constitutional Court.
This is followed by an assessment of the case law of the Federal Fiscal Court, which has justified the realization of hidden losses on the basis of the realization principle and the principle that acquisitions must be profit-neutral. The author then examines to what extent these principles can be carried over to the new provisions.
The assessment of evidence is traditionally regarded as the "domain of the trial judge". Appellate courts, however, now increasingly scrutinize trial-court judgments for so-called errors in the assessment of evidence, most frequently criticizing "incompleteness". The contours of this type of error remain weak, and its structural relationship to the other types of error has not yet been worked out. The author approaches the problem deductively, starting from a spatialized understanding of the assessment of evidence as an attempt to close the gap between the court proceedings and the actual events.
The inquiry leads to the epistemological as well as normative limits of this gap-closing procedure and shows that, even under ideal conditions, the gap can rationally only be narrowed, never closed. The author then opposes the prevailing view, which demands a subjective-irrational overcoming of objective-rational uncertainty for the formation of judicial conviction. From all this, he finally develops a concept of the incomplete assessment of evidence.
The work was awarded the Wolf-Rüdiger-Bub Prize.
The Greek-Turkish conflict represents a special case in the history of NATO. The latent danger of a military confrontation between its own allies clearly distinguished the disputes between Greece and Turkey from all other crises within the Alliance. This volume addresses the question of what attempts NATO made to defuse the persistent Greek-Turkish tensions, which at times threatened to escalate into open war. Using the example of the southeastern flank, which today, around 25 years after the end of the Cold War, is once again in the public eye owing to numerous wars and conflicts, the study examines the Atlantic Alliance's capacity to settle disputes between hostile allies. It also focuses on the measures NATO took at the time to integrate the two states despite their mutual hostility and to bind them to the Alliance in the long term.
The legal bases of the law governing cross-border cooperation between territorial authorities
(2017)
With the triumph of inter-municipal cooperation and its legal foundations over recent decades, the law of cooperation between territorial authorities within national or regional legal orders has been researched extensively. The question of how this law applies in a cross-border context, by contrast, has received hardly any attention. This thesis therefore aims at a systematic classification of the sources of national and European law that apply to cross-border cooperation between territorial authorities.
Die Rote Gefahr
(2017)
This thesis deals with the principles of the economic perspective ("wirtschaftliche Betrachtungsweise") in tax law, in particular in real estate transfer tax law. The historical, doctrinal, and constitutional aspects of the economic perspective are examined, and these findings are then applied to the particularities of the Real Estate Transfer Tax Act (GrEStG). Besides a general presentation of real estate transfer tax provisions, the analysis centres on the development and the constitutional justification of the economic perspective, using Sec. 1 (2a) GrEStG as an example. The legislative amendments to Sec. 1 (2a) GrEStG and the decisions of the Federal Fiscal Court from 2013, 2014, and 2015 are presented in detail and discussed with regard to the economic perspective and its significance in the Real Estate Transfer Tax Act.
We analyze an inverse regression model with noise under random design, with the aim of estimating the unknown target function from a given set of data drawn according to some unknown probability distribution. Our estimators are all constructed by kernel methods, which rely on a reproducing kernel Hilbert space structure combined with spectral regularization methods.
A first main result establishes upper and lower bounds for the rate of convergence under a given source condition restricting the class of admissible distributions. Since kernel methods scale poorly on massive datasets, we then study one approach for saving computation time and memory in more detail: we show that parallelizing spectral algorithms also leads to minimax-optimal rates of convergence, provided the number of machines is chosen appropriately.
We emphasize that all of these estimators depend on the assumed a priori smoothness of the target function and on the eigenvalue decay of the kernel covariance operator, both of which are in general unknown. Constructing good, purely data-driven estimators is the problem of adaptivity, which we handle for the single-machine setting via a version of the Lepskii principle.
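As a minimal illustration of the kernel approach described above, the following sketch implements the Tikhonov (kernel ridge) member of the spectral-regularization family on a toy random-design problem. The Gaussian kernel, the target function, and all parameter values are assumptions invented for this sketch, not the thesis's actual setting.

```python
import numpy as np

# Illustrative Tikhonov instance of spectral regularization:
# f_hat = sum_i alpha_i k(x_i, .) with alpha = (K + n*lam*I)^{-1} y.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1.0, 1.0, n)                    # random design
f_target = lambda t: np.sin(np.pi * t)           # "unknown" target function
y = f_target(x) + 0.1 * rng.standard_normal(n)   # noisy observations

def gauss_kernel(a, b, width=0.3):
    """Gaussian RKHS kernel matrix k(a_i, b_j)."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * width ** 2))

K = gauss_kernel(x, x)
lam = 1e-3                                       # regularization parameter
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# Evaluate the estimator on a grid and measure its error against the target.
x_test = np.linspace(-1.0, 1.0, 400)
f_hat = gauss_kernel(x_test, x) @ alpha
rmse = np.sqrt(np.mean((f_hat - f_target(x_test)) ** 2))
```

In the adaptive (Lepskii-type) setting sketched in the thesis, `lam` would be chosen from the data by comparing estimators along a grid of candidate values rather than fixed a priori.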
Hybridoma technology for the production of monoclonal antibodies enabled a major step forward in the development of immunoassays for biochemical research and clinical diagnostics. The production of antibodies against low-molecular-weight analytes, haptens, which are typical targets in food and environmental analysis, has also become increasingly important in recent years. In the course of the hybridoma procedure, thousands of antibody-secreting and non-secreting cells are generated, and selecting the few antigen-selective hybridoma cells is among the most challenging steps in antibody production. Existing selection procedures, such as limiting-dilution cloning combined with enzyme-linked immunosorbent assays (ELISAs), do not guarantee monoclonality and allow the screening of only a few cell clones. High-throughput selection methods such as fluorescence-activated cell sorting (FACS), by contrast, enable a very high sample throughput, and single-cell deposition guarantees monoclonality. However, the cell labelling required for this is often cytotoxic or laborious to generate, and no labelling method has so far been reported that allows hapten-selective hybridoma cells to be analysed by flow cytometry and selected by FACS.
For this reason, two cell-labelling methods intended to make this possible were developed in this thesis. The membrane-bound antibodies of hybridoma cells were to be immunofluorescence-labelled either directly or indirectly and thereby made accessible to flow cytometry and FACS selection. Direct labelling was performed with a hapten-fluorophore conjugate. For the first time, it made it possible to determine the proportion of hapten-selective hybridoma cells in a hybridoma cell line. This was demonstrated for two hapten-selective hybridoma cell lines producing antibodies against the hormone 17β-estradiol and the cardenolide digoxigenin. Flow cytometry and ELISAs yielded comparable results: cells that could be labelled hapten-selectively also secreted hapten-selective antibodies. Direct labelling could furthermore be used to test two mycotoxin-selective hybridoma cell lines, producing antibodies against aflatoxin and zearalenone, for monoclonality, which is not possible by ELISA. The labelling method was, however, only suitable for fixed hybridoma cells; labelling of living cells could be demonstrated neither by flow cytometry nor by confocal laser scanning microscopy.
This was only achieved with a newly developed indirect immunofluorescence labelling, in which the cells were first incubated with a hapten-peroxidase conjugate, followed by a fluorophore-labelled anti-HRP antibody conjugate. This was demonstrated for two analytes, the hormone estrone and the antiepileptic drug carbamazepine. Indirect labelling was successfully used to sort carbamazepine-selective hybridoma cells out of a fusion batch for monoclonal antibody production. This constitutes the first cell-labelling method that enables high-throughput selection of living hybridoma cells from a fusion batch. It is not cytotoxic and can additionally be used for the selection of hapten-selective plasma cells.
Galaxies are among the most complex systems that can currently be modelled with a computer. A realistic simulation must take into account cosmology and gravitation as well as effects of plasma, nuclear, and particle physics that occur on very different time, length, and energy scales. The Milky Way is the ideal test bench for such simulations, because we can observe millions of its individual stars whose kinematics and chemical composition are records of the evolution of our Galaxy. Thanks to the advent of multi-object spectroscopic surveys, we can systematically study stellar populations in a much larger volume of the Milky Way. While the wealth of new data will certainly revolutionise our picture of the formation and evolution of our Galaxy and galaxies in general, the big-data era of Galactic astronomy also confronts us with new observational, theoretical, and computational challenges.
This thesis aims at finding new observational constraints to test Milky-Way models, primarily based on infra-red spectroscopy from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and asteroseismic data from the CoRoT mission. We compare our findings with chemical-evolution models and more sophisticated chemodynamical simulations. In particular we use the new powerful technique of combining asteroseismic and spectroscopic observations that allows us to test the time dimension of such models for the first time. With CoRoT and APOGEE (CoRoGEE) we can infer much more precise ages for distant field red-giant stars, opening up a new window for Galactic archaeology.
Another important aspect of this work is the forward-simulation approach that we pursued when interpreting these complex datasets and comparing them to chemodynamical models.
The first part of the thesis contains the first chemodynamical study conducted with the APOGEE survey. Our sample comprises more than 20,000 red-giant stars located within 6 kpc from the Sun, and thus greatly enlarges the Galactic volume covered with high-resolution spectroscopic observations. Because APOGEE is much less affected by interstellar dust extinction, the sample covers the disc regions very close to the Galactic plane that are typically avoided by optical surveys. This allows us to investigate the chemo-kinematic properties of the Milky Way's thin disc outside the solar vicinity. We measure, for the first time with high-resolution data, the radial metallicity gradient of the disc as a function of distance from the Galactic plane, demonstrating that the gradient flattens and even changes its sign for mid-plane distances greater than 1 kpc.
Furthermore, we detect a gap between the high- and low-[$\alpha$/Fe] sequences in the chemical-abundance diagram (associated with the thin and thick disc) that, unlike in previous surveys, can hardly be explained by selection effects. Using 6D kinematic information, we also present chemical-abundance diagrams cleaned of stars on kinematically hot orbits. The data allow us to confirm beyond doubt that the scale length of the (chemically defined) thick disc is significantly shorter than that of the thin disc.
In the second part, we present the results of the first combination of asteroseismic and spectroscopic data in the context of Galactic archaeology. We analyse APOGEE follow-up observations of 606 solar-like oscillating red giants in two CoRoT fields close to the Galactic plane. These stars cover a large radial range of the Galactic disc (4.5 kpc $\lesssim R_{\rm Gal}\lesssim15$ kpc) and a large age baseline (0.5 Gyr $\lesssim \tau\lesssim$ 13 Gyr), allowing us to study the age- and radius-dependence of the [$\alpha$/Fe] vs. [Fe/H] distributions. We find that the age distribution of the high-[$\alpha$/Fe] sequence appears to be broader than expected from a monolithically formed old thick disc that stopped forming stars 10 Gyr ago. In particular, we discover a significant population of apparently young, [$\alpha$/Fe]-rich stars in the CoRoGEE data whose existence cannot be explained by standard chemical-evolution models. These peculiar stars are much more abundant in the inner CoRoT field LRc01 than in the outer-disc field LRa01, suggesting that at least part of this population has a chemical-evolution rather than a stellar-evolution origin, possibly due to a peculiar chemical-enrichment history of the inner disc. We also find that strong radial migration is needed to explain the abundance of super-metal-rich stars in the outer disc.
Finally, we use the CoRoGEE sample to study the time evolution of the radial metallicity gradient in the thin disc, an observable that has been the subject of observational and theoretical debate for more than 20 years. By dividing the CoRoGEE dataset into six age bins, performing a careful statistical analysis of the radial [Fe/H], [O/H], and [Mg/Fe] distributions, and accounting for the biases introduced by the observation strategy, we obtain reliable gradient measurements. The slope of the radial [Fe/H] gradient of the young red-giant population ($-0.058\pm0.008$ [stat.] $\pm0.003$ [syst.] dex/kpc) is consistent with recent Cepheid data. For the age range of $1-4$ Gyr, the gradient steepens slightly ($-0.066\pm0.007\pm0.002$ dex/kpc), before flattening again to reach a value of $\sim-0.03$ dex/kpc for stars with ages between 6 and 10 Gyr. This age dependence of the [Fe/H] gradient can be explained by a nearly constant negative [Fe/H] gradient of $\sim-0.07$ dex/kpc in the interstellar medium over the past 10 Gyr, together with stellar heating and migration. Radial migration also offers a new explanation for the puzzling observation that intermediate-age open clusters in the solar vicinity (unlike field stars) tend to have higher metallicities than their younger counterparts. We suggest that non-migrating clusters are more likely to be kinematically disrupted, which creates a bias towards high-metallicity migrators from the inner disc and may even steepen the intermediate-age cluster abundance gradient.
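The gradient measurement described above can be illustrated, purely schematically, by recovering a known slope from synthetic mock data. Apart from the -0.058 dex/kpc slope quoted in the text, all numbers below are invented for the sketch; this is not the CoRoGEE analysis itself, which additionally corrects for selection biases.

```python
import numpy as np

# Synthetic mock sample with a known radial [Fe/H] gradient.
rng = np.random.default_rng(1)
n = 500
r_gal = rng.uniform(4.5, 15.0, n)          # Galactocentric radius [kpc]
true_slope, true_zero = -0.058, 0.4        # dex/kpc, dex (slope from the text)
feh = true_zero + true_slope * r_gal + 0.1 * rng.standard_normal(n)

# Ordinary least-squares fit of [Fe/H] = zero_point + slope * R_Gal.
A = np.column_stack([np.ones(n), r_gal])
coef, *_ = np.linalg.lstsq(A, feh, rcond=None)
zero_point, slope = coef
```

Repeating such a fit in separate age bins, as done for the six CoRoGEE bins, yields the age dependence of the gradient.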
The work carried out during the PhD studies focused on measuring distribution functions of rotating galaxies using integral-field spectroscopy observations.
Throughout the main body of research presented here, we use stellar velocity fields from the CALIFA (Calar Alto Legacy Integral Field Area) survey to obtain robust measurements of circular velocities for rotating galaxies of all morphological types. A crucial part of the work rests on the well-defined CALIFA sample selection criteria, which make it possible to reconstruct sample-independent distributions of galaxy properties.
In Chapter 2, we measure the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating CALIFA galaxies using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission-line kinematics allows us to include rapidly rotating early-type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth-curve r-band photometry. After rejecting 51 velocity fields that could not be modelled due to a low number of bins, foreground contamination, or significant interaction, we perform Markov chain Monte Carlo (MCMC) modelling of the velocity fields, obtaining the rotation curve and kinematic parameters together with realistic uncertainties. We apply an extinction correction and calculate the circular velocity v_circ, accounting for the pressure support of each galaxy. The resulting galaxy distribution in the M_r - v_circ plane is then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that the incompleteness of the sample can be corrected, and the 199 galaxies can be weighted by volume and large-scale structure factors, enabling us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular-velocity space densities, representative over the range -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
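The MCMC modelling step mentioned above can be sketched with a minimal random-walk Metropolis sampler fitting a toy rotation-curve model to mock data. The arctan parametrization, the parameter names `v_c` and `r_t`, and all numbers are assumptions for illustration; this is not the CALIFA velocity-field pipeline.

```python
import numpy as np

# Mock rotation-curve data: v(r) = v_c * (2/pi) * arctan(r / r_t).
rng = np.random.default_rng(2)
r = np.linspace(0.5, 15.0, 40)             # radii [kpc], invented
v_true, rt_true, sigma = 220.0, 2.0, 5.0   # "true" parameters and noise
v_obs = v_true * (2 / np.pi) * np.arctan(r / rt_true) \
        + sigma * rng.standard_normal(r.size)

def log_like(theta):
    """Gaussian log-likelihood of the rotation-curve model (flat priors)."""
    v_c, r_t = theta
    if v_c <= 0 or r_t <= 0:
        return -np.inf
    model = v_c * (2 / np.pi) * np.arctan(r / r_t)
    return -0.5 * np.sum(((v_obs - model) / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([200.0, 1.0])             # deliberately off starting point
lp = log_like(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [2.0, 0.1])
    lp_prop = log_like(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = proposal, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])             # discard burn-in
v_c_est, r_t_est = chain.mean(axis=0)      # posterior-mean estimates
```

The spread of the post-burn-in chain plays the role of the "realistic uncertainties" mentioned in the text.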
In Chapter 3, we measure one of the marginal distributions of the M_r - v_circ distribution: the circular velocity function of rotating galaxies. The velocity function is a fundamental observable statistic of the galaxy population, of similar importance to the luminosity function but much more difficult to measure. We present the first directly measured circular velocity function that is representative between 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low-mass galaxy population (60 < v_circ < 170 km s^-1), we use the HIPASS velocity function; for the massive galaxy population (170 < v_circ < 320 km s^-1), we use stellar circular velocities from CALIFA. The CALIFA velocity function includes homogeneous velocity measurements of both late- and early-type rotation-supported galaxies and has the crucial advantage of not missing the gas-poor massive ellipticals that HI surveys are blind to. We show that both velocity functions can be combined seamlessly, as their ranges of validity overlap. The resulting observed velocity function is compared to velocity functions derived from cosmological simulations of the z = 0 galaxy population. We find that dark-matter-only simulations show a strong mismatch with the observed velocity function; hydrodynamic Illustris simulations fare better, but still do not fully reproduce the observations.
In Chapter 4, we present further work done during the PhD studies, namely a method that improves the precision of specific angular momentum measurements by combining simultaneous Markov chain Monte Carlo modelling of ionised-gas 2D velocity fields and HI linewidths. To test the method, we use a sample of 25 galaxies from the Sydney-AAO Multi-object Integral field (SAMI) survey that have matching ALFALFA HI linewidths. The method constrains the rotation curve both in the inner regions of a galaxy and in its outskirts, leading to increased precision of specific angular momentum measurements, and could be used to further constrain the observed relation between galaxy mass, specific angular momentum, and morphology (Obreschkow & Glazebrook 2014).
Mathematical and computational methods are presented in the appendices.
Numerous medieval reliquaries display their precious contents behind rock crystal, glass, or tracery openings. The prevailing view is that the desire to make the relics visible drove this formal development. This thesis discusses that view using the reliquaries of the former convent of canonesses at Essen, supplemented by findings at the collegiate church and by sources such as the fourteenth-century Essen Liber ordinarius. Contrary to the way they are presented in museums today, reliquaries in the Middle Ages were mostly hidden or only dimly discernible, and many of their openings hardly permit any actual visual display. Rather, notions of a permeability of the salvific power of relics shaped the forms of medieval reliquaries, and these notions can be traced in sacred treasury art as well as in church architecture. The volume also contains a catalogue of the Essen reliquaries from the tenth century to around 1500.
In this era of high-speed informatization and globalization, online education is no longer a rarefied concept confined to the ivory tower, but a rapidly developing industry closely tied to people's daily lives. Numerous lectures are recorded as multimedia data, uploaded to the Internet, and made publicly accessible from anywhere in the world. Such recordings are generally referred to as e-lectures. In recent years, a popular new form of e-lecture, the Massive Open Online Course (MOOC), has boosted the growth of the online education industry and turned "learning online" into something of a fashion.
For an e-learning provider, besides continually improving the quality of e-lecture content, providing a better learning environment for online learners is also a highly important task. This task can be approached in various ways, one of which is to enhance and upgrade the learning materials provided: e-lectures could be more than videos. Moreover, this enhancement or upgrading should be done automatically, without placing extra burdens on the lecturers or teaching teams; that is the aim of this thesis.
The first part of this thesis is an integrated framework for producing multilingual subtitles, which can help online learners overcome the language barrier. The framework consists of Automatic Speech Recognition (ASR), Sentence Boundary Detection (SBD), and Machine Translation (MT), among which the proposed SBD solution is the major technical contribution, building on Deep Neural Networks (DNNs) and Word Vectors (WVs) and achieving state-of-the-art performance. In addition, a quantitative evaluation with dozens of volunteers is introduced to measure how much these auto-generated subtitles actually help in the context of e-lectures.
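To make the SBD idea concrete, here is a deliberately simplified toy sketch: instead of the DNN over word vectors used in the thesis, a hand-rolled logistic regression classifies word gaps as sentence boundaries from two invented features, pause length and whether the following token is capitalized. All data and features here are synthetic assumptions.

```python
import numpy as np

# Synthetic "word gap" data: long pauses mark sentence boundaries, short
# pauses do not; a noisy capitalization flag accompanies each gap.
rng = np.random.default_rng(3)
n = 400
pause = np.where(rng.uniform(size=n) < 0.5,
                 rng.uniform(0.4, 1.2, n),      # boundary-like long pauses
                 rng.uniform(0.0, 0.2, n))      # within-sentence short pauses
label = (pause > 0.3).astype(float)             # ground truth of the toy data
cap = np.clip(label + rng.binomial(1, 0.1, n)
              - rng.binomial(1, 0.1, n), 0, 1)  # noisy "next token capitalized"
X = np.column_stack([np.ones(n), pause, cap])   # bias + two features

# Batch gradient descent on the logistic loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted boundary probability
    w -= 0.5 * X.T @ (p - label) / n            # gradient step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == label).mean()
```

A DNN over word vectors, as in the thesis, replaces these two hand-crafted features with learned lexical context around each gap.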
Secondly, a technical solution called TOG (Tree-Structure Outline Generation) is proposed to extract textual content from the slides shown in the video and reorganize it into a hierarchical lecture outline, which can serve multiple functions, such as preview, navigation, and retrieval. TOG runs adaptively and can be roughly divided into an intra-slide and an inter-slide phase; table detection and lecture video segmentation can be implemented as sub- or post-applications in these two phases, respectively. Evaluation on diverse e-lectures shows that the resulting outlines, tables, and segments are reliably accurate.
Based on the subtitles and outlines created previously, lecture videos can be further split into sentence units and slide-based segment units. A lecture-highlighting process is then applied to these units in order to capture and mark the most important parts of the corresponding lecture, just as people do with a pen when reading printed books. Sentence-level highlighting relies on acoustic analysis of the audio track, while segment-level highlighting explores cues from the statistical information of the related transcripts and slide content. Both objective and subjective evaluations show that the proposed lecture-highlighting solution achieves decent precision and is welcomed by users.
All of the enhanced e-lecture materials described above have either already been deployed in actual use or been made available for deployment through convenient interfaces.
This is a cumulative dissertation comprising three original studies (one published, one in revision, one submitted, as of December 2017) that investigate how reptile species in arid Australia respond to various climatic parameters at different spatial scales and analyse the two main potential underlying mechanisms: thermoregulatory behaviour and species interactions. The dissertation combines extensive individual-based field data across trophic levels, selected field experiments, statistical analyses, and predictive modelling techniques. The mechanisms and processes detected here can now be used to predict potential future changes in the community of arid-zone lizards. This knowledge will help improve our fundamental understanding of the consequences of global change and thereby prevent biodiversity loss in a vulnerable ecosystem.
In the arable soil landscape of hummocky ground moraines, an erosion-driven spatial differentiation of soils can be observed. Man-made erosion leads to soil-profile modifications along slopes, with changed solum thickness and modified properties of soil horizons resulting from water erosion in combination with tillage operations. Soil erosion thereby creates spatial patterns of soil properties (e.g., texture and organic-matter content) and differences in crop development. However, little is known about how water fluxes are affected by soil-crop interactions that depend on the contrasting properties of differently developed soil horizons, and how water fluxes influence carbon transport in an eroded landscape. To identify such feedbacks between erosion-induced soil-profile modifications and the 1D water and solute balance, high-precision weighing lysimeters equipped with a wide range of sensor technology were filled with undisturbed soil monoliths that differed in the degree of past soil erosion. Furthermore, lysimeter effluent concentrations were analyzed for dissolved carbon fractions at bi-weekly intervals.
Over a three-year period, the water balance components measured by the high-precision lysimeters varied by up to 83 % (deep drainage) between the most and the least eroded monolith, primarily because of differing amounts of precipitation and evapotranspiration. Interactions between crop development and contrasting rainfall interception by above-ground biomass could explain these differences in water balance components. Concentrations of dissolved carbon in soil-water samples were relatively constant in time, suggesting that carbon leaching was mainly controlled by water fluxes during this observation period. For the lysimeter-based water balance analysis, a filtering scheme was developed that takes temporal autocorrelation into account. The minute-based autocorrelation analysis of mass changes from the lysimeter time series revealed characteristic autocorrelation lengths ranging from 23 to 76 minutes. Temporal autocorrelation thereby provided an optimal approximation of precipitation quantities; the usable temporal resolution of the lysimeter time series, however, is limited by these autocorrelation lengths.
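The decorrelation-length idea behind such an autocorrelation analysis can be sketched on a synthetic minute-resolution series: estimate the lag at which the empirical autocorrelation first falls below 1/e. The AR(1) process and its parameters below are assumptions for illustration; the 23-76 minute range above refers to the measured lysimeter data.

```python
import numpy as np

# Synthetic minute-resolution "mass" series: an AR(1) process whose
# theoretical decorrelation time is -1/ln(0.97), roughly 33 minutes.
rng = np.random.default_rng(4)
n, phi = 20000, 0.97
mass = np.zeros(n)
for t in range(1, n):
    mass[t] = phi * mass[t - 1] + rng.standard_normal()

def autocorr(x, lag):
    """Empirical autocorrelation of a 1-D series at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Characteristic autocorrelation length: first lag with r(lag) < 1/e.
tau = next(lag for lag in range(1, 500)
           if autocorr(mass, lag) < np.exp(-1))
```

Averaging the raw series over windows no shorter than `tau` is one way such a length can inform a filtering scheme for mass changes.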
Erosion-induced as well as gradual changes in soil properties were reflected in the dynamics of the soil-water retention properties of the lysimeter soils. Short- and long-term hysteretic water retention data suggested that seasonal wettability problems of the soils increasingly limited the rewetting of previously dried pore regions. Differences in water retention were attributed to soil tillage operations and to the erosion history at different slope positions. The three-dimensional spatial pattern of soil types resulting from erosional soil-profile modifications was also reflected in differences in crop root development at different landscape positions. Contrasting root densities revealed positive relations between root and above-ground plant characteristics, and differences in spatially distributed root growth between differently eroded soil types indicated that root development was affected by erosion-induced soil evolution processes.
Overall, this thesis corroborates the hypothesis that erosion-induced soil-profile modifications affect the soil water balance, carbon leaching, and soil hydraulic properties, and that the crop root system is likewise influenced by erosion-induced spatial patterns of soil properties in the arable hummocky post-glacial soil landscape. The results will help to improve model predictions of water and solute movement in arable soils and to understand the interactions between soil erosion and carbon pathways with regard to sink-or-source terms in landscapes.