The present thesis, entitled "A Question of Time. How the Effects of Individual Characteristics on Women's Income Are Mediated by Their Family Obligations" („Eine Frage der Zeit. Wie Einflüsse individueller Merkmale auf Einkommen bei Frauen über ihre familiären Verpflichtungen vermittelt werden"), investigates heterogeneity in women's income outcomes. Individual investment in family work serves as the central explanatory factor, and the thesis asks why some women take on many domestic obligations while others take on fewer. To this end, the women's individual human capital, their value orientations, and their individual occupational motivations in adolescence and adulthood are drawn upon. The analyzed data (from the LifE study) provide a longitudinal perspective from age 16 to age 45 of the surveyed women. In summary, the results show that the effect of family obligations on women's income outcomes in early and middle adulthood is mediated, as a time effect, through the time invested in paid work. The relevance of private routine work for the career success of women, and of mothers in particular, is thus a question of time. Furthermore, regarding individual influences on women's income, it can be shown that the greater time investment in paid work by women with a high level of education can be explained, as an indirect-only mediation, solely through the redistribution of domestic work. Women are therefore winners of the educational expansion. Yet the educational expansion is also the story of the emergence of a work-family conflict for these very women, because the persistent inertia, still virulent today, regarding the family obligations ascribed to women collides with their increased occupational expectations and opportunities.
In its analytical results, the thesis makes an important contribution to explaining, from the private sphere, women's heterogeneous investments in paid work and their income outcomes.
In this work, the human aldehyde oxidase AOX1 (hAOX1) was characterized, and detailed aspects of its expression, enzyme kinetics and production of reactive oxygen species (ROS) were investigated. hAOX1 is a cytosolic enzyme belonging to the molybdenum hydroxylase family. Its catalytically active form is a homodimer with a molecular weight of 300 kDa. Each monomer (150 kDa) consists of three domains: an N-terminal domain (20 kDa) containing two [2Fe-2S] clusters, a 40 kDa intermediate domain containing a flavin adenine dinucleotide (FAD), and a C-terminal domain (85 kDa) containing the substrate binding pocket and the molybdenum cofactor (Moco). hAOX1 has an emerging role in the metabolism and pharmacokinetics of many drugs, especially aldehydes and N-heterocyclic compounds.
In this study, hAOX1 was heterologously expressed in E. coli TP1000 cells using a new codon-optimized gene sequence, which improved the yield of expressed protein around 10-fold compared to previous expression systems for this enzyme. To increase the catalytic activity of hAOX1, an in vitro chemical sulfuration was performed to favor the insertion of the equatorial sulfido ligand at the Moco, with a consequent increase in enzymatic activity of around 10-fold. Steady-state kinetics and inhibition studies were performed using several substrates, electron acceptors and inhibitors. The recombinant hAOX1 showed higher catalytic activity when molecular oxygen was used as electron acceptor. The highest turnover values were obtained with phenanthridine as substrate. Inhibition studies using thioridazine (phenothiazine family), in combination with structural studies performed in the group of Prof. M.J. Romão, Nova Universidade de Lisboa, revealed a new inhibition site located in proximity to the dimerization site of hAOX1. Thioridazine acted as a noncompetitive inhibitor. Further inhibition studies with loxapine, a thioridazine-related molecule, showed the same type of inhibition. Additional inhibition studies using DCPIP and raloxifene were carried out.
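The noncompetitive inhibition mode reported above can be illustrated with the standard steady-state rate law; a minimal sketch in Python, where all kinetic constants are illustrative placeholders, not measured hAOX1 values:

```python
# Noncompetitive inhibition: the inhibitor binds both E and ES,
# lowering the apparent Vmax while leaving Km unchanged.
def rate(S, Vmax, Km, I=0.0, Ki=float("inf")):
    """Michaelis-Menten rate with a noncompetitive inhibitor."""
    return (Vmax * S) / ((1 + I / Ki) * (Km + S))

# Illustrative values only (not measured hAOX1 constants):
Vmax, Km, Ki = 10.0, 50.0, 5.0

v0 = rate(200.0, Vmax, Km)                 # uninhibited
vi = rate(200.0, Vmax, Km, I=5.0, Ki=Ki)   # with inhibitor at I = Ki

# At I = Ki the rate is halved at every substrate concentration,
# the hallmark of pure noncompetitive inhibition (Km is unaffected).
print(v0, vi)  # 8.0 4.0
```

The diagnostic used in such studies is that Vmax decreases with inhibitor concentration while Km stays constant, which distinguishes this mode from competitive inhibition.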
Extensive studies on the FAD active site of hAOX1 were performed. Twenty new hAOX1 variants were produced and characterized. The hAOX1 variants generated in this work were divided into three groups: I) hAOX1 single nucleotide polymorphism (SNP) variants; II) XOR-FAD loop hAOX1 variants; III) additional single-point hAOX1 variants. The hAOX1 SNP variants G46E, G50D, G346R, R433P, A439E and K1231N showed clear alterations in their catalytic activity, indicating a crucial role of these residues in the FAD active site and for the overall reactivity of hAOX1.
Furthermore, residues of the flexible FAD loop of bovine XOR (Q423ASRREDDIAK433) were introduced into hAOX1. The FAD loop hAOX1 variants were produced and characterized with respect to their stability and catalytic activity. In particular, the variants hAOX1 N436D/A437D/L438I, N436D/A437D/L438I/I440K and Q434R/N436D/A437D/L438I/I440K showed decreased catalytic activity and stability. hAOX1 wild type and variants were tested for reactivity toward NADH, but no reaction was observed.
Additionally, hAOX1 wild type and variants were tested for the generation of reactive oxygen species (ROS). Interestingly, one of the SNP variants, hAOX1 L438V, showed a high ratio of superoxide production. This result indicated a critical role of residue Leu438 in the mechanism of oxygen radical formation by hAOX1. Subsequently, further hAOX1 variants mutated at residue Leu438 were produced. The variants hAOX1 L438A, L438F and L438K showed superoxide overproduction amounting to around 85%, 65% and 35%, respectively, of the total reducing equivalents obtained from substrate oxidation.
The results of this work present, for the first time, a characterization of the FAD active site of hAOX1, revealing the importance of specific residues involved in the generation of ROS and affecting the overall enzymatic activity of hAOX1. The hAOX1 SNP variants presented here indicate that these allelic variations in humans might cause alterations in ROS balance and in the clearance of drugs.
Nowadays, the need to protect the environment has become more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis: concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also a focus of these changes. In this thesis, more sustainable approaches to making two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids, or poly(N-alkyl glycines), are isomers of polypeptides and are biocompatible as well as degradable under biologically relevant conditions. In addition, they can have interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and which are highly sensitive to humidity. To avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in less than stoichiometric amounts relative to the N-phenoxycarbonyl-sarcosine drastically accelerates NCA formation without affecting the efficiency of the polymerization. In fact, well-defined polysarcosines that comply with the monomer-to-initiator ratio can be produced by this method. The approach was also applied to other N-substituted glycines.
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent the problem, the dimethyl-acetal-protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone/ethylene glycol is used, yielding a soluble copolyester.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing forces companies either to forgo revenue or to impair customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other and are for that reason hard to match by a computer. Moreover, the number of comparisons required to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process: to achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, they also facilitate subsequent steps, for example by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
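The effect of partitioning (often called blocking) on the number of pair comparisons can be sketched as follows; the records and the partitioning key below are toy examples, not the thesis' actual datasets or key-selection method:

```python
from itertools import combinations
from collections import defaultdict

records = [
    {"id": 1, "zip": "14482", "name": "Ann Smith"},
    {"id": 2, "zip": "14482", "name": "Anne Smyth"},
    {"id": 3, "zip": "10115", "name": "Bob Jones"},
    {"id": 4, "zip": "10115", "name": "Robert Jones"},
    {"id": 5, "zip": "04109", "name": "Carla Weber"},
]

# Naive duplicate detection compares every pair: n*(n-1)/2 comparisons.
naive_pairs = list(combinations(records, 2))  # 10 pairs for n = 5

# Partitioning on a key (here: zip code) restricts comparisons
# to records that share the key value.
partitions = defaultdict(list)
for r in records:
    partitions[r["zip"]].append(r)

blocked_pairs = [
    pair for group in partitions.values() for pair in combinations(group, 2)
]  # only the within-partition pairs (1,2) and (3,4) remain

print(len(naive_pairs), len(blocked_pairs))  # 10 2
```

A good partitioning key keeps true duplicates in the same partition (effectiveness) while making partitions small (efficiency); a bad key loses duplicates across partition borders, which is why the automatic key selection described above matters.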
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near-gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for reaching a consensus with less effort.
The Arctic is warming faster than the rest of the Earth. Among other things, this manifests itself in an amplified warming of the Arctic boundary layer. This thesis deals with interactions between synoptic cyclones and the Arctic atmosphere on local to supra-regional scales. The starting point is measurement data and model simulations for the period of the N-ICE2015 expedition, which took place from early January to the end of June 2015 in the Arctic North Atlantic sector.
Based on radiosonde measurements, the effects of synoptic cyclones are most clearly recognizable in winter, since by advecting warm and moist air masses into the Arctic they switch the state of the atmosphere from radiatively clear to radiatively opaque. Although this sharp contrast exists only in winter, the analysis shows that integrated water vapour is a suitable indicator of the advection of air masses from low latitudes into the Arctic in spring as well. Besides air-mass advection, the influence of the cyclones on static stability is characterized. Comparing the N-ICE2015 observations with the SHEBA campaign (1997/1998), which took place over thicker ice, similarities in the static stability of the atmosphere are found despite the different sea-ice regimes. The observed differences in stability can be attributed to differences in synoptic activity. This suggests that, on seasonal time scales, the thinner ice cover has only a minor influence on the thermodynamic structure of the Arctic troposphere as long as a thick snow layer covers it. A further comparison with the radiosondes launched at the AWIPEV station in Ny-Ålesund, Svalbard, in parallel with the N-ICE2015 campaign, makes clear that above the orography synoptic cyclones determine the weather on seasonal time scales.
Furthermore, for February 2015 the effects of vertically varied nudging on cyclone development are investigated using the hydrostatic regional climate model HIRHAM5. The differences between the eight model simulations increase as the number of nudged levels decreases. The largest differences result primarily from the temporal offset in the development of synoptic cyclones. To correct the time offset of cyclone initiation, it is already sufficient to apply nudging in the lowest 250 m of the troposphere. In addition, the nudged HIRHAM5 simulations show, relative to the in situ measurements, the same positive temperature bias that ERA-Interim has. The free-running HIRHAM, by contrast, reproduces the positive end of the N-ICE2015 temperature distribution well, but has a strong negative bias that most likely results from an underestimation of the moisture content. Using one cyclone as an example, it is shown that nudging influences the position of upper-level lows, which in turn influence cyclone development at the surface. Furthermore, a variance measure suitable for small ensemble sizes is used to obtain a statistical assessment of the effect of nudging on the vertical structure. The similarity of the model simulations is generally higher in the lower troposphere than above, with a local minimum at 500 hPa.
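Nudging, as used in such simulations, is Newtonian relaxation: the model state is pulled toward reference data with a relaxation time τ. A generic one-variable sketch in Python (not the actual HIRHAM5 implementation; all values are illustrative):

```python
def nudged_step(x, x_ref, dt, tau, tendency=0.0):
    """One explicit time step with Newtonian relaxation (nudging):
    dx/dt = F(x) + (x_ref - x) / tau, here with F(x) = tendency."""
    return x + dt * (tendency + (x_ref - x) / tau)

# Relax a model temperature of 250 K toward a 260 K reference state,
# integrating one hour in 60 s steps with tau = 1 h.
x, x_ref, dt, tau = 250.0, 260.0, 60.0, 3600.0
for _ in range(60):
    x = nudged_step(x, x_ref, dt, tau)

# After one relaxation time the initial departure has decayed by
# roughly a factor of e (10 K -> about 3.6 K).
print(round(x, 2))  # 256.35
```

Applying this relaxation term only below a chosen height (e.g. the lowest 250 m) is what "varying the number of nudged levels" amounts to in the experiments described above.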
In the last part of the analysis, the interaction of the upper and lower stratosphere is investigated for the previously considered cyclones using data from the ERA-Interim reanalysis. From early February 2015 onwards, the position and orientation of the polar vortex produced an unusually large meridional component of the tropopause jet, which favoured cyclone tracks into the central Arctic. Using one cyclone as an example, the agreement of the synoptic development with the theoretical assumptions about the downward influence of the stratosphere on the troposphere is highlighted. Here, the non-linear interaction between the orography of Greenland, an intrusion of stratospheric air into the troposphere, and a Rossby wave propagating towards the Arctic plays a central role. As an indicator of this interaction, horizontal signatures of alternately ascending and descending air within the troposphere are identified.
The valorization of carbohydrates is one of the most promising fields in green chemistry, as it enables the production of bulk chemicals and fuels from renewable and abundant resources instead of further exploiting fossil feedstocks. The focus of this thesis is the conversion of fructose using dehydration and hydrodeoxygenation reactions. The main goal is to find a simple continuous process, covering the solubility of the sugar in a green solvent and its conversion over a solid acid as well as over a metal@tungsten carbide catalyst.
At the beginning of this thesis, solid acid catalysts are synthesized from carbohydrate materials such as glucose and starch at high temperatures (up to 600 °C). Additionally, a third carbon is synthesized using an activation method based on Ca(OH)2. After carbonization and subsequent sulfonation with fuming sulfuric acid, the three resulting catalysts are characterized alongside sulfonated carbon black and Amberlyst 15 as references. To test all solid acid catalysts in reaction, a 250 mm x 4.6 mm stainless steel column is used as a fixed-bed continuous reactor. The temperature (110 °C to 250 °C) and residence time (2 to 30 minutes) are varied, and a direct relationship between contact time and selectivity is determined. The reaction mechanism as well as the product distribution show a dehydration step of fructose towards 5-hydroxymethylfurfural (HMF). These furan-ring molecules are considered “sleeping giants” because they can be used as fuel but also upgraded to chemicals such as terephthalic acid or p-xylene. Consecutive reactions produce levulinic acid as well as condensation products with ethanol and formic acid. The activated carbon additionally shows a 2 % yield of 2,5-dimethylfuran (DMF), pointing towards the extraordinary properties of this catalyst. Although a metal catalyst is normally necessary for hydrogenation reactions, a transfer hydrogenation (with formic acid) is observed without one; the active catalyst was therefore the carbon itself, which activated the hydrogen on its surface. This phenomenon has only very rarely been observed so far. Nowadays expensive noble metals are the material of choice for hydrogenation reactions, and cheaper alternatives are necessary.
Since Levy and Boudart postulated that tungsten carbide (WC) has an electronic structure similar to that of platinum, research has focused on the replacement of Pt. The production of nano-sized tungsten carbide particles (7.5 ± 2.5 nm, 70 m2 g-1) is enabled by the so-called “urea glass route”, and their catalytic performance is compared to commercial material. It is shown that the activity depends strongly on the size of the particles as well as on the surface area. Nano-sized tungsten carbide shows activity for hydrogenation reactions under mild conditions (maximum 150 °C, 30 bar). This material therefore opens up new possibilities for replacing the rare and expensive platinum with tungsten-carbide-based catalysts.
Additionally, different metal nanoparticles of palladium, copper and nickel are deposited on top of WC to further promote its reactivity. The nickel nanoparticles are strongly bound to WC and show the best activity as well as selectivity for upgrading HMF by hydrodeoxygenation. Ni@WC does not leach and shows very good hydrodeoxygenation properties, with DMF yields of up to 90 percent. Copper@WC does not show good activity, and palladium@WC enables undesired consecutive reactions, hydrogenating the furan ring system.
To enable the upgrade of fructose to DMF directly in a continuous system, the commercial H-Cube Pro™ hydrogenation system is customized with a second reaction column. A 250 mm x 4.6 mm stainless steel reactor column is connected upstream of the hydrogen insertion, enabling the dehydration of fructose to HMF derivatives before these products are pumped into the second column for hydrogenation. The overall residence time in the two-column reactor system is 14 minutes. The overall result is almost full conversion, with a yield of 38.5 % DMF and 47 % EL. The main disadvantage is the formation of higher-mass products, so-called humins, which deposit on top of the catalysts and block their active sites.
In general, it can be stated that a two-column system entails a higher investment as well as higher maintenance costs compared to a one-column catalytic approach. The last part of the thesis therefore aims to develop a catalyst that can both dehydrate and hydrodeoxygenate the reactants. The activated carbon already shows activity for hydrodeoxygenation without any metal present and therefore offers itself as an alternative that overcomes the temperature instability of Amberlyst 15 (max. 120 °C) for a combined DMF production directly from fructose. The yield of the upgrade to DMF is increased from 2 % to 12 % in one mixed continuous column.
To scale up the entire one-column approach, an 800 mm x 28.5 mm inner-diameter column was planned and manufactured. The scaled-up flow reactor system can be run at a maximum flow rate of 50 mL min-1, withstands a pressure of up to 100 bar and can be heated to around 500 °C. The tubing, connections and devices were planned to be safe and easy to use. The scaled-up approach offers a reaction column 120 times larger (510 mL) than the first extension of the commercial system. This further extension offers the possibility of flow rates ranging between 1 and 1000 mL min-1, making the approach suitable for pilot plant applications.
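The stated 510 mL volume of the scaled-up column follows directly from its dimensions; a quick check in Python (the 50 mL min-1 flow rate is simply the maximum stated above, used here as an example):

```python
import math

# Scaled-up column: 800 mm long, 28.5 mm inner diameter.
length_cm = 80.0
radius_cm = 2.85 / 2

# Cylinder volume V = pi * r^2 * L, in mL (= cm^3).
volume_ml = math.pi * radius_cm**2 * length_cm  # ~510 mL, matching the text

# Residence time tau = V / Q at the maximum flow rate of 50 mL/min:
flow_ml_min = 50.0
residence_min = volume_ml / flow_ml_min

print(round(volume_ml), round(residence_min, 1))  # 510 10.2
```

At the lower end of the stated 1-1000 mL min-1 range the residence time scales inversely with the flow rate, which is the main lever for tuning contact time in such a fixed-bed reactor.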
Development of a reliable and environmentally friendly synthesis for fluorescence carbon nanodots
(2017)
Carbon nanodots (CNDs) have generated considerable attention due to their promising properties, e.g. high water solubility, chemical inertness, resistance to photobleaching, high biocompatibility and ease of functionalization. These properties render them ideal for a wide range of functions, e.g. electrochemical applications, waste water treatment, (photo)catalysis, bio-imaging and bio-technology, as well as chemical sensing and optoelectronic devices like LEDs. In particular, the ability to prepare CNDs from a wide range of accessible organic materials makes them a potential alternative to conventional organic dyes and semiconductor quantum dots (QDs) in various applications. However, current synthesis methods are typically expensive and depend on complex and time-consuming processes or severe synthesis conditions and toxic chemicals. One way to reduce overall preparation costs is the use of biological waste as starting material. Hence, natural carbon sources such as pomelo peel, egg white and egg yolk, orange juice, and even eggshells, to name a few, have been used for the preparation of CNDs. While the use of waste is desirable, especially to avoid competition with essential food production, most starting materials lack the essential purity and structural homogeneity to obtain homogeneous carbon dots. Furthermore, most synthesis approaches reported to date require extensive purification steps and have resulted in carbon dots with heterogeneous photoluminescent properties and indefinite composition. For this reason, among others, the relationship between CND structure (e.g. size, edge shape, functional groups and overall composition) and photophysical properties is not yet fully understood. This is particularly true for carbon dots displaying selective luminescence (one of their most intriguing properties), i.e. the ability to tune the PL emission wavelength by varying the excitation wavelength.
In this work, a new reliable, economical, and environmentally friendly one-step synthesis is established to obtain CNDs with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and carboxylic acids as carbon sources and Tris-EDTA (TE) buffer as nitrogen source. The presented microwave-assisted hydrothermal precursor carbonization (MW-hPC) is characterized by its cost-efficiency, simplicity, short reaction times, low environmental footprint, and high yields of approx. 80% (w/w). Furthermore, only a single synthesis step is necessary to obtain homogeneous water-soluble CNDs, with no need for further purification.
Depending on the starting materials and reaction conditions, different types of CNDs have been prepared. The as-prepared CNDs exhibit reproducible, highly homogeneous and favourable PL properties with narrow emission bands (approx. 70 nm FWHM), are non-blinking, and are ready to use without the need for further purification, modification or surface passivation agents. Furthermore, the CNDs are comparatively small (approx. 2.0 nm to 2.4 nm) with narrow size distributions; are stable over a long period of time (at least one year), either in solution or as a dried solid; and maintain their PL properties when re-dispersed in solution. Depending on the CND type, the PL quantum yield (PLQY) can be adjusted from as low as 1% to as high as 90%, one of the highest PLQY values reported for CNDs so far.
An essential part of this work was the utilization of a microwave synthesis reactor, allowing various batch sizes and precise control over reaction temperature and time, pressure, and heating and cooling rates, while also being safe to operate at elevated reaction conditions (e.g. 230 °C and 30 bar). The high sample throughput achieved hereby allowed, for the first time, a thorough investigation of a wide range of synthesis parameters, providing valuable insight into CND formation. The influence of the carbon and nitrogen source, precursor concentration and combination, reaction time and temperature, batch size, and post-synthesis purification steps was carefully investigated with regard to the optical properties of the as-synthesized CNDs. In addition, the change in photophysical properties resulting from the conversion of the CND solution into a solid and back into solution was investigated. Remarkably, upon freeze-drying the initially brown CND solution turns into a non-fluorescent white/slightly yellow to brown solid which recovers its PL in aqueous solution. Selected CND samples were also subjected to EDX, FTIR, NMR, PL lifetime (TCSPC), particle size (TEM), TGA and XRD analysis. Besides structural characterization, the pH- and excitation-dependent PL characteristics (i.e. selective luminescence) were examined, giving insight into the origin of the photophysical properties and the excitation-dependent behaviour of CNDs. The obtained results support the notion that for CNDs the nature of the surface states determines the PL properties and that the excitation-dependent behaviour is caused by the “Giant Red-Edge Excitation Shift” (GREES).
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that, given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
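For orientation, the linearised vacuum Einstein equation takes a familiar wave-equation form on a Ricci-flat background in the de Donder (harmonic) gauge; the conventions below are the standard textbook ones and need not match the thesis exactly:

```latex
% Trace-reversed perturbation and de Donder gauge condition:
\bar h_{\mu\nu} := h_{\mu\nu} - \tfrac{1}{2}\,\bar g_{\mu\nu}\,\bar g^{\alpha\beta} h_{\alpha\beta},
\qquad
\bar\nabla^{\mu}\,\bar h_{\mu\nu} = 0,
% Linearised vacuum Einstein equation on a Ricci-flat background \bar g:
\qquad
\Box\,\bar h_{\mu\nu} + 2\,\bar R_{\mu\alpha\nu\beta}\,\bar h^{\alpha\beta} = 0 .
```

In this gauge the equation is a linear wave equation, which is what makes a well-posed Cauchy problem with Sobolev-regular initial data plausible in the first place; the residual gauge freedom is the origin of the equivalence classes discussed above.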
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first-order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply whenever the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
In the arable soil landscape of hummocky ground moraines, an erosion-affected spatial differentiation of soils can be observed. Man-made erosion leads to soil profile modifications along slopes, with changed solum thickness and modified properties of soil horizons due to water erosion in combination with tillage operations. Soil erosion thereby creates spatial patterns of soil properties (e.g., texture and organic matter content) and differences in crop development. However, little is known about how water fluxes are affected by soil-crop interactions that depend on the contrasting properties of differently developed soil horizons, and how water fluxes influence carbon transport in an eroded landscape. To identify such feedbacks between erosion-induced soil profile modifications and the 1D water and solute balance, high-precision weighing lysimeters equipped with a wide range of sensor techniques were filled with undisturbed soil monoliths that differed in the degree of past soil erosion. Furthermore, lysimeter effluent concentrations were analyzed for dissolved carbon fractions at bi-weekly intervals.
Over a 3-year period, the water balance components measured by the high-precision lysimeters varied by up to 83 % (deep drainage) between the most eroded and the less eroded monolith, primarily due to varying amounts of precipitation and evapotranspiration. Here, interactions between crop development and contrasting rainfall interception by aboveground biomass could explain differences in the water balance components. Concentrations of dissolved carbon in soil water samples were relatively constant in time, suggesting that carbon leaching was mainly controlled by water fluxes in this observation period. For the lysimeter-based water balance analysis, a filtering scheme was developed that considers temporal autocorrelation. The minute-based autocorrelation analysis of mass changes from the lysimeter time series revealed characteristic autocorrelation lengths ranging from 23 to 76 minutes. Temporal autocorrelation thereby provided an optimal approximation of precipitation quantities. However, the usable temporal resolution of the lysimeter time series is restricted by these autocorrelation lengths.
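The idea of a characteristic autocorrelation length can be sketched generically; this toy example estimates, for a synthetic minute-based series (not actual lysimeter data), the first lag at which the sample autocorrelation drops below 1/e:

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def autocorr_length(x, threshold=1 / 2.718281828459045):
    """First lag at which the autocorrelation falls below 1/e."""
    for lag in range(1, len(x)):
        if autocorr(x, lag) < threshold:
            return lag
    return len(x)

# Synthetic AR(1) series mimicking persistent minute-based mass readings:
random.seed(0)
x, series = 0.0, []
for _ in range(5000):
    x = 0.97 * x + random.gauss(0, 1)  # strong minute-to-minute persistence
    series.append(x)

# Prints the estimated decorrelation lag in minutes
# (theory for phi = 0.97: about 33 minutes).
print(autocorr_length(series))
```

Mass changes separated by less than this lag are not independent, which is why the filtering scheme mentioned above must respect the autocorrelation length when choosing its averaging window.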
Erosion-induced but also gradual changes in soil properties were reflected by the dynamics of soil water retention properties in the lysimeter soils. Short-term and long-term hysteretic water retention data suggested that seasonal wettability problems of the soils increasingly limited rewetting of previously dried pore regions. Differences in water retention were attributed to soil tillage operations and the erosion history at different slope positions. The three-dimensional spatial patterns of soil types that result from erosional soil profile modifications were also reflected in differences in crop root development at different landscape positions. Contrasting root densities revealed positive relations between root and aboveground plant characteristics. Differences in spatially distributed root growth between differently eroded soil types indicated that root development was affected by erosion-induced soil evolution processes.
Overall, this thesis corroborated the hypothesis that erosion-induced soil profile modifications affect the soil water balance, carbon leaching, and soil hydraulic properties, and that the crop root system is likewise influenced by erosion-induced spatial patterns of soil properties in the arable hummocky post-glacial soil landscape. The results will help to improve model predictions of water and solute movement in arable soils and to understand interactions between soil erosion and carbon pathways with regard to sink-or-source terms in landscapes.
The first main goal of this thesis is to develop a concept of approximate differentiability of higher order for subsets of Euclidean space that makes it possible to characterize higher-order rectifiable sets, extending well-known facts for functions. We emphasize that for every subset A of Euclidean space and every integer k ≥ 2 we introduce the approximate differential of order k of A and prove that it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful for dealing with higher-order rectifiable sets in applications.
The other goal is to extend to general closed sets a well-known theorem of Alberti on the second-order rectifiability properties of the boundary of convex bodies. Alberti's theorem provides a stratification of second-order rectifiable subsets of the boundary of a convex body based on the dimension of the (convex) normal cone. Considering a suitable generalization of this normal cone for general closed subsets of Euclidean space and employing some results from the first part, we prove that the same stratification exists for every closed set.
Nanolenses are linear chains of differently sized metal nanoparticles which can, in theory, provide extremely high field enhancements. Their complex structure renders their synthesis challenging and has so far hampered closer analysis. Here, the DNA origami technique was used to self-assemble DNA-coated 10 nm, 20 nm, and 60 nm gold or silver nanoparticles into gold or silver nanolenses. Three different geometrical arrangements of gold nanolenses were assembled, and for each of the three, sets of single gold nanolenses were investigated in detail by atomic force microscopy, scanning electron microscopy, dark-field scattering, and Raman spectroscopy. The surface-enhanced Raman scattering (SERS) capabilities of the single nanolenses were assessed by selectively labelling the 10 nm gold nanoparticle with dye molecules. The experimental data were complemented by finite-difference time-domain simulations. For the gold nanolenses that showed the strongest field enhancement, SERS signals from the two internal gaps were compared by selectively placing probe dyes on the 20 nm or 60 nm gold particles. The highest enhancement was found for the gap between the 20 nm and 10 nm nanoparticles, which is indicative of a cascaded field enhancement. The protein streptavidin, labelled with alkyne groups, served as a biological model analyte, bound between the 20 nm and 10 nm particles of silver nanolenses. In this way, a SERS signal from a single streptavidin molecule could be detected. Background peaks observed in SERS measurements on single silver nanolenses could be attributed to amorphous carbon, which was shown to be generated in situ.
Background: Obesity is thought to be the consequence of unhealthy nutrition and a lack of physical activity. Although the resulting metabolic alterations, such as impaired glucose homeostasis and insulin sensitivity, can usually be improved by physical activity, some obese patients fail to improve skeletal muscle metabolic health with exercise training. Since this might be largely heritable, maternal nutrition during pregnancy and lactation is hypothesized to impair offspring skeletal muscle physiology.
Objectives: This PhD thesis aims to investigate the consequences of maternal high-fat diet (mHFD) consumption on offspring skeletal muscle physiology and exercise performance. We show that a maternal high-fat diet during gestation and lactation decreases the offspring's training efficiency and endurance performance by influencing the epigenetic profile of their skeletal muscle and altering the adaptation to an acute exercise bout, which, in the long term, increases the offspring's obesity susceptibility.
Experimental setup: To investigate this issue in detail, we conducted several studies with a similar maternal feeding regime. Dams (C57BL/6J) were fed either a low-fat diet (LFD; 10 energy% from fat) or a high-fat diet (HFD; 40 energy% from fat) during pregnancy and lactation. After weaning, male offspring of both maternal groups were switched to a LFD, on which they remained until sacrifice in week 6, 15, or 25. In one study, LFD feeding was followed by HFD provision from week 15 until week 25 to elucidate the effects on offspring obesity susceptibility. In week 7, all mice were randomly allocated to a sedentary group (without running wheel) or an exercised group (with running wheel for voluntary exercise training). Additionally, treadmill endurance tests were conducted to investigate training performance and efficiency. In order to uncover regulatory mechanisms, each study was combined with a specific analytical setup, such as whole-genome microarray analysis, gene and protein expression analysis, DNA methylation analyses, and enzyme activity assays.
Results: mHFD offspring displayed a reduced training efficiency and endurance capacity. This was not due to an altered skeletal muscle phenotype, as fiber size, number, and type were unchanged. DNA methylation measurements in 6-week-old offspring showed a hypomethylation of the Nr4a1 gene in mHFD offspring, leading to increased gene expression. Since Nr4a1 plays an important role in the regulation of skeletal muscle energy metabolism and early exercise adaptation, this could affect offspring training efficiency and exercise performance in later life.
Investigation of the acute response to exercise showed that mHFD offspring displayed reduced gene expression of vascularization markers (Hif1a, Vegfb, etc.), pointing towards reduced angiogenesis, which could contribute to their reduced endurance capacity. Furthermore, impaired skeletal muscle glucose handling during the acute exercise bout was evidenced by higher blood glucose levels, lower GLUT4 translocation, and diminished lactate dehydrogenase activity in mHFD offspring immediately after the endurance test, pointing towards a disturbed use of glucose as a substrate during endurance exercise. Prolonged HFD feeding during adulthood increased fat mass gain in mHFD offspring compared to offspring from low-fat-fed mothers and also reduced their insulin sensitivity, indicating a higher obesity and diabetes susceptibility despite exercise training. Consequently, mHFD reduces the offspring's responsiveness to the beneficial effects of voluntary exercise training.
Conclusion: The results of this PhD thesis demonstrate that mHFD consumption impairs the offspring's training efficiency and endurance capacity and reduces the beneficial effects of exercise on the development of diet-induced obesity and insulin resistance in the offspring.
This might be due to changes in skeletal muscle epigenetic profile and/or an impaired skeletal muscle angiogenesis and glucose utilization during an acute exercise bout, which could contribute to a disturbed adaptive response to exercise training.
The aim of the present work was the synthesis and characterization of anisotropic gold nanoparticles in a suitable polyelectrolyte-modified template phase. The focus was on selecting a template phase suited to the synthesis of uniform and reproducible anisotropic gold nanoparticles with their resulting special properties. For the synthesis of the anisotropic gold nanoparticles, the emphasis lay on using vesicles as the template phase, investigating the influence of different structure-forming polymers (the strongly alternating maleamide copolymers PalH, PalPh, PalPhCarb, and PalPhBisCarb with different conformations) and surfactants (the anionic surfactants SDS and AOT) under various synthesis and separation conditions.
In the first part of the work it was shown that, at pH 9, PalPhBisCarb fulfills the conditions of a tube former for a morphological transformation from a vesicular phase into a tubular network structure and can therefore be used as a template phase for the shape-controlled formation of nanoparticles.
In the second part of the work it was shown that the template phase PalPhBisCarb (pH 9, concentration 0.01 wt.%) with AOT as surfactant and PL90G as phospholipid (in a 1:1 ratio) is the most effective choice of template phase for the formation of anisotropic structures in a one-step process. At a constant synthesis temperature of 45 °C, the best results were obtained with a gold chloride concentration of 2 mM, a gold-to-template ratio of 3:1, and a synthesis time of 30 minutes. The yield of anisotropic structures was 52 % (with a 19 % share of triangular nanoplatelets). By raising the synthesis temperature, the yield could be increased to 56 % (29 %).
In the third part, time-dependent investigations showed that, in the presence of PalPhBisCarb, the formation of the energetically unfavored platelet structures is initiated at room temperature and reaches an optimum at 45 °C.
Kinetic investigations showed that, with stepwise addition of the gold chloride precursor solution to the PalPhBisCarb-containing template phase, the formation of triangular nanoplatelets can be controlled via the dosing rate of the vesicular template phase. Conversely, when the template phase is added to the gold chloride precursor solution at 45 °C, a similar, kinetically controlled process of nanotriangle formation takes place, with a maximum yield of triangular nanoplatelets of 29 %.
In the last chapter, first experiments were carried out to separate triangular nanoplatelets from the other geometries in the mixed nanoparticle solution by means of surfactant-induced depletion flocculation. Using AOT at a concentration of 0.015 M, a nanoplatelet yield of 99 % was achieved, of which 72 % had triangular geometry.
Electrospray ionization (ESI) is one of the most widespread ionization techniques for liquid samples in mass and ion mobility (IM) spectrometry. Because of its soft ionization, ESI is predominantly used for sensitive, complex molecules in biology and medicine, but it is applicable to a very broad range of substance classes. IM spectrometry was originally developed for the detection of gaseous samples, which are mainly ionized by radioactive sources. It is the only analytical method in which isomers can be separated in real time and identified directly via their characteristic IM. ESI was introduced into IM spectrometry in the 1990s by the Hill group. So far, however, this combination has been used by only a few groups and therefore still has great development potential. One promising field of application is its use in high-performance liquid chromatography (HPLC) for multidimensional separation. Today, HPLC is the standard method for separating complex samples in routine analysis. HPLC separations are often time-consuming, however, and the use of different eluents, high flow rates, buffers, and eluent gradients places high demands on the detectors. ESI-IM spectrometry has already been used as an HPLC detector in some studies, but was so far restricted to flow-rate splitting or low eluent flow rates.
In this cumulative doctoral thesis, an ESI IM spectrometer was therefore developed for the first time as an HPLC detector for the flow-rate range of 200-1500 μl/min. In five publications, (1) the suitability of the spectrometer as an HPLC detector was established through a comprehensive characterization, (2) selected complex separations were presented, (3) its application to reaction monitoring was demonstrated, and (4, 5) possible further developments were shown.
With the self-developed ESI IM spectrometer, typical HPLC conditions such as eluent water contents of up to 90 %, buffer concentrations of up to 10 mM, and detection limits down to 50 nM were successfully achieved. Furthermore, the complex separations (24 pesticides/18 amino acids) showed that HPLC and IM spectrometry possess a high orthogonality; an effective peak capacity of 240 was thus realized. Substances co-eluting from the HPLC column could be separated by their drift time and identified via their IM, so that total separation times could be reduced considerably. The applicability of the ESI IM spectrometer to the monitoring of chemical syntheses was demonstrated with a three-step reaction, in which the most important reactants, intermediates, and products of all steps could be identified. Quantitative evaluation was possible both with a short HPLC pre-separation and, without HPLC, through the development of a dedicated calibration procedure that accounts for charge competition in ESI. In the second part of the thesis, two further developments of the spectrometer are presented. One option is reducing the pressure into the intermediate range (300-1000 mbar) with the aim of lowering the required voltages. Using scattered-light images and current-voltage curves, a reduced release of analyte ions from the droplets was observed at lower pressures. These losses could be compensated by higher electric field strengths, however, so that the same detection limits were reached at 500 mbar as at 1 bar. The second development is a novel ion gate with pulsed switching, which enabled a doubling of the resolution to R > 100 at equal sensitivity. A conceivable application in peptide analysis was demonstrated with considerable peptide resolutions of R = 90.
The motivation of this work was to investigate the self-assembly of a block copolymer species that has attracted little attention before: double hydrophilic block copolymers (DHBCs). DHBCs consist of two linear hydrophilic polymer blocks. The self-assembly of DHBCs into suprastructures such as particles and vesicles is driven by a strong difference in hydrophilicity between the corresponding blocks, leading to microphase separation due to immiscibility. The benefits of DHBCs and of the corresponding particles and vesicles, such as biocompatibility, high permeability towards water and hydrophilic compounds, as well as the large number of possible functionalizations of the block copolymers, make DHBC-based structures a viable choice in biomedicine. In order to establish a route towards self-assembled structures from DHBCs with the potential to act as cargo carriers in future applications, several block copolymers containing two hydrophilic polymer blocks were synthesized. Poly(ethylene oxide)-b-poly(N-vinylpyrrolidone) (PEO-b-PVP) and poly(ethylene oxide)-b-poly(N-vinylpyrrolidone-co-N-vinylimidazole) (PEO-b-P(VP-co-VIm)) block copolymers were synthesized via reversible deactivation radical polymerization (RDRP) techniques starting from a PEO macro chain transfer agent. The block copolymers displayed a concentration-dependent self-assembly behavior in water, which was determined via dynamic light scattering (DLS). It was possible to observe spherical particles via laser scanning confocal microscopy (LSCM) and cryogenic scanning electron microscopy (cryo SEM) in highly concentrated solutions of PEO-b-PVP. Furthermore, a crosslinking strategy for PEO-b-P(VP-co-VIm) was developed, applying the diiodo-derived crosslinker diethylene glycol bis(2-iodoethyl) ether to form quaternary amines at the VIm units. The crosslinked structures formed proved stable upon dilution and transfer into organic solvents.
Moreover, self-assembly and crosslinking in DMF proved more advantageous, and the crosslinked structures could be successfully transferred to aqueous solution. The resulting spherical submicron particles could be visualized via LSCM, cryo SEM, and cryo TEM.
Double hydrophilic pullulan-b-poly(acrylamide) block copolymers were synthesized via copper-catalyzed alkyne-azide cycloaddition (CuAAC) starting from a suitable pullulan alkyne and azide-functionalized poly(N,N-dimethylacrylamide) (PDMA) and poly(N-ethylacrylamide) (PEA) homopolymers. The conjugation reaction was confirmed via SEC and 1H-NMR measurements. The self-assembly of the block copolymers was monitored with DLS and static light scattering (SLS) measurements, indicating the presence of hollow spherical structures. Cryo SEM measurements confirmed the presence of vesicular structures for Pull-b-PEA block copolymers, and solutions of Pull-b-PDMA displayed particles in cryo SEM. Moreover, end-group functionalization of Pull-b-PDMA with Rhodamine B allowed assessing the structure via LSCM; hollow spherical structures were observed here as well, likewise indicating the presence of vesicles.
An exemplified pathway towards a DHBC-based drug delivery vehicle was demonstrated with the block copolymer Pull-b-PVP. The block copolymer was synthesized via RAFT/MADIX techniques starting from a pullulan chain transfer agent. Pull-b-PVP displayed a concentration-dependent self-assembly in water with an efficiency superior to the PEO-b-PVP system, as observed via DLS. Cryo SEM and LSCM microscopy showed the presence of spherical structures. In order to apply a reversible crosslinking strategy to the synthesized block copolymer, the pullulan block was selectively oxidized to dialdehydes with NaIO4. The oxidation of the block copolymer was confirmed via SEC and 1H-NMR measurements. The self-assembled and oxidized structures were subsequently crosslinked with cystamine dihydrochloride, a pH- and redox-responsive crosslinker, resulting in crosslinked vesicles which were observed via cryo SEM. The vesicular structures of crosslinked Pull-b-PVP could be disassembled by acid treatment or by applying the reducing agent tris(2-carboxyethyl)phosphine hydrochloride. The successful disassembly was monitored with DLS measurements.
To conclude, self-assembled structures from DHBCs, such as particles and vesicles, hold strong potential to make an impact in biomedicine and nanotechnology. The variety of possible DHBC compositions and functionalities is very promising for future applications.
Recognizing, understanding, and responding to quantities are essential skills for human beings. We can easily communicate quantities, and we are extremely efficient in adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols like digits in number-related tasks. To solve tasks involving digits, we must rely on our previously learned internal number representations.
This thesis elaborates on the process of number comparison with the use of noisy mental representations of numbers, the interaction of number and size representations and how we use mental number representations strategically. For this, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger and responded with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and variance of response time for the full matrix of 72 digit pairs. In addition, the random walk model used predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, in contrast to standard assumptions often made in random walk models, this account required the distributions of step sizes of the induced random walks to be asymmetric in order to capture this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework for investigating the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. Comparing the fits of the proposed models with linear and logarithmic mapping suggests that the logarithmic mapping is to be preferred.
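The core mechanism of such a model can be illustrated with a minimal simulation (a hedged sketch with invented parameters, not the fitted model from the study; the logarithmic mapping enters through the drift term, so close digit pairs accumulate evidence slowly):

```python
import math
import random

def compare_digits(a, b, threshold=10.0, noise=1.5, rng=None):
    """One trial of a random-walk comparison of digits a and b.
    Evidence accumulates with a drift equal to the difference of
    log-mapped magnitudes, so numerically close pairs drift slowly,
    producing longer response times and more errors (the numerical
    distance effect).  Returns (number of steps, answered correctly)."""
    rng = rng or random.Random(0)
    drift = math.log(a) - math.log(b)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return steps, (evidence > 0) == (a > b)

def mean_steps(a, b, n=500, seed=1):
    rng = random.Random(seed)
    return sum(compare_digits(a, b, rng=rng)[0] for _ in range(n)) / n

# A close pair (8 vs 7) takes longer on average than a distant one (9 vs 2)
print(mean_steps(8, 7), mean_steps(9, 2))
```

With symmetric Gaussian steps as above, error and correct response times coincide on average; reproducing the faster-errors asymmetry reported in the study would require skewed step distributions.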
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, namely the saccadic overshoot effect: participants responded with saccadic eye movements, and the amplitude of these saccadic responses decreases with numerical distance.
For the second study, an experimental design was developed that allows applying signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. A remaining question is whether the benefit in congruent (numerical magnitude - physical size) conditions reflects better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases due to number magnitude. Signal detection theory is an ideal tool to distinguish between these two alternatives: it provides two parameters, sensitivity and response bias. Changes in sensitivity reflect actual task performance due to real differences in perceptual processes, whereas changes in response bias merely reflect strategic effects, such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects and that genuine sensitivity gains for congruent number-size pairings contribute to the effect.
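The two signal detection parameters mentioned here are computed from hit and false alarm rates; a short sketch (the trial counts below are invented for illustration, not data from the study):

```python
from statistics import NormalDist

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response bias (criterion c) from a 2x2
    outcome table, with a log-linear correction (add 0.5 to counts)
    to avoid infinite z-scores at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented counts: congruent trials show better discrimination
# (higher d') at a similar criterion
d_con, c_con = sdt_parameters(45, 5, 8, 42)
d_inc, c_inc = sdt_parameters(38, 12, 15, 35)
print(round(d_con, 2), round(d_inc, 2))
```

A genuine congruency benefit shows up as d_con > d_inc while the criteria stay comparable; a pure response bias account would instead shift c without changing d'.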
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied, while the local transition probability of the relevant attribute (parity) and the global occurrence probability of each stimulus were kept constant. Participants were quite sensitive in recognizing the underlying local transition probability of the irrelevant attribute: a gain in performance was observed for actual repetitions of the irrelevant attribute, relative to changes of that attribute, in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
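The manipulation of local transition probabilities can be sketched with a toy trial generator (the digit sets and the repetition probability are hypothetical, not the study's stimuli; parity stays at chance because each magnitude pool contains two odd and two even digits):

```python
import random

def make_sequence(n, p_repeat_magnitude, rng=None):
    """Trial list for a parity task in which the irrelevant attribute
    (magnitude: small vs. large) repeats from trial to trial with a
    chosen probability, while the relevant attribute (parity) is at
    50% within each pool."""
    rng = rng or random.Random(0)
    small, large = [1, 2, 3, 4], [6, 7, 8, 9]
    is_large = rng.random() < 0.5
    trials = []
    for _ in range(n):
        trials.append(rng.choice(large if is_large else small))
        if rng.random() >= p_repeat_magnitude:  # switch magnitude category
            is_large = not is_large
    return trials

seq = make_sequence(1000, 0.8)
repeat_rate = sum((a > 5) == (b > 5) for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
print(round(repeat_rate, 2))  # close to the chosen 0.8
```

Comparing performance after magnitude repetitions versus switches in such high- versus low-repetition lists is the contrast described above.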
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
Based on various scientific findings, young athletes are advised against the use of dietary supplements. Against the background of goal systems theory, this dissertation pursues the goal of producing application-oriented knowledge from which intervention recommendations for reducing the prevalent supplement use in youth sports can be derived. A total of six studies were conducted. In all of them, participants completed a variant of the lexical decision task, which served to operationalize automatically activatable and retrievable supplement-related goal-means relations.
In a sample of sport science students, dietary supplements were found to be associated with the goal of performance (Study 1). Taking supplement use into account, this result was replicated for young recreational athletes (Study 2). In addition, both studies demonstrated the relevance of these goal-means relations for behavior. In the subsequent studies, specific mechanisms for changing the behavior-guiding goal-means relation between performance and supplements were first evaluated experimentally in sport science students. Highlighting the lack of performance-enhancing effects of supplements did not modify this goal association (Study 3). In contrast, emphasizing adverse health consequences (Study 4) and accentuating a healthy diet (Study 5) proved suitable for changing the goal-means relation. Emphasizing a healthy diet also led, descriptively, to a modification of the goal association in young athletes (Study 6); inferential confirmation of this study's results is still pending owing to its low statistical power.
Overall, the results show that the behavior-guiding association between supplement use and performance, which exists at the level of automatic cognitions, can be changed experimentally by accentuating health-related perspectives. Finally, the theoretical and practical significance of the generated knowledge for future intervention recommendations to reduce supplement use is discussed.
In this era of high-speed informatization and globalization, online education is no longer a rarefied concept confined to the ivory tower, but a rapidly developing industry closely relevant to people's daily lives. Numerous lectures are recorded in the form of multimedia data, uploaded to the Internet, and made publicly accessible from anywhere in the world. These lectures are generally referred to as e-lectures. In recent years, a new popular form of e-lecture, the Massive Open Online Course (MOOC), has boosted the growth of the online education industry and turned "learning online" into something of a trend.
For an e-learning provider, besides continually improving the quality of e-lecture content, providing a better learning environment for online learners is also a highly important task. This task can be approached in various ways, one of which is to enhance and upgrade the learning materials provided: e-lectures could be more than videos. Moreover, this enhancement and upgrading should be done automatically, without placing extra burdens on lecturers or teaching teams, and this is the aim of this thesis.
The first part of this thesis is an integrated framework for multilingual subtitle production, which can help online learners overcome the language barrier. The framework consists of Automatic Speech Recognition (ASR), Sentence Boundary Detection (SBD), and Machine Translation (MT), among which the proposed SBD solution is the major technical contribution, building on Deep Neural Networks (DNNs) and Word Vectors (WVs) and achieving state-of-the-art performance. In addition, a quantitative evaluation with dozens of volunteers is introduced to measure how much these auto-generated subtitles actually help in the context of e-lectures.
Secondly, a technical solution, "TOG" (Tree-Structure Outline Generation), is proposed to extract textual content from the slides shown in the recorded video and reorganize it into a hierarchical lecture outline, which may serve multiple functions, such as preview, navigation, and retrieval. TOG runs adaptively and can be roughly divided into an intra-slide and an inter-slide phase. Table detection and lecture video segmentation can be implemented as sub- and post-applications in these two phases, respectively. Evaluation on diverse e-lectures shows that all of the outlines, tables, and segments obtained are reliably accurate.
Based on the subtitles and outlines created previously, lecture videos can be further split into sentence units and slide-based segment units. A lecture highlighting process is then applied to these units in order to capture and mark the most important parts of the corresponding lecture, just as people do with a pen when reading paper books. Sentence-level highlighting relies on acoustic analysis of the audio track, while segment-level highlighting explores clues from statistical information in the related transcripts and slide content. Both objective and subjective evaluations show that the proposed lecture highlighting solution achieves decent precision and is welcomed by users.
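The sentence-level idea can be illustrated with a one-feature sketch (the energy values and the mean-plus-one-SD threshold rule are assumptions for illustration; the acoustic analysis in the thesis is richer than a single loudness value):

```python
import statistics

def highlight_sentences(energies, k=1.0):
    """Flag sentences whose mean acoustic energy exceeds the lecture
    mean by k standard deviations -- a one-feature stand-in for
    prosody-based, sentence-level highlighting."""
    mu = statistics.mean(energies)
    sigma = statistics.stdev(energies)
    return [e > mu + k * sigma for e in energies]

# Hypothetical per-sentence loudness values; the fourth sentence is emphasized
energies = [0.41, 0.39, 0.44, 0.92, 0.40, 0.38]
print(highlight_sentences(energies))
```

Segment-level highlighting would instead score each slide-based segment on statistics of its transcript and slide text, and the two levels can be combined into one marked-up lecture.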
All of the enhanced e-lecture materials above have already been put into actual use or made available for deployment through convenient interfaces.
In times of a rapidly changing and diverse energy market, carbon materials must be usable for a variety of requirements. This calls for flexibly synthesizable carbon materials, preferably from inexpensive and sustainable carbon sources. It is not easy, however, to identify precursor compounds that are suitable for different production processes and whose carbon products can at the same time be tuned in specific properties such as structure, nitrogen content, surface area, and pore sizes. In this context, natural polyphenols, for instance surplus tannins from wine production, can open a new world of highly functional and versatilely tunable carbon materials with high yields.
The main goal of this thesis was to synthesize and characterize new functional, tunable, and scalable nanostructured carbon materials from tannins (in particular tannic acid) for different electrochemical purposes. This was made possible by different synthetic approaches, such as polymeric structure direction, ionothermal templating, and soft templating. Instead of the widely used but carcinogenic crosslinking agent formaldehyde, urea and thiourea were chosen in the syntheses presented, which at the same time allowed variable doping of the synthesized carbon materials.
Therefore, in the first part of the work, the interactions, reactions, and thermal behavior of tannic acid and of mixtures of tannic acid with urea or thiourea were investigated in order to gain important insights for the various carbon syntheses.
Using the polymeric structuring agent Pluronic P123, sustainable and dopable carbon particles with diameters in the nanometer range were produced from tannic acid and urea in a first carbon synthesis. It was shown that, by modifying the various synthesis parameters, the carbon nanoparticles can be tuned with respect to their average particle diameter, BET surface area, composition, conductivity, and chemical stability. This opened up the possibility of using these carbon particles as an alternative and sustainable carbon black material.
Weiterhin war es durch die ionothermale Templatierung möglich poröse, dotierte und kontrollierbare Kohlenstoffpartikel mit hohen spezifischen Oberflächen aus den gewählten Präkursorverbindungen zu synthetisieren, die sich für den Einsatz in Superkondensatoren eignen.
Auf diesen Erkenntnissen aufbauend konnten mittels der Rotationsbeschichtung poröse binderfreie und strukturierte Kohlenstofffilme synthetisiert werden, die eine spinodale Struktur aufwiesen. Anhand der Modifikation der Stammlösungskonzentration, der Rotationsgeschwindigkeit und der verwendeten Substrate konnten die Filmdicke (100-1000 nm), die Morphologie und Gesamtoberfläche gezielt beeinflusst werden. Die erweiterte elektrochemische Analyse zeigte außerdem ein sehr gut zugängliches Porensystem der porösen Kohlenstofffilme.
Allumfassend konnten demnach verschiedene Synthesewege für Kohlenstoffmaterialien aus Tanninen aufgezeigt werden, die verschiedenartig strukturiert und kontrolliert werden können und sich für diverse Anwendungsgebiete eignen.
BACKGROUND: Aggressive behavior at an early age is linked to a broad range of psychosocial problems in later life. That is why risk factors for the occurrence and development of aggression have long been examined in psychological science. The present doctoral dissertation aims to expand this research by investigating risk factors in three intrapersonal domains, using the prominent social-information processing approach by Crick and Dodge (1994) as a framework model. Anger regulation was examined as an affective, theory of mind as a cognitive, and physical attractiveness as an appearance-related developmental factor of aggression in middle childhood. An additional goal of this work was to develop and validate a behavioral observation assessment of anger regulation, as past research lacked ecologically valid measures of anger regulation that are applicable in longitudinal studies.
METHODS: Three empirical studies address the aforementioned intrapersonal risk factors. Each study used data from the PIER project, a three-wave longitudinal study covering three years with a total sample of 1,657 children aged between 6 and 11 years at the first measurement point. The central constructs were assessed via teacher reports (aggression), behavioral observation (anger regulation), computer tests (theory of mind), and independent ratings (physical attractiveness). The predictive value of each proposed risk factor for the development of aggressive behavior was examined via structural equation modeling.
RESULTS AND CONCLUSION: The newly developed behavioral observation measure was found to be a reliable and valid tool for assessing anger regulation in middle childhood, but limited in capturing the full range of relevant regulation strategies. This may be the reason why maladaptive anger regulation was not found to function as a risk factor for subsequent aggressive behavior. However, children’s deficits in theory of mind and low physical attractiveness significantly predicted later aggression. Problematic peer relationships were identified as underlying the link between low attractiveness and aggression. Thus, fostering children’s theory of mind skills and their ability to question existing beliefs about the nature of more versus less attractive individuals may be important starting points for the prevention of aggressive behavior in middle childhood.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes
(2017)
Nanoporous carbon-based materials are of particular interest to both science and industry due to exceptional properties such as a large surface area, high pore volume, high electrical conductivity, and high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons have proved useful in various energy- and environment-related applications including energy storage and conversion, catalysis, gas sorption, and separation technologies. The synthesis of nanoporous carbons classically involves thermal carbonization of the carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol), etc.) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly, and they make use of hazardous chemicals, hindering their application in large-scale production. Furthermore, control over the carbon material properties is challenging owing to the relatively unpredictable processes at high carbonization temperatures.
In the present thesis, nanoporous carbon-based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy requires neither additional carbon sources nor classical hard or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions including Zn2+, Cu2+, Ni2+, and Co2+. Here, the structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of the squarate ions to Zn2+ yields porous 3D cubic crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids that evolve at the centers of the low-index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can be easily transformed into the respective carbon based materials by heat treatment at elevated temperatures in a nitrogen atmosphere followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1 are achieved, respectively, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure.
Owing to these advantageous properties, the resulting carbon based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s-1.
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as a high-surface-area support material and decorated with nickel nanoparticles via incipient wetness impregnation. The resulting composite material combines a high surface area and a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, the particles allow for good packing of a fixed-bed flow reactor along with high column efficiency and a minimized pressure drop throughout the packed reactor. Therefore, the composite is employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the conventional problem of column blocking.
Looking toward the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the deliberate introduction of heteroatoms (e.g. N, B, S, P) into the carbon structures in order to alter properties such as wettability, surface polarity, and the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open up a platform of highly functional materials for all applications that involve surface processes.
In littoral zones of lakes, multiple processes determine lake ecology and water quality. Lacustrine groundwater discharge (LGD), most frequently taking place in littoral zones, can transport or mobilize nutrients from the sediments and thus contribute significantly to lake eutrophication. Furthermore, lake littoral zones are the habitat of benthic primary producers, namely submerged macrophytes and periphyton, which play a key role in lake food webs and influence lake water quality. Groundwater-mediated nutrient-influx can potentially affect the asymmetric competition between submerged macrophytes and periphyton for light and nutrients. While rooted macrophytes have superior access to sediment nutrients, periphyton can negatively affect macrophytes by shading. LGD may thus facilitate periphyton production at the expense of macrophyte production, although studies on this hypothesized effect are missing.
The research presented in this thesis is aimed at determining how LGD influences periphyton, macrophytes, and the interactions between these benthic producers. Laboratory experiments were combined with field experiments and measurements in an oligo-mesotrophic hard water lake.
In the first study, a general concept was developed based on a literature review of the existing knowledge regarding the potential effects of LGD on nutrients and inorganic and organic carbon loads to lakes, and the effect of these loads on periphyton and macrophytes. The second study includes a field survey and experiment examining the effects of LGD on periphyton in an oligotrophic, stratified hard water lake (Lake Stechlin). This study shows that LGD, by mobilizing phosphorus from the sediments, significantly promotes epiphyton growth, especially at the end of the summer season when epilimnetic phosphorus concentrations are low. The third study focuses on the potential effects of LGD on submerged macrophytes in Lake Stechlin. This study revealed that LGD may have contributed to an observed change in macrophyte community composition and abundance in the shallow littoral areas of the lake. Finally, a laboratory experiment was conducted which mimicked the conditions of a seepage lake. Groundwater circulation was shown to mobilize nutrients from the sediments, which significantly promoted periphyton growth. Macrophyte growth was negatively affected at high periphyton biomasses, confirming the initial hypothesis.
More generally, this thesis shows that groundwater flowing into nutrient-limited lakes may import or mobilize nutrients. These nutrients first promote periphyton, and subsequently provoke radical changes in macrophyte populations before finally having a possible influence on the lake’s trophic state. Hence, the eutrophying effect of groundwater is delayed and, at moderate nutrient loading rates, partly dampened by benthic primary producers. The present research emphasizes the importance and complexity of littoral processes, and the need to further investigate and monitor the benthic environment. As present and future global changes can significantly affect LGD, the understanding of these complex interactions is required for the sustainable management of lake water quality.
At the dawn of the 21st century, humanity witnessed a phenomenal rise of urban agglomerations as powerhouses for innovation and socioeconomic growth. Driving much of national (and in a few instances even global) economic activity, this gargantuan rise of cities is also accompanied by a subsequent increase in energy and resource consumption and in waste generation. Much of the anthropogenic transformation of Earth's environment, from environmental pollution at the local level to planetary-scale climate change, is currently taking place in cities. With cities projected to be the crucibles for the entire humanity by the end of this century, the ultimate fate of humanity predominantly lies in the hands of technological innovation, urbanites' attitudes towards energy and resource consumption, and the development pathways undertaken by current and future cities. Considering the unparalleled energy and resource consumption and emissions currently attributed to global cities, this thesis addresses these issues from an efficiency point of view. More specifically, it addresses the influence of population size, density, economic geography, and technology on urban greenhouse gas (GHG) emission efficiency, and identifies the factors leading to improved eco-efficiency in cities. In order to investigate the influence of these factors on emission and resource efficiency in cities, a multitude of freely available datasets were coupled with novel methodologies and analytical approaches.
Merging the well-established Kaya identity with the recently developed urban scaling laws, an Urban Kaya Relation is developed to identify whether large cities are more emission efficient and which intrinsic factors lead to such (in)efficiency. Applying the Urban Kaya Relation to a global dataset of 61 cities in 12 countries, this thesis identified that large cities in developed regions of the world bring emission efficiency gains because of the better technologies implemented in these cities to produce and utilize energy, while the opposite is the case for cities in developing regions. Large cities in developing countries are less efficient mainly because of their affluence and a lack of efficient technologies. Apart from the influence of population size on emission efficiency, this thesis identified the crucial role played by population density in improving the emission efficiency of the building and on-road transport sectors in cities. This is achieved by applying the City Clustering Algorithm (CCA) to two different gridded land use datasets and a standard emission inventory in order to attribute these sectoral emissions to all inhabited settlements in the USA. Results show that doubling the population density would entail a reduction in the total CO2 emissions of the building and on-road sectors typically by at least 42%. Irrespective of their population size and density, cities are often blamed for intensive resource consumption that threatens not only local but also global sustainability. This thesis merged the concept of urban metabolism with benchmarking and identified cities which are eco-efficient, i.e., which enable better socioeconomic conditions while placing less burden on the environment. Three environmental burden indicators (annual average NO2 concentration, per capita waste generation, and water consumption) and two socioeconomic indicators (GDP per capita and employment ratio) for the 88 most populous European cities are considered in this study.
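The Urban Kaya Relation builds on the classical Kaya identity, which decomposes total CO2 emissions into population, affluence (GDP per capita), energy intensity (energy per unit GDP), and carbon intensity (emissions per unit energy). A minimal sketch with purely illustrative numbers (the function name and values are assumptions for the example, not figures from the thesis):

```python
def kaya_emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    """Kaya identity: C = P * (G/P) * (E/G) * (C/E)."""
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# Hypothetical city: 1 million inhabitants, 40,000 USD GDP per capita,
# 5 MJ of primary energy per USD of GDP, 60 g CO2 emitted per MJ.
emissions_g = kaya_emissions(1e6, 40_000, 5.0, 60.0)  # grams of CO2 per year
print(emissions_g / 1e12)  # -> 12.0 (Mt CO2 per year)
```

Urban scaling laws then relate each factor to city size, so that the (in)efficiency of large cities can be traced back either to technology (the E/G and C/E terms) or to affluence (the G/P term).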
Using two different non-parametric ranking methods, namely regression residual ranking and Data Envelopment Analysis (DEA), eco-efficient cities and their determining factors are identified. This in-depth analysis revealed that mature cities with well-established economic structures, such as Munich, Stockholm, and Oslo, are eco-efficient. Further, the correlations of the objective eco-efficiency ranking with each of the indicator rankings and with the ranking of urbanites' subjective perception of quality of life are analyzed. This analysis revealed that urbanites' perception of quality of life is not merely confined to socioeconomic well-being but rather reflects its combination with a lower environmental burden.
In summary, the findings of this dissertation lead to three general conclusions for improving emission and ecological efficiency in cities. Firstly, large cities in emerging nations face a huge challenge with respect to improving their emission efficiency. The task in front of these cities is threefold: (1) deploying efficient technologies for the generation of electricity and improving public transportation to unlock their leapfrogging potential, (2) addressing the issue of energy poverty, and (3) ensuring that these cities do not develop energy consumption patterns and infrastructure lock-in behavior similar to those of cities in developed regions. Secondly, the ongoing urban sprawl, a global phenomenon, will decrease emission efficiency in the building and transportation sectors. Therefore, local policy makers should identify adequate fiscal and land use policies to curb urban sprawl. Lastly, since mature cities with well-established economic structures are more eco-efficient and urbanites' perception reflects the combination of socioeconomic well-being with a decreasing environmental burden, there is a need to adopt and implement strategies which enable socioeconomic growth in cities whilst decreasing their environmental burden.
Lithospheric plates move over the low-viscosity asthenosphere, balancing several forces. The driving forces include the basal shear stress exerted by mantle convection and plate boundary forces such as slab pull and ridge push, whereas the resisting forces include inter-plate friction, trench resistance, and cratonic root resistance. Together these generate plate motions, the lithospheric stress field, and dynamic topography, which are observed with different geophysical methods. The orientation and tectonic regime of the observed crustal/lithospheric stress field further contribute to our knowledge of the different deformation processes occurring within the Earth's crust and lithosphere. Using numerical models, previous studies were able to identify the major forces that generate stresses in the crust and lithosphere, contribute to the formation of topography, and drive the lithospheric plates. They showed that the first-order stress pattern, explaining about 80% of the stress field, originates from a balance of forces acting at the base of the moving lithospheric plates due to convective flow in the underlying mantle. The remaining second-order stress pattern is due to lateral density variations in the crust and lithosphere in regions of pronounced topography and high gravitational potential, such as the Himalayas and mid-ocean ridges. By linking global lithosphere dynamics to deep mantle flow, this study seeks to evaluate the influence of shallow and deep density heterogeneities on plate motions, the lithospheric stress field, and dynamic topography, using the geoid as a major constraint on mantle rheology. We use the global 3D lithosphere-asthenosphere model SLIM3D with visco-elasto-plastic rheology, coupled at 300 km depth to a spectral model of mantle flow. The complexity of the lithosphere-asthenosphere component allows for the simulation of power-law rheology with creep parameters accounting for both diffusion and dislocation creep within the uppermost 300 km.
First we investigate the influence of intra-plate friction and asthenospheric viscosity on present-day plate motions. Previous modelling studies have suggested that small friction coefficients (µ < 0.1, yield stress ~100 MPa) can lead to plate tectonics in models of mantle convection. Here we show that, in order to match present-day plate motions and net rotation, the frictional parameter must be less than 0.05. We are able to obtain a good fit with the magnitude and orientation of observed plate velocities (NUVEL-1A) in a no-net-rotation (NNR) reference frame with µ < 0.04 and a minimum asthenosphere viscosity of ~5×10^19 Pa s to 10^20 Pa s. Our estimates of the net rotation (NR) of the lithosphere suggest that amplitudes of ~0.1-0.2 °/Ma, similar to most observation-based estimates, can be obtained with asthenosphere viscosity cutoff values of ~10^19 Pa s to 5×10^19 Pa s and a friction coefficient µ < 0.05.
The second part of the study investigates further constraints on the shallow and deep mantle heterogeneities driving plate motion by predicting the lithospheric stress field and topography and validating them against observations. Lithosphere stresses and dynamic topography are computed using the same modelling setup and rheological parameters, with plate motions prescribed. We validate our results against the World Stress Map 2016 (WSM2016) and the observed residual topography. Here we tested a number of upper mantle thermal-density structures. The one used to calculate plate motions is considered the reference thermal-density structure; this model is derived from a heat flow model combined with a sea floor age model. In addition, we used three different thermal-density structures derived from global S-wave velocity models to show the influence of lateral density heterogeneities in the upper 300 km on the model predictions. A large portion of the total dynamic force generating stresses in the crust/lithosphere has its origin in the deep mantle, while topography is largely influenced by shallow heterogeneities. For example, there is hardly any difference between the stress orientation patterns predicted with and without consideration of the heterogeneities in the upper mantle density structure across North America, Australia, and North Africa. In areas of high altitude, however, the crustal contribution dominates the stress orientation over the deep mantle contribution.
This study explores the sensitivity of all the considered surface observables with regard to the model parameters, providing insights into the influence of asthenosphere and plate boundary rheology on plate motion, as various thermal-density structures are tested to predict stresses and topography.
This cumulative dissertation consists of five chapters. In terms of research content, my thesis can be divided into two parts. Part one examines local interactions and spillover effects between small regional governments using spatial econometric methods. The second part focuses on patterns within municipalities and inspects which institutions of citizen participation, elections and local petitions, influence local housing policies.
The central aim of this thesis is to demonstrate the benefits of innovative frequency-based methods to better explain the variability observed in lake ecosystems. Freshwater ecosystems may be the most threatened part of the hydrosphere. Lake ecosystems are particularly sensitive to changes in climate and land use because they integrate disturbances across their entire catchment. This makes understanding the dynamics of lake ecosystems an intriguing and important research priority. This thesis adds new findings to the baseline knowledge regarding variability in lake ecosystems. It provides a literature-based, data-driven and methodological framework for the investigation of variability and patterns in environmental parameters in the time frequency domain.
Observational data often show considerable variability in the environmental parameters of lake ecosystems. This variability is mostly driven by a plethora of periodic and stochastic processes inside and outside the ecosystems. These run in parallel and may operate at vastly different time scales, ranging from seconds to decades. In measured data, all of these signals are superimposed, and dominant processes may obscure the signals of other processes, particularly when analyzing mean values over long time scales. Dominant signals are often caused by phenomena at long time scales like seasonal cycles, and most of these are well understood in the limnological literature. The variability injected by biological, chemical and physical processes operating at smaller time scales is less well understood. However, variability affects the state and health of lake ecosystems at all time scales. Besides measuring time series at sufficiently high temporal resolution, the investigation of the full spectrum of variability requires innovative methods of analysis.
Analyzing observational data in the time frequency domain makes it possible to identify variability at different time scales and facilitates its attribution to specific processes. The merit of this approach is subsequently demonstrated in three case studies. The first study uses a conceptual analysis to demonstrate the importance of time scales for the detection of ecosystem responses to climate change. These responses often occur during critical time windows in the year, may exhibit a time lag, and can be driven by the exceedance of thresholds in their drivers. This can only be detected if the temporal resolution of the data is high enough. The second study applies Fast Fourier Transform spectral analysis to two decades of daily water temperature measurements to show how temporal and spatial scales of water temperature variability can serve as an indicator of mixing in a shallow, polymictic lake. The final study uses wavelet coherence as a diagnostic tool for limnology on a multivariate high-frequency dataset recorded between the onset of ice cover and a cyanobacteria summer bloom in the year 2009 in a polymictic lake. Synchronicities among limnological and meteorological time series in narrow frequency bands were used to identify and disentangle the prevailing limnological processes.
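The idea behind the spectral analysis in the second study can be sketched in a few lines: the power spectrum of a measured series reveals which time scales carry the most variability. The following is a minimal illustration on synthetic data only; the function name and the series are invented for the example and do not come from the actual lake measurements.

```python
import numpy as np

def dominant_period(series, dt_days=1.0):
    """Return the period (in days) carrying the most spectral power."""
    detrended = series - series.mean()            # suppress the zero-frequency peak
    power = np.abs(np.fft.rfft(detrended)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(series.size, d=dt_days)
    peak = np.argmax(power[1:]) + 1               # skip the residual DC bin
    return 1.0 / freqs[peak]

# Ten years of synthetic daily "water temperature": annual cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(3650)
temps = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)
print(round(dominant_period(temps)))  # -> 365
```

With real data, secondary peaks at shorter periods (e.g. diel cycles or mixing events) would appear alongside the dominant seasonal signal, which is what makes the frequency-domain view informative.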
Beyond the novel empirical findings reported in the three case studies, this thesis aims more generally to be of interest to researchers dealing with the now increasingly available time series data at high temporal resolution. A set of innovative methods to attribute patterns to processes, their drivers, and constraints is provided to help make more efficient use of this kind of data.
Natural products and their derivatives have always been a source of drug leads. In particular, bacterial compounds have played an important role in drug development, for example in the field of antibiotics. A decrease in the discovery of novel leads from natural sources, and the hope of finding new leads through the generation of large libraries of drug-like compounds by combinatorial chemistry aimed at specific molecular targets, drove the pharmaceutical companies away from research on natural products. However, recent technological advances in genetics, bioinformatics, and analytical chemistry have revived the interest in natural products. The ribosomally synthesized and post-translationally modified peptides (RiPPs) are a group of natural products generated by the action of post-translationally modifying enzymes on precursor peptides translated from mRNA by ribosomes. The great substrate promiscuity exhibited by many of the enzymes from RiPP biosynthetic pathways has led to the generation of hundreds of novel synthetic and semisynthetic variants, including variants carrying non-canonical amino acids (ncAAs). The microviridins are a family of RiPPs characterized by their atypical tricyclic structure composed of lactone and lactam rings, and by their activity as serine protease inhibitors. The generalities of their biosynthetic pathway have already been described; however, the lack of information on details such as the protease responsible for cleaving off the leader peptide from the cyclic core peptide has impeded the fast and cheap production of novel microviridin variants. In the present work, knowledge of leader peptide activation of enzymes from other RiPP families has been extrapolated to the microviridin family, making it possible to bypass the need for a leader peptide. This feature allowed the microviridin biosynthetic machinery to be exploited for the production of novel variants through the establishment of an efficient one-pot in vitro platform.
The relevance of this chemoenzymatic approach has been exemplified by the synthesis of novel potent serine protease inhibitors from both rationally-designed peptide libraries and bioinformatically predicted microviridins. Additionally, new structure-activity relationships (SARs) could be inferred by screening microviridin intermediates. The significance of this technique was further demonstrated by the simple incorporation of ncAAs into the microviridin scaffold.
Start-up incentives targeted at unemployed individuals have become an important tool of Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. So far, however, the corresponding empirical literature on this topic has mainly focused on the individual labor market perspective. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new, innovative sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual’s personality in the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that the differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about a potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis on the level of German regional labor markets yields that there is a high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share among all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents a market intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
In Germany, more than 200,000 people die of cancer every year, making it the second most common cause of death. Chemotherapy and radiation therapy are often combined to exploit a supra-additive effect, as some chemotherapeutic agents, such as halogenated nucleobases, sensitize the cancerous tissue to radiation. The radiosensitizing action of certain therapeutic agents can be at least partly attributed to their interaction with secondary low-energy electrons (LEEs) that are generated along the track of the ionizing radiation. In cancer therapy, DNA is an important target, as severe DNA damage such as double strand breaks induces cell death. Since only a limited number of radiosensitizing agents, which are often strongly cytotoxic, are in clinical practice, it would be beneficial to gain a deeper understanding of the interaction of less toxic potential radiosensitizers with secondary reactive species such as LEEs. Beyond that, LEEs can be generated by laser-illuminated nanoparticles as applied in photothermal therapy (PTT) of cancer, an approach that treats cancer by raising the temperature in the cells. However, the application of halogenated nucleobases in PTT has not been considered so far. In this thesis, the interaction of the potential radiosensitizer 8-bromoadenine (8BrA) with LEEs was studied. In a first step, dissociative electron attachment (DEA) in the gas phase was studied in a crossed electron-molecular beam setup. The main fragmentation pathway was found to be the cleavage of the C-Br bond. The formation of a stable parent anion was observed for electron energies around 0 eV. Furthermore, DNA origami nanostructures were used as platforms to determine electron-induced strand break cross sections of 8BrA-sensitized oligonucleotides and the corresponding non-sensitized sequence as a function of the electron energy. In this way, the influence of the DEA resonances observed for the free molecules on DNA strand breaks was examined.
As the surrounding medium influences DEA, pulsed-laser-illuminated gold nanoparticles (AuNPs) were used as a nanoscale electron source in an aqueous environment. The dissociation of brominated and native nucleobases was tracked with UV-Vis absorption spectroscopy, and the generated fragments were identified with surface-enhanced Raman scattering (SERS). Besides the electron-induced damage, nucleobase analogues decompose in the vicinity of the laser-illuminated nanoparticles due to the high temperatures. To gain a deeper understanding of the different dissociation mechanisms, the thermal decomposition of the nucleobases in these systems was studied and the influence of the adsorption kinetics of the molecules was elucidated. In addition to the pulsed-laser experiments, a dissociative electron transfer from plasmonically generated “hot electrons” to 8BrA was observed under low-energy continuous-wave laser illumination and tracked with SERS. The reaction was studied on AgNPs and AuNPs as a function of the laser intensity and wavelength. On dried samples, the dissociation of the molecule was described by fractal-like kinetics. In solution, the dissociative electron transfer was observed as well. It turned out that the timescales of the reaction were slightly below typical integration times of Raman spectra. Consequently, such reactions need to be taken into account when interpreting SERS spectra of electrophilic molecules. The findings in this thesis help to understand the interaction of brominated nucleobases with plasmonically generated electrons and free electrons. This might help to evaluate the potential radiosensitizing action of such molecules in cancer radiation therapy and PTT.
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumptions and/or execution times exceed effective limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning, to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. In particular, they are now able to process datasets of real-world, i.e., multi-gigabyte size with reasonable memory and time consumption.
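The key-dependency check underlying FD discovery can be sketched naively: a functional dependency X → A holds in a relation iff no two tuples agree on X but differ on A. The following minimal Python sketch is illustrative only; the thesis's algorithms use far more aggressive pruning and search strategies, and the example relation is invented:

```python
from itertools import combinations

def fd_holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in a relation.

    rows: list of tuples; lhs: tuple of column indices; rhs: column index.
    The FD holds iff no two rows agree on lhs but differ on rhs."""
    seen = {}
    for row in rows:
        key = tuple(row[i] for i in lhs)
        if key in seen and seen[key] != row[rhs]:
            return False
        seen.setdefault(key, row[rhs])
    return True

def discover_fds(rows, n_cols, max_lhs=2):
    """Brute-force enumeration of all FDs with LHS size up to max_lhs
    (including non-minimal ones; real algorithms prune these)."""
    fds = []
    for size in range(1, max_lhs + 1):
        for lhs in combinations(range(n_cols), size):
            for rhs in range(n_cols):
                if rhs not in lhs and fd_holds(rows, lhs, rhs):
                    fds.append((lhs, rhs))
    return fds

# Tiny invented relation: (employee, department, city)
rows = [
    ("alice", "sales", "berlin"),
    ("bob",   "sales", "berlin"),
    ("carol", "it",    "potsdam"),
]
print(discover_fds(rows, 3))
```

The exponential candidate space of this brute-force loop is exactly why the scalable search strategies developed in the thesis are needed for real-world datasets.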
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics and are also able to validate individual dependencies, but they lack real discovery features, even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the dataset itself) and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
Galaxies evolve on cosmological timescales, and to study this evolution we can either study the stellar populations, tracing the star formation and chemical enrichment, or the dynamics, tracing interactions and mergers of galaxies as well as accretion. In the last decades this field has become one of the most active research areas in modern astrophysics, and especially the use of integral field spectrographs has furthered our understanding. This work is based on data of NGC 5102 obtained with the panoramic integral field spectrograph MUSE. The data are analysed with two separate and complementary approaches: In the first part, standard methods are used to measure the kinematics and then model the gravitational potential using these exceptionally high-quality data. In the second part I develop the new method of surface brightness fluctuation spectroscopy and quantitatively explore its potential to investigate the bright evolved stellar population.
Measuring the kinematics of NGC 5102, I discover that this low-luminosity S0 galaxy hosts two counter-rotating discs. The more central stellar component co-rotates with the large amount of HI gas. Investigating the populations, I find strong central age and metallicity gradients with a younger and more metal-rich central population. The spectral resolution of MUSE does not allow these population gradients to be connected with the two counter-rotating discs.
The kinematic measurements are modelled with Jeans anisotropic models to infer the gravitational potential of NGC 5102. Under the self-consistent mass-follows-light assumption, none of the Jeans models is able to reproduce the observed kinematics. To my knowledge, this is the strongest evidence for a dark-matter-dominated system obtained with this approach so far. Including a Navarro, Frenk & White dark matter halo immediately resolves the discrepancies. A very robust result is the logarithmic slope of the total matter density. For this low-mass galaxy I find a value of −1.75 ± 0.04, shallower than an isothermal halo and even shallower than published values for more massive galaxies. This confirms a tentative relation between the total mass slope and the stellar mass of galaxies.
The Surface Brightness Fluctuation (SBF) method is a well-established distance measure, but owing to its sensitivity to bright stars it is also used to study evolved stars in unresolved stellar populations. The wide-field spectrograph MUSE offers the possibility to apply this technique to spectroscopic data for the first time. In this thesis I develop the spectroscopic SBF technique and measure the first SBF spectrum of any galaxy. I discuss the challenges in measuring SBF spectra that arise from the complexity of integral field spectrographs compared to imaging instruments.
For decades, stellar population models have indicated that SBFs in intermediate-to-old stellar systems are dominated by red giant branch and asymptotic giant branch stars. Especially the latter carry significant model uncertainties, making these stars a scientifically interesting target. Comparing the NGC 5102 SBF spectrum with stellar spectra, I show for the first time that M-type giants cause the fluctuations. Stellar evolution models suggest that carbon-rich thermally pulsating asymptotic giant branch stars should also leave a detectable signal in the SBF spectrum. I cannot detect a significant contribution from these stars in the NGC 5102 SBF spectrum.
I have written a stellar population synthesis tool that, for the first time, predicts SBF spectra. I compute two sets of population models, one based on observed and one on theoretical stellar spectra. Comparing the two, I find that the models based on observed spectra predict weaker molecular features. The comparison with the NGC 5102 spectrum reveals that these models are in better agreement with the data.
Background
For patients with severe aortic valve stenosis who carry a high surgical risk due to their age or multimorbidity, transcatheter aortic valve implantation (TAVI) has been established as a promising alternative to open-heart surgery. Explicit data on multidisciplinary cardiac rehabilitation after TAVI have not been available so far. The aim of the present work was to examine the effect of cardiac rehabilitation on exercise capacity, emotional status, quality of life, and frailty in patients after TAVI, and to identify predictors for changes in exercise capacity and quality of life.
Methods
Between 10/2013 and 07/2015, 136 patients (80.6 ± 5.0 years, 47.8 % men) undergoing post-acute rehabilitation after TAVI were enrolled in three cardiac rehabilitation clinics. To assess the effect of cardiac rehabilitation, the following were recorded at the beginning and end of rehabilitation: a frailty index (a score comprising the Barthel Index, Instrumental Activities of Daily Living, Mini-Mental State Examination, Mini Nutritional Assessment, Timed Up and Go, and subjective deterioration of mobility), quality of life by the Short Form 12 (SF-12), functional exercise capacity by the 6-minute walk test (6MWT), and maximal exercise capacity by cycle ergometry. In addition, sociodemographic data (e.g., age and sex), comorbidities (e.g., chronic obstructive pulmonary disease, coronary artery disease, and cancer), cardiovascular risk factors, and NYHA class were documented. Predictors for changes in exercise capacity and quality of life were fitted using analyses of covariance.
Results
The maximal walking distance in the 6MWT increased by 56.3 ± 65.3 m (p < 0.001) and maximal exercise capacity in cycle ergometry by 8.0 ± 14.9 W (p < 0.001). Furthermore, the SF-12 improved both on the physical component scale, by 2.5 ± 8.7 points (p = 0.001), and on the mental component scale, by 3.4 ± 10.2 points (p = 0.003). In the multivariate analysis, higher age and higher education were significantly associated with a smaller gain in the 6MWT, whereas better cognitive performance and obesity had a positive predictive value. Greater independence and better nutritional status positively influenced the change in the SF-12 physical component scale, whereas better cognitive performance predicted a smaller change. Moreover, the respective baseline values of the SF-12 physical and mental component scales had an inverse influence on the changes in the same scale.
Conclusion
Multidisciplinary cardiac rehabilitation can improve exercise capacity and quality of life and reduce frailty in patients after transcatheter aortic valve implantation. Consequently, specific assessments for cardiac rehabilitation should be developed. Furthermore, individualized therapy programmes with particular attention to cognitive function and nutrition need to be initiated in order to maintain or restore the independence of very elderly patients and to delay their need for nursing care.
The ionosphere, which is strongly influenced by the Sun, is known to be also affected by meteorological processes. These processes, despite having their origin in the troposphere and stratosphere, interact with the upper atmosphere. Such an interaction between atmospheric layers is known as vertical coupling. During geomagnetically quiet times, when near-Earth space is not under the influence of solar storms, these processes become important drivers for ionospheric variability. Studying the link between these processes in the lower atmosphere and the ionospheric variability is important for our understanding of fundamental mechanisms in ionospheric and meteorological research.
A prominent example of vertical coupling between the stratosphere and the ionosphere are the so-called stratospheric sudden warming (SSW) events that occur usually during northern winters and result in an increase in the polar stratospheric temperature and a reversal of the circumpolar winds. While the phenomenon of SSW is confined to the northern polar stratosphere, its influence on the ionosphere can be observed even at equatorial latitudes. During SSW events, the connection between the polar stratosphere and the equatorial ionosphere is believed to be through the modulation of global atmospheric tides. These tides are fundamental for the ionospheric E-region wind dynamo that generates electric fields and currents in the ionosphere. Observations of ionospheric currents indicate a large enhancement of the semidiurnal lunar tide in response to SSW events. Thus, the semidiurnal lunar tide becomes an important driver of ionospheric variability during SSW events.
In this thesis, the ionospheric effect of SSW events is investigated in the equatorial region, where a narrow but intense E-region current known as the equatorial electrojet (EEJ) flows above the dip equator during the daytime. The day-to-day variability of the EEJ can be determined from magnetic field records at geomagnetic observatories close to the dip equator. Such magnetic data have been available for several decades and allow the impact of SSW events on the EEJ to be investigated and, even more importantly, help in understanding the effects of SSW events on the equatorial ionosphere. An excellent long-term record of the geomagnetic field at the equator, from 1922 onwards, is available for the observatory Huancayo in Peru and is extensively utilized in this study.
The central subject of this thesis is the investigation of lunar tides in the EEJ during SSW events by analyzing long time series. This is done by estimating the lunar tidal amplitude in the EEJ from the magnetic records at Huancayo and by comparing it to measurements of the polar stratospheric wind and temperature, which led to the identification of the known SSW events from 1952 onwards. One goal of this thesis is to identify SSW events that predate 1952. To this end, superposed epoch analysis (SEA) is employed to establish a relationship between the lunar tidal power and the wind and temperature conditions in the lower atmosphere. A threshold value for the lunar tidal power is identified that is discriminative for the known SSW events. This threshold is then used to identify lunar tidal enhancements that are indicative of historic SSW events prior to 1952. It can be shown that the number of lunar tidal enhancements, and thus the occurrence frequency of historic SSW events between 1926 and 1952, is similar to the occurrence frequency of the known SSW events from 1952 onwards.
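Superposed epoch analysis itself works by aligning segments of a time series on the event epochs and averaging them, so that event-related enhancements add up while uncorrelated variability averages out. A minimal sketch on synthetic data (not the Huancayo records; series, events, and the size of the enhancement are all invented for illustration) might look like this:

```python
import random

def superposed_epoch_analysis(series, event_indices, window):
    """Average segments of `series` aligned on each event index.

    window = (before, after): number of samples kept around each epoch.
    Events too close to the series boundaries are skipped."""
    before, after = window
    segments = [
        series[e - before : e + after + 1]
        for e in event_indices
        if e - before >= 0 and e + after < len(series)
    ]
    n = len(segments)
    return [sum(seg[i] for seg in segments) / n for i in range(before + after + 1)]

# Synthetic example: Gaussian noise with an enhancement after each "event"
random.seed(1)
series = [random.gauss(0, 1) for _ in range(1000)]
events = [100, 300, 500, 700, 900]
for e in events:
    for k in range(10):
        series[e + k] += 5.0  # enhancement following the event

mean_epoch = superposed_epoch_analysis(series, events, window=(20, 40))
print(max(mean_epoch[20:30]))  # the averaged enhancement stands out above the noise
```

Averaging over many epochs is what makes a threshold on the post-event signal discriminative, since the uncorrelated background shrinks with the number of stacked events.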
Next to the classic SSW definition, the concept of polar vortex weakening (PVW) is utilized in this thesis. PVW is defined at higher latitudes and altitudes (≈ 40 km) than the classical SSW definition (≈ 32 km). The correlation between the timing and magnitude of lunar tidal enhancements in the EEJ and the timing and magnitude of PVW is found to be better than for the classic SSW definition. This suggests that the lunar tidal enhancements in the EEJ are closely linked to the state of the middle atmosphere.
Geomagnetic observatories located in different longitudes at the dip equator allow investigating the longitudinally dependent variability of the EEJ during SSW events. For this purpose, the lunar tidal enhancements in the EEJ are determined for the Peruvian and Indian sectors during the major SSW events of the years 2006 and 2009. It is found that the lunar tidal amplitude shows similar enhancements in the Peruvian sector during both SSW events, while the enhancements are notably different for the two events in the Indian sector.
In summary, this thesis shows that lunar tidal enhancements in the EEJ are indeed correlated to the occurrence of SSW events and they should be considered a prominent driver of low latitude ionospheric variability. Secondly, lunar tidal enhancements are found to be longitudinally variable. This suggests that regional effects, such as ionospheric conductivity and the geometry and strength of the geomagnetic field, also play an important role and have to be considered when investigating the mechanisms behind vertical coupling.
This work addresses three topics related to the spectroscopic properties of coumarin (Cou) and DBD dyes ([1,3]dioxolo[4,5-f][1,3]benzodioxole). The first part presents the fundamental spectroscopic characterization of 7-aminocoumarins and their potential application as fluorescence probes for fluorescence immunoassays. In the second part, the photophysical properties of the coumarins are exploited to study Cou- and DBD-functionalized oligo-spiro-ketal rods (OSTK) and their properties as membrane probes. The last part deals with the synthesis and characterization of Cou- and DBD-functionalized polyprolines as reference systems for sulfur-functionalized OSTK rods and with their coupling to gold nanoparticles.
Immunochemical analytical methods are very successful in clinical diagnostics and are nowadays also employed in food control and environmental monitoring, which makes them of great interest for further research. Among the various immunoassay formats, luminescence-based formats stand out for their exceptional sensitivity, which makes them particularly attractive for future applications. The need for multiparameter detection requires a toolbox of dyes to convert the biochemical reaction into an optically detectable signal. In such a multiparameter approach, each analyte is detected by a different dye with a unique emission colour, covering the blue to red spectral range, or a unique decay time. In a competitive immunoassay format, a separate antibody would be required for each of the different dyes. The present work presents a slightly modified approach using a coumarin moiety, against which highly specific monoclonal antibodies (mAb) were raised, as the basic antigen. By modifying the parent coumarin moiety at a position of the molecule that is not relevant for recognition by the antibody, the full spectral range from blue to deep red becomes accessible. This work presents the photophysical characterization of the various coumarin derivatives and their corresponding immunocomplexes with two different, yet highly specific, antibodies. The coumarin dyes and their immunocomplexes were characterized by steady-state and time-resolved absorption and fluorescence emission spectroscopy. In addition, fluorescence depolarization measurements were performed to complete the data, which highlighted the different binding modes of the two antibodies.
In contrast to commonly used detection systems, a massive fluorescence enhancement of up to a factor of 50 was found upon formation of the antibody-dye complex. Because the emission colour can easily be changed by adjusting the coumarin substitution at the position of the parent molecule not relevant for antigen binding, a dye toolbox is available that can be used in the design of competitive multiparameter fluorescence-enhancement immunoassays.
Oligo-spiro-thio-ketal rods are readily incorporated into lipid bilayers owing to their hydrophobic backbone and are therefore used as optical membrane probes. Because of their small diameter, they cause only minimal perturbation of the lipid bilayer. Labelling with fluorescent dyes gives access to novel Förster resonance energy transfer probes with highly defined relative orientations of the transition dipole moments of the donor and acceptor dyes, making the OSTK probe class a powerful, flexible toolbox for optical biosensing applications. Using steady-state and time-resolved fluorescence experiments, the incorporation of coumarin- and DBD-labelled OSTK rods into large unilamellar vesicles was investigated, and the results were corroborated by fluorescence depolarization measurements.
The last part of this work deals with the synthesis and characterization of Cou- and DBD-functionalized polyprolines and their coupling to gold nanoparticles. The dye-labelled polyprolines were successfully prepared. Binding to the polyproline helix clearly influenced the spectroscopic properties of the dyes. Coupling to the 5 nm AuNPs was carried out successfully. The experience gained from coupling the polyprolines to the AuNPs forms the basis for single-molecule AFM-FRET nanoscopy with OSTK rods.
Personal Big Data
(2017)
Many users of cloud-based services are concerned about questions of data privacy. At the same time, they want to benefit from smart data-driven services, which require insight into a person’s individual behaviour. The modus operandi of user modelling is that data is sent to a remote server where the model is constructed and merged with other users’ data. This thesis proposes selective cloud computing, an alternative approach, in which the user model is constructed on the client-side and only an abstracted generalised version of the model is shared with the remote services.
In order to demonstrate the applicability of this approach, the thesis builds an exemplary client-side user modelling technique. As this thesis is carried out in the area of Geoinformatics and spatio-temporal data is particularly sensitive, the application domain for this experiment is the analysis and prediction of a user’s spatio-temporal behaviour.
The user modelling technique is grounded in an innovative conceptual model, which builds upon spatial network theory combined with time-geography. The spatio-temporal constraints of time-geography are applied to the network structure in order to create individual spatio-temporal action spaces. This concept is translated into a novel algorithmic user modelling approach which is solely driven by the user’s own spatio-temporal trajectory data that is generated by the user’s smartphone.
While modern smartphones offer a rich variety of sensory data, this thesis only makes use of spatio-temporal trajectory data, enriched by activity classification, as the input and foundation for the algorithmic model. The algorithmic model consists of three basic components: locations (vertices), trips (edges), and clusters (neighbourhoods).
After preprocessing the incoming trajectory data to identify locations, user feedback is used to train an artificial neural network to learn temporal patterns for certain location types (e.g. work, home, bus stop, etc.). This Artificial Neural Network (ANN) is used to automatically detect future location types by their spatio-temporal patterns. The same is done to predict the duration of stay at a certain location. Experiments revealed that neural nets were the most successful statistical and machine learning tool for detecting those patterns. The location type identification algorithm reached an accuracy of 87.69%; the duration prediction on binned data was less successful and deviated by an average of 0.69 bins. A challenge for the location type classification, as well as for the subsequent components, was the imbalance of trips and connections as well as the low accuracy of the trajectory data. The imbalance is grounded in the fact that most users exhibit strong habitual patterns (e.g. home > work), while other patterns are comparatively rare. The accuracy problem derives from the energy-saving location sampling mode, which produces less accurate positions.
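The idea of learning temporal patterns per location type can be illustrated with a deliberately simple stand-in for the ANN: a nearest-centroid classifier over assumed temporal features (arrival hour, stay duration in minutes). All labels and feature values below are hypothetical and not taken from the thesis:

```python
import math

def centroid(points):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: {location_type: [(arrival_hour, stay_minutes), ...]}
    Returns one centroid per location type."""
    return {label: centroid(pts) for label, pts in samples.items()}

def classify(model, feature):
    """Assign the location type whose centroid is closest in feature space."""
    return min(model, key=lambda label: math.dist(model[label], feature))

# Hypothetical labelled visits collected via user feedback
samples = {
    "home":     [(19.0, 700), (20.5, 650)],
    "work":     [(8.5, 480),  (9.0, 510)],
    "bus_stop": [(8.0, 5),    (17.5, 8)],
}
model = train(samples)
print(classify(model, (9.2, 495)))  # a weekday-morning visit with a long stay
```

A real deployment would replace the centroids with the trained ANN and richer features, but the client-side principle is the same: both the labelled samples and the model stay on the device.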
Those locations are then used to build a network that represents the user’s spatio-temporal behaviour. An initial untrained ANN predicting movement on the network reached only 46% average accuracy. Only lowering the number of included edges, focusing on the more common trips, increased the performance. To further improve the algorithm, the spatial trajectories were introduced into the predictions. To overcome the accuracy problem, trips between locations were clustered into so-called spatial corridors, which were intersected with the user’s current trajectory. The resulting intersected trips were ranked through a k-nearest-neighbour algorithm, which increased the performance to 56%. In a final step, a combination of a network and a spatial clustering algorithm was built to create clusters, thereby reducing the variety of possible trips. By predicting only the destination cluster instead of the exact location, it is possible to increase the performance to 75% including all classes.
A final set of components shows in two exemplary ways how to deduce additional inferences from the underlying spatio-temporal data. The first example presents a novel concept for predicting the ‘potential memorisation index’ for a certain location. The index is based on a cognitive model which derives the index from the user’s activity data in that area. The second example embeds each location in its urban fabric and thereby enriches its cluster’s metadata by further describing the temporal-semantic activity in an area (e.g. going to restaurants at noon).
The success of the client-side classification and prediction approach, despite the challenges of inaccurate and imbalanced data, supports the claimed benefits of the client-side modelling concept. Since modern data-driven services at some point do need to receive user data, the thesis’ computational model concludes with a concept for applying generalisation to semantic, temporal, and spatial data before sharing it with the remote service in order to comply with the overall goal to improve data privacy. In this context, the potentials of ensemble training (in regards to ANNs) are discussed in order to highlight the potential of only sharing the trained ANN instead of the raw input data.
While the results of our evaluation support the assets of the proposed framework, there are two important downsides of our approach compared to server-side modelling, and both are rooted in the server’s access to multiple users’ data. This access allows a remote service to predict spatio-temporal behaviour beyond what is contained in the user-specific data, which also suffers from class imbalance, the second downside. While minor classes will likely be minor classes in a bigger dataset as well, each class will still show more variety there than in the user-specific dataset. The author emphasises that the approach presented in this work holds the potential to change the privacy paradigm in modern data-driven services. Finding combinations of client- and server-side modelling could prove a promising new path for data-driven innovation.
Beyond the technological perspective, throughout the thesis the author also offers a critical view on the data- and technology-driven development of this work. By introducing the client-side modelling with user-specific artificial neural networks, users generate their own algorithm. Those user-specific algorithms are influenced less by generalised biases or developers’ prejudices. Therefore, the user develops a more diverse and individual perspective through his or her user model. This concept picks up the idea of critical cartography, which questions the status quo of how space is perceived and represented.
In der vorliegenden Arbeit wurden verschiedene Polymere hergestellt, die bestimmte funktionelle Gruppen beinhalten. Diese Gruppen werden zum Teil durch Alkylketten geschützt, zum Teil liegen sie ungeschützt im Polymer vor. Mit diesen Polymeren wurden Untersuchungen mit knochenähnlichen Materialien sogenanntem Calciumphosphat durchgeführt. Es wurde der Einfluss der verschiedenen Polymere auf die Bildung dieser knochenähnlichen Substanzen untersucht und auch der Einfluss auf die Stabilität und das Auflösungsverhalten der Calciumphosphate. Dabei sollte ein besonderes Augenmerk auf die funktionellen Gruppen, sogenannte Phosphonsäuren und deren Ester, die die Phosphonsäuren schützen, gesetzt werden. Es stellte sich heraus, dass bei der Bildung der knochenähnlichen Materialien die Polymere mit Estergruppen eine leichte Förderung der Calciumphosphat-Bildung verursachen, während die ungeschützten Polymere die Bildung des „Knochenmaterials“ sehr stark verzögern. Dieser Effekt verstärkt sich noch, wenn eine weitere bestimmte Komponente zum Polymer hinzukommt und somit ein Copolymer gebildet wird. Diese Copolymere beschleunigen bzw. verlangsamen die Calciumphosphatbildung noch stärker. Werden Polymere mit einem anderen Polymergerüst aber den gleichen Phosphonsäuresetern in den Seitenketten verwendet, ändert sich der Einfluss der Calciumphosphat-Bildung wenig. Verglichen mit Polymeren ohne solche Phosphonsäuregruppen wird erkennbar, dass es weniger die Phosphonsäuregruppe ist, die die Mineralisation beeinflusst, sondern es eher eine Folge der Säure im Polymer ist.
Regarding the stabilization and dissolution of the bone-like substances, it is again the acids that exert the greatest effect. Here, however, the phosphonic acid groups do appear to have a distinct effect of their own, since of all the polymers investigated they show the strongest stabilization of calcium phosphate as well as the greatest capacity to dissolve it.
The thesis also showed that the polymers and copolymers with phosphonic acid groups have a slightly positive effect on dental health. The number of bacteria on the tooth surface could be reduced, and in the tooth dissolution studies a smoother tooth surface was obtained; however, even with the polymers investigated, the interior of the tooth was still attacked. Further studies could provide more detailed insight here. In addition, the polymers with the different backbone and phosphonic acid ester groups should also be investigated.
The latter polymers were used to produce firmer, gel-like polymer networks and to investigate their influence on calcium phosphate mineralization. It was found that without the embedding of some calcium phosphate particles, no calcium phosphate formation was triggered at the materials; when the so-called hydrogels were seeded with calcium phosphate particles, however, considerable further calcium phosphate growth was observed. The material can also be shaped into various forms. After further studies on its compatibility with cells and tissues, the system could therefore represent a possible implant material with which bone growth could be induced in a targeted manner.
Prosody is a rich source of information that heavily supports spoken language comprehension. In particular, prosodic phrase boundaries divide the continuous speech stream into chunks reflecting the semantic and syntactic structure of an utterance. This chunking, or prosodic phrasing, plays a critical role in both spoken language processing and language acquisition. Aiming at a better understanding of the underlying processing mechanisms and their acquisition, the present work investigates factors that influence prosodic phrase boundary perception in adults and infants. Using the event-related potential (ERP) technique, three experimental studies examined the role of prosodic context (i.e., phrase length) in German phrase boundary perception and the role of the main prosodic boundary cues, namely pitch change, final lengthening, and pause. With regard to the boundary cues, the dissertation focused on which cues or cue combinations are essential for the perception of a prosodic boundary, and on whether and how this cue weighting develops during infancy.
Using ERPs is advantageous because the technique captures the immediate impact of (linguistic) information during on-line processing. Moreover, as it can be applied independently of specific task demands or overt response performance, it can be used with both infants and adults. ERPs are particularly suitable for studying the time course and underlying mechanisms of boundary perception, because a specific ERP component, the Closure Positive Shift (CPS), is well established as a neurophysiological indicator of prosodic boundary perception in adults.
The results of the three experimental studies first underpin that the prosodic context plays an immediate role in the processing of prosodic boundary information. Moreover, the second study reveals that adult listeners perceive a prosodic boundary even on the basis of a subset of the boundary cues available in the speech signal. Both ERP and simultaneously collected behavioral data (i.e., prosodic judgements) suggest that the combination of pitch change and final lengthening triggers boundary perception; when presented as single cues, however, neither pitch change nor final lengthening was sufficient. Finally, testing six- and eight-month-old infants shows that the early sensitivity to prosodic information is reflected in a brain response resembling the adult CPS. For both age groups, brain responses to prosodic boundaries cued by pitch change and final lengthening revealed a positivity that can be interpreted as a CPS-like infant ERP component. In contrast, but comparable to the adults' response pattern, pitch change as a single cue did not provoke an infant CPS. These results show that infant phrase boundary perception is not exclusively based on pause detection and hint at an early ability to exploit subtle, relational prosodic cues in speech perception.
Throughout the various socio-historical tensions undergone by Latin American modernity (or modernities), literary-historical production, as well as reflection on the topic at the regional, national, supranational and/or continental level, has formed part of the critical and intellectual itinerary of highly significant political and cultural projects. Their particular development allows an analysis of the socio-discursive dynamics fulfilled by literary historiography in its search for a historical consciousness and a representation of aesthetic-literary processes.
In present-day Central American literary and cultural studies, academic reflection on the development of literary historiography has given rise to works whose main objects of study comprise a significant corpus of national literary histories published mainly in the 20th century, between the forties and the eighties. Although these studies differ greatly from the vast academic production undertaken by literary critics over the last two decades, research on literary historiography in Central America has made a theoretical-methodological effort, from the eighties to the present, to analyze local literary-historical production.
However, this effort was pursued more systematically in the last five years of the 20th century, in the context of the Central American democratic transition and post-war period, when a national, supranational and transnational model of literary history was promoted. This gave rise to the creation and launch of the project Hacia una Historia de las Literaturas Centroamericanas (HILCAS) at the beginning of the new millennium.
Given the ideological relevance that literary historiography has had in the historical formation of the Hispano-American states, and given that this philological tradition has also shaped the various Central American nation states, the emergence of this historiographic project marks an important rupture with national paradigms. It also manifests a movement of transition and tension with regard to new cultural, comparative and transareal dynamics, which seek to understand the geographical, transnational, medial and transdisciplinary movements within which the aesthetic-narrative processes and the idea and formation of a critical Central American subject take shape.
Taking this aspect into account, our study puts forward as its main hypothesis that the historiographic thought developed in the course of the project Hacia una Historia de las Literaturas Centroamericanas (HILCAS) constitutes a socio-discursive practice reflecting the formation of a historical-literary consciousness and of a critical intellectual subject, an emergence that takes place between the mid-nineties and the first decade of the 21st century.
In this respect, and based on the general purpose of this investigation indicated above, the main justification for our object of study is to make Central American historiographic reflection visible as part of the epistemological and cultural changes evident in Latin American historiographic thought, from which new ways of conceptualizing space, coexistence and historical consciousness emerge with regard to aesthetic-literary practices and processes.
Based on the field and hypothesis stated above, the general purpose of this research is framed by the socio-discursive dimension of Latin American literary historiography, and it aims to analyze the Central American historical-literary thought developed between the second half of the nineties and the beginning of the first decade of the 21st century.
Consular protection (Der konsularische Schutz)
(2017)
In response to the increase in kidnappings of German nationals abroad and the 2009 ruling of the Federal Administrative Court (Bundesverwaltungsgericht) on this subject, this thesis presents a detailed and comprehensive analysis of the legal bases for granting consular protection through the diplomatic missions of the Federal Republic of Germany.
The first chapter provides a detailed account of the state's duties to act arising from international, European and constitutional law as well as from the Consular Act (Konsulargesetz), along with any associated individual claims to the exercise of consular protection in specific cases.
The second chapter sets out the conditions for granting consular protection under the Consular Act. The focus lies on determining the scope of application of § 5 (1) sentence 1 of the Consular Act, taking into account the Federal Administrative Court's ruling of 28 May 2009, which in the author's view misconstrues the scope of this provision. § 5 (1) sentence 1 of the Consular Act is a special social-welfare provision outside SGB XII that governs consular assistance solely in situations of economic hardship.
The third chapter analyzes the existing rules on the reimbursement of costs incurred in granting consular protection and explains their systematics. It also offers an outlook on the future cost-reimbursement rules under the Federal Fees Act (Bundesgebührengesetz) and the associated statutory ordinance.
Finally, the findings are summarized on the basis of a historical case, and a legislative proposal is presented that could resolve the ambiguities and inconsistencies identified in the Consular Act.
Background: Established protein- and nucleic-acid-based methods for specific pathogen detection can only be performed under standardized laboratory conditions by trained personnel and are therefore time-consuming and costly. In nucleic-acid-based diagnostics, the introduction of isothermal amplification provides a fast and inexpensive alternative to the polymerase chain reaction (PCR). Owing to its high amplification efficiency, loop-mediated isothermal amplification (LAMP) offers a wide range of detection options suitable for both rapid-test and monitoring applications.
A central aim of this work was to improve the applicability of LAMP and to develop a new method for the simple, fast and inexpensive detection of pathogens using alternative DNA- or pyrophosphate-dependent detection schemes. First, direct and indirect detection methods were examined, and on this basis a procedure was developed for identifying new metal-ion-dependent fluorescent dyes for the selective detection of pyrophosphate in LAMP and other enzymatic reactions. As an alternative to DNA-based detection in digital LAMP, the previously established dyes for pyrophosphate detection were to be tested in an emulsion. Finally, a new reaction mechanism for the efficient generation of high-molecular-weight DNA under isothermal conditions was developed as an alternative to LAMP.
Results: For the detection of RNA- and DNA-based phytopathogens, real-time and end-point detection with various dyes was established in a closed system. Berberine was successfully used as a DNA-intercalating fluorescent dye in real-time LAMP, with a sensitivity comparable to SYBR Green and EvaGreen. One advantage of berberine over the other dyes is that the DNA polymerase tolerates it even at high dye concentrations; berberine can therefore also be used for end-point detection in closed LAMP reactions without additional adjustment of the reaction conditions. Furthermore, hydroxynaphthol blue (HNB), known for colorimetric end-point detection, was used for the first time for fluorimetric real-time detection of LAMP. In addition, further metal-ion-dependent dyes for the indirect detection of LAMP via pyrophosphate were identified. For this purpose, an iterative method was developed to select candidate dyes with respect to their enzyme compatibility and their spectral properties in the presence or absence of manganese ions. A combinatorial screening in microtiter-plate format was used to examine the complex concentration dependence between the individual components of a fluorimetric displacement assay. By visualizing the signal-to-noise ratio as an intensity matrix (heatmap), alizarin red S and tetracycline were first selected under simulated reaction conditions. In the subsequent enzymatic LAMP reaction, alizarin red S in particular proved to be an inexpensive, non-toxic and robust fluorescent dye, showing a pyrophosphate-dependent increase in fluorescence intensity.
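The heatmap-based selection step can be illustrated with a small numerical sketch. All intensity values, the noise level and the cutoff below are invented for illustration and do not reproduce the thesis's measurements; only the general procedure (tabulating signal-to-noise ratios per dye and condition, then ranking) is shown.

```python
import numpy as np

# Illustrative screening matrix (rows: candidate dyes, cols: Mn2+ concentrations).
# Values are made-up fluorescence intensity changes upon pyrophosphate addition.
dyes = ["alizarin red S", "tetracycline", "calcein", "HNB"]
mn_conc_uM = [10, 20, 50, 100, 200, 500]

delta_f = np.array([
    [12, 35, 60, 58, 40, 22],   # alizarin red S
    [10, 28, 45, 47, 33, 18],   # tetracycline
    [ 4,  8, 15, 14, 10,  6],   # calcein
    [ 3,  6, 11, 12,  9,  5],   # HNB
], dtype=float)

noise_sd = 4.0  # assumed well-to-well standard deviation of the background

snr = delta_f / noise_sd        # signal-to-noise ratio per well (the "heatmap")
best = snr.max(axis=1)          # best achievable condition for each dye

# Rank dyes by best SNR and apply an illustrative selection cutoff.
ranking = sorted(zip(dyes, best), key=lambda t: -t[1])
selected = [d for d, s in ranking if s >= 10.0]
print(selected)   # the two strongest responders pass the cutoff
```

In practice the `snr` matrix would be rendered as a heatmap, with dye and Mn²⁺ concentration on the two axes, to make the concentration dependence visible at a glance.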
The previously established dyes (HNB, calcein and alizarin red S) were then successfully used for the indirect fluorimetric detection of pyrophosphate in a LAMP-optimized emulsion. The stability and homogeneity of the generated emulsion were improved by adding the emulsifier poloxamer 188. Fluorescence-microscopic analysis of the emulsion allowed a clear discrimination between positive and negative droplets, especially with calcein and alizarin red S. Because of the complex primer design and the high probability of non-specific amplification in LAMP, a new Bst DNA-polymerase-dependent isothermal amplification reaction was developed. By integrating a specific linker structure (an abasic site or hexaethylene glycol) between two primer sequences, a bifunctional primer ensured the efficient regeneration of the primer binding sites. After specific hybridization to the template, the new primer induces refolding into a hairpin structure and simultaneously blocks polymerase activity on the opposite strand, enabling autocyclic amplification at a constant reaction temperature. Finally, the efficiency of the "hinge-initiated primer-dependent amplification" (HIP) was improved by shortening the distance between a modified hinge primer and a PCR-like primer.
Conclusion: Owing to its high robustness and efficiency, LAMP has developed into a powerful alternative to classical PCR in molecular diagnostics. The different detection schemes improve the performance of qualitative and quantitative LAMP for field applications and diagnostics, since the new DNA- and pyrophosphate-dependent detection methods can be used in a closed reaction and thus enable simple pathogen diagnostics. The methods presented can moreover reduce cost and time compared with conventional methods. An attractive goal is the further development of HIP for pathogen detection as an alternative to LAMP; the new LAMP detection schemes can be applied here as well. The use of Bst DNA-polymerase-dependent reactions also enables the integration of robust isothermal amplification into microfluidic systems. By combining sample preparation, amplification and detection, future applications with short analysis times and low instrumental effort become possible, particularly in pathogen diagnostics.
Magnetic iron oxide nanoparticles have long been used successfully as MRI contrast agents in clinical imaging. Optimizing the magnetic properties of the nanoparticles can improve the informative value of MR images and thus further increase the diagnostic value of an MR examination. Alongside the improvement of existing techniques, diagnostic imaging is also being advanced by the development of new modalities such as magnetic particle imaging (MPI). Since in MPI the measurement signal is generated by the magnetic nanoparticles themselves, the method offers an enormous advantage in sensitivity combined with high temporal and spatial resolution. However, since no commercially available MPI tracer suitable for in vivo use currently exists, there is an urgent need for suitable innovative tracer materials. This motivated the present work: to develop biocompatible, superparamagnetic iron oxide nanoparticles for use as an in vivo diagnostic agent, particularly in magnetic particle imaging. Although the focus was on tracer development for MPI, the MR performance was also evaluated, since suitable particles could alternatively or additionally be used as MR contrast agents with improved contrast properties.
The iron oxide nanoparticles were synthesized by partial oxidation of precipitated iron(II) hydroxide and green rust, and by diffusion-controlled coprecipitation in a hydrogel.
Partial oxidation of iron(II) hydroxide and green rust successfully yielded biocompatible iron oxide nanoparticles that remained stable over long periods. In addition, suitable methods for formulation and sterilization were established, fulfilling numerous prerequisites for use as an in vivo diagnostic agent. Based on their MPS performance, these particles can be expected to be excellently suited as MPI tracers, which could significantly advance the further development of MPI technology. The determination of the NMR relaxivities and an initial in vivo experiment also demonstrated the great potential of the formulated nanoparticle suspensions as MRI contrast agents. Modification of the particle surface furthermore enables the production of targeted nanoparticles and the labeling of cells, substantially broadening the range of possible applications.
In the second part, particles were synthesized by diffusion-controlled coprecipitation in a hydrogel, a bioinspired modification of classical coprecipitation, which yielded particles with an average crystallite size of 24 nm. The MPS and MR performance of electrostatically stabilized particles gave promising results. In preparation for the development of an in vivo diagnostic agent, the particles were then successfully stabilized sterically, maintaining their colloidal state in MilliQ water over long periods. Centrifugation also allowed the particles to be separated into different size fractions, which made it possible to determine the ideal aggregate size of this particle system with respect to MPS performance.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have extensively been applied to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. But a rational design of tailor-made azo groups also requires control of their stability in the dark, the half-lifetime of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground state Born-Oppenheimer surface can efficiently be estimated with Eyring’s transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established that removes two restrictions at once: the environment is modeled explicitly by employing a quantum mechanics/molecular mechanics (QM/MM) description, and the isomerization process is tracked by analyzing complete dynamical pathways between stable states. The suitability of this description is validated using two test systems, pure azobenzene and a derivative with electron-donating and electron-withdrawing substituents ("push-pull" azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a recent semi-empirical approximation to density functional theory (the density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling (TPS) method, without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r(ω, α, α′). A computational van't Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO but negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach; for both azobenzene derivatives, an increase comparable to the experimental one was observed in the high-polarity medium.
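The extraction of activation enthalpies and entropies from temperature-dependent rate constants can be sketched with a standard Eyring plot, which is the textbook form of the van't Hoff-type analysis mentioned above. The rate constants below are invented placeholders, not the thesis's computed rates; only the fitting procedure is illustrated.

```python
import numpy as np

# Physical constants (SI units)
kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

# Hypothetical thermal cis->trans isomerization rate constants at several
# temperatures (illustrative values only).
T = np.array([290.0, 300.0, 310.0, 320.0])        # K
k = np.array([2.1e-5, 6.5e-5, 1.9e-4, 5.2e-4])    # 1/s

# Eyring plot: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T)
# => slope of ln(k/T) vs 1/T gives dH, intercept gives dS.
y = np.log(k / T)
slope, intercept = np.polyfit(1.0 / T, y, 1)

dH = -slope * R                          # activation enthalpy, J/mol
dS = (intercept - np.log(kB / h)) * R    # activation entropy, J/(mol*K)

print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With these made-up rates the fit returns a barrier of roughly 80 kJ/mol and a negative activation entropy, the qualitative pattern the thesis reports for the push-pull derivative in DMSO.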
Nowadays, graph data models are employed when relationships between entities have to be stored and are within the scope of queries. For each entity, such a graph data model locally stores the relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. Graph queries use graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures; these subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or in the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep the graph pattern matches of complex graph queries ready for later retrieval. However, when the graph data changes, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries through a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Rete networks therefore suffer from high memory consumption, because they store a large number of partial graph pattern matches. At the same time, it is precisely these partial matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. However, other kinds of discrimination networks exist that can perform better than Rete networks in both time and space; currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
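The core idea of incremental maintenance via stored partial matches can be illustrated with a deliberately tiny sketch: a matcher for a single two-edge path pattern `x -[:A]-> y -[:B]-> z`, where each condition test keeps its partial matches so that edge insertions and deletions trigger only local joins. The class and data layout are illustrative inventions; the thesis's generalized discrimination networks and modeling language are far richer than this.

```python
from collections import defaultdict

class TwoEdgePathMatcher:
    """Incrementally maintains all matches of the pattern x -[:A]-> y -[:B]-> z."""

    def __init__(self):
        self.a_by_dst = defaultdict(set)   # partial matches of x -A-> y, keyed by y
        self.b_by_src = defaultdict(set)   # partial matches of y -B-> z, keyed by y
        self.matches = set()               # full matches as tuples (x, y, z)

    def add_edge(self, src, label, dst):
        if label == "A":
            self.a_by_dst[dst].add(src)
            # join the new partial match against stored B-edges at the shared node
            for z in self.b_by_src[dst]:
                self.matches.add((src, dst, z))
        elif label == "B":
            self.b_by_src[src].add(dst)
            for x in self.a_by_dst[src]:
                self.matches.add((x, src, dst))

    def remove_edge(self, src, label, dst):
        if label == "A":
            self.a_by_dst[dst].discard(src)
            self.matches = {m for m in self.matches
                            if not (m[0] == src and m[1] == dst)}
        elif label == "B":
            self.b_by_src[src].discard(dst)
            self.matches = {m for m in self.matches
                            if not (m[1] == src and m[2] == dst)}

m = TwoEdgePathMatcher()
m.add_edge(1, "A", 2)
m.add_edge(2, "B", 3)
m.add_edge(0, "A", 2)
print(sorted(m.matches))   # [(0, 2, 3), (1, 2, 3)]
m.remove_edge(1, "A", 2)
print(sorted(m.matches))   # [(0, 2, 3)]
```

The stored partial-match tables are exactly the source of the time/space trade-off discussed above: they make each update cheap, but for long patterns and large graphs they dominate memory consumption, which is what a generalized network structure lets the user tune.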
With Saccharomyces cerevisiae being a commonly used host organism for synthetic biology and biotechnology approaches, the work presented here aims at the development of novel tools to improve and facilitate pathway engineering and heterologous protein production in yeast. Initially, the multi-part assembly strategy AssemblX was established, which allows the fast, user-friendly and highly efficient construction of up to 25 units, e.g. genes, into a single DNA construct. To speed up complex assembly projects, starting from sub-gene fragments and resulting in mini-chromosome sized constructs, AssemblX follows a level-based approach: Level 0 stands for the assembly of genes from multiple sub-gene fragments; Level 1 for the combination of up to five Level 0 units into one Level 1 module; Level 2 for linkages of up to five Level 1 modules into one Level 2 module. This way, all Level 0 and subsequently all Level 1 assemblies can be carried out simultaneously. Individually planned, overlap-based Level 0 assemblies enable scar-free and sequence-independent assemblies of transcriptional units, without limitations in fragment number, size or content. Level 1 and Level 2 assemblies, which are carried out via predefined, computationally optimized homology regions, follow a standardized, highly efficient and PCR-free scheme. AssemblX follows a virtually sequence-independent scheme with no need for time-consuming domestication of assembly parts. To minimize the risk of human error and to facilitate the planning of assembly projects, especially for individually designed Level 0 constructs, the whole AssemblX process is accompanied by a user-friendly webtool. This webtool provides the user with an easy-to-use operating surface and returns a bench-protocol including all cloning steps. The efficiency of the assembly process is further boosted through the implementation of different features, e.g. ccdB counter selection and marker switching/reconstitution. 
Due to the design of homology regions and vector backbones the user can flexibly choose between various overlap-based cloning methods, enabling cost-efficient assemblies which can be carried out either in E. coli or yeast. Protein production in yeast is additionally supported by a characterized library of 40 constitutive promoters, fully integrated into the AssemblX toolbox. This provides the user with a starting point for protein balancing and pathway engineering. Furthermore, the final assembly cassette can be subcloned into any vector, giving the user the flexibility to transfer the individual construct into any host organism different from yeast.
As successful production of heterologous compounds generally requires a precise adjustment of protein levels or even manipulation of the host genome to e.g. inhibit unwanted feedback regulations, the optogenetic transcriptional regulation tool PhiReX was designed. In recent years, light induction was reported to enable easy, reversible, fast, non-toxic and nearly gratuitous regulation, thereby providing manifold advantages compared to conventional chemical inducers. The optogenetic interface established in this study is based on the photoreceptor PhyB and its interacting protein PIF3. Both proteins, derived from Arabidopsis thaliana, dimerize in a red/far-red light-responsive manner. This interaction depends on a chromophore, naturally not available in yeast. By fusing split proteins to both components of the optical dimerizer, active enzymes can be reconstituted in a light-dependent manner. For the construction of the red/far-red light sensing gene expression system PhiReX, a customizable synTALE-DNA binding domain was fused to PhyB, and a VP64 activation domain to PIF3. The synTALE-based transcription factor allows programmable targeting of any desired promoter region. The first, plasmid-based PhiReX version mediates chromophore- and light-dependent expression of the reporter gene, but required further optimization regarding its robustness, basal expression and maximum output. This was achieved by genome-integration of the optical regulator pair, by cloning the reporter cassette on a high-copy plasmid and by additional molecular modifications of the fusion proteins regarding their cellular localization. In combination, this results in a robust and efficient activation of cells over an incubation time of at least 48 h. Finally, to boost the potential of PhiReX for biotechnological applications, yeast was engineered to produce the chromophore. This overcomes the need to supply the expensive and photo-labile compound exogenously. 
The expression output mediated through PhiReX is comparable to the strong constitutive yeast TDH3 promoter and - in the experiments described here - clearly exceeds the commonly used galactose inducible GAL1 promoter.
The fast-developing field of synthetic biology enables the construction of complete synthetic genomes. The Synthetic Yeast Sc2.0 project is currently underway to redesign and synthesize the S. cerevisiae genome. As a prerequisite for the so-called "SCRaMbLE" system, all Sc2.0 chromosomes incorporate symmetrical target sites for Cre recombinase (loxPsym sites), enabling rearrangement of the yeast genome after induction of Cre with the toxic hormonal substance beta-estradiol. To overcome the safety concern linked to the use of beta-estradiol, a red light-inducible Cre recombinase, dubbed L-SCRaMbLE, was established in this study. L-SCRaMbLE was demonstrated to allow time- and chromophore-dependent recombination with reliable off-states when applied to a plasmid containing four genes of the beta-carotene pathway, each flanked by loxPsym sites. When directly compared to the original induction system, L-SCRaMbLE generated a larger variety of recombination events and lower basal activity. In conclusion, L-SCRaMbLE provides a promising and powerful tool for genome rearrangement.
The three tools developed in this study provide so far unmatched possibilities to tackle complex synthetic biology projects in yeast by addressing three different stages: fast and reliable biosynthetic pathway assembly; highly specific, orthogonal gene regulation; and tightly controlled synthetic evolution of loxPsym-containing DNA constructs.
The functioning of the surface water-groundwater interface as a buffer, filter and reactive zone is important for water quality, ecological health and the resilience of streams and riparian ecosystems. Solute and heat exchange across this interface is driven by the advection of water. Characterizing the flow conditions in the streambed is challenging, as flow patterns are often complex and multidimensional, driven by surface hydraulic gradients and groundwater discharge. This thesis presents the results of an integrated approach, ranging from the acquisition of field data and the development of analytical and numerical approaches for analysing vertical temperature profiles to detailed, fully-integrated 3D numerical modelling of water and heat flux at the reach scale. All techniques were applied to characterize the exchange flux between stream and groundwater, hyporheic flow paths and temperature patterns.
The study was conducted at a reach-scale section of the lowland Selke River, characterized by distinctive pool-riffle sequences, fluvial islands and gravel bars. Continuous time series of hydraulic heads and temperatures were measured at different depths in the river bank, in the hyporheic zone and within the river. Analysing the measured diurnal temperature variation in the riverbed sediments provided detailed information about the exchange flux between river and groundwater. Beyond one-dimensional vertical water flow in the riverbed sediment, hyporheic and parafluvial flow patterns were identified. Subsurface flow direction and magnitude around the fluvial islands and gravel bars at the study site strongly depended on the position around the geomorphological structures and on the river stage. Horizontal water flux in the streambed substantially affected the temperature patterns: at locations with substantial horizontal fluxes, the penetration depth of daily temperature fluctuations was reduced in comparison to purely vertical exchange conditions.
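Methods that infer exchange flux from diurnal temperature signals commonly rest on the one-dimensional conduction-advection equation for streambed temperature. As a sketch, with symbols as conventionally used in the heat-tracing literature (not notation taken from the thesis):

```latex
% 1D conduction-advection of heat in the streambed: T(z,t) is the
% temperature at depth z, q_z the vertical Darcy flux, kappa_e the
% effective thermal diffusivity, and rho_w c_w / rho c the volumetric
% heat capacities of water and of the saturated sediment, respectively.
\frac{\partial T}{\partial t}
  \;=\; \kappa_e \,\frac{\partial^2 T}{\partial z^2}
  \;-\; \frac{\rho_w c_w}{\rho c}\, q_z \,\frac{\partial T}{\partial z}
```

Fitting the observed damping of the daily amplitude with depth to solutions of this equation yields the vertical flux; deviations from purely vertical conditions, such as the horizontal fluxes observed at the site, alter the damping and hence the apparent penetration depth.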
The calibrated and validated 3D fully-integrated model of reach-scale water and heat fluxes across the river-groundwater interface was able to accurately represent the real system. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7 °C and an average Nash-Sutcliffe efficiency of 0.87. The simulation results showed that water and heat exchange at the surface water-groundwater interface is highly variable in space and time, with zones where daily temperature oscillations penetrate deep into the sediment and spots of nearly constant daily temperature following the average groundwater temperature. The average hyporheic flow path temperature was found to correlate strongly with the flow path residence time (flow path length) and the temperature gradient between river and groundwater. Despite the complexity of these processes, the simulation results allowed the derivation of a general empirical relationship between hyporheic residence times and temperature patterns. The presented results improve our understanding of the complex spatial and temporal dynamics of water flux and thermal processes within the shallow streambed. Understanding these links provides a general basis from which to assess hyporheic temperature conditions in river reaches.
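The two goodness-of-fit measures quoted above can be stated compactly; a minimal sketch with illustrative temperature series (invented for demonstration, not data from the thesis):

```python
# Sketch: mean absolute error (MAE) and Nash-Sutcliffe efficiency (NSE)
# for comparing simulated against observed streambed temperatures.

def mae(obs, sim):
    """Mean absolute error between observed and simulated values."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Illustrative diurnal temperature samples in degrees Celsius
obs = [10.2, 11.5, 13.0, 12.1, 10.8]
sim = [10.0, 11.9, 12.6, 12.4, 10.5]
print(round(mae(obs, sim), 3))  # average absolute deviation in °C
print(round(nse(obs, sim), 3))
```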
In this thesis, stochastic dynamics modelling the collective motion of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos as the number N of agents goes to infinity. The deterministic, mean-field kinetic model of Cucker and Smale for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to understanding the various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from the two-dimensional parabolic-elliptic Keller-Segel model for chemotaxis, for which existence and weak uniqueness are proven.
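For reference, the deterministic Cucker-Smale system that the stochastic dynamics build on can be written as follows (a standard textbook formulation; the communication rate ψ shown is one common choice, not necessarily the one used in the thesis):

```latex
% Cucker-Smale dynamics for N agents with positions x_i and velocities v_i;
% each agent aligns its velocity with the others at a rate psi that decays
% with distance.
\begin{aligned}
\dot{x}_i &= v_i, \\
\dot{v}_i &= \frac{1}{N}\sum_{j=1}^{N}
  \psi\!\left(\lVert x_j - x_i\rVert\right)\,(v_j - v_i),
  \qquad i = 1,\dots,N, \\
\psi(r) &= \frac{\lambda}{(1 + r^2)^{\gamma}},
  \qquad \lambda > 0,\ \gamma \ge 0 .
\end{aligned}
```

The stochastic variants studied in the first two chapters add random noise to these equations in different ways; flocking then concerns the alignment of the velocities v_i in long time, and propagation of chaos the limit N → ∞.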
The topic of this thesis is semantic search in the context of today's information management systems. These systems include intranets, Web 3.0 applications and many web portals that contain information in heterogeneous formats and structures. On the one hand, they hold data in structured form; on the other hand, they hold documents whose content is related to these data. These documents, however, are usually only partially structured or entirely unstructured. Travel portals, for example, describe the period, the destination and the price of a trip through structured data, while providing further information, such as descriptions of the hotel, the destination and excursions, in unstructured form.
Today's semantic search engines focus on finding knowledge either in structured form, also called fact search, or in semi- or unstructured form, usually referred to as semantic document search. A few search engines attempt to close the gap between these two approaches. Although they search structured and unstructured data simultaneously, they either evaluate them largely independently of each other or severely restrict the search options, for example by supporting only certain question patterns. As a result, the information available in the system is not fully exploited, and connections between individual contents of the respective information systems, as well as complementary information, fail to reach the user.
To close this gap, this thesis develops and investigates a new hybrid semantic search approach that combines structured and semi- or unstructured content throughout the entire search process. With this approach, not only are both facts and documents found; the connections that exist between the differently structured data are also exploited in every phase of the search and flow into the search results. If the answer to a query is not available entirely in structured form, as facts, or in unstructured form, as documents, this approach delivers a combination of the two. Taking differently structured content into account throughout the entire search process, however, places special demands on the search engine. It must be able to search facts and documents in dependence on each other, to combine them, and to rank the differently structured results appropriately. Furthermore, the complexity of the data must not be passed on to the end users. Rather, the presentation of the content must be understandable and easy to interpret, both when the query is formulated and when the results are presented.
The central question of this thesis is whether a hybrid approach can answer search queries on a given data basis better than semantic document search and fact search on their own, or better than a search that does not combine these approaches within the search process. The evaluations carried out from a system and a user perspective show that, by combining structured and unstructured content in the search process, the hybrid semantic search solution developed in this thesis delivers better answers than the above-mentioned methods and thus offers advantages over previous approaches. A user survey makes clear that hybrid semantic search is perceived as understandable and is preferred for heterogeneously structured data sets.
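To make the idea concrete, here is a minimal illustrative sketch of merging scored results from a fact (structured) index and a document (unstructured) index into one ranking. This is not the implementation evaluated in the thesis; the names, the example data and the linear weighting scheme are assumptions for illustration only:

```python
# Hypothetical hybrid ranker: items found by both the fact search and the
# document search receive a blended score, so answers supported by both
# kinds of evidence rise to the top.

def hybrid_rank(fact_hits, doc_hits, alpha=0.5):
    """Combine two lists of (item, score) pairs into one ranked list.
    alpha weights the structured (fact) evidence against the documents."""
    combined = {}
    for item, score in fact_hits:
        combined[item] = combined.get(item, 0.0) + alpha * score
    for item, score in doc_hits:
        combined[item] = combined.get(item, 0.0) + (1 - alpha) * score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Invented example: two hotels scored by a fact index and a document index
facts = [("Hotel Adler", 0.9), ("Hotel Sonne", 0.4)]
docs = [("Hotel Sonne", 0.8), ("Hotel Adler", 0.7)]
print(hybrid_rank(facts, docs))
```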
Estuarine marshes are ecosystems situated at the transition zone between land and water and are thus controlled by physical and biological interactions. Marsh vegetation offers important ecosystem services by filtering solid and dissolved substances from the water and providing habitat. By buffering a large part of the arriving flow velocity, attenuating wave energy and serving as erosion control for riverbanks, tidal marshes furthermore reduce the destructive effects of storm surges and storm waves and thus contribute to ecosystem-based shore protection. However, in many estuaries, extensive embankments, artificial bank protection, river dredging and agriculture threaten tidal marshes. Global warming might entail additional risks, such as changes in water levels, an increase in tidal amplitude and a resulting shift of the salinity zones. This can affect the dynamics of the shore and foreland vegetation, and vegetation belts can be narrowed or fragmented. Against this background, it is crucial to gain a better understanding of the processes underlying the spatio-temporal vegetation dynamics in brackish marshes. Furthermore, a better understanding of how plant-habitat relationships generate patterns in tidal marsh vegetation is vital to maintain ecosystem functions and to assess the response of marshes to environmental change as well as the success of engineering and restoration projects.
For this purpose, three research objectives were addressed within this thesis: (1) to explore the possibility of vegetation serving as self-adaptive shore protection by quantifying the reduction of current velocity in the vegetation belt and the morphological plasticity of a brackish marsh pioneer, (2) to disentangle the roles of abiotic factors and interspecific competition in species distribution and stand characteristics in brackish marshes, and (3) to develop a mechanistic vegetation model that helps to analyse the influence of habitat conditions on the spatio-temporal dynamics of tidal marsh vegetation. These aspects were investigated using a combination of field studies and statistical as well as process-based modelling.
To explore the possibility of vegetation serving as self-adaptive coastal protection, in the first study, we measured current velocity with and without living vegetation, recorded ramet density and plant thickness during two growing periods at two locations in the Elbe estuary and assessed the adaptive value of a larger stem diameter of plants at locations with higher mechanical stress by biomechanical measurements. The results of this study show that under non-storm conditions, the vegetation belt of the marsh pioneer Bolboschoenus maritimus is able to buffer a large proportion of the flow velocity. We were furthermore able to show that morphological traits of plant species are adapted to hydrodynamic forces by demonstrating a positive correlation between ramet thickness and cross-shore current. In addition, our measurements revealed that thicker ramets growing at the front of the vegetation belt have a significantly higher stability than ramets inside the vegetation belt. This self-adaptive effect improves the ability of B. maritimus to grow and persist in the pioneer zone and could provide an adaptive value in habitats with high mechanical stress.
In the second study, we assessed the distribution of the two marsh species and a set of stand characteristics, namely aboveground and belowground biomass, ramet density, ramet height and the percentage of flowering ramets. Furthermore, we collected information on several abiotic habitat factors to test their effect on plant growth and zonation with generalised linear models (GLMs). Our results demonstrate that flow velocity is the main factor controlling the distribution of Bolboschoenus maritimus and Phragmites australis. Additionally, inundation height and duration, as well as intraspecific competition, affect distribution patterns. This study furthermore shows that cross-shore flow velocity not only directly influences the distribution of the two marsh species, but also alters the plants’ occurrence relative to inundation height and duration. This suggests an effect of cross-shore flow velocity on their tolerance to inundation. The analysis of the measured stand characteristics revealed a negative effect of total flow velocity on all measured parameters of B. maritimus and thus confirmed our expectation that flow velocity is a decisive stressor influencing the growth of this species.
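A binomial GLM of the kind described above can be sketched as follows. This is a minimal illustration fitted by gradient ascent on the log-likelihood; the presence/absence data and the single-predictor setup are invented, not the thesis' actual models:

```python
# Minimal binomial GLM (logistic regression): probability of species
# presence as a function of one habitat factor, here flow velocity.
import math

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Fit P(presence) = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent
    on the Bernoulli log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Invented data: flow velocity (m/s) vs. presence (1) / absence (0)
flow = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.60]
pres = [1, 1, 1, 1, 0, 1, 0, 0]
b0, b1 = fit_logistic(flow, pres)
# A negative slope b1 means occurrence probability declines with flow
```

In practice such models would be fitted with a statistics package rather than hand-rolled gradient ascent; the sketch only shows the structure of the model.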
To gain a better understanding of the processes and habitat factors influencing the spatio-temporal vegetation dynamics in brackish marshes, I built a spatially explicit, mechanistic model applying a pattern-oriented modelling approach. A sensitivity analysis of the parameters of this dynamic habitat-macrophyte model, HaMac, suggests that rhizome growth is the key process for the lateral dynamics of brackish marshes. Of the analysed habitat factors, P. australis patterns were mainly influenced by flow velocity, while competition with P. australis was of key importance for the belowground biomass of B. maritimus. Concerning vegetation dynamics, the model results emphasise that without the effect of flow velocity the B. maritimus vegetation belt would expand into the tidal flat at locations with present vegetation recession, suggesting that flow velocity is the main reason for vegetation recession at exposed locations.
Overall, the results of this thesis demonstrate that brackish marsh vegetation considerably contributes to flow reduction under average flow conditions and can hence be a valuable component of shore-protection schemes. At the same time, the distribution, growth and expansion of tidal marsh vegetation are substantially influenced by flow. Altogether, this thesis provides a clear step forward in understanding plant-habitat interactions in tidal marshes. Future research should integrate studies of vertical marsh accretion with research on the factors that control the lateral position of marshes.
Information on the contemporary in-situ stress state of the earth’s crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete, and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models’ spatial resolution is limited, and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation for Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey based on different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach to the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70 × 70 × 10 km³) and local (10 × 10 × 10 km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model.
Furthermore, I quantify the uncertainties in the model results, which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and the low number of available stress magnitude data records for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model and show that the key controls on the stress tensor rotation are the permeability as the decisive factor, the injection rate, and the initial differential stress. In particular, for enhanced geothermal systems with a low permeability, large rotations of the stress tensor are indicated. According to these findings, the estimation of the initial differential stress in a reservoir is possible, provided that the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor in the potential for induced seismicity on pre-existing faults, since the reorientation of the stress field changes the optimal orientation of faults.
The existence of diverse and active microbial ecosystems in the deep subsurface – a biosphere originally considered devoid of life – was discovered in multiple microbiological studies. However, most of these studies are restricted to marine ecosystems, while our knowledge about the microbial communities in the deep subsurface of lake systems and their potential to adapt to changing environmental conditions is still fragmentary. This doctoral thesis aims to build up a unique data basis providing the first detailed high-throughput characterization of the deep biosphere of lacustrine sediments, and to emphasize how important it is to differentiate between the living and the dead microbial community in deep-biosphere studies.
In this thesis, up to 3.6 Ma old sediments (up to 317 m deep) of the El’gygytgyn Crater Lake were examined, which represents the oldest terrestrial climate record of the Arctic. Combining next-generation sequencing with detailed geochemical characteristics and other environmental parameters, the microbial community composition was analyzed with regard to changing climatic conditions from 3.6 Ma to 1.0 Ma ago (Pliocene and Pleistocene). DNA was successfully extracted from all investigated sediments, and a surprisingly diverse (6,910 OTUs) and abundant microbial community in the El’gygytgyn deep sediments was revealed. The bacterial abundance (10³-10⁶ 16S rRNA copies g⁻¹ sediment) was up to two orders of magnitude higher than the archaeal abundance (10¹-10⁵) and fluctuated with the Pleistocene glacial/interglacial cyclicality. Interestingly, a strong increase in microbial diversity with depth was observed (approximately 2.5 times higher diversity in Pliocene sediments compared to Pleistocene sediments). The increase in diversity with depth in Lake El’gygytgyn is most probably caused by higher sedimentary temperatures towards the deep sediment layers as well as by an enhanced temperature-induced intra-lake bioproductivity and higher input of allochthonous organic-rich material during Pliocene climatic conditions. Moreover, the microbial richness parameters follow the general trends of the paleoclimatic parameters, such as paleo-temperature and paleo-precipitation. The most abundant bacterial representatives in the El’gygytgyn deep biosphere are affiliated with the phyla Proteobacteria, Actinobacteria, Bacteroidetes, and Acidobacteria, which are also commonly distributed in the surrounding permafrost habitats. The predominant taxon was the halotolerant genus Halomonas (on average 60% of the total reads per sample).
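One common way to quantify OTU-level diversity of the kind compared above is the Shannon index H'. A minimal sketch (the OTU count tables below are invented for illustration, not El’gygytgyn data):

```python
# Shannon diversity H' = -sum(p_i * ln p_i) over the relative
# abundances p_i of the OTUs in a sample.
import math

def shannon(counts):
    """Shannon diversity index for a list of OTU read counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A community dominated by one OTU scores lower than an even community
dominated = [80, 10, 5, 5]
even = [25, 20, 20, 15, 10, 10]
assert shannon(even) > shannon(dominated)
```

Higher values indicate both more OTUs and a more even distribution of reads among them, which is why an even community of six OTUs scores higher than a strongly dominated one of four.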
Additionally, this doctoral thesis focuses on the live/dead differentiation of microbes in cultures and environmental samples. Since established methods (e.g., fluorescence in situ hybridization, RNA analyses) are not applicable to the challenging El’gygytgyn sediments, two newer methods were adapted to distinguish between DNA from living cells and free (extracellular, dead) DNA: the propidium monoazide (PMA) treatment and a cell separation adapted for low amounts of DNA. The applicability of the DNA-intercalating dye PMA was successfully evaluated for masking free DNA in different cultures of methanogenic archaea, which play a major role in the global carbon cycle. Moreover, an optimal procedure for simultaneously treating bacteria and archaea was developed, using 130 µM PMA and 5 min of photo-activation with blue LED light, which is also applicable to sandy environmental samples with a particle load of ≤ 200 mg mL⁻¹. It was demonstrated that the soil texture has a strong influence on the PMA treatment in particle-rich samples and that silt- and clay-rich samples in particular (e.g., El’gygytgyn sediments) lead to an insufficient shielding of free DNA by PMA. Therefore, a cell separation protocol was used to distinguish between DNA from living cells (intracellular DNA) and extracellular DNA in the El’gygytgyn sediments. When these two DNA pools were compared with a total DNA pool extracted with a commercial kit, significant differences in the microbial composition of all three pools (mean distance of relative abundance: 24.1%, mean distance of OTUs: 84.0%) were discovered. In particular, the total DNA pool covers significantly fewer taxa than the cell-separated DNA pools and only inadequately represents the living community. Moreover, individual redundancy analyses revealed that the microbial communities of the intra- and extracellular DNA pools are driven by different environmental factors.
The living community is mainly influenced by life-dependent parameters (e.g., the sedimentary matrix, water availability), while the extracellular DNA depends on the biogenic silica content. The different community-shaping parameters, and the fact that a redundancy analysis of the total DNA pool explains significantly less variance of the microbial community, indicate that the total DNA represents a mixture of signals from the living and the dead microbial community.
This work provides the first fundamental data basis on the diversity and distribution of microbial deep-biosphere communities of a lake system over several million years. Moreover, it demonstrates the substantial importance of extracellular DNA in old sediments. These findings may strongly influence future environmental community analyses, where the application of live/dead differentiation avoids misinterpretations caused by a failed extraction of the living microbial community or by an overestimation of past community diversity in total DNA extraction approaches.