Refine
Year of publication
- 2012 (292)
Document Type
- Doctoral Thesis (292)
Language
- English (158)
- German (129)
- Spanish (3)
- Italian (1)
- Multiple languages (1)
Keywords
- Korrosion (3)
- Nanopartikel (3)
- corrosion (3)
- Blickbewegungen (2)
- Computationale Modellierung (2)
- Evaluation (2)
- Fernerkundung (2)
- Fluoreszenzmikroskopie (2)
- Klimawandel (2)
- Lake sediments (2)
Institute
- Institut für Biochemie und Biologie (63)
- Institut für Chemie (44)
- Institut für Physik und Astronomie (35)
- Wirtschaftswissenschaften (24)
- Institut für Geowissenschaften (19)
- Institut für Informatik und Computational Science (17)
- Institut für Romanistik (12)
- Philosophische Fakultät (9)
- Extern (8)
- Sozialwissenschaften (7)
- Öffentliches Recht (7)
- Bürgerliches Recht (6)
- Department Psychologie (6)
- Institut für Ernährungswissenschaft (6)
- Strafrecht (6)
- Institut für Mathematik (5)
- Department Erziehungswissenschaft (4)
- Department Linguistik (4)
- Historisches Institut (4)
- Institut für Germanistik (4)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (3)
- Institut für Umweltwissenschaften und Geographie (3)
- Institut für Philosophie (2)
- Institut für Slavistik (2)
- Department Sport- und Gesundheitswissenschaften (1)
- Institut für Künste und Medien (1)
- Institut für Religionswissenschaft (1)
Wege zum Reichtum
(2012)
By which routes do private households in Germany become wealthy, and under which individual and structural circumstances do these processes unfold? Melanie Böwing-Schmalenbrock examines societal distribution processes and points to a decisive condition of modern societies: the individual's chance of social participation and a self-determined life. The emergence of wealth proves to be a complex interplay of different sources of wealth; the considerable relevance of work and inheritance becomes clear, as does the decisive role of personality.
The booming economic giant China is attracting ever more attention worldwide from politics, business, academia, and the public. At the same time, however, the country's internal social problems and large regional disparities are also being addressed, and reports on the situation of migrant workers in the country's major cities are accumulating. Although scholarship has taken up this topic as well, there is still a lack of studies, in particular ones that examine this phenomenon thoroughly and systematically with empirical surveys "on site". Ling He's dissertation steps into this gap. At its center is the everyday life of labor migrants in Beijing. It covers the motives for migration; the structural and individual framework conditions for the labor migrants and their families; working and living conditions; the accumulation of financial resources; and the construction of social networks and the migrants' integration in Beijing. The dissertation also addresses the benefits that the employment of roughly 3 million labor migrants creates for the city of roughly 17 million inhabitants, and it points to the social and economic problems connected with labor migration that would need to be solved.
In this work, spherical gold nanoparticles (NPs) with diameters larger than ~2 nm, gold quantum dots (QDs) with diameters smaller than ~2 nm, and gold nanorods (NRs) of different lengths were synthesized and optically characterized. In addition, two new synthesis routes for the preparation of thermosensitive gold QDs were developed. Spherical gold NPs show a plasmon band at ~520 nm, which originates from the collective oscillation of electrons. Owing to their anisotropic shape, gold NRs exhibit two plasmon bands: a transverse plasmon band at ~520 nm and a longitudinal plasmon band whose position depends on the length-to-diameter ratio of the NRs. Gold QDs possess no plasmon band because their electrons are subject to quantum confinement. However, gold QDs do show photoluminescence (PL) due to their discrete energy levels and band gap. The synthesized gold QDs exhibit broadband luminescence in the range of ~500-800 nm, with the luminescence properties (emission peak, quantum yield, lifetimes) depending strongly on the preparation conditions and the surface ligands. PL in gold QDs is a very complex phenomenon and presumably arises from singlet and triplet states. Gold NRs and gold QDs could be incorporated into various polymers, for example cellulose triacetate. Polymer nanocomposites containing gold NRs were, for the first time, mechanically drawn under defined conditions to obtain films with optically anisotropic (direction-dependent) properties. The temperature behavior of gold NRs and gold QDs was also investigated. It could be shown that a local variation of the size and shape of gold NRs in polymer nanocomposites can be achieved by raising the temperature to 225-250 °C.
The PL of the gold QDs proved to be strongly temperature-dependent: the PL quantum yield (QY) of the samples roughly doubled to almost 30 % on cooling (-7 °C) and was almost completely quenched on heating to 70 °C. It was demonstrated that the alkyl chain length of the surface ligand influences the temperature stability of the gold QDs. Furthermore, several novel optically anisotropic security labels based on gold NRs and thermosensitive security labels based on gold QDs were developed. Gold NRs and QDs also appear to be of great interest for optoelectronics (e.g., data storage) and medicine (e.g., cancer diagnostics and therapy).
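The aspect-ratio dependence of the longitudinal plasmon band can be illustrated numerically. The sketch below uses a widely cited empirical linear fit for gold nanorods in aqueous media (an external approximation, not a relation taken from this thesis); the function name is illustrative.

```python
def longitudinal_plasmon_nm(aspect_ratio):
    """Approximate longitudinal surface-plasmon peak (nm) of a gold nanorod
    with the given length-to-diameter ratio, in water (empirical linear fit,
    roughly lambda_max = 95 * AR + 420)."""
    return 95.0 * aspect_ratio + 420.0

# At aspect ratio 1 (a sphere) the fit returns ~515 nm, consistent with the
# ~520 nm band of spherical particles; longer rods shift strongly to the red.
for ar in (1.0, 2.0, 3.0, 4.0):
    print(ar, longitudinal_plasmon_nm(ar))
```

The transverse band, by contrast, stays near ~520 nm regardless of rod length, which is why only the longitudinal band is useful for tuning the optical response.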
Leaf senescence is an active process required for plant survival, and it is flexibly controlled, allowing plant adaptation to environmental conditions. Although senescence is largely an age-dependent process, it can be triggered by environmental signals and stresses. Leaf senescence coordinates the breakdown and turnover of many cellular components, allowing a massive remobilization and recycling of nutrients from senescing tissues to other organs (e.g., young leaves, roots, and seeds), thus enhancing the fitness of the plant. Such metabolic coordination requires a tight regulation of gene expression. One important mechanism for regulating gene expression operates at the transcriptional level via transcription factors (TFs). The NAC TF family (NAM, ATAF, CUC) includes various members that show elevated expression during senescence, including ORE1 (ANAC092/AtNAC2) among others. ORE1 was first reported in a screen for mutants with delayed senescence (oresara1, 2, 3, and 11). It was named after the Korean word “oresara,” meaning “long-living,” and abbreviated to ORE1, 2, 3, and 11, respectively. Although the pivotal role of ORE1 in controlling leaf senescence has recently been demonstrated, the underlying molecular mechanisms and the pathways it regulates are still poorly understood. To unravel the signaling cascade through which ORE1 exerts its function, we analyzed particular features of the regulatory pathways upstream and downstream of ORE1. We identified characteristic spatial and temporal expression patterns of ORE1 that are conserved in Arabidopsis thaliana and Nicotiana tabacum and that link ORE1 expression to senescence as well as to salt stress. We showed that ORE1 positively regulates natural and dark-induced senescence. Molecular characterization of the ORE1 promoter in silico and experimentally suggested a role of the 5’UTR in mediating ORE1 expression. ORE1 is a putative substrate of a calcium-dependent protein kinase named CKOR (unpublished data).
Promising data revealed a positive regulation of putative ORE1 targets by CKOR, suggesting that phosphorylation of ORE1 is required for its regulation. Additionally, as part of the ORE1 upstream regulatory pathway, we identified the NAC TF ATAF1, which was able to transactivate the ORE1 promoter in vivo. Expression studies using chemically inducible ORE1 overexpression lines and transactivation assays employing leaf mesophyll cell protoplasts provided information on target genes whose expression was rapidly induced upon ORE1 induction. A set of target genes was thus established and designated as early-responding genes in the ORE1 regulatory network. The consensus binding site (BS) of ORE1 was characterized. Analysis of some putative targets revealed the presence of ORE1 BSs in their promoters and the in vitro and in vivo binding of ORE1 to these promoters. Among these putative target genes, BIFUNCTIONAL NUCLEASE I (BFN1) and VND-Interacting2 (VNI2) were further characterized. The expression of BFN1 was found to depend on the presence of ORE1. Our results provide convincing support for a role of BFN1 as a direct target of ORE1. Characterization of VNI2 in age-dependent and stress-induced senescence revealed ORE1 as a key upstream regulator, since it can bind and activate VNI2 expression in vivo and in vitro. Furthermore, VNI2 was able to promote or delay senescence depending on the presence of an activation domain located in its C-terminal region. The plasticity of this gene might involve alternative splicing (AS) to regulate its function in different organs and at different developmental stages, particularly during senescence. A model is proposed for the molecular mechanism governing the dual role of VNI2 during senescence.
The development of new processes for returning palladium from waste materials, such as spent automotive exhaust catalysts, to the materials cycle is desirable from both an ecological and an economic perspective. In this work, new liquid-liquid and solid-liquid extraction agents were developed with which palladium(II) can be recovered from an oxidizing hydrochloric leaching solution that contains, besides palladium, platinum and rhodium as well as numerous base metals. The new extraction agents - unsaturated monomeric 1,2-dithioethers and oligomeric ligand mixtures with vicinal dithioether units - are, in contrast to many extraction agents reported in the literature, highly selective. Owing to their geometric and electronic preorganization, they form stable square-planar chelate complexes with palladium(II). For the development of the liquid-liquid extraction agent, a series of unsaturated 1,2-dithioether ligands was prepared, based on a rigid 1,2-dithioethene unit embedded in a varying electron-withdrawing backbone and bearing polar side chains. In addition to determining the crystal structures of the ligands and of their palladium dichloride complexes, their electro- and photochemical properties, complex stability, and solution behavior were investigated. Liquid-liquid extraction studies showed that some of the new ligands are superior to industrially used extraction agents in reaching the extraction equilibrium faster.
Based on criteria decisive for industrial applicability - good resistance to oxidation, a high extraction yield (even at high hydrochloric acid concentrations of the feed solution), fast extraction kinetics, and a high selectivity for palladium(II) - a suitable liquid-liquid extraction agent was selected from the series of six ligands: 1,2-bis(2-methoxyethylthio)benzene. With this ligand, a practice-oriented liquid-liquid extraction system was developed. After stepwise adaptation of the aqueous phase from a model solution to the oxidizing hydrochloric leaching solution, a suitable industrially usable solvent (1,2-dichlorobenzene) and an efficient stripping agent (0.5 M thiourea in 0.1 M HCl) were selected. The high palladium(II) selectivity of this liquid-liquid extraction system was verified, and its reusability and practical suitability were demonstrated. It was further shown that, upon contact with oxidizing media, small amounts of the thioether sulfoxide 1-(2-methoxyethylsulfinyl)-2-(2-methoxyethylthio)benzene form from the dithioether 1,2-bis(2-methoxyethylthio)benzene. This sulfoxide is protonated in acidic media and accelerates the extraction like a phase-transfer catalyst, without lowering the palladium(II) selectivity. The crystal structure of the palladium dichloride complex of the thioether sulfoxide shows that the unprotonated ligand coordinates palladium(II), analogously to the dithioether, via the chelating sulfur atoms. Various mixtures of oligo(dithioether) ligands and the monomeric ligand 1,2-bis(2-methoxyethylthio)benzene served as extraction agents for solid-liquid extraction experiments with SIRs (solvent-impregnated resins) and were adsorbed for this purpose on hydrophilic silica gel and on organophilic Amberlite® XAD 2.
The oligo(dithioether) ligands are based on 1,2-dithiobenzene or 1,2-dithiomaleonitrile units linked by tris(oxyethylene)ethylene or trimethylene bridges. Batch experiments showed that structural differences - the type of chelating unit, the type of bridging chains, and the support material - affect the extraction yields, the extraction kinetics, and the loading capacity. The silica-gel-based SIRs reach the extraction equilibrium much faster than those based on Amberlite® XAD 2. However, in contrast to silica gel, the extraction agents adhere permanently to Amberlite® XAD 2. In hydrochloric media, the 1,2-dithiobenzene derivatives are better suited as extraction agents than the 1,2-dithiomaleonitrile derivatives. Column experiments with the oxidizing hydrochloric leaching solution and reusable Amberlite® XAD 2-based SIRs impregnated with 1,2-dithiobenzene derivatives showed that very low pump rates are required to achieve high loading capacities. Nevertheless, the good palladium(II) selectivity of these solid-phase materials could be demonstrated. However, in contrast to the eluates obtained from liquid-liquid extraction, these eluates also contained small amounts of platinum, aluminum, iron, and lead in addition to palladium.
In industrialized economies such as the European countries, unemployment rates are very responsive to the business cycle, and a significant share of the unemployed remain jobless for more than one year. To fight cyclical and long-term unemployment, countries spend significant shares of their budgets on Active Labor Market Policies (ALMP). To improve the allocation and design of ALMP, it is essential for policy makers to have reliable evidence on the effectiveness of such programs. Although the number of studies has increased during the last decades, policy makers still lack evidence on innovative programs and on specific subgroups of the labor market. Using Germany as a case study, the dissertation aims to contribute in this regard by providing new evidence on start-up subsidies, marginal employment, and programs for unemployed youth. The idea behind start-up subsidies is to encourage unemployed individuals to exit unemployment by starting their own business. Compared to traditional ALMP programs, these schemes have the advantage that the participant not only escapes unemployment but might also generate additional jobs for other individuals. Considering two distinct start-up subsidy programs, the dissertation adds three substantial aspects to the literature. First, the programs are effective in improving the employment and income situation of participants compared to non-participants in the long run. Second, the analysis of effect heterogeneity reveals that the programs are particularly effective for disadvantaged groups in the labor market, such as low-educated or low-qualified individuals, and in regions with unfavorable economic conditions. Third, the analysis considers the effectiveness of start-up programs for women. Due to stronger preferences for flexible working hours and the limited supply of part-time jobs, unemployed women often face more difficulties integrating into dependent employment.
It can be shown that start-up subsidy programs are very promising, as unemployed women become self-employed, which gives them more flexibility to reconcile work and family. Overall, the results suggest that promoting self-employment among the unemployed is a sensible strategy to fight unemployment by removing labor market barriers for disadvantaged groups and integrating them sustainably into the labor market. The next chapter of the dissertation considers the impact of marginal employment on the labor market outcomes of the unemployed. Unemployed individuals in Germany are allowed to earn additional income during unemployment without suffering a reduction in their unemployment benefits. These additional earnings usually come from so-called marginal employment, that is, employment below a certain income level subject to reduced payroll taxes (also known as a “mini-job”). The dissertation provides an empirical evaluation of the impact of marginal employment on unemployment duration and subsequent job quality. The results suggest that being marginally employed during unemployment has no significant effect on unemployment duration but extends subsequent employment duration. Moreover, it can be shown that taking up marginal employment is particularly effective for the long-term unemployed, leading to higher job-finding probabilities and stronger job stability. Mini-jobs thus appear to be an effective instrument for helping long-term unemployed individuals find (stable) jobs, which is particularly relevant given the persistently high shares of long-term unemployed in European countries. Finally, the dissertation provides an empirical evaluation of the effectiveness of ALMP programs in improving the labor market prospects of unemployed youth. Youths are generally considered a population at risk, as they have weaker search skills and little work experience compared to adults.
This results in above-average turnover rates between jobs and unemployment among youths, which are particularly sensitive to economic fluctuations. Countries therefore spend significant resources on ALMP programs to fight youth unemployment. However, little is known so far about the effectiveness of ALMP for unemployed youth, and for Germany no comprehensive quantitative analysis exists at all. Considering seven different ALMP programs, the results show an overall positive picture with respect to post-treatment employment probabilities for all measures under scrutiny except job creation schemes. With respect to effect heterogeneity, it can be shown that almost all programs particularly improve the labor market prospects of youths with high levels of pre-treatment schooling. Furthermore, youths assigned to the most successful employment measures have much better characteristics in terms of their pre-treatment employment chances than non-participants. The program assignment process therefore seems to favor individuals for whom the measures are most beneficial, indicating a lack of ALMP alternatives that could benefit low-educated youths.
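The evaluations summarized above compare program participants with non-participants. The abstract does not name the estimator, but a standard tool in this evaluation literature is propensity-score matching; the following toy sketch (all names and numbers are illustrative assumptions, not the dissertation's data or code) shows the core idea of a nearest-neighbor matching estimate of the average treatment effect on the treated (ATT).

```python
def att_nearest_neighbor(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) tuples.
    For each treated unit, match the control with the closest score
    and average the outcome differences (ATT estimate)."""
    diffs = []
    for score, outcome in treated:
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

# Toy data: (propensity score, employed one year later: 1/0)
treated = [(0.80, 1), (0.60, 1), (0.70, 0)]
controls = [(0.79, 0), (0.62, 1), (0.50, 0), (0.71, 0)]
print(att_nearest_neighbor(treated, controls))
```

Real applications add replacement rules, calipers, and common-support checks; the sketch only conveys why comparable non-participants serve as the counterfactual.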
Tectonic and geological processes on Earth often result in structural anisotropy of the subsurface, which can be imaged by various geophysical methods. To achieve appropriate and realistic Earth models for interpretation, inversion algorithms have to allow for an anisotropic subsurface. Within the framework of this thesis, I analyzed a magnetotelluric (MT) data set from the Cape Fold Belt in South Africa. This data set exhibits strong indications of crustal anisotropy, e.g. MT phases out of the expected quadrant, which cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this obstacle, I developed a two-dimensional inversion method for reconstructing anisotropic electrical conductivity distributions. The MT inverse problem is in general a non-linear and ill-posed minimization problem with many degrees of freedom: in the isotropic case, an electrical conductivity value has to be assigned to each cell of a large grid approximating the Earth's subsurface, so that a grid of 100 x 50 cells, for example, results in 5,000 unknown model parameters. In the anisotropic scenario, the number of parameters increases sixfold, since the single conductivity value becomes a symmetric, real-valued tensor, while the number of data remains unchanged. To invert successfully for anisotropic conductivities and to overcome the non-uniqueness of the solution of the inverse problem, it is necessary to impose appropriate constraints on the class of allowed models. This becomes even more important as MT data are not equally sensitive to all anisotropic parameters. In this thesis, I developed an algorithm in which the solution of the anisotropic inversion problem is calculated by minimizing a global penalty functional consisting of three terms: the data misfit, the model roughness constraint, and the anisotropy constraint.
For comparison, an isotropic approach minimizes only the first two terms. The newly defined anisotropy term measures the sum of the squared differences of the model's principal conductivity values. The basic idea of this constraint is straightforward: if an isotropic model is already adequate to explain the data, there is no need to introduce electrical anisotropy at all. To ensure a successful inversion, appropriate trade-off parameters, also known as regularization parameters, have to be chosen for the different model constraints. Synthetic tests show that fixed trade-off parameters usually cause the inversion to end up with either a smooth model with a large RMS error or a rough model with a small RMS error. Relaxing the regularization parameters after each successful inversion iteration results in a smoother inversion model and better convergence, and proved to be a practical way of selecting the trade-off parameters. In general, the proposed inversion method is adequate for resolving the principal conductivities defined in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the predefined strike direction, only the corresponding effective conductivities, i.e. the projections of the principal conductivities onto the model coordinate axes, can be resolved, and the information about the rotation angles is lost. Finally, the MT data from the Cape Fold Belt in South Africa were analyzed. The data exhibit an area (> 10 km) where MT phases above 90 degrees occur. This part of the data cannot be modeled by standard isotropic procedures and hence cannot be properly interpreted. The proposed inversion method, however, could not reproduce the anomalously large phases as desired, because the information about the rotation angles is lost.
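Schematically, the global penalty functional described above can be written as follows (a sketch; the trade-off symbols $\lambda$ and $\mu$ and the per-cell principal conductivities $\sigma_{1,i}, \sigma_{2,i}, \sigma_{3,i}$ are notational assumptions, not the thesis's own symbols):

```latex
\Phi(\mathbf{m}) =
\underbrace{\Phi_d(\mathbf{m})}_{\text{data misfit}}
+ \lambda\,\underbrace{\Phi_r(\mathbf{m})}_{\text{model roughness}}
+ \mu \underbrace{\sum_{i}\Big[(\sigma_{1,i}-\sigma_{2,i})^2
  + (\sigma_{1,i}-\sigma_{3,i})^2
  + (\sigma_{2,i}-\sigma_{3,i})^2\Big]}_{\text{anisotropy constraint}}
```

A large $\mu$ drives the principal conductivities in each cell toward equality, so anisotropy is introduced only where the data demand it; the isotropic approach corresponds to minimizing the first two terms alone.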
MT phases outside the first quadrant are usually produced by several anisotropic anomalies with oblique anisotropy strikes. To meet this challenge, the algorithm needs further development. Forward modeling studies with the MT data have nevertheless shown that a highly conductive surface heterogeneity in combination with a mid-crustal electrically anisotropic zone is required to fit the data. According to known geological and tectonic information, the mid-crustal zone is interpreted as a deep aquifer related to the fractured Table Mountain Group rocks in the Cape Fold Belt.
Alongside chemotherapy and surgical removal, radiotherapy is the most powerful weapon against malignant tumors in cancer medicine. After cardiovascular diseases, cancer is the second most frequent cause of death in the Western world, and prostate cancer is now the most common cancer among men. Despite technological advances in radiological procedures, a relapse can still occur many years after radiotherapy, which can partly be attributed to the high resistance of individual transformed cells of the local tumor. Although modern radiobiology has shed light on many aspects of these resistance mechanisms, questions remain largely unanswered, especially regarding the temporal response of a tumor to ionizing radiation, because system-wide investigations are scarce. As cell models, four prostate cancer cell lines (PC3, DuCaP, DU-145, RWPE-1) with different radiation sensitivities were cultivated and tested for their survival after ionizing irradiation using trypan blue and MTT viability assays. Proliferative capacity was determined with a colony formation assay. The radiation-resistant PC3 cell line and the radiation-sensitive DuCaP cell line showed the largest differences in radiation sensitivity. On the basis of these results, these two cell lines were selected in order to identify, from their transcriptome-wide gene expression, potential markers for predicting the efficacy of radiotherapy. Furthermore, a time-series experiment was carried out with the PC3 cell line, in which mRNA was quantified by high-throughput sequencing at 8 time points after irradiation with 1 Gy, in order to investigate the dynamically time-shifted gene expression behavior underlying the resistance mechanisms.
By setting a fold-change threshold combined with a p-value < 0.01, 730 significantly differentially expressed genes were identified among 10,966 active genes, of which 305 are expressed more strongly in the PC3 and 425 more strongly in the DuCaP cell line. Among these 730 genes are many stress-associated genes, such as the two transmembrane protein genes CA9 and CA12. By computing a network score, interesting categories and networks were derived from the GO and KEGG databases; in particular, the GO category aldehyde dehydrogenase [NAD(P)+] activity (GO:0004030) and the KEGG pathway of O-glycan biosynthesis (hsa00512) stood out as relevant networks. A further interaction analysis identified two promising networks with the transcription factors JUN and FOS as central elements. To better understand the dynamically time-shifted response of the radiation-resistant PC3 cell line to ionizing radiation, interesting insights were obtained from the 10,840 expressed genes and their expression profiles across the 8 time points. While global gene expression is rapidly down-regulated within 30 minutes (00:00 - 00:30) after irradiation, the three subsequent time intervals (00:30 - 01:03; 01:03 - 02:12; 02:12 - 04:38) show specific expression increases that trigger the activation of protective networks, such as the up-regulation of DNA repair systems or cell-cycle arrest. In the final three time intervals (04:38 - 09:43; 09:43 - 20:25; 20:25 - 42:35), induction and suppression are again balanced, while the absolute changes in gene expression increase.
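The selection rule described above (a fold-change threshold combined with p < 0.01) can be sketched as a simple filter. The gene names and numbers below are illustrative toy data, not the thesis's results.

```python
import math

def select_de_genes(expr_a, expr_b, p_values, fc_threshold=2.0, p_cutoff=0.01):
    """Return genes whose |log2 fold change| between conditions a and b
    exceeds log2(fc_threshold) and whose p-value is below p_cutoff,
    split by the direction of the change."""
    up_in_a, up_in_b = [], []
    for gene in expr_a:
        fc = math.log2(expr_a[gene] / expr_b[gene])
        if abs(fc) >= math.log2(fc_threshold) and p_values[gene] < p_cutoff:
            (up_in_a if fc > 0 else up_in_b).append(gene)
    return up_in_a, up_in_b

# Toy expression values per cell line (arbitrary units) and p-values
expr_pc3 = {"CA9": 120.0, "CA12": 90.0, "JUN": 10.0, "FOS": 11.0}
expr_ducap = {"CA9": 20.0, "CA12": 15.0, "JUN": 40.0, "FOS": 10.5}
pvals = {"CA9": 0.001, "CA12": 0.002, "JUN": 0.005, "FOS": 0.5}

up_pc3, up_ducap = select_de_genes(expr_pc3, expr_ducap, pvals)
print(up_pc3)    # genes higher in PC3
print(up_ducap)  # genes higher in DuCaP
```

FOS is excluded twice over in this toy run: its fold change is negligible and its p-value misses the cutoff, mirroring how both criteria must hold simultaneously.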
Comparing gene expression shortly before irradiation with the last time point (00:00 - 42:53) yields the largest number of differentially expressed genes, 2,670, corresponding to a massive, system-wide change in gene expression. Signaling pathways such as the ATM regulation of the cell cycle and apoptosis, the NRF2 pathway after oxidative stress, and the DNA repair mechanisms of homologous recombination, non-homologous end joining, mismatch repair, base excision repair, and strand excision repair play a central role in the cellular response. Also highly interesting are the strong activities of RNA-driven events, in particular of small nucleolar RNAs and pseudouridine processes. These RNA-modifying networks thus appear to exert a previously unknown functional and protective influence on cell survival after ionizing irradiation. All of these protective networks, with their time-specific interactions, are essential for cell survival after oxidative stress and reveal a complex but well-concerted interplay of many individual components in a system-wide program.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. To deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g. model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which raise the level of abstraction, and model operations are first-class entities of software development because they raise the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are essentially caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies consists of applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity.
Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs, and it is a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. Moreover, it is a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for specifying decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of composition: data-flow composition and context composition. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as the context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations to be composed do not need to address any composition concerns.
The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
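The data-flow style of composition described above can be sketched as a small interpreter. The following Python snippet is a hypothetical illustration, not the thesis's implementation; all names (`Megamodel`, the toy operations) are invented here. Operations are coupled only by the models they read and write, and an execution step re-applies them until no output changes.

```python
from typing import Callable, Dict, List, Tuple

class Megamodel:
    """Registry of models and of model operations wired by model names."""
    def __init__(self) -> None:
        self.models: Dict[str, object] = {}
        self.operations: List[Tuple[list, str, Callable]] = []

    def add_operation(self, inputs: list, output: str, fn: Callable) -> None:
        # An operation is coupled to others only via its input/output models.
        self.operations.append((inputs, output, fn))

    def execute(self) -> None:
        # Re-apply operations whenever all their input models exist;
        # repeat until no operation produces a new or changed output.
        changed = True
        while changed:
            changed = False
            for inputs, output, fn in self.operations:
                if all(i in self.models for i in inputs):
                    result = fn(*(self.models[i] for i in inputs))
                    if self.models.get(output) != result:
                        self.models[output] = result
                        changed = True

# Toy chain: requirements model -> design model -> generated code.
mm = Megamodel()
mm.models["requirements"] = {"feature": "login"}
mm.add_operation(["requirements"], "design",
                 lambda r: {"component": r["feature"].capitalize() + "Service"})
mm.add_operation(["design"], "code",
                 lambda d: f"class {d['component']}: pass")
mm.execute()
print(mm.models["code"])  # artifact derived through the operation network
```

A change to the `requirements` model followed by another `execute()` would automatically re-apply the dependent operations, which is the maintenance aspect the approach automates.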
This thesis deals with the synthesis and characterization of organo-soluble thiophene- and benzodithiophene-based materials and their application as active hole-transporting semiconductor layers in field-effect transistors. In the first part of the thesis, a new comonomer unit for the synthesis of thiophene-based copolymers is successfully prepared by targeted modification of the thiophene backbone. The hydrophobic hexyl groups in the 3-position of the thiophene are partially replaced by hydrophilic 3,6-dioxaheptyl groups. Via Grignard metathesis according to McCullough, random copolymers with different molar ratios of hydrophobic hexyl to hydrophilic 3,6-dioxaheptyl groups, 1:1 (P-1), 1:2 (P-2) and 2:1 (P-3), are successfully prepared. The synthesis of a defined block copolymer BP-1 by sequential addition of the comonomers is also realized. The optical and electrochemical properties of the novel copolymers are comparable to those of P3HT. All copolymers show characteristic transistor behavior in a top-gate/bottom-contact configuration. With P-1 as the active semiconductor layer in the device, PMMA as the dielectric and silver as the gate electrode, mobilities of up to 10⁻² cm²/Vs are achieved. As a consequence of the optimized interface between dielectric and semiconductor, an improvement in the air stability of the transistors over several months is observed. In the second part of the thesis, benzodithiophene-based organic materials are prepared. For the synthesis of the novel benzodithiophene derivatives, the key compound TIPS-BDT is obtained in good yield. Difunctionalization of TIPS-BDT in the 2,6-positions via electrophilic substitution affords the desired dibromo and distannyl monomers. First, alternating copolymers with alkylated fluorene and quinoxaline units are realized via the Stille reaction.
All copolymers are distinguished by good solubility in common organic solvents, high thermal stability and good film-forming properties. Furthermore, with HOMO levels deeper than −6.3 eV, all copolymers are very stable against oxidation compared with the thiophene-based copolymers (P-1 to P-3). These copolymers show amorphous behavior in the semiconductor layers of OFETs, and mobilities of up to 10⁻⁴ cm²/Vs are reached. A dependence of the device performance on the residual tin content in the polymer is demonstrated. A tin content above 0.6% can have an enormous influence on the mobility, since the functional SnMe3 groups can act as trap states. Alternatively, the alternating TIPS-BDT/fluorene copolymer P-5-Stille is polymerized by the Suzuki method. With P-5-Suzuki as the active organic semiconductor layer in the OFET, the highest mobility of 10⁻² cm²/Vs is achieved. This mobility is thus two orders of magnitude higher than that of P-5-Stille, since the trap states are minimized in this case and the charge transport is consequently improved. Both the homopolymer P-12 and the copolymer P-9 with the aromatic acceptor benzothiadiazole turn out to be poorly soluble polymers. For this reason, terpolymers built from TIPS-BDT/fluorene/BTD units, P-10 and P-11, are prepared on the one hand, and on the other hand an attempt is made to introduce the TIPS-BDT unit into the side chain of styrene. The introduction of BTD into the polymer main chain affects the absorption and the electrochemical properties in particular. Compared with the TIPS-BDT/fluorene copolymer, the absorption extends into the visible region and the LUMO level is shifted to lower values. However, no improvement in device performance is observed.
The first successful synthesis of TIPS-BDT as a side-chain polymer on styrene, P-13, leads to a soluble, amorphous polymer with mobilities in the OFET comparable to those of styrene-based polymers (µ = 10⁻⁵ cm²/Vs). A further goal of this thesis is the synthesis of low-molecular-weight, organo-soluble benzodithiophene derivatives. Via Suzuki and Stille reactions, it is possible for the first time to attach various aromatics to TIPS-BDT in the 2,6-positions via a σ-bond. UV/Vis studies show that the absorption is shifted to longer wavelengths by the extension of the π-conjugation length. Furthermore, it is possible to incorporate thermally crosslinkable groups such as allyloxy into the molecular framework. The introduction of F atoms into the molecular framework results in an enhanced packing order of the fluorobenzene-functionalized TIPS-BDT (SM-4) in the solid state, with very good electronic properties in the OFET, where mobilities of up to 0.09 cm²/Vs are reached.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. 
This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. 
A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, with a total study area of 5,800 km², found that these features are more common in areas with steeper slopes and stronger relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a region.
Thermodynamics, kinetics and rheology of surfactant adsorption layers at water/oil interfaces
(2012)
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved in order to render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism concerns the regulation of the stability of mRNAs. All organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques make it possible to obtain quantitative data on a global scale through the parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on an exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation which provides a general framework of mRNA expression and allows specific degradation mechanisms to be described. We have also attempted to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete, and most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region.
In addition, it is often coupled to the level of maturity and the polysome size of an mRNA. Both for S. cerevisiae and E. coli, our descriptions lead to a considerable improvement of the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much debated topic from a theoretical perspective. The improved understanding of mRNA degradation can be readily applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or under stress. The role of mRNA decay can also be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability is to identify the relevance and biological function of different mechanisms. Once more quantitative data become available, our description makes it possible to elaborate the role of each mechanism by devising a suitable model.
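The contrast between a constant degradation rate and an age-dependent one can be illustrated with a small calculation. The snippet below is not the thesis's specific model; it compares simple exponential decay with a hypothetical multi-step aging process (Erlang-distributed lifetime), the textbook way an age-dependent rate arises.

```python
import math

def survival_constant_rate(t, k):
    """P(mRNA intact at age t) for a constant degradation rate k: S(t) = exp(-k*t)."""
    return math.exp(-k * t)

def survival_multistep(t, k, n):
    """Survival when degradation requires n sequential aging steps, each at rate k
    (Erlang-n lifetime): S(t) = exp(-k*t) * sum_{i<n} (k*t)^i / i!.
    The effective degradation rate increases with the age of the molecule."""
    return math.exp(-k * t) * sum((k * t) ** i / math.factorial(i) for i in range(n))

# With matched mean lifetimes (1/1.0 vs 3/3.0 = 1), the multi-step curve is
# flatter at early ages, i.e. young mRNAs are degraded more rarely:
t = 0.5
print(survival_constant_rate(t, 1.0))  # ≈ 0.607
print(survival_multistep(t, 3.0, 3))   # ≈ 0.809
```

Fitting decay data with the exponential curve when the true process is multi-step systematically misestimates half-lives, which is why the age-dependent description improves the interpretation of the experiments.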
This thesis investigates the gradient flow of Dirac-harmonic maps. Dirac-harmonic maps are critical points of an energy functional that is motivated by supersymmetric field theories. The critical points of this energy functional couple the equation for harmonic maps with spinor fields. At present, many analytical properties of Dirac-harmonic maps are known, but a general existence result is still missing. In this thesis the existence question is studied using the evolution equations for a regularized version of Dirac-harmonic maps. Since the energy functional for Dirac-harmonic maps is unbounded from below, the method of the gradient flow cannot be applied directly. Thus, we first of all consider a regularization prescription for Dirac-harmonic maps and then study the gradient flow. Chapter 1 gives some background material on harmonic maps/harmonic spinors and summarizes the currently known results about Dirac-harmonic maps. Chapter 2 introduces the notion of Dirac-harmonic maps in detail and presents a regularization prescription for Dirac-harmonic maps. In Chapter 3 the evolution equations for regularized Dirac-harmonic maps are introduced. In addition, the evolution of certain energies is discussed. Moreover, the existence of a short-time solution to the evolution equations is established. Chapter 4 analyzes the evolution equations in the case that the domain manifold is a closed curve. Here, the existence of a smooth long-time solution is proven. Moreover, for the regularization being large enough, it is shown that the evolution equations converge to a regularized Dirac-harmonic map. Finally, it is discussed in which sense the regularization can be removed. In Chapter 5 the evolution equations are studied when the domain manifold is a closed Riemannian spin surface. For the regularization being large enough, the existence of a global weak solution, which is smooth away from finitely many singularities, is proven.
It is shown that the evolution equations converge weakly to a regularized Dirac-harmonic map. In addition, it is discussed whether the regularization can be removed in this case.
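For orientation, the functional in question can be written down explicitly. The following is the standard (unregularized) Dirac-harmonic energy from the literature (Chen, Jost, Li and Wang), stated here for context; the thesis studies a regularized modification of it:

```latex
E(\phi,\psi) \;=\; \frac{1}{2}\int_{M}\Big(\,|d\phi|^{2} \;+\; \langle \psi,\, D\psi \rangle\,\Big)\, dv_{g},
```

where $\phi \colon M \to N$ is a map between Riemannian manifolds, $\psi$ is a spinor field along $\phi$, and $D$ is the Dirac operator along the map. The spinorial term is indefinite, which is precisely why $E$ is unbounded from below and a direct gradient-flow approach fails without regularization.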
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
(2012)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. This we were able to show by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry.
Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a theorem of great practical use that determines Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result that was obtained building on the work of the present thesis. Based particularly on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated; rather, the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations.
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
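To make the role of the tensor field P concrete, the principal symbol it determines defines a massless dispersion relation of the form (notation adapted here for illustration):

```latex
P(x,k) \;=\; P^{a_{1}\cdots a_{n}}(x)\, k_{a_{1}} \cdots k_{a_{n}} \;=\; 0 .
```

For Maxwell theory on a Lorentzian manifold this reduces to the familiar quadratic case $P^{ab} = g^{ab}$, i.e. $g^{ab}k_{a}k_{b} = 0$; hyperbolicity, time-orientability and the energy-distinguishing property are then algebraic conditions on this homogeneous polynomial in the covector $k$.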
Eye movements are a powerful tool to examine cognitive processes. However, in most paradigms little is known about the dynamics present in sequences of saccades and fixations. In particular, the control of fixation durations has been widely neglected in most tasks. As a notable exception, both spatial and temporal aspects of eye-movement control have been thoroughly investigated during reading. There, the scientific discourse has been dominated by three controversies: (i) the role of oculomotor vs. cognitive processing in eye-movement control, (ii) the serial vs. parallel processing of words, and (iii) the control of fixation durations. The main purpose of this thesis was to investigate eye movements in tasks that require sequences of fixations and saccades. While reading phenomena served as a starting point, we examined eye guidance in non-reading tasks with the aim of identifying general principles of eye-movement control. In addition, the investigation of eye movements in non-reading tasks helped refine our knowledge about eye-movement control during reading. Our approach included the investigation of eye movements in non-reading experiments as well as the evaluation and development of computational models. I present three main results: First, oculomotor phenomena observed during reading can also be observed in non-reading tasks (Chapters 2 & 4). Oculomotor processes determine the fixation position within an object. The fixation position, in turn, modulates both the next saccade target and the current fixation duration. Second, predictions of eye-movement models based on sequential attention shifts were falsified (Chapter 3). In fact, our results suggest that distributed processing of multiple objects forms the basis of eye-movement control. Third, fixation durations are under asymmetric control (Chapter 4). While increasing processing demands immediately prolong fixation durations, decreasing processing demands reduce fixation durations only with a temporal delay.
We propose a computational model, ICAT, to account for this asymmetric control. In this model, an autonomous timer initiates saccades after random time intervals, independent of ongoing processing. However, processing demands that are higher than expected inhibit the execution of the next saccade and thereby prolong the current fixation, whereas lower processing demands do not affect the time before the next saccade is executed. Since the autonomous timer adjusts to expected processing demands from fixation to fixation, a decrease in processing demands may lead to a temporally delayed reduction of fixation durations. We also evaluated the performance of an extended version of ICAT that simulates both temporal and spatial aspects of eye-movement control. The eye-movement phenomena investigated in this thesis have now been observed in a number of different tasks, which suggests that they represent general principles of eye guidance. I propose that distributed processing of the visual input forms the basis of eye-movement control, while fixation durations are controlled by the principles outlined in ICAT. In addition, oculomotor control contributes considerably to the variability observed in eye movements. Interpretations of the relation between eye movements and cognition strongly benefit from a precise understanding of this interplay.
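The asymmetry described above can be sketched in a toy simulation. The snippet below is an illustrative caricature of the ICAT idea, not the published model; the function name, the inhibition gain and the adaptation rate are invented for the example.

```python
import random

def simulate_fixations(demands, mean_interval=250.0, adapt=0.2, seed=1):
    """Simulate fixation durations (ms) for a sequence of processing demands."""
    random.seed(seed)
    expected = demands[0]              # the timer's current expectation of demand
    durations = []
    for demand in demands:
        timer = random.expovariate(1.0 / mean_interval)  # autonomous random timer
        surprise = demand - expected
        inhibition = max(0.0, surprise) * 100.0  # only above-expectation demand inhibits
        durations.append(timer + inhibition)
        expected += adapt * surprise   # expectation adapts slowly, fixation to fixation
    return durations

# Demand steps up at fixation 5 and back down at fixation 10: the step-up
# lengthens fixation 5 immediately, whereas the step-down shortens fixations
# only gradually, as the expectation decays back toward the lower demand.
demands = [1.0] * 5 + [2.0] * 5 + [1.0] * 5
print([round(d) for d in simulate_fixations(demands)])
```

The asymmetry falls out of the two rules: inhibition acts instantly but only in one direction, while the expectation update is slow in both directions.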
A much-discussed topic of our time is the future of energy generation and storage. Nanoscience plays an important role here: it leads to an increase in the efficiency of storage and generation through already known materials and through new materials. In this context, chemistry is the pathfinder for nanomaterials. So far, however, most known nanoparticle syntheses lead to ill-defined particles. A simple, inexpensive and safe synthesis would offer the possibility of broad application and scalability. This thesis therefore considers the simple synthesis of manganese nitride, aluminium nitride, lithium manganese silicate, zirconium oxynitride and manganese carbonate nanoparticles. The so-called urea glass route is employed as a solid-state synthesis and solvothermal synthesis as a typical liquid-phase synthesis. Both synthetic routes lead to defined particle sizes and interesting morphologies and allow the products to be influenced. In the case of the manganese nitride nanoparticles, the urea glass route yields nanoparticles with a core-shell structure, whose use as a conversion material is presented for the first time. With the goal of an easier application of nanoparticles, a simple coating of surfaces with nanoparticles by spin coating is described. The result was a mixture of MnN0.43/MnO nanoparticles embedded in a carbon film; its investigation as a conversion material shows high specific capacities (811 mAh/g), exceeding that of the conventional anode material graphite (372 mAh/g). In addition to the synthesis of the anode material, the synthesis of the cathode material, Li2MnSiO4 nanoparticles, via the urea glass route is also presented.
The synthesis of zirconium oxynitride nanoparticles, Zr2ON2, demonstrates how easily the desired product of the urea glass route can be influenced by varying the reaction conditions, such as the amount of urea or the reaction temperature. The addition of very small amounts of ammonium chloride prevents carbon from forming in the final product and thus leads to yellow Zr2ON2 nanoparticles with a size of d = 8 nm that possess semiconductor properties. The synthesis of aluminium nitride nanoparticles leads to crystalline nanoparticles embedded in an amorphous matrix. The solvothermal synthesis of manganese carbonate nanoparticles gives rise to new morphologies in the form of nanorods, which are agglomerated into scale-like spherical superstructures.
Sustainable management of semi-arid African savannas under environmental and political change
(2012)
Drylands cover about 40% of the earth's land surface and provide the basis for the livelihoods of 38% of the global human population. Worldwide, these ecosystems are prone to heavy degradation. Increasing levels of dryland degradation result in a strong decline of ecosystem services. In addition, in highly variable semi-arid environments, changing future environmental conditions will potentially have severe consequences for productivity and ecosystem dynamics. Hence, global efforts have to be made to understand the particular causes and consequences of dryland degradation and to promote sustainable management options for semi-arid and arid ecosystems in a changing world. Here I particularly address the problem of semi-arid savanna degradation, which mostly occurs in the form of woody plant encroachment. In doing so, I aim at finding viable sustainable management strategies and at improving the general understanding of semi-arid savanna vegetation dynamics under conditions of extensive livestock production. Moreover, the influence of external forces, i.e. environmental change and land reform, on the use of savanna vegetation and on the ecosystem response to this land use is assessed. Based on this, I identify conditions and strategies that facilitate a sustainable use of semi-arid savanna rangelands in a changing world. I extended an eco-hydrological model to simulate rangeland vegetation dynamics for a typical semi-arid savanna in eastern Namibia. In particular, I identified the response of semi-arid savanna vegetation to different land use strategies (including fire management), also with regard to different predicted precipitation, temperature and CO2 regimes. Not only environmental but also economic and political constraints, such as land reform programmes, are shaping rangeland management strategies. Hence, I aimed at understanding the effects of the ongoing process of land reform in southern Africa on land use and the semi-arid savanna vegetation.
Therefore, I developed and implemented an agent-based ecological-economic modelling tool for interactive role plays with land users. This tool was applied in an interdisciplinary empirical study to identify general patterns of management decisions and the between-farm cooperation of land reform beneficiaries in eastern Namibia. The eco-hydrological simulations revealed that the future dynamics of semi-arid savanna vegetation strongly depend on the respective climate change scenario. In particular, I found that the capacity of the system to sustain domestic livestock production will strongly depend on changes in the amount and temporal distribution of precipitation. In addition, my simulations revealed that shrub encroachment will become less likely under future climatic conditions although positive effects of CO2 on woody plant growth and transpiration have been considered. While earlier studies predicted a further increase in shrub encroachment due to increased levels of atmospheric CO2, my contrary finding is based on the negative impacts of temperature increase on the drought sensitive seedling germination and establishment of woody plant species. Further simulation experiments revealed that prescribed fires are an efficient tool for semi-arid rangeland management, since they suppress woody plant seedling establishment. The strategies tested have increased the long term productivity of the savanna in terms of livestock production and decreased the risk for shrub encroachment (i.e. savanna degradation). This finding refutes the views promoted by existing studies, which state that fires are of minor importance for the vegetation dynamics of semi-arid and arid savannas. Again, the difference in predictions is related to the bottleneck at the seedling establishment stage of woody plants, which has not been sufficiently considered in earlier studies. 
The ecological-economic role plays with Namibian land reform beneficiaries showed that the farmers adjusted their herd sizes according to economic rather than environmental variables. Hence, they do not manage opportunistically by tracking grass biomass availability but rather apply conservative management strategies with low stocking rates. This implies that, under the given circumstances, the management of these farmers will not per se cause (or further worsen) the problem of savanna degradation and shrub encroachment through overgrazing. However, as my results indicate that this management strategy is driven largely by high financial pressure, it is not an indicator of successful rangeland management. Rather, the farmers struggle hard to generate any positive revenue from their farming businesses, and the success of the Namibian land reform is currently disputable. The role plays also revealed that cooperation between farmers is difficult, even though it is often necessary given the small farm sizes. I thus propose that cooperation be facilitated to improve the success of land reform beneficiaries.
The need to reduce humankind's reliance on fossil fuels by sustainably exploiting the planet's renewable resources is a major driving force determining the focus of modern materials research. For this reason, great interest is nowadays focused on finding alternatives to products and materials derived from fossil fuels. In the short term, the most promising substitute is undoubtedly biomass, since it is the only renewable and sustainable alternative to fossil fuels as a carbon source. As a consequence, efforts aimed at finding new synthetic approaches to convert biomass and its derivatives into carbon-based materials are constantly increasing. In this regard, hydrothermal carbonisation (HTC) has been shown to be an effective means of converting biomass-derived precursors into functional carbon materials. However, attempts to convert raw biomass, in particular lignocellulosic biomass, directly into such products have been rarer. Unlocking the direct use of these raw materials as carbon precursors would be beneficial in terms of HTC sustainability. For this reason, in this thesis the HTC of carbohydrate- and protein-rich biomass was systematically investigated in order to gain more insight into the potential of this thermochemical processing technique for the production of functional carbon materials from crude biomass. First, a detailed investigation of the HTC conversion mechanism of lignocellulosic biomass and its single components (i.e. cellulose, lignin) was developed based on a comparison with glucose HTC, which was adopted as a reference model. In the glucose case, it was demonstrated that varying the HTC temperature allowed the chemical structure of the synthesised carbon materials to be tuned from a highly cross-linked furan-based structure (T = 180 °C) to a carbon framework composed of polyaromatic arene-like domains.
When cellulose or lignocellulosic biomass was used as carbon precursor, the furan-rich structure could not be isolated at any of the investigated processing conditions. This evidence indicates a different HTC conversion mechanism for cellulose, involving reactions that are commonly observed during pyrolytic processes. The evolution of the chemical structure of glucose-derived HTC carbon upon pyrolysis was also investigated. These studies revealed that upon heat treatment (investigated temperatures 350–900 °C) the furan-based structure was progressively converted into highly curved aromatic pre-graphenic domains. This thermal degradation process was observed to produce an increasingly hydrophobic surface and considerable microporosity within the HTC carbon structure. In order to introduce porosity into the HTC carbons derived from lignocellulosic biomass, KOH chemical activation was investigated as an HTC post-synthesis functionalisation step. These studies demonstrated that HTC carbons are excellent precursors for the production of highly microporous activated carbons (ACs) and that the porosity development upon KOH chemical activation depends on the chemical structure of the HTC carbon, tuned by employing different HTC temperatures. Preliminary testing of the ACs for CO2 capture or high-pressure CH4 storage yielded very promising results, since the measured uptakes of both adsorbates (i.e. CO2 and CH4) were comparable to those of top-performing, commercially available adsorbents usually employed for these end applications. The combined use of HTC and KOH chemical activation was also employed to produce highly microporous N-doped ACs from microalgae. The hydrothermal treatment of the microalgae substrate was observed to cause the depletion of the protein and carbohydrate fractions and the near-complete loss (i.e. 90%) of the microalgae N-content as liquid hydrolysis/degradation products.
The obtained carbonaceous product showed a predominantly aliphatic character, indicating the presence of alkyl chains presumably derived from the lipid fractions. Addition of glucose to the initial reaction mixture was found to be extremely beneficial, because it allowed the fixation of a higher amount of N in the algae-derived HTC carbons (i.e. 60%) and the attainment of higher product yields (50%). Both positive effects were attributed to Maillard-type cascade reactions taking place between the monosaccharides and the microalgae-derived liquid hydrolysis/degradation products, which were in this way recovered from the liquid phase. KOH chemical activation of the HTC carbons derived from the microalgae/glucose mixture produced highly microporous N-doped carbons. Although the activation process led to a major reduction of the N-content, the amount of N retained in the ACs was still considerable. These features render these materials ideal candidates for supercapacitor electrodes, since they provide extremely high surface areas for the formation of an electric double layer, coupled with the abundant heteroatom doping (i.e. N and O) necessary to obtain a pseudocapacitance contribution.
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as Business Process Model and Notation (BPMN) and Event-driven Process Chain (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation, as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones, and it is less probable to introduce new errors when modifying a well-structured process model.
(iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
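The split/join definition above can be illustrated with a minimal sketch (illustrative Python with made-up node names, not the thesis tool):

```python
from collections import defaultdict

def splits_and_joins(edges):
    """Return the splits (out-degree > 1) and joins (in-degree > 1)
    of a process graph given as a list of (source, target) arcs."""
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    splits = sorted(n for n, d in out_deg.items() if d > 1)
    joins = sorted(n for n, d in in_deg.items() if d > 1)
    return splits, joins

# Well-structured fragment: the split s1 and the join j1 enclose a
# single-entry-single-exit (SESE) region containing tasks A and B.
structured = [("start", "s1"), ("s1", "A"), ("s1", "B"),
              ("A", "j1"), ("B", "j1"), ("j1", "end")]
```

Counting and pairing gateways like this is only a necessary condition; a full structuredness check must additionally verify that each split/join pair indeed encloses a SESE region, e.g., via a parse of the graph into such regions.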
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for understanding the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. In this way one obtains insight into the rich lattice dynamics, exhibiting coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are identified to give rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects, and the ability to calculate them precisely, is exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations.
By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions such as nonlinear sound propagation and phonon damping. The former issue is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which exhibits an unexpectedly low thermal conductivity. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances in experimental techniques, such as time-resolved phonon spectroscopy with optical and x-ray photons, as well as concepts for the implementation of x-ray diffraction setups at standard synchrotron beamlines with largely improved time resolution for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials, including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
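A linear-chain model of the kind described can be sketched in a few lines (a toy sketch with arbitrary units and made-up parameters, not the thesis code): unit cells are treated as point masses coupled by springs, and a sudden photoinduced stress in the topmost cells launches a coherent strain front.

```python
import numpy as np

N = 100                    # number of unit cells
m, k = 1.0, 1.0            # mass and spring constant (arbitrary units)
stress = np.zeros(N)
stress[:10] = 0.01         # sudden photoinduced stress in the "film" cells

x = np.zeros(N)            # displacements from equilibrium
v = np.zeros(N)
dt = 0.05

def forces(x):
    """Nearest-neighbour spring forces plus the photoinduced stress."""
    f = np.zeros(N)
    d = x[1:] - x[:-1]     # bond elongations
    f[:-1] += k * d
    f[1:] -= k * d
    return f + stress

for _ in range(2000):      # velocity-Verlet time stepping
    a = forces(x) / m
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + forces(x) / m) * dt

strain = np.diff(x)        # strain profile after the front has propagated
```

With these parameters the sound velocity is one cell per time unit, so after t = 100 the strain front has traversed the whole chain; in the thesis the calculated strain fields are subsequently fed into simulations of the optical and x-ray response.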
With the liberalisation of the electricity market, uncertain prospects in climate policy, and strongly fluctuating prices for fuels, emission allowances and power plant components, risk management has gained importance in power plant investment. This is reflected in the increased use of probabilistic methods. For regulatory risks in particular, however, the classical frequency-based notion of probability offers no handle for quantifying risk. In this thesis, power plant investments and portfolios in Germany are valued with methods of Bayesian risk management. The Bayesian school of thought understands probability as a personal measure of uncertainty; probabilities can thus be obtained from expert elicitation alone, without statistical data analysis. The interplay of uncertain value drivers was specified in a probabilistic discounted cash flow (DCF) model and implemented as an influence diagram with about 1,200 objects. Since the degree to which fuel and CO2 costs are passed through, and hence the contribution margins earned by the plants, is determined by competition, a plant-by-plant analysis is not sufficient. Electricity prices and utilisation rates are determined with heuristics based on each plant's individual position in the merit order, i.e. the dispatch sequence ranked by short-run marginal cost. To this end, 113 large thermal power plants in Germany were combined into a single merit order. The model yields probability distributions for key quantities such as the net present values of existing portfolios as well as the levelised costs of electricity and net present values of individual investments (hard coal and lignite plants with and without CO2 capture, as well as combined-cycle gas (CCGT) plants). The value of the existing portfolios of RWE, E.ON, EnBW and Vattenfall is determined primarily by the contributions of the lignite and nuclear plants.
Surprisingly, emissions trading does not result in losses. This is due on the one hand to the additional profits of the nuclear plants and on the other to the emission allowances allocated free of charge until 2012, which generate high windfall profits. In its concrete design, emissions trading thus turns out to be a profitable business overall. Over the remaining lifetime of the existing plants, the introduction of emissions trading yields, from 2008 onwards, a total net present value advantage of 8.6 billion €. Of a similar magnitude are the net present value advantages from the lifetime extension for nuclear plants held out by the federal government in 2009. An eight-year lifetime extension would, depending on the CO2 price level, yield net present value advantages of 8 to 15 billion €. With higher CO2 prices and lifetime extensions of up to 28 years, an additional 25 billion € or more would accrue. In the long term it appears questionable whether the current market design still provides incentives to invest in fossil-fuel power plants. Investments in lignite and CCGT plants that are still profitable at the beginning of the NAP 2 period become increasingly unprofitable as the free allocation of emission allowances is phased out. Profitability is steadily further undermined by the electricity market effects of renewable energies and of retiring old gas- and oil-fired plants. Hard coal plants prove to be a risky investment even with initial free allocation. The identified incentive problems for new investment should not, however, be blamed on emissions trading; they result from electricity prices being oriented toward marginal costs. The incentive problem is, however, greatest at moderate CO2 prices. It also applies to plants with CO2 capture: although the expected abatement costs of CCS plants relative to conventional coal plants in 2025 are estimated at 25 €/t CO2 (lignite) and 38.5 €/t CO2 (hard coal), their construction only becomes profitable at CO2 prices of 50 and 77 €/t CO2, respectively. Whether and which power plant investments pay off in the long run is, however, ultimately decided politically and is hardly predictable even under strongly idealised conditions.
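The merit-order logic used in the model can be sketched as follows (hypothetical plant data and a simplified uniform-price rule; the thesis combines 113 real plants with heuristic utilisation rates):

```python
# Hypothetical plants: (name, capacity in MW, short-run marginal cost in EUR/MWh)
plants = [
    ("nuclear",   1400, 12.0),
    ("lignite",    900, 25.0),
    ("hard coal",  800, 38.0),
    ("CCGT",       400, 55.0),
]

def merit_order_price(plants, demand_mw):
    """Dispatch plants in order of ascending marginal cost; the electricity
    price is set by the marginal cost of the last plant needed."""
    dispatched = 0.0
    for name, capacity, marginal_cost in sorted(plants, key=lambda p: p[2]):
        dispatched += capacity
        if dispatched >= demand_mw:
            return marginal_cost
    raise ValueError("demand exceeds total capacity")
```

For a demand of 2000 MW, nuclear and lignite are dispatched and lignite sets the price; plants with marginal costs below that price earn contribution margins.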
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with respect to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic + anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the strong smoothing capability of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, as nicely illustrated in a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be of the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles.
The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
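The recursive Fresnel formalism can be illustrated for the simplest case, a single homogeneous film at normal incidence (a textbook Airy summation with made-up refractive indices; the thesis applies the recursion layer by layer to the full multilayer and to transient, complex dielectric functions):

```python
import cmath
import math

def fresnel_r(n1, n2):
    # Normal-incidence amplitude reflection coefficient at an interface
    return (n1 - n2) / (n1 + n2)

def film_reflectance(n_film, d_nm, wavelength_nm, n_ambient=1.0, n_substrate=1.5):
    """Reflectance of a single film on a substrate (Airy formula):
    the two interface reflections interfere with the phase acquired
    across the film. Complex n_film describes an absorbing layer."""
    r12 = fresnel_r(n_ambient, n_film)
    r23 = fresnel_r(n_film, n_substrate)
    beta = 2 * math.pi * n_film * d_nm / wavelength_nm   # phase across the film
    phase = cmath.exp(2j * beta)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return abs(r) ** 2
```

For d = 0 the formula reduces to the bare ambient/substrate interface, which is a convenient sanity check; stacking such recursions layer by layer yields the multilayer response from which the transient dielectric function can be inverted.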
This thesis is focussed on the electronic properties of the new material class of topological insulators. Spin- and angle-resolved photoelectron spectroscopy have been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the application of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time-reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators are reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with the introduction of the Bi2X3 (X = Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close discussion with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is, with high probability, identified as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited, in terms of the finite angular resolution and the inability to detect the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided through the protection by time-reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed when exposing Bi2Te3 to oxygen. But while the n-type shift of Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below. The effect is found to be reversible upon warming the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a behavior similar to the case of Ag in both Bi2Se3 and Bi2Te3. In that case, however, the robustness is unexpected, since magnetic impurities are capable of breaking time-reversal symmetry, which should open a gap in the surface state at the Dirac point and in turn remove the protection. We argue that the fact that the surface state shows no gap must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but, nevertheless, higher than one might expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes off which electrons may scatter. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected from electron-electron interaction in Fermi liquid theory.
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, and that this influence is independent of the sign of the doping. We argue that this observation suggests a minor contribution of the warping to the increased scattering rates, in contrast to current belief. This is additionally confirmed by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates regardless of the much stronger warping in Bi2Te3.
In the last chapter we report a strong circular dichroism in the angular distribution of the photoemission signal of the surface state of Bi2Te3. We show that the pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence of the photon energy on the strength and even the sign of the effect. The sign change is qualitatively confirmed by means of one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Experiment and theory together unambiguously uncover the dichroism in these systems as a final-state effect, and the question in the title of the chapter has to be negated: circular dichroism in the angular distribution is not a new spin-sensitive technique.
Soil conditions under vegetation cover and their spatial and temporal variations from point to catchment scale are crucial for understanding hydrological processes within the vadose zone, for managing irrigation and consequently for maximizing yield by precision farming. Soil moisture and soil roughness are the key parameters that characterize the soil status. In order to monitor their spatial and temporal variability on large scales, remote sensing techniques are required. The determination of soil parameters under vegetation cover was therefore approached in this thesis by means of (multi-angular) polarimetric SAR acquisitions at a longer wavelength (L-band, λ = 23 cm). In this thesis, the penetration capabilities of L-band are combined with newly developed (multi-angular) polarimetric decomposition techniques to separate the different scattering contributions occurring in the vegetation and on the ground. Subsequently, the ground components are inverted to estimate the soil characteristics. The novel (multi-angular) polarimetric decomposition techniques for soil parameter retrieval are physically based, computationally inexpensive and can be solved analytically without any a priori knowledge. They can therefore be applied directly to agricultural areas without test site calibration. The developed algorithms are validated with fully polarimetric SAR data acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR) for three different study areas in Germany. The achieved results reveal inversion rates of up to 99% for soil moisture and soil roughness retrieval in agricultural areas. However, in forested areas the inversion rate drops significantly for most of the algorithms, because the applied scattering models are not valid for forests at L-band.
The validation against simultaneously acquired field measurements indicates an estimation accuracy (root mean square error) of 5–10 vol.% for soil moisture (range of in situ values: 1–46 vol.%) and of 0.37–0.45 cm for soil roughness (range of in situ values: 0.5–4.0 cm) within the catchment. Hence, continuous monitoring of soil parameters with the obtained precision, excluding frozen and snow-covered conditions, is possible. Especially future fully polarimetric, space-borne, long-wavelength SAR missions can profit considerably from the developed polarimetric decomposition techniques, for the separation of ground and volume contributions as well as for soil parameter retrieval on large spatial scales.
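SAR-based soil moisture retrieval typically proceeds via the soil dielectric constant; converting that to volumetric moisture is commonly done with an empirical relation such as Topp's polynomial, sketched below (a standard formula from the literature, not necessarily the exact transfer function used in the thesis):

```python
def topp_moisture(eps):
    """Topp et al. (1980) empirical polynomial: volumetric soil moisture
    (m^3/m^3) from the real part of the soil dielectric constant."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3
```

Topp's relation is broadly valid for mineral soils; organic or very dense soils require adapted coefficients, and the inverse mapping (moisture to dielectric constant) is used when simulating the radar response.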
Education in the knowledge society faces numerous problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learning and learner satisfaction (Prammanee, 2005), for "to teach is to communicate, to communicate is to interact, to interact is to learn" (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcomes and learner satisfaction on the other, some basic problems of a new learning culture based on social networking software are discussed. Most educational institutions pay considerable attention to equipment and to emerging information and communication technologies (ICTs) in learning situations. They incorporate ICT into their institutions as teaching and learning environments because they expect that doing so will improve the outcome of the learning process. Despite this, the learning outcomes reported in most studies are very limited, because the expectations placed on self-directed learning far exceed the reality. Findings from an empirical study, which investigated the role of teacher-learner interaction through the new digital medium wiki in higher education and its effects on learning outcomes and learner satisfaction, are presented, along with recommendations on the pedagogical interactions needed to support teaching and learning activities in wiki courses and thereby improve learning outcomes. The conclusions show that significant changes are necessary in vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning, and the wiki as a tool for knowledge construction rather than a tool for gathering information.
This thesis contains several theoretical studies on optomechanical systems, i.e. physical devices where mechanical degrees of freedom are coupled with optical cavity modes. This optomechanical interaction, mediated by radiation pressure, can be exploited for cooling and controlling mechanical resonators in a quantum regime. The goal of this thesis is to propose several new ideas for preparing mesoscopic mechanical systems (of the order of 10^15 atoms) in highly non-classical states. In particular, we have shown new methods for preparing optomechanical pure states, squeezed states and entangled states. At the same time, procedures for experimentally detecting these quantum effects have been proposed. In particular, a quantitative measure of non-classicality has been defined in terms of the negativity of phase-space quasi-distributions. An operational algorithm for experimentally estimating the non-classicality of quantum states has been proposed and successfully applied in a quantum optics experiment. The research has been performed with relatively advanced mathematical tools related to differential equations with periodic coefficients, classical and quantum Bochner's theorems and semidefinite programming. Nevertheless, the physics of the problems and the experimental feasibility of the results have been the main priorities.
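The negativity-based measure mentioned above can be made concrete with a textbook example, the n = 1 Fock state, whose Wigner function is known in closed form (an illustration of the measure, not the thesis's experimental data):

```python
import numpy as np

# Wigner function of the n = 1 Fock state (hbar = 1 convention):
# W(x, p) = (1/pi) * exp(-(x^2 + p^2)) * (2*(x^2 + p^2) - 1)
grid = np.linspace(-6.0, 6.0, 601)
X, P = np.meshgrid(grid, grid)
R2 = X**2 + P**2
W = np.exp(-R2) * (2.0 * R2 - 1.0) / np.pi

dA = (grid[1] - grid[0]) ** 2
norm = np.sum(W) * dA                        # Wigner functions integrate to 1
negativity = np.sum(np.abs(W)) * dA - 1.0    # negativity volume of the state
```

The Wigner function is negative for x^2 + p^2 < 1/2, and the resulting negativity volume of about 0.43 certifies non-classicality; an everywhere-positive Wigner function, e.g. that of a coherent state, gives zero.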
Shape-memory properties of magnetically active composites based on multiphase polymer networks
(2012)
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling of the operator and the lattice size, we analyze the tunnel effect between different wells, providing sharp asymptotics of the low-lying spectrum. Our proof, inspired by work of B. Helffer, M. Klein and F. Nier in the continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
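For orientation, in the continuous setting the semiclassical Witten Laplacian on functions (0-forms) associated with a Morse potential f is the standard operator (a textbook definition given here for context; the thesis works with its discrete lattice analogue):

```latex
\Delta_{f,h}^{(0)} \;=\; -h^{2}\Delta \;+\; \lvert \nabla f \rvert^{2} \;-\; h\,\Delta f
```

Its low-lying eigenvalues are exponentially small in the semiclassical parameter h, and their sharp asymptotics encode the tunneling (metastable transition) rates between the local minima of f.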
Besides the question of the performance-enhancing effect of so-called "Ich-kann" (I-can) checklists on pupils' metacognitive strategies, this work also examines which pupils use "Ich-kann" checklists, in what form they are used, and under which contextual conditions they are most effective. These are lists of defined subject-specific and cross-curricular competencies for one or more teaching units, written for pupils as "I can ..." statements and including a prompt for self-assessment and assessment by others. Publications of recent years on this topic, as well as school practice, show a clear turn toward developing and working with "Ich-kann" checklists and competency grids. It is all the more astonishing that virtually no empirical studies on the subject exist (cf. Bastian & Merziger, 2007; Merziger, 2007). Based on a quantitative survey of 197 seventh-grade grammar school (Gymnasium) pupils in the subject German, these overarching questions were pursued over a period of two years. The results indicate that "Ich-kann" checklists are an effective pedagogical instrument of self-regulation, especially for boys. Working with "Ich-kann" checklists promotes not only the regulation of pupils' own learning processes but also their willingness to invest more effort in the subject. A self-assessment of performance carried out with the "Ich-kann" checklists during the intervention additionally promotes their voluntary use outside lessons.
In the western hemisphere, the piano is one of the most important instruments. Although its evolution has lasted for more than three centuries and the most important physical aspects have already been investigated, some parts of the characterization of the piano remain poorly understood. For the pivotal piano soundboard, the effect that ribs mounted on the board exert on sound radiation and propagation in particular is mostly neglected in the literature. The present investigation deals with exactly those sound-wave propagation effects that emerge in the presence of an array of equally spaced ribs mounted on a soundboard. Solid-state theory predicts particular eigenmodes and eigenfrequencies for such arrangements, comparable to unit cells in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. The amplitudes of the modes are also changed, owing to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptional rectangular soundboard (multichord), but also for a genuine piano resonance board manufactured by the piano maker 'C. Bechstein Pianofortefabrik'. To make it possible to distinguish between the characterizing spectra with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between the two boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient.
To this end, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates mobile charge carriers. Crucial parameters were monitored, with the frequency response function being the most important one for acousticians. Along with conventional condenser microphones, the sound was measured both as solid-state vibration and as airborne wave. On this basis, statements can be made about the emergence, propagation, and overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
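The band structure predicted by a linear chain model can be sketched with the simplest periodic system that opens a band gap, a diatomic chain of alternating masses (illustrative parameters, not fitted to the soundboard):

```python
import numpy as np

# Diatomic chain: alternating masses m1, m2 coupled by identical springs k.
# Dispersion: w^2 = k*(1/m1 + 1/m2) +/- k*sqrt((1/m1 + 1/m2)^2 - 4*sin(q)^2/(m1*m2))
k, m1, m2 = 1.0, 1.0, 2.0
q = np.linspace(0.0, np.pi / 2.0, 500)   # reduced wavevector (lattice constant = 1)

s = 1.0 / m1 + 1.0 / m2
disc = np.sqrt(s**2 - 4.0 * np.sin(q)**2 / (m1 * m2))
w_acoustic = np.sqrt(k * (s - disc))     # lower (acoustic) branch
w_optical = np.sqrt(k * (s + disc))      # upper (optical) branch

# Band gap between the top of the acoustic and the bottom of the optical branch
gap_bottom = w_acoustic.max()            # analytically sqrt(2k/m2)
gap_top = w_optical.min()                # analytically sqrt(2k/m1)
```

Frequencies inside the gap (here between ω = 1 and ω = √2 in reduced units) cannot propagate along the chain; the periodically mounted ribs act analogously, which is the mechanism behind the distinct band structure and the altered mode amplitudes observed for the ribbed board.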
The present thesis is aligned with the current need for alternative and sustainable approaches to energy management and materials design. In this context, carbon in particular has become the material of choice in many fields such as energy conversion and storage. Herein, three main topics are covered: 1) an alternative synthesis strategy toward highly porous functional carbons with tunable porosity, using ordinary salts as porogens (denoted as "salt templating"); 2) the one-pot synthesis of porous metal nitride containing functional carbon composites; 3) the combination of both approaches, enabling the generation of highly porous composites with finely tunable properties. All approaches have in common that they are based on the utilization of ionic liquids, i.e. salts which are liquid below 100 °C, as precursors. Only recently, ionic liquids were shown to be versatile precursors for the generation of heteroatom-doped carbons, since their liquid state and negligible vapor pressure are highly advantageous properties. In most cases, however, the products do not possess any porosity, which is essential for many applications. In the first part, "salt templating", the utilization of salts as diverse and sustainable porogens, is introduced. Exemplified by ionic liquid derived nitrogen- and nitrogen-boron-co-doped carbons, the control of porosity and morphology on the nanometer scale by salt templating is presented. The studies within this thesis were conducted with the ionic liquids 1-butyl-3-methyl-pyridinium dicyanamide (Bmp-dca), 1-ethyl-3-methyl-imidazolium dicyanamide (Emim-dca) and 1-ethyl-3-methyl-imidazolium tetracyanoborate (Emim-tcb). The materials are generated through thermal treatment of precursor mixtures containing one of the ionic liquids and a porogen salt.
Simple removal of the non-carbonizable template salt with water yields functional graphitic carbons with pore sizes ranging from micro- to mesoporous and surface areas of up to 2000 m² g⁻¹. The carbon morphologies, which presumably originate from different onsets of demixing, depend mainly on the nature of the porogen salt, whereas the nature of the ionic liquid plays a minor role. A structural effect of the porogen salt, rather than activation, can thus be assumed. This offers an alternative to conventional activation and templating methods, avoiding multi-step and energy-consuming synthesis pathways as well as the use of hazardous chemicals for template removal. The composition of the carbons can be altered via the heat-treatment procedure: at lower synthesis temperatures, more polymeric carbonaceous materials with a high density of functional groups and high surface areas become accessible. First results suggest the suitability of the materials for CO2 utilization. To further illustrate the potential of ionic liquids as carbon precursors and to expand the class of obtainable carbons, the ionic liquid 1-Ethyl-3-methyl-imidazolium thiocyanate (Emim-scn) is introduced for the generation of nitrogen-sulfur-co-doped carbons, in combination with the previously studied ionic liquids Bmp-dca and Emim-dca. The salt-templating approach should be applicable here as well, further illustrating its potential. In the second part, a one-pot and template-free synthesis approach toward inherently porous, metal nitride nanoparticle containing, nitrogen-doped carbon composites is presented. Since ionic liquids also offer outstanding solubility properties, the materials can be generated through the carbonization of homogeneous solutions of an ionic liquid, acting as both nitrogen and carbon source, and the respective metal precursor.
The metal content and surface area are easily tunable via the initial amount of metal precursor. Furthermore, it is also possible to synthesize composites with ternary nitride nanoparticles whose composition is adjustable via the metal ratio in the precursor solution. Finally, both approaches are combined into salt templating of the one-pot composites. This opens the way to the one-step synthesis of composites with tunable composition and particle size as well as precisely controllable porosity and morphology, avoiding common synthesis strategies in which the template-removal procedure often degrades the product composition. The composites are further shown to be suitable as electrodes for supercapacitors. Here, different properties such as porosity, metal content, and particle size are investigated and discussed with respect to their influence on the energy-storage performance. Because a variety of ionic liquids, metal precursors, and salts can be combined, and a simple closed-loop process including salt recycling is conceivable, the approaches present a promising platform for sustainable materials design.
ROS transcriptional networks controlling cell expansion during leaf growth in Arabidopsis thaliana
(2012)
The constantly growing capacity of reconfigurable devices allows complex applications to be executed simultaneously on one device. The sheer diversity of applications makes it impossible to design an interconnection network that perfectly matches the requirements of every possible application, leading to suboptimal performance in many cases. However, the architecture of the interconnection network is not the only aspect affecting communication performance. The resource manager places applications on the device and therefore influences the latency between communicating partners and the overall network load. Communication protocols affect performance by introducing data and processing overhead, putting higher load on the network and increasing resource demand. Approaching communication holistically considers not only the architecture of the interconnect, but also communication-aware resource management, communication protocols, and resource usage. Incorporating the different parts of a reconfigurable system at design and run time and optimizing them with respect to communication demand results in more resource-efficient communication. Extensive evaluation shows enhanced performance and flexibility when communication on reconfigurable devices is regarded in this holistic fashion.
This work is concerned with the characterization of certain classes of stochastic processes via duality formulae. In particular, we consider reciprocal processes with jumps, a subject neglected in the literature up to now. In the first part we introduce a new formulation of a characterization of processes with independent increments. This characterization is based on a duality formula satisfied by processes with infinitely divisible increments, in particular Lévy processes, which is well known in Malliavin calculus. We obtain two new methods to prove this duality formula, which are not based on the chaos decomposition of the space of square-integrable functionals. One of these methods uses a formula of partial integration that characterizes infinitely divisible random vectors. In this context, our characterization is a generalization of Stein’s lemma for Gaussian random variables and Chen’s lemma for Poisson random variables. The generality of our approach permits us to derive a characterization of infinitely divisible random measures. The second part of this work focuses on the study of the reciprocal classes of Markov processes with and without jumps and their characterization. We start with a summary of existing results concerning the reciprocal classes of Brownian diffusions as solutions of duality formulae. As a new contribution, we show that the duality formula satisfied by elements of the reciprocal class of a Brownian diffusion has a physical interpretation as a stochastic Newton equation of motion. Our interpretation thus connects the results of characterizations via duality formulae with the theory of stochastic mechanics, and the mathematical approach connects them to stochastic optimal control theory. As an application we prove an invariance property of the reciprocal class of a Brownian diffusion under time reversal. In the context of pure jump processes we derive the following new results.
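For orientation, the two classical characterizations that such duality formulae generalize can be stated in their standard textbook form (for suitably smooth, integrable test functions f):

```latex
% Stein's lemma: X is standard Gaussian iff, for all suitable f,
\mathbb{E}\bigl[X f(X)\bigr] = \mathbb{E}\bigl[f'(X)\bigr],
\qquad X \sim \mathcal{N}(0,1).

% Chen's lemma: N is Poisson with intensity \lambda iff, for all suitable f,
\mathbb{E}\bigl[N f(N)\bigr] = \lambda\,\mathbb{E}\bigl[f(N+1)\bigr],
\qquad N \sim \mathrm{Poisson}(\lambda).
```

In both cases the distribution is characterized by an integration-by-parts identity, which is the structure the duality formula lifts to the process level.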
We describe the reciprocal classes of Markov counting processes, also called unit jump processes, and obtain a characterization of the associated reciprocal class via a duality formula. This formula contains as key terms a stochastic derivative, a compensated stochastic integral and an invariant of the reciprocal class. Moreover we present an interpretation of the characterization of a reciprocal class in the context of stochastic optimal control of unit jump processes. As a further application we show that the reciprocal class of a Markov counting process has an invariance property under time reversal. Some of these results are extendable to the setting of pure jump processes, that is, we admit different jump-sizes. In particular, we show that the reciprocal classes of Markov jump processes can be compared using reciprocal invariants. A characterization of the reciprocal class of compound Poisson processes via a duality formula is possible under the assumption that the jump-sizes of the process are incommensurable.
Rechenstörungen im Kindes- und Jugendalter: psychische Auffälligkeiten und kognitive Defizite
(2012)
The aim of this thesis is the quantum dynamical study of two examples of scanning tunneling microscope (STM)-controllable, Si(100)(2x1) surface-mounted switches of atomic and molecular scale. The first example considers the switching of single H atoms between two dangling-bond chemisorption sites on a Si dimer of the Si(100) surface (Grey et al., 1996). The second system examines the conformational switching of single 1,5-cyclooctadiene molecules chemisorbed on the Si(100) surface (Nacci et al., 2008). The temporal dynamics are obtained by propagating the density matrix in time via a corresponding set of equations of motion (EQM). The latter are based on open-system density matrix theory in Lindblad form. First-order perturbation theory is used to evaluate the transition rates between vibrational levels of the system part. To account for interactions with the surface phonons, two different dissipative models are used, namely the bilinear harmonic and the Ohmic bath model. IET-induced vibrational transitions in the system are due to the dipole and the resonance mechanism. A single-surface approach is used to study the influence of dipole scattering and resonance scattering in the below-threshold regime. Furthermore, a second electronic surface was included to study the resonance-induced switching in the above-threshold regime. Static properties of the adsorbate, e.g., potentials and dipole functions, are obtained from quantum chemistry and used within the established quantum dynamical models.
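The Lindblad form mentioned above is the standard Markovian master equation for the reduced density matrix ρ of an open system; in generic notation it reads:

```latex
\dot{\rho} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_k \gamma_k \Bigl( L_k \rho L_k^{\dagger}
\;-\; \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k,\, \rho \bigr\} \Bigr),
```

where H is the system Hamiltonian, the L_k are Lindblad (jump) operators coupling the system to the bath, γ_k are the associated rates, and the anticommutator term guarantees trace preservation and complete positivity.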
Förster resonance energy transfer (FRET) makes an important contribution to the study of small-scale biological systems and processes. This is made possible by the r⁻⁶ dependence of FRET, which allows distances and structural changes far below the diffraction limit of light to be determined with high sensitivity and little effort. The special photophysical properties of terbium complexes (LTC) and quantum dots (QD) make them suitable candidates for highly sensitive, low-interference multiplexed distance measurements in biological systems and processes. Such distance determinations, however, require precise knowledge of the mechanism of the energy transfer from LTC to QD, as well as of the size and shape of the latter. Quantum dots have dimensions comparable to biological structures and cannot be treated as point-like, as is possible for simpler dyes. Their shape gives rise to a distance distribution within the donor-acceptor system, which influences the energy transfer and thus the experimental results. In this work, the energy transfer from LTC to QD was investigated in order to draw conclusions about the mechanism of the energy transfer and the photophysical and structural parameters of LTC and QD that must be taken into account. Assuming a distance distribution, the sizes of the quantum dots were to be determined and the influence of their shape on the energy transfer examined. The necessary theoretical and practical foundations are presented first, followed by measurements for the photophysical characterization of the donors and acceptors, which formed the basis for calculating the FRET parameters. The Förster radii showed the extremely high values of up to 11 nm that are typical for FRET from LTC to QD.
Time-resolved measurements of the FRET-induced luminescence of donors and acceptors in the two biomolecular model systems zinc-histidine and biotin-streptavidin concluded the experimental part. Lumi4Tb bound to a peptide or to streptavidin served as donor; the acceptors were five different commercially available quantum dots with carboxyl or biotin functionalization. FRET could be observed and evaluated for all donor-acceptor pairings. It was shown that the entire terbium emission contributes to the energy transfer and that the orientation factor κ² takes the value 2/3. Characterizing the binding within the LTC-QD FRET pairs via distribution functions makes it possible, through the shape of the distribution curve, to draw conclusions about the shape of the FRET partners. In this way, the mean shape of the quantum dots could be determined to be spherical. This was unexpected, particularly for the quantum dots elongated along the z-direction of the crystal lattice. The finding therefore allows improved accuracy in future distance measurements with quantum dots. In addition to determining the quantum-dot shape underlying the FRET distribution, comparative measurements in this work allowed the thickness of the polymer shell of the QD to be determined, demonstrating that FRET pairs of luminescent terbium complexes and quantum dots can resolve distances in the nanometer to sub-nanometer range.
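The r⁻⁶ distance dependence exploited throughout this work is the textbook Förster relation for the transfer efficiency E at donor-acceptor distance r, with Förster radius R₀ (here up to 11 nm):

```latex
E \;=\; \frac{1}{1 + (r/R_0)^6} \;=\; \frac{R_0^6}{R_0^6 + r^6},
\qquad
E \;=\; 1 - \frac{\tau_{DA}}{\tau_D},
```

where τ_DA and τ_D are the donor luminescence lifetimes in the presence and absence of the acceptor. Time-resolved measurements thus yield E, and hence r, directly from lifetime ratios.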
The photophysics and photochemistry of flavins are of great interest because of their biological function, in particular in flavoproteins. Flavoproteins play a major role in a variety of biological processes, e.g., bioluminescence, removal of radicals formed under oxidative stress, photosynthesis, and DNA repair. The spectroscopic properties of the flavin cofactor make it a natural reporter for changes within the active site; flavoproteins are therefore among the most intensively studied enzyme families. Biological activity of the flavin proceeds via an electronically excited state, from which, depending on the amino-acid environment, a specific mechanism leads to a biological process (photocycle). Electronic and vibrational spectroscopy are important analytical tools for understanding the initial photoexcitation step of flavins. In this work, the processes occurring in riboflavin (RF) during and after optical excitation were investigated by theoretical means. Quantum-chemical calculations of the vibrational spectra of riboflavin, also called lactoflavin or vitamin B2 and the parent molecule of the chromophores of biological blue-light receptors, were carried out in its electronic ground state and its lowest excited state. Furthermore, vibronic (vibrational plus electronic) absorption spectra and a vibronic emission spectrum were calculated. The calculated vibrational and electronic spectra are in good qualitative and quantitative agreement with measured values and thus help to assign the experimental signals of flavin photoexcitation. Immediately after photoexcitation, a loss of double-bond character in the polar region of the ring system was observed, which gives rise to the vibronic fine structure in the electronic absorption and emission spectra.
It was also found that, in addition to the vibronic effects, solvent effects are important for a quantitative understanding of the photophysics of flavins in solution. To unravel the details of the optical excitation process as the initial elementary step of signal transduction, ultrafast (femtosecond-resolved) experiments probing the photoactivation of flavin have been carried out. This work aims to contribute to the further understanding and interpretation of these experiments by studying the post-excitation vibrational dynamics of riboflavin and microsolvated riboflavin. For this purpose, 200 fs of excited-state molecular dynamics were considered. By analyzing characteristic atomic motions and by calculating time-resolved emission spectra, it was found that vibrations in the ring system of riboflavin set in after optical excitation. These calculations make it possible to follow the redistribution of energy in the excited state. In addition to the theoretical studies of riboflavin in the gas phase and in solution, a model of a BLUF (Blue-Light photoreceptor Using Flavin) domain, a flavin-based photoreceptor, was constructed. It turns out that the analysis methods applied in this work can also be transferred to biologically relevant systems.
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering, where one seeks a model which automatically assigns new, previously unseen emails to the class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning. Many learning methods assume that these training data are governed by the same distribution as the test data to which the predictive model will be exposed at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering: email service providers employ spam filters, and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters. Most of the existing work casts such situations as learning robust models that are insensitive to small changes of the data-generation process. These models are constructed under the worst-case assumption that the changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach cannot realistically model the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: we model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action.
We model the players' interests as minimizing their own cost functions, both of which depend on both players' actions. The learner's action is to choose the model parameters; the data generator's action is to perturb the training data, which reflects the modification of the data-generation process with respect to the past data. We extensively study three instances of prediction games which differ in the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrium prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this data-set shift. In case studies on email spam filtering we empirically explore properties of all derived models as well as several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game outperform the other baseline methods in the majority of cases.
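To make the simultaneous-move setting concrete, the following toy sketch (not the thesis's actual model; the quadratic cost functions and all parameters are illustrative) finds the Nash equilibrium of a one-dimensional prediction game by best-response iteration, using the closed-form minimizers of the two costs:

```python
def nash_prediction_game(x0=1.0, y=1.0, y_adv=0.0, lam=1.0, gamma=1.0, iters=100):
    """Best-response iteration for a toy one-shot prediction game.

    Learner picks a weight w minimizing   (w*(x0 + d) - y)**2 + lam * w**2.
    Generator picks a shift d minimizing  (w*(x0 + d) - y_adv)**2 + gamma * d**2.
    Both best responses follow in closed form by setting the derivative to zero.
    """
    w, d = 0.0, 0.0
    for _ in range(iters):
        z = x0 + d
        w = y * z / (z * z + lam)                    # learner's best response to d
        d = w * (y_adv - w * x0) / (w * w + gamma)   # generator's best response to w
    return w, d

w, d = nash_prediction_game()
```

At the returned pair, neither player can reduce their cost by deviating unilaterally, i.e. the iteration has settled into a Nash equilibrium of this toy game; in the Stackelberg variant, one player would instead commit first while the other best-responds with full knowledge of that commitment.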
Porous materials (e.g., zeolites, activated carbon, etc.) have found various applications in industry, for example as sorbents, catalyst supports, and membranes for separation processes. Recently, much attention has been focused on synthesizing porous polymer materials, and a vast number of tailor-made polymeric systems with tunable properties has been investigated. Very often, however, the starting substances for these polymers are of petrochemical origin, and the processes are, all in all, not sustainable. Moreover, the new polymers have challenged existing characterization methodologies, which have to be developed further to address the demands of the novel materials. Some standard techniques for the analysis of porous substances, such as nitrogen sorption at 77 K, do not seem sufficient to answer all arising questions about the microstructure of such materials. In this thesis, microporous polymers from an abundant natural resource, betulin, are presented. Betulin is a large-scale byproduct of the wood industry, and its content in birch bark can reach 30 wt.%. Owing to its rigid structure, polymer networks with intrinsic microporosity could be synthesized and characterized. Apart from standard nitrogen and carbon dioxide sorption at 77 K and 273 K, respectively, gas sorption has been examined not only with various gases (hydrogen and argon) but also at various temperatures. Additional techniques such as X-ray scattering and xenon NMR have been employed to gain insight into the microporous structure of the material. Starting from insoluble polymer networks with promising gas selectivities, soluble polyesters have been synthesized and processed into a cast film. Such materials are suitable for membrane applications in gas separation. Betulin as a starting compound for polyester synthesis made it possible to prepare, and for the first time to thoroughly analyse, a microporous polyester with respect to its pores and microstructure.
It was established that nitrogen adsorption at 87 K can be a better method to resolve the microstructure of the material. In addition, other betulin-based polymers such as polyurethanes and polyethylene glycol bioconjugates are presented. Altogether, it has been shown that betulin, as an abundant natural resource, is a suitable and cheap starting compound for a range of polymers with various potential applications.
The present work is devoted to establishing a new generation of self-healing anticorrosion coatings for the protection of metals. The concept of self-healing anticorrosion coatings combines a passive part, represented by the matrix of a conventional coating, with an active part, represented by micron-sized capsules loaded with corrosion inhibitor. Polymers were chosen as the class of compounds most suitable for capsule preparation. The morphology of capsules made of crosslinked polymers, however, was found to depend on the nature of the encapsulated liquid. Therefore, a systematic analysis of the morphology of capsules consisting of a crosslinked polymer and a solvent was performed. Three classes of polymers, polyurethane, polyurea and polyamide, were chosen. Capsules made of these polymers and eight solvents of different polarity were synthesized via interfacial polymerization. It was shown that the morphology of the resulting capsules is specific for every polymer-solvent pair. Formation of capsules with three general types of morphology (core-shell, compact, and multicompartment) was demonstrated by means of Scanning Electron Microscopy. Compact morphology was assumed to result from specific polymer-solvent interactions and to be analogous to the process of swelling. To verify this hypothesis, pure polyurethane, polyurea and polyamide were synthesized and their swelling behavior in the solvents used as the encapsulated material was investigated. It was shown that the swelling behavior of the polymers in most cases correlates with the capsule morphology. The different morphologies (compact, core-shell and multicompartment) were therefore attributed to specific polymer-solvent interactions and discussed in terms of “good” and “poor” solvents.
Capsules with core-shell morphology are formed when the encapsulated liquid is a “poor” solvent for the chosen polymer, while compact morphologies are formed when the solvent is “good”. Multicompartment morphology is explained by the formation of infinite networks, or gelation, of crosslinked polymers. If gelation occurs after phase separation in the system has been achieved, core-shell morphology results. If gelation of the polymer occurs well before crosslinking is complete, further condensation of the polymer due to crosslinking may lead to the formation of porous or multicompartment morphologies. It was concluded that, in general, the morphology of capsules consisting of a given polymer-solvent pair can be predicted on the basis of the polymer-solvent behavior. In some cases the swelling behavior and morphology may not match; the reasons for this are discussed in detail in the thesis. The discussed approach can only predict capsule morphology for particular polymer-solvent pairs. In practice, capsule design involves trying a great number of polymer-solvent combinations, and more complex systems consisting of three, four or even more components are often used. Evaluating the swelling behavior of each component pair of such systems becomes impractical. The solubility parameter approach was therefore found to be more useful, as it considers the properties of each single component instead of pairs of components. Accordingly, the Hansen Solubility Parameter (HSP) approach was used for further analysis. Solubility spheres were constructed for polyurethane, polyurea and polyamide. For this, a three-dimensional graph is plotted with the dispersion, polar and hydrogen-bonding components of the solubility parameter, obtained from the literature, as the orthogonal axes. The HSP of the solvents are used as the coordinates of the points on the HSP graph.
A sphere with a certain radius is then located on the graph such that the “good” solvents lie inside the sphere while the “poor” ones lie outside. Both the location of the sphere center and the sphere radius are fitted according to the information on the polymer's swelling behavior in a number of solvents. According to the observed correlation between capsule morphology and the swelling behavior of the polymers, solvents located inside the solubility sphere of a polymer give capsules with compact morphologies, whereas solvents located outside the solubility sphere of the polymer give either core-shell or multicompartment capsules in combination with the chosen polymer. Once the solubility sphere of a polymer is found, the solubility/swelling behavior can be extrapolated to all possible substances. HSP theory therefore allows prediction of the polymer solubility/swelling behavior, and consequently of the capsule morphology, for any given substance with known HSP parameters on the basis of limited data. This makes the theory attractive for application in chemistry and technology, since the choice of system components is usually performed on the basis of a large number of different parameters that must mutually match. Even a slight change of the technology sometimes makes it necessary to find an analogue of a given solvent with similar solvency but different chemistry; in such cases the HSP approach is indispensable. In the second part of the work, examples of applying HSP to the fabrication of capsules with on-demand morphology are presented. Capsules with compact or core-shell morphology containing corrosion inhibitors were synthesized. Thus, alkoxysilanes possessing a long hydrophobic tail, combining passivating and water-repelling properties, were encapsulated in a polyurethane shell. The mechanism of action of the active material required core-shell morphology of the capsules.
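The sphere construction described above rests on the standard Hansen distance between two sets of solubility parameters. A minimal sketch (the polymer center, sphere radius, and solvent triples below are purely illustrative, not values from the thesis):

```python
import math

def hansen_distance(p1, p2):
    """Hansen distance Ra between two (dD, dP, dH) triples, in MPa**0.5:
    Ra**2 = 4*(dD1 - dD2)**2 + (dP1 - dP2)**2 + (dH1 - dH2)**2
    (the conventional factor 4 weights the dispersion component).
    """
    dd, dp, dh = (a - b for a, b in zip(p1, p2))
    return math.sqrt(4 * dd * dd + dp * dp + dh * dh)

def is_good_solvent(polymer, solvent, r0):
    """RED = Ra / R0 < 1  ->  the solvent lies inside the solubility sphere."""
    return hansen_distance(polymer, solvent) / r0 < 1.0

# Hypothetical fitted sphere for some polymer (center and radius R0):
polymer = (18.0, 10.0, 7.0)
r0 = 8.0
print(is_good_solvent(polymer, (16.0, 8.0, 6.0), r0))  # this triple lies inside
```

Fitting the sphere then amounts to choosing the center and R0 so that the observed good/poor classification of the tested solvents is reproduced as well as possible.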
The new hybrid corrosion inhibitor, cerium diethylhexyl phosphate, was encapsulated in polyamide shells in order to facilitate dispersion of the substance and improve its adhesion to the coating matrix. Commercially available antifouling agents were encapsulated in polyurethane shells in order to control their release behavior and colloidal stability. Capsules with compact morphology made of polyurea containing the liquid corrosion inhibitor 2-methyl benzothiazole were synthesized in order to improve the colloidal stability of the substance; compact capsules allow slower release of the encapsulated liquid than core-shell ones. Where “in-situ” encapsulation is not possible because the oil-soluble monomer reacts with the material to be encapsulated, a solution was proposed: capsules of the desired morphology should be preformed and loaded only after the monomer has been deactivated by completion of the polymerization reaction. In this way, compact polyurea capsules containing the highly effective but chemically active corrosion inhibitors 8-hydroxyquinoline and benzotriazole were fabricated. All the resulting capsules were successfully introduced into model coatings. The efficiency of the resulting “smart” self-healing anticorrosion coatings on steel and on aluminium alloy of the AA-2024 series was evaluated using characterization techniques such as the Scanning Vibrating Electrode Technique, Electrochemical Impedance Spectroscopy, and salt-spray chamber tests.
“All children must be educated to become valuable people,” demanded Margot Honecker, the GDR's Minister of Education from 1963 to 1989. While liberal sociologists of youth understand adolescence as a moratorium, granting young people latitude to question prevailing social norms and to try out self-determined ways of life without having to answer for their actions in the same way as adults, young people in the GDR were judged by the extent to which they conformed to the ideal of the “all-roundly educated socialist personality”. In Honecker's view, the free development of the individual would become possible only under communism; individual development had no value of its own for her. The political claim to education extended in principle to all spheres of young people's lives. Spaces for self-development in the GDR were narrowly circumscribed, both materially and ideologically, a circumstance the West German sociologist of education Jürgen Zinnecker described as a “youth moratorium in barracked form”. Children and adolescents were exposed to the pressure of political conformity to a particularly high degree. Although the SED's educational claim was in principle directed at all citizens, children and adolescents, unlike adults, had not yet found an independent position within the social fabric and therefore had fewer opportunities to evade political influence. The Youth Law of 1974 established the socialist personality as the goal of education, which parents too were obliged to follow. Educational opportunities were made dependent on conformity to prescribed norms from an early age; deviant behavior could be rigidly punished and have grave consequences for one's further life course.
Even though most young people appeared to fulfill the demands of the state and professed their attachment to SED policy whenever required, they were in fact at best indifferent toward this policy. The “contradiction between word and deed” was one of the most serious problems the rulers faced in dealing with adolescents. There were also young people, however, who consciously accepted restrictions in order to realize their ideas of a self-determined life. Even slight deviation from explicit or unspoken expectations exposed them to considerable state intervention in their personal existence. The most extreme forms of deviation were applications to emigrate and attempts to flee. Young people were disproportionately represented among applicants and “Republikflüchtige” (those who fled the republic). The dissertation examines the tension between state-prescribed life paths and the self-willed shaping of various spheres of life by children and adolescents during the years of Honecker's rule, 1971 to 1989, in the Schwerin district.
This study follows the debate in comparative public administration research on the role of advisory arrangements in central governments. The aim of this study is to explain the mechanisms by which these actors gain their alleged role in government decision-making. Hence, it analyses advisory arrangements that are proactively involved in executive decision-making and may compete with the permanent bureaucracy by offering policy advice to political executives. The study argues that these advisory arrangements influence government policy-making by "institutional politics", i.e. by shaping the institutional underpinnings to govern or rather the "rules of the executive game" in order to strengthen their own position or that of their clients. The theoretical argument of this study follows the neo-institutionalist turn in organization theory and defines institutional politics as gradual institutionalization processes between institutions and organizational actors. It applies a broader definition of institutions as sets of regulative, normative and cognitive pillars. Following the "power-distributional approach" such gradual institutionalization processes are influenced by structure-oriented characteristics, i.e. the nature of the objects of institutional politics, in particular the freedom of interpretation in their application, as well as the distinct constraints of the institutional context. In addition, institutional politics are influenced by agency-oriented characteristics, i.e. the ambitions of actors to act as "would-be change agents". These two explanatory dimensions result in four ideal-typical mechanisms of institutional politics: layering, displacement, drift, and conversion, which correspond to four ideal-types of would-be change agents. 
The study examines the ambitions of advisory arrangements in institutional politics in an exploratory manner; the relevance of the institutional context is analyzed via expectation hypotheses on the effects of four institutional context features that are regarded as relevant in the scholarly debate: (1) the party composition of governments, (2) the structuring principles in cabinet, (3) the administrative tradition, and (4) the formal politicization of the ministerial bureaucracy. The study follows a "most similar systems design" and conducts qualitative case studies on the role of advisory arrangements at the center of the German and British governments, i.e. the Prime Minister's Office and the Ministry of Finance, over a longer period (1969/1970-2005). Three time periods are scrutinized per country: the British case studies examine the role of advisory arrangements at the Cabinet Office, the Prime Minister's Office, and the Ministry of Finance under Prime Ministers Heath (1970-74), Thatcher (1979-87) and Blair (1997-2005); the German case studies examine the role of advisory arrangements at the Federal Chancellery and the Federal Ministry of Finance during the Brandt government (1969-74), the Kohl government (1982-87) and the Schröder government (1998-2005). For the empirical analysis, the results of a document analysis were triangulated with the findings of 75 semi-structured expert interviews. The comparative analysis reveals different patterns of institutional politics. The German advisory arrangements engaged initially in displacement but soon turned towards layering and drift, i.e. after an initial displacement of the pre-existing institutional underpinnings to govern, they increasingly laid new elements onto existing ones and took the non-deliberative decision to neglect the adaptation of the existing rules of the executive game to changing environmental demands.
The British advisory arrangements were mostly involved in displacement and conversion, despite occasional layering, i.e. they displaced the pre-existing institutional underpinnings to govern with new rules of the executive game and transformed and realigned them, sometimes also layering new elements onto pre-existing ones. The structure- and agency-oriented characteristics explain these patterns of institutional politics. First, the study shows that the institutional context limits the institutional politics in Germany and facilitates the institutional politics in the UK. Second, the freedom of interpreting the application of institutional targets is relevant and could be observed via the different ambitions of advisory arrangements across countries and over time, confirming, third, that the interests of such would-be change agents are likewise important to understand the patterns of institutional politics. The study concludes that the role of advisory arrangements in government policy-making rests not only upon their policy-related, party-political or media-advisory role for political executives, but especially upon their activities in institutional politics, resulting in distinct institutional constraints on all actors in government policy-making – including their own role in these processes.
A point process is a mechanism that randomly realizes locally finite point measures. One of the main results of this thesis is an existence theorem for a new class of point processes with a so-called signed Lévy pseudo measure L, which is an extension of the class of infinitely divisible point processes. The construction approach combines the classical point process theory, as developed by Kerstan, Matthes and Mecke, with the method of cluster expansions from statistical mechanics. Here the starting point is a family of signed Radon measures, which defines on the one hand the Lévy pseudo measure L and on the other hand, locally, the point process. The relation between L and the process is the following: the point process solves the integral cluster equation determined by L. We show that the results from the classical theory of infinitely divisible point processes carry over in a natural way to the larger class of point processes with a signed Lévy pseudo measure. In this way we obtain, e.g., a criterion for simplicity and a characterization through the cluster equation, interpreted as an integration-by-parts formula, for such point processes. Our main result in chapter 3 is a representation theorem for the factorial moment measures of the above point processes. With its help we identify the permanental and determinantal point processes, which belong to the classes of boson and fermion processes, respectively. As a by-product we obtain a representation of the (reduced) Palm kernels of infinitely divisible point processes. In chapter 4 we see how the existence theorem enables us to construct (infinitely extended) Gibbs, quantum-Bose and polymer processes. The so-called polymer processes seem to be constructed here for the first time. In the last part of this thesis we prove that the family of cluster equations has certain stability properties with respect to the transformation of its solutions.
This is used first to show how large the class of solutions of such equations is, and second to establish the cluster theorem of Kerstan, Matthes and Mecke in our setting. With its help we are able to enlarge the class of Pólya processes to the so-called branching Pólya processes. The last sections of this work are about thinning and splitting of point processes. One main result is that the classes of boson and fermion processes remain closed under thinning. We use the results on thinning to identify a subclass of point processes with a signed Lévy pseudo measure as doubly stochastic Poisson processes. We also pose the following question: assume you observe a realization of a thinned point process; what is the distribution of the deleted points? Surprisingly, the Papangelou kernel of the thinning is, up to a constant factor, given by the intensity measure of this conditional probability, called the splitting kernel.
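The thinning operation discussed above can be illustrated with a small numerical sketch: independent thinning of a homogeneous Poisson process, the simplest case, where the thinned process is again Poisson with reduced intensity. The intensity, interval length, and retention probability below are arbitrary choices for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_process(lam, length, rng):
    """Points of a homogeneous Poisson process with intensity lam on [0, length)."""
    n = rng.poisson(lam * length)
    return np.sort(rng.uniform(0.0, length, size=n))

def thin(points, p, rng):
    """Independent thinning: keep each point with probability p.
    Returns (kept, deleted); asking for the law of `deleted` given the
    thinned realization is the 'splitting' question posed above."""
    keep = rng.random(len(points)) < p
    return points[keep], points[~keep]

pts = poisson_process(5.0, 1000.0, rng)
kept, deleted = thin(pts, 0.3, rng)
# empirically, `kept` behaves like a Poisson process with intensity 0.3 * 5.0
```

For more general point processes (e.g. boson or fermion processes) the retention decision is still made pointwise, which is why the class membership results under thinning are non-trivial.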
Since available phosphate (Pi) resources in soil are limited, symbiotic interactions between plant roots and arbuscular mycorrhizal (AM) fungi are a widespread strategy to improve plant phosphate nutrition. The repression of AM symbiosis by a high plant Pi-status indicates a link between Pi homeostasis signalling and AM symbiosis development. This assumption is supported by the systemic induction of several microRNA399 (miR399) primary transcripts in shoots and a simultaneous accumulation of mature miR399 in roots of mycorrhizal plants. However, the physiological role of this miR399 expression pattern is still elusive and raises the question of whether other miRNAs are also involved in AM symbiosis. Therefore, a deep sequencing approach was applied to investigate miRNA-mediated posttranscriptional gene regulation in M. truncatula mycorrhizal roots. Degradome analysis revealed that 185 transcripts were cleaved by miRNAs, of which the majority encoded transcription factors and disease resistance genes, suggesting a tight control of transcriptional reprogramming and a downregulation of defence responses by several miRNAs in mycorrhizal roots. Interestingly, 45 of the miRNA-cleaved transcripts were significantly differentially regulated between mycorrhizal and non-mycorrhizal roots. In addition, key components of the Pi homeostasis signalling pathway were analyzed with respect to their expression during AM symbiosis development. MtPhr1 overexpression and time course expression data suggested a strong interrelation between the components of the PHR1-miR399-PHO2 signalling pathway and AM symbiosis, predominantly during later stages of symbiosis. In situ hybridizations confirmed accumulation of mature miR399 in the phloem and in arbuscule-containing cortex cells of mycorrhizal roots. Moreover, a novel target of the miR399 family, named MtPt8, was identified by the above-mentioned degradome analysis.
MtPt8 encodes a Pi-transporter exclusively transcribed in mycorrhizal roots, and its promoter activity was restricted to arbuscule-containing cells. At a low Pi-status, MtPt8 transcript abundance inversely correlated with the mature miR399 expression pattern. Increased MtPt8 transcript levels were accompanied by an elevated symbiotic Pi-uptake efficiency, indicating its impact on balancing plant and fungal Pi-acquisition. In conclusion, this study provides evidence for a direct link between the regulatory mechanisms of plant Pi-homeostasis and AM symbiosis at a cell-specific level. The results of this study, especially the interaction of miR399 and MtPt8, provide a fundamental step for future studies of plant-microbe interactions with regard to agricultural and ecological aspects.
This work focused on the construction of a DNA-based nanostructure. The universal four-letter code of DNA makes it possible to address bonds at the molecular level, and the chemical and physical properties of DNA predestine this macromolecule for use as a building element for nanostructures. The goal of this work was to span a DNA strand between two anchor points. To this end, a method had to be developed for depositing functional molecules as anchor elements on a surface with spatial resolution. The deposition of these molecules had to take place on the lower micrometre scale in order to match the dimensions of the DNA and of the intended nanostructure. The procedure developed specifically for this task exploits the biotin-neutravidin binding pair. Using an atomic force microscope (AFM), an AFM tip converted into a "pen" was loaded with the neutravidin "ink" in such a way that neutravidin could be deposited on the lower micrometre scale. This neutravidin molecule acted as the link between the biotinylated glass surface and the actual address molecule. The neutravidin patch thus generated could then be functionalized by incubation with a biotinylated address molecule. The method owes its name to the possibility of depositing and addressing neutravidin repeatedly, so that a multi-component array could be built up sequentially; the limitation of being able to deposit only a single substance with an AFM was thereby circumvented. Furthermore, anchor elements had to be created in order to immobilize the DNA at defined points.
The DNA was processed with molecular-biology methods, with the aim of generating a DNA strand carrying complementary address sequences at both ends so that it could bind specifically to the surface-bound anchor elements. In accordance with the geometry of the anchor points written with the AFM and the oligonucleotide-mediated addresses, a defined DNA structure is formed. The assembled DNA nanostructure was verified by fluorescence microscopy. The nanoscale interaction of DNA-binding molecules with the generated DNA structure was demonstrated by binding PNA (peptide nucleic acid) to the DNA double strand. This PNA binding in turn constitutes a functional structural element on the nanometre scale and is understood as a nanostructure building block.
Growing populations, continued economic development, and limited natural resources are critical factors affecting sustainable development. These factors are particularly pertinent in developing countries in which large parts of the population live at a subsistence level and options for sustainable development are limited. Therefore, addressing sustainable land use strategies in such contexts requires that decision makers have access to evidence-based impact assessment tools that can help in policy design and implementation. Ex-ante impact assessment is an emerging field poised at the science-policy interface and is used to assess the potential impacts of policy while also exploring trade-offs between economic, social and environmental sustainability targets. The objective of this study was to operationalise the impact assessment of land use scenarios in the context of developing countries that are characterised by limited data availability and quality. The Framework for Participatory Impact Assessment (FoPIA) was selected for this study because it allows for the integration of various sustainability dimensions, the handling of complexity, and the incorporation of local stakeholder perceptions. FoPIA, which was originally developed for the European context, was adapted to the conditions of developing countries, and its implementation was demonstrated in five selected case studies. 
In each case study, different land use options were assessed, including (i) alternative spatial planning policies aimed at the controlled expansion of rural-urban development in the Yogyakarta region (Indonesia), (ii) the expansion of soil and water conservation measures in the Oum Zessar watershed (Tunisia), (iii) the use of land conversion and the afforestation of agricultural areas to reduce soil erosion in Guyuan district (China), (iv) agricultural intensification and the potential for organic agriculture in Bijapur district (India), and (v) land division and privatisation in Narok district (Kenya). The FoPIA method was effectively adapted by dividing the assessment into three conceptual steps: (i) scenario development; (ii) specification of the sustainability context; and (iii) scenario impact assessment. A new methodological approach was developed for communicating alternative land use scenarios to local stakeholders and experts and for identifying recommendations for future land use strategies. Stakeholder and expert knowledge served as the main source of information for the impact assessment and was complemented by available quantitative data. Based on the findings from the five case studies, FoPIA was found to be suitable for implementing the impact assessment at the case study level while ensuring a high level of transparency. FoPIA supports the identification of causal relationships underlying regional land use problems, facilitates communication among stakeholders and illustrates the effects of alternative decision options with respect to all three dimensions of sustainable development. Overall, FoPIA is an appropriate tool for performing preliminary assessments but cannot replace a comprehensive quantitative impact assessment; whenever possible, it should be accompanied by evidence from monitoring data or analytical tools.
When using FoPIA for a policy oriented impact assessment, it is recommended that the process should follow an integrated, complementary approach that combines quantitative models, scenario techniques, and participatory methods.
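The weighted aggregation at the core of such a participatory scoring exercise can be illustrated with a minimal sketch. The criteria, weights, scenario names, and ratings below are entirely hypothetical placeholders, not values from the FoPIA case studies; they only show how stakeholder ratings per sustainability dimension can be combined into a scenario ranking.

```python
# Hypothetical FoPIA-style scoring: stakeholders rate each scenario's impact
# on sustainability criteria from -3 (strong negative) to +3 (strong positive);
# the weights reflect an assumed regional sustainability context.
weights = {"economic": 0.4, "social": 0.3, "environmental": 0.3}  # assumed
scores = {  # assumed stakeholder ratings per scenario and criterion
    "business_as_usual": {"economic": 1, "social": -1, "environmental": -2},
    "conservation":      {"economic": -1, "social": 1, "environmental": 2},
}

def aggregate(scenario):
    """Weighted sum of stakeholder ratings for one scenario."""
    return sum(weights[c] * v for c, v in scores[scenario].items())

# Rank scenarios by their aggregated sustainability impact, best first.
ranking = sorted(scores, key=aggregate, reverse=True)
```

In practice such scores would be elicited in stakeholder workshops and cross-checked against the available quantitative data, as the text recommends.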
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP and within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single filament experiments, where abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however, namely that the actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate the unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. 
A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
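The favored mechanism — a transition occurring at a random site within the filament, pausing depolymerization once the shrinking end reaches the transformed subunit — can be sketched as a small Monte Carlo model. The filament length and all rates below are illustrative placeholders, not the measured values from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_interruption_time(n_subunits, depoly_rate, transition_rate, rng):
    """One stochastic realization: a filament shrinks from one end at
    depoly_rate (subunits/s) while each subunit independently undergoes a
    local transition at transition_rate (1/s). Shrinkage pauses when the
    end reaches an already-transformed site; returns the pause time."""
    # Exponential (memoryless) transition time for every subunit.
    transition_times = rng.exponential(1.0 / transition_rate, size=n_subunits)
    t = 0.0
    for i in range(n_subunits):  # subunit i is the i-th removed from the end
        t += rng.exponential(1.0 / depoly_rate)  # waiting time to reach subunit i
        if transition_times[i] < t:
            return t  # end hit a transformed subunit: interruption
    return np.inf  # fully depolymerized without interruption

# Sample the distribution of the time until the first interruption.
times = [first_interruption_time(1000, 50.0, 1e-3, rng) for _ in range(200)]
```

Comparing such simulated first-interruption-time distributions for competing mechanisms (end blocking, collective transition, local transition) against the measured one is the discriminating step described above.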
Numerical simulation of the Antarctic ice sheet and its dynamic response to external perturbations
(2012)
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating processes have been discovered and refined for the fabrication of organic thin films for various opto-electronic applications. Among others, phthalocyanine molecules have been studied intensively in photoactive layers for solar cell fabrication. Because of their low or unknown solubility, phthalocyanine layers have usually been prepared by vacuum evaporation; alternatively, the solubility has been increased by chemical synthesis, which, however, compromises the properties of the phthalocyanine (Pc). In this work, the solubility, optical absorption, and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Owing to its sufficient solubility, stability, and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigation. By spin-coating CuPc from TFA solution, a thin film was deposited on the substrate from the evaporating solution. After the solvent has evaporated, CuPc nanoribbons cover the substrate. The nanoribbons have a thickness of about 1 nm (a typical dimension of a CuPc molecule) and varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin coating or by other wet-coating processes, such as dip coating. Similar fibrillar structures form when other metal phthalocyanines, such as iron and magnesium phthalocyanine, are wet-coated from TFA solution, and also on other substrates, such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were examined in detail by X-ray diffraction, spectroscopy, and microscopy.
It is shown that the nanoribbons do not form in solution but arise through evaporation of the solvent and supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations, and the mechanism of nanoribbon formation was studied in detail. The formation of the CuPc nanoribbons from a supersaturated solution is discussed within nucleation and growth theory, and the shape of the nanoribbons is discussed in terms of the interaction between the molecules and the substrate. The wet-processed CuPc thin film was employed as the donor layer in organic bilayer solar cells with the C60 molecule as acceptor, and the power conversion efficiency of such cells was investigated as a function of the CuPc layer thickness.
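The nucleation-and-growth argument can be connected to the standard expressions of classical nucleation theory; the following is a textbook sketch (not taken from the thesis), in which the supersaturation ratio S sets the height of the nucleation barrier and thereby the nucleation rate:

```latex
% Classical nucleation theory for a supersaturated solution:
% \gamma = interfacial free energy, \Omega = molecular volume,
% S = c / c_{\mathrm{sat}} = supersaturation ratio, J_0 = kinetic prefactor.
\Delta G^{*} = \frac{16\pi\,\gamma^{3}\Omega^{2}}{3\,\bigl(k_{B}T\ln S\bigr)^{2}},
\qquad
J = J_{0}\,\exp\!\Bigl(-\frac{\Delta G^{*}}{k_{B}T}\Bigr)
```

As the solvent evaporates, S grows, the barrier ΔG* collapses, and the nucleation rate J rises sharply — consistent with ribbons forming only upon drying rather than in the bulk solution.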
This work describes the synthesis and characterization of stimuli-responsive polymers made by reversible addition-fragmentation chain transfer (RAFT) polymerization and the investigation of their self-assembly into "smart" hydrogels. In particular, the hydrogels were designed to swell at low temperature and could be reversibly switched to a collapsed hydrophobic state by raising the temperature. Starting from two constituents, a short permanently hydrophobic polystyrene (PS) block and a thermo-responsive poly(methoxy diethylene glycol acrylate) (PMDEGA) block, various gelation behaviors and switching temperatures were achieved. New RAFT agents bearing tert-butyl benzoate or benzoic acid groups were developed for the synthesis of diblock, symmetrical triblock and 3-arm star block copolymers. Thus, specific end groups were attached to the polymers that facilitate efficient macromolecular characterization, e.g., by routine 1H-NMR spectroscopy. Further, the carboxyl end-groups allowed the various polymers to be functionalized with a fluorophore. Because reports on PMDEGA have been extremely rare, the thermo-responsive behavior of the polymer was investigated first, and the influence of factors such as molar mass, nature of the end-groups, and architecture was studied. The use of special RAFT agents enabled the design of polymers with specific hydrophobic and hydrophilic end-groups. Cloud points (CP) of the polymers proved to be sensitive to all molecular variables studied, namely molar mass and the nature and number of the end-groups, up to relatively high molar masses. Thus, by changing molecular parameters, CPs of PMDEGA could easily be adjusted within the physiologically interesting range of 20 to 40 °C. A second responsivity, namely to light, was added to the PMDEGA system via random copolymerization of MDEGA with a specifically designed photo-switchable azobenzene acrylate.
The composition of the copolymers was varied in order to determine the optimal conditions for an isothermal cloud point variation triggered by light. Though reversible light-induced solubility changes were achieved, the differences between the cloud points before and after the irradiation were small. Remarkably, the response to light differed from common observations for azobenzene-based systems, as CPs decreased after UV irradiation, i.e., with increasing content of cis-azobenzene units. The viscosifying and gelling abilities of the various block copolymers made from PS and PMDEGA blocks were studied by rheology. Important differences were observed between the diblock copolymers, containing one hydrophobic PS block only, the telechelic symmetrical triblock copolymers made of two associating PS termini, and the star block copolymers having three associating end blocks. Regardless of their hydrophilic block length, diblock copolymers PS11-PMDEGAn were freely flowing even at concentrations as high as 40 wt. %. In contrast, all studied symmetrical triblock copolymers PS8-PMDEGAn-PS8 formed gels at low temperatures, at concentrations as low as 3.5 wt. % at best. When heated, these gels underwent a gel-sol transition at intermediate temperatures, well below the cloud point where phase separation occurs. The gel-sol transition shifted to markedly higher transition temperatures with increasing length of the hydrophilic inner block. This effect increased also with the number of arms and with the length of the hydrophobic end blocks. The mechanical properties of the gels were significantly altered at the cloud point, and liquid-like dispersions were formed. These could be reversibly transformed into hydrogels by cooling. This thesis demonstrates that high molar mass PMDEGA is an easily accessible, presumably also biocompatible, non-ionic thermo-responsive polymer that is well water-soluble at ambient temperature.
PMDEGA can be easily molecularly engineered via the RAFT method, implementing defined end-groups and producing different, also complex, architectures, such as amphiphilic triblock and star block copolymers with a structure analogous to associative telechelics. With appropriate design, such amphiphilic copolymers open the way to efficient, "smart" viscosifiers and gelators displaying tunable gelling and mechanical properties.
This book deals with a phenomenon that can be described as a constant element in the history of Christianity: new revelations. Despite the canonization of the Bible and the critical gaze of ecclesiastical orthodoxy, there have always been, and still are, people who claim that God the Father, Christ, the Holy Spirit, or other beings (Mary, angels, the deceased) have revealed themselves to them. Scholars of religion have largely ignored the subject so far: they have left the field of Christianity to the theologians and have at most concerned themselves with free-floating esotericism, while theologians, for their part, tend to combat new revelations apologetically. The present study therefore makes an important contribution to the study of the topic from the perspective of religious studies. The first part of the book considers the term "new revelation" (Neuoffenbarung) from various religious-studies perspectives. First, it examines what Christian theology understands by "revelation". Then the various terms circulating for the field of extra- and post-biblical revelations (new revelation, private revelation, channeling, spiritism, prophecy, and many more) are analysed. Subsequently, the arguments advanced by adherents of new revelations and by church apologists to assert or contest the legitimacy of new revelations are presented. A survey of religious history shows that new revelations are not so new after all: the claim to have received special revelations can be traced in every epoch of Christianity. After several exponents of the prophetic charisma are introduced as intellectual forerunners and kindred spirits of the modern new revelations, the latter themselves come into focus. The disparate field of bearers of new revelations of the 19th and 20th centuries is presented, ordered in a typology built around exemplary figures. To break the circle of citation that has evidently become established in the discourse, it also introduces new revealers who have so far been less well known. In a kind of deep drilling, these religious-philosophical, semantic, historical, and systematic approaches are exemplified in the second part using the Mexican new revelation "The Book of True Life" ("Das Buch des Wahren Lebens"). The analysis is not confined to an isolated object, however, but places it in a comparative context: central topoi of the "Book of True Life" (christology, the doctrine of reincarnation, criticism of the church, and many more) are presented in a synopsis with other new revelations on the one hand and mirrored against orthodox theology on the other. This reveals a twofold difference: proximity to and distance from similar phenomena, and proximity to and distance from ecclesiastical Christianity.
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization. A large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload users with information. Objects are subject to perspective foreshortening and may be occluded or, when they are too small, not displayed in a meaningful way. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations. These have a reduced degree of detail, while essential characteristics are preserved. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The single building models contained in each cell are aggregated into abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell.
For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets. Additionally, we discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for the geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, a scaling factor is computed from the virtual camera distance and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry that is near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and a generalized 3D city model. In addition, we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualizations of virtual 3D terrain models. The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, primitive (i.e., triangle), and fragment.
For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
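Two of the per-frame computations described above can be given a minimal sketch: the distance-dependent landmark scaling factor blended smoothly with a cubic Bézier curve, and the per-vertex/per-triangle steps of the isocontour technique. All numeric parameters (distance interval, maximum scale, isovalue spacing) are illustrative placeholders, not values from the thesis; in the actual techniques these computations run on the GPU.

```python
def cubic_bezier(t, p0, p1, p2, p3):
    """Evaluate a 1-D cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def landmark_scale(distance, d_near, d_far, s_max):
    """Distance-dependent scaling factor for a landmark: 1.0 up to d_near,
    s_max beyond d_far, smoothly blended in between via a cubic Bezier
    so the factor is C1-continuous at the interval boundaries."""
    if distance <= d_near:
        return 1.0
    if distance >= d_far:
        return s_max
    t = (distance - d_near) / (d_far - d_near)
    return cubic_bezier(t, 1.0, 1.0, s_max, s_max)

def quantize_height(h, iso_spacing):
    """Per-vertex step: quantize a height to the nearest isovalue."""
    return round(h / iso_spacing) * iso_spacing

def triangle_config(heights, iso_spacing):
    """Per-triangle step: number of distinct isovalues among the three
    vertices; more than one means the triangle spans step boundaries
    and must be subdivided into partial step geometry."""
    return len({quantize_height(h, iso_spacing) for h in heights})
```

For example, with a distance interval of [100, 500] and a maximum scale of 3.0, a landmark at distance 300 receives the halfway scale factor 2.0, while a triangle whose vertex heights all quantize to the same isovalue lies entirely within one step of the relief.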