Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With its focus on user-centeredness, collaboration, and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services, as well as to rethink the policy design process from a user’s point of view. Pioneered in the private sector, the methodology’s early adopters include civil services in Australia, Denmark, the United Kingdom, the United States, and Singapore. To date, there is little evidence of how and for what purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data were triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. In doing so, this study opens the black box often assumed in implementation studies. Furthermore, this research advances translation studies not only by showing variance but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. Five factors played a role in the intra-organisational translation of Design Thinking in the Singaporean Ministry: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Via their powerful radiation, stellar winds, and supernova explosions, massive stars (M_ini ≳ 8 M☉) have a tremendous impact on galactic evolution. It became clear in recent decades that the majority of massive stars reside in binary systems. This thesis sets out to quantify the impact of binarity (i.e., the presence of a companion star) on massive stars. For this purpose, massive binary systems in the Local Group, including OB-type binaries, high-mass X-ray binaries (HMXBs), and Wolf-Rayet (WR) binaries, were investigated by means of spectral, orbital, and evolutionary analyses.
The spectral analyses were performed with the non-local thermodynamic equilibrium (non-LTE) Potsdam Wolf-Rayet (PoWR) model atmosphere code. Thanks to critical updates in the calculation of the hydrostatic layers, the code became a state-of-the-art tool applicable to all types of hot massive stars (Chapter 2). The eclipsing OB-type triple system δ Ori served as an intriguing test case for the new version of the PoWR code and provided key insights into the formation of X-rays in massive stars (Chapter 3). We further analyzed two prototypical HMXBs, Vela X-1 and IGR J17544-2619, and obtained fundamental conclusions regarding the dichotomy of two basic classes of HMXBs (Chapter 4). We performed an exhaustive analysis of the binary R 145 in the Large Magellanic Cloud (LMC), which had been claimed to host the most massive stars known. We were able to disentangle the spectrum of the system and performed an orbital, polarimetric, and spectral analysis, as well as an analysis of the wind-wind collision region. The true masses of the binary components turned out to be significantly lower than suggested, impacting our understanding of the initial mass function and stellar evolution at low metallicity (Chapter 5). Finally, all known WR binaries in the Small Magellanic Cloud (SMC) were analyzed. Although it was theoretically predicted that virtually all WR stars in the SMC should form via mass transfer in binaries, we find that binarity was not important for the formation of the known WR stars in the SMC, implying a strong discrepancy between theory and observations (Chapter 6).
1. Introduction: subject and approach (aim and purpose of the study / state of research / method) ― 2. The Idéologues and the Seconde Classe of the Institut national (Etienne Bonnot de Condillac as a reference point for the Idéologues / Antoine Laurent de Lavoisier and the nomenclature of chemistry (1787) / The group of the Idéologues / The Idéologues’ institutional spheres of influence: the Classe des Sciences morales et politiques at the Institut national / Lectures at the Ecole normale de l’an III) ― 3. On the corpus: reconstruction of the prize competition (choice of topic and announcement / the first evaluation (1797) and the renewed announcement / the second evaluation (1799): announcement of the winner) ― 4. The archival corpus (on the circumstances in which the prize essays were found / series B1 (1797) / series B2 (1799)) ― 5. Analysis of the corpus: overarching topoi and argumentative structures (semiotization and de-semiotization / the utility of analysis / the emphasis on written over spoken language / materialization / the signs of mathematics as a model) ― 6. Concluding remarks ― Bibliography
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during the sudden, repetitive loading of everyday tasks, as well as during athletic performance, is important to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). Back pain patients (BPP) show an altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk. These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse three-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy controls (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting objects and stumbling while walking serve as adequate representative tasks. Neuromuscular activity of the muscles encompassing the trunk was assessed with 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability between healthy controls and back pain patients.
Clinically acceptable to excellent reliability was shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion patterns were observed in healthy adults during continuous loading (lifting of objects) with different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to increased neuromuscular activity and range of motion (ROM) of the trunk. Moreover, compared to healthy controls, BPP showed a delayed muscle response time and an extended duration until maximum neuromuscular activity in response to sudden walking perturbations. In addition, reduced lateral flexion of the trunk during perturbation was shown in BPP.
It is concluded that perturbed gait is suitable for provoking higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation patterns in back pain patients can be interpreted as increased spine loading and reduced trunk stability. This novel assessment of trunk stability is therefore suitable for identifying deficits in BPP. Assigning affected BPP to therapy interventions that focus on stabilising the trunk and aim to improve neuromuscular control in dynamic situations is indicated. Hence, sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
Natural hazards can have serious societal and economic impacts. Worldwide, around one third of economic losses due to natural hazards are attributable to floods. The majority of natural hazards are triggered by weather-related extremes such as heavy precipitation, rapid snowmelt, or extreme temperatures. Some of them, and in particular floods, are expected to further increase in frequency and/or intensity in the coming decades due to the impacts of climate change. In this context, the European Alpine region is consistently identified as particularly sensitive.
In order to enhance the resilience of societies to natural hazards, risk assessments are essential, as they can deliver comprehensive risk information to be used as a basis for effective and sustainable decision-making in natural hazards management. So far, assessment approaches have mostly focused on single societal or economic sectors; flood damage models, for example, largely concentrate on private-sector housing, while other important sectors, such as transport infrastructure, are widely neglected. However, transport infrastructure contributes considerably to economic and societal welfare, e.g. by ensuring the mobility of people and goods. In Austria, for example, the national railway network is essential for the European transit of passengers and freight as well as for opening up the complex Alpine topography. Moreover, a number of recent events show that railway infrastructure and transportation are highly vulnerable to natural hazards. As a consequence, the Austrian Federal Railways have had to cope with economic losses on the scale of several million euros as a result of flooding and other Alpine hazards.
The motivation of this thesis is to contribute to filling the knowledge gap about damage to railway infrastructure caused by natural hazards by providing new risk information for the actors and stakeholders involved in the risk management of railway transportation. Hence, in order to support decision-making towards a more effective and sustainable risk management, the following two shortcomings in natural risk research are addressed: i) the lack of dedicated models to estimate flood damage to railway infrastructure, and ii) the scarcity of insights into possible climate change impacts on the frequency of extreme weather events, with a focus on future implications for railway transportation in Austria.
With regard to flood impacts on railway infrastructure, the empirically derived damage model Railway Infrastructure Loss (RAIL) proved capable of reliably estimating both structural flood damage at exposed track sections of the Northern Railway and the resulting repair costs. The results show that the RAIL model can identify flood risk hotspots along the railway network and thus facilitates the targeted planning and implementation of (technical) risk reduction measures. However, the findings of this study also show that the development and validation of flood damage models for railway infrastructure is generally constrained by the continuing lack of detailed event and damage data.
In order to provide flood risk information on the large scale to support strategic flood risk management, the RAIL model was applied for the Austrian Mur River catchment using three different hydraulic scenarios as input as well as considering an increased risk aversion of the railway operator. Results indicate that the model is able to deliver comprehensive risk information also on the catchment level. It is furthermore demonstrated that the aspect of risk aversion can have marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
Looking at the results of the investigation into future frequencies of extreme weather events jeopardizing railway infrastructure and transportation in Austria, it appears that an increase in intense rainfall events and heat waves has to be expected, whereas heavy snowfall and cold days are likely to decrease. Furthermore, the results indicate that the frequencies of extremes are rather sensitive to changes in the underlying thresholds. This emphasizes the importance of carefully defining, validating, and, if needed, adapting the thresholds used to detect and forecast meteorological extremes. For this, continuous and standardized documentation of damaging events and near misses is a prerequisite.
Overall, the findings of the research presented in this thesis agree on the necessity to improve event and damage documentation procedures in order to enable the acquisition of comprehensive and reliable risk information via risk assessments and, thus, support strategic natural hazards management of railway infrastructure and transportation.
This thesis investigates the processing of non-canonical word orders and whether non-canonical orders involving object topicalizations, midfield scrambling and particle verbs are treated the same by native (L1) and non-native (L2) speakers. The two languages investigated are Norwegian and German.
32 L1 Norwegian and 32 L1 German advanced learners of Norwegian were tested in two experiments on object topicalization in Norwegian. The results from the online self-paced reading task and the offline agent identification task show that both groups are able to identify the non-canonical word order and show a facilitatory effect of animate subjects in their reanalysis. Similarly high error rates in the agent identification task suggest that globally unambiguous object topicalizations are a challenging structure for L1 and L2 speakers alike.
The same participants were also tested in two experiments on particle placement in Norwegian, again using a self-paced reading task, this time combined with an acceptability rating task. In the acceptability rating L1 and L2 speakers show the same preference for the verb-adjacent placement of the particle over the non-adjacent placement after the direct object. However, this preference for adjacency is only found in the L1 group during online processing, whereas the L2 group shows no preference for either order.
Another set of experiments tested 33 L1 German and 39 L1 Slavic advanced learners of German on object scrambling in ditransitive clauses in German. Non-native speakers accept both object orders, showing no preference for either order and no processing advantage for the canonical order. The L1 group, in contrast, shows a small but significant preference for the canonical dative-first order in both the judgment and the reading task.
The same participants were also tested in two experiments on the application of the split rule in German particle verbs. Advanced L2 speakers of German are able to identify particle verbs and can apply the split rule in V2 contexts in an acceptability judgment task in the same way as L1 speakers. However, unlike the L1 group, the L2 group is not sensitive to the grammaticality manipulation during online processing. They seem to be sensitive to the additional lexical information provided by the particle, but are unable to relate the split particle to the preceding verb and recognize the ungrammaticality in non-V2 contexts.
Taken together, my findings suggest that non-canonical word orders are not per se more difficult to identify for L2 speakers than L1 speakers and can trigger the same reanalysis processes as in L1 speakers. I argue that L2 speakers’ ability to identify a non-canonical word order depends on how the non-canonicity is signaled (case marking vs. surface word order), on the constituents involved (identical vs. different word types), and on the impact of the word order change on sentence meaning. Non-canonical word orders that are signaled by morphological case marking and cause no change to the sentence’s content are hard to detect for L2 speakers.
L'exil comme patrie
(2017)
This study is titled: The Role of the Intellectual in Günter Grass’s Works: “Die Plebejer proben den Aufstand” (1966), “Örtlich betäubt” (1969), “Aus dem Tagebuch einer Schnecke” (1972), and “Ein weites Feld” (1995).
The first chapter comprises three main headings in total.
II. The Intellectual
II.1 The general context
This part of the dissertation aims to provide answers to the following questions, among others: What is an intellectual? How did the concept come about? Are there differences between intellectuals, and how are they classified?
II.2 The German context
Examining the Nazi system and its historical background conveys important lessons. But why are these lessons needed? Are there traces of National Socialism today? Where were the intellectuals during the formation of National Socialism? Did National Socialism emerge only with Hitler? If earlier, in which phase did it take root in the consciousness of the Germans? Did theoretical or intellectual tendencies contribute to it?
II.3 The image of Grass as an intellectual
II.3.1 Positioning
A central thesis on Grass’s intellectual positioning is developed through the connection between Grass’s basic conception of socio-political intellectuality and the Gruppe 47. The subsequent examination of Grass’s image after the publication of his autobiographical work “Beim Häuten der Zwiebel” (2006) aims to illuminate his intellectuality not only from its positive but also from its negative profile.
From the presentation of numerous views held by Günter Grass, five thematic core points are treated as concepts. Under each concept, specific proposals on social positioning are set out.
II.3.2 Grass’s political characteristics
This section concerns his intellectual character traits, which raise several questions: Did Günter Grass pursue social activities? Did he meet the prerequisites for them? What was the scope of his activities? Did the Gruppe 47 influence Grass’s intellectual characteristics? Did Grass have a method of socio-political work at his disposal?
Günter Grass’s political language and its effect on the recipient are then examined. Next, Grass’s conception of revision is discussed, including whether it is compatible with his conception of enlightenment. The function of revision in his literary work and in his socio-political activity is then shown.
Finally, the arguments for his intellectuality are examined:
How did Grass’s socio-political activity take the concrete political framework into account? To answer this question, the connection between politics and morality must be clarified.
III. Historical context and content of the works
Under this heading, the historical context of the works under investigation is first sketched. Then, mostly drawing on evidence from each work itself, not only the core of the work and its plot but also the method applied to it are presented.
IV. The relation of the works under investigation to concrete socio-political questions
IV.1 Ways in which the intellectual interacts with society, especially during changes in socio-political processes
The central concepts of the first work are mediation, engagement, solidarity, and topicality as a yardstick. These are tied to revision through two concepts of the second work, the appeal to generations in times of change and the principle of cohesion, and are elaborated through the treatment of the process of opinion formation in the fourth work.
IV.2 Thematic aspects for preventing a Nazi regime
From the thematic perspectives of the last three works emerges a varied collection of intellectual concepts that can be used to combat nascent Nazism.
V. Pedagogical strategies of the works under investigation
The pedagogical aspects of the works under investigation are meant to convey intellectual values that make a significant contribution to solving socio-political problems and conflicts.
VI. Development of the literary and socio-political vision
Here the line of development of the socio-political vision in the works under investigation is traced.
VII. On the reception of the four works
Through engagement with the negative criticism, the aim is to demonstrate its subjectivity, so that the socio-political value of the four works is revealed.
Conformational transition of peptide-functionalized cryogels enabling shape-memory capability
(2017)
After completing his Abitur, Bobrowski had intended to study art history, but war and captivity thwarted his plan: as a member of the Wehrmacht, he was released from military service only once, for a single semester of study at the University of Berlin in the winter of 1941/1942. Bobrowski was particularly and lastingly impressed by the lecture course “Deutsche Kunst der Goethezeit” given by the chair holder Wilhelm Pinder. Despite this fundamental influence, Pinder’s ideological background never became manifest in Bobrowski’s poems. After his return from Soviet captivity at Christmas 1949, studying was out of the question for the now thirty-two-year-old. His lifelong intermedial engagement with works of visual art in his œuvre can nevertheless be interpreted as an expression of his wide-ranging cultural-historical interests and inclinations. The phases of the poet’s life correlate with a development of motifs in his picture poems: in particular, the inviolable aesthetics of great works of art helped him to overcome the horrors of the final years of the war and the privations of Soviet captivity. Didactic and moral aims initially shaped the poems written in the years after his return home, before Bobrowski managed to break away from this type of poem in both content and form and increasingly began to write poems that took on cultural-historical dimensions and placed historical, mythological, biblical, and religious-philosophical themes in contexts spanning epochs. The poems on the artists Jawlensky and Calder also touch on aspects of the cultural landscape. In the last decade of his life, Bobrowski took a growing interest in twentieth-century art, while modern architecture remained excluded from his work.
Architecture forms a leitmotif in Bobrowski’s poetry. The figurative significance of the individual sacred and secular buildings named in the poems, but also of urban and village ensembles and of individual parts of buildings, changed several times over the years. Starting from traditional early poems in iambic metre and rhyming couplets, in which architectural elements form part of a perception that blocks out everything extra-aesthetic, the meaning of sacred and secular buildings in Bobrowski’s poetry changed for the first time during the war years in Russia, which the Wehrmacht soldier spent at Lake Ilmen. In the odes written at that time, the architectural relics bear witness to suffering, death, and destruction. Still absent, however, is the idea of guilt, later so central, which was addressed only in retrospect in the poems written between his return from captivity and his early death.
Towards the end of the war and during the years of captivity, Bobrowski returned to themes of his homeland, and the architecture in his poems became an aesthetically exalted focal point of his longing for East Prussia and the Memel region. In captivity, the aspect of the sublime appeared in his poems for the first time, with reference both to painting and to architecture. After his return to Berlin, this idea was developed further in the poems on the architecture of Gothic cathedrals and the built heritage of classicism, yet in the poems written at that time the cultural heritage of Europe also stood for historical injustice and a grave guilt reaching far back in time.
In the following years, Bobrowski turned away from this criticism directed at the whole continent and concentrated on the Germans’ guilt towards the peoples of Eastern Europe. With this, architecture in his poems also acquired a new meaning. The relics of the Teutonic Order’s castles bear witness to the rule of the medieval conquerors and merge with nature: the emblematic quality of the architecture becomes part of the landscape. In the last decade of his life, he increasingly wrote poems relating to parks and urban green spaces.
The poet relied not only on personal experience but sometimes also on pictorial sources, without ever having seen the original. The poems on Chagall and Gauguin are difficult to access without the knowledge that they refer to reproductions in slim popular-science books that Bobrowski acquired shortly before writing the respective poems. It is different with the Russian churches that found their way into his poetry. Bobrowski saw all of them himself during the war, and most appear to exist to this day and can be identified with some certainty, to which the poet’s letters from that period also contribute.
Einleitung: Die Erdnussallergie zählt zu den häufigsten Nahrungsmittelallergien im Kindesalter. Bereits kleine Mengen Erdnuss (EN) können zu schweren allergischen Reaktionen führen. EN ist der häufigste Auslöser einer lebensbedrohlichen Anaphylaxie bei Kindern und Jugendlichen. Im Gegensatz zu anderen frühkindlichen Nahrungsmittelallergien entwickeln Patienten mit einer EN-Allergie nur selten eine natürliche Toleranz. Seit mehreren Jahren wird daher an kausalen Therapiemöglichkeiten für EN-Allergiker, insbesondere an der oralen Immuntherapie (OIT), geforscht. Erste kleinere Studien zur OIT bei EN-Allergie zeigten erfolgsversprechende Ergebnisse. Im Rahmen einer randomisierten, doppelblind, Placebo-kontrollierten Studie mit größerer Fallzahl werden in der vorliegenden Arbeit die klinische Wirksamkeit und Sicherheit dieser Therapieoption bei Kindern mit EN-Allergie genauer evaluiert. Des Weiteren werden immunologische Veränderungen sowie die Lebensqualität und Therapiebelastung unter OIT untersucht.
Methoden: Kinder zwischen 3-18 Jahren mit einer IgE-vermittelten EN-Allergie wurden in die Studie eingeschlossen. Vor Beginn der OIT wurde eine orale Provokation mit EN durchgeführt. Die Patienten wurden 1:1 randomisiert und entsprechend der Verum- oder Placebogruppe zugeordnet. Begonnen wurde mit 2-120 mg EN bzw. Placebo pro Tag, abhängig von der Reaktionsdosis bei der oralen Provokation. Zunächst wurde die tägliche OIT-Dosis alle zwei Wochen über etwa 14 Monate langsam bis zu einer Erhaltungsdosis von mindestens 500 mg EN (= 125 mg EN-Protein, ~ 1 kleine EN) bzw. Placebo gesteigert. Die maximal erreichte Dosis wurde dann über zwei Monate täglich zu Hause verabreicht. Im Anschluss erfolgte erneut eine orale Provokation mit EN. Der primäre Endpunkt der Studie war die Anzahl an Patienten der Verum- und Placebogruppe, die unter oraler Provokation nach OIT ≥1200 mg EN vertrugen (=„partielle Desensibilisierung“). Sowohl vor als auch nach OIT wurde ein Hautpricktest mit EN durchgeführt und EN-spezifisches IgE und IgG4 im Serum bestimmt. Außerdem wurden die Basophilenaktivierung sowie die Ausschüttung von T-Zell-spezifischen Zytokinen nach Stimulation mit EN in vitro gemessen. Anhand von Fragebögen wurde die Lebensqualität vor und nach OIT sowie die Therapiebelastung während OIT erfasst.
Ergebnisse: 62 Patienten wurden in die Studie eingeschlossen und randomisiert. Nach etwa 16 Monaten unter OIT zeigten 74,2% (23/31) der Patienten der Verumgruppe und nur 16,1% (5/31) der Placebogruppe eine „partielle Desensibilisierung“ gegenüber EN (p<0,001). Im Median vertrugen Patienten der Verumgruppe 4000 mg EN (~8 kleine EN) unter der Provokation nach OIT wohingegen Patienten der Placebogruppe nur 80 mg EN (~1/6 kleine EN) vertrugen (p<0,001). Fast die Hälfte der Patienten der Verumgruppe (41,9%) tolerierten die Höchstdosis von 18 g EN unter Provokation („komplette Desensibilisierung“). Es zeigte sich ein vergleichbares Sicherheitsprofil unter Verum- und Placebo-OIT in Bezug auf objektive Nebenwirkungen. Unter Verum-OIT kam es jedoch signifikant häufiger zu subjektiven Nebenwirkungen wie oralem Juckreiz oder Bauchschmerzen im Vergleich zu Placebo (3,7% der Verum-OIT-Gaben vs. 0,5% der Placebo-OIT-Gaben, p<0,001). Drei Kinder der Verumgruppe (9,7%) und sieben Kinder der Placebogruppe (22,6%) beendeten die Studie vorzeitig, je zwei Patienten beider Gruppen aufgrund von Nebenwirkungen. Im Gegensatz zu Placebo, zeigten sich unter Verum-OIT signifikante immunologische Veränderungen. So kam es zu einer Abnahme des EN-spezifischen Quaddeldurchmessers im Hautpricktest, einem Anstieg der EN-spezifischen IgG4-Werte im Serum sowie zu einer verminderten EN-spezifischen Zytokinsekretion, insbesondere der Th2-spezifischen Zytokine IL-4 und IL-5. Hinsichtlich der EN-spezifischen IgE-Werte sowie der EN-spezifischen Basophilenaktivierung zeigten sich hingegen keine Veränderungen unter OIT. Die Lebensqualität von Kindern der Verumgruppe war nach OIT signifikant verbessert, jedoch nicht bei Kindern der Placebogruppe. Während der OIT wurde die Therapie von fast allen Kindern (82%) und Müttern (82%) als positiv bewertet (= niedrige Therapiebelastung).
Discussion: Peanut OIT led to desensitization and a markedly increased reaction threshold to peanut in the majority of peanut-allergic children. The children are thus protected against accidental reactions to peanut in everyday life, which clearly improves their quality of life. Under the controlled study conditions, the safety profile was acceptable, with predominantly mild symptoms. Clinical desensitization was accompanied by changes at the immunological level. However, long-term studies of peanut OIT are needed to examine the clinical and immunological efficacy with respect to a possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this therapeutic concept can be transferred into clinical practice.
This thesis investigates the processing and representation of (ir)regularity in inflectional verb morphology in German and English. The focus lies on the predictions that models of morphological processing make about the production of subtypes of irregular verbs, which are usually subsumed under the category 'irregular verbs'. This dissertation therefore presents three journal articles investigating language production in healthy speakers and speakers with agrammatic aphasia, in order to fill a gap both in the availability of language production data and in systematically tested patterns of irregularity. Chapter 2 set out to investigate whether the regularity of a verb or its phonological complexity (measured in number of phonemes) better predicts the production accuracies of German speakers with agrammatic aphasia. While regular verbs were produced correctly significantly more often than mixed and irregular verbs, the production accuracies of irregular and mixed verbs did not differ for impaired participants; no influence of phonological complexity was observed. Chapter 3 aimed at teasing apart the influence of stem changes and affix type on the production accuracies of English-speaking individuals with agrammatic aphasia. The analyses revealed that the presence of stem changes, but not the type of affix, had a significant effect on production accuracies. Moreover, as four different verb types were tested, the results showed that production accuracies did not conform to a regular-irregular distinction but differed by degree of regularity. In Chapter 4, a long-lag primed picture naming design was used to study whether the differences found in the production accuracies of Chapter 3 were also associated with differences in the production latencies of non-brain-damaged speakers. A morphological priming effect was found; however, in neither experiment did the effect differ between the three verb types tested.
In addition to standard frequentist analyses, Bayesian analyses were performed. In this way, the absence of a difference in the morphological priming effect between verb types could be interpreted as actual evidence for the lack of such a difference. Hence, this thesis presents diverging results on the production of subtypes of irregular verbs in healthy and impaired adult speakers. At the same time, these results provide evidence that the conventional regular-irregular distinction is not adequate for testing models of morphological processing.
Lithospheric plates move over the low-viscosity asthenosphere, balancing several forces. The driving forces include the basal shear stress exerted by mantle convection and plate boundary forces such as slab pull and ridge push, whereas the resisting forces include inter-plate friction, trench resistance, and cratonic root resistance. These generate plate motions, the lithospheric stress field, and dynamic topography, which are observed with different geophysical methods. The orientation and tectonic regime of the observed crustal/lithospheric stress field further contribute to our knowledge of the different deformation processes occurring within the Earth's crust and lithosphere. Using numerical models, previous studies were able to identify the major forces that generate stresses in the crust and lithosphere, which also contribute to the formation of topography and drive the lithospheric plates. They showed that the first-order stress pattern, explaining about 80% of the stress field, originates from a balance of forces acting at the base of the moving lithospheric plates due to convective flow in the underlying mantle. The remaining second-order stress pattern is due to lateral density variations in the crust and lithosphere in regions of pronounced topography and high gravitational potential, such as the Himalayas and mid-ocean ridges. By linking global lithosphere dynamics to deep mantle flow, this study seeks to evaluate the influence of shallow and deep density heterogeneities on plate motions, the lithospheric stress field, and dynamic topography, using the geoid as a major constraint for mantle rheology. We use the global 3D lithosphere-asthenosphere model SLIM3D with visco-elasto-plastic rheology, coupled at 300 km depth to a spectral model of mantle flow. The complexity of the lithosphere-asthenosphere component allows for the simulation of power-law rheology with creep parameters accounting for both diffusion and dislocation creep within the uppermost 300 km.
First we investigate the influence of intra-plate friction and asthenospheric viscosity on present-day plate motions. Previous modelling studies have suggested that small friction coefficients (µ < 0.1, yield stress ~100 MPa) can lead to plate tectonics in models of mantle convection. Here we show that, in order to match present-day plate motions and net rotation, the friction coefficient must be less than 0.05. We obtain a good fit to the magnitude and orientation of observed plate velocities (NUVEL-1A) in a no-net-rotation (NNR) reference frame with µ < 0.04 and a minimum asthenosphere viscosity of ~5 × 10^19 to 10^20 Pa s. Our estimates of the net rotation (NR) of the lithosphere suggest that amplitudes of ~0.1-0.2°/Ma, similar to most observation-based estimates, can be obtained with asthenosphere viscosity cutoff values of ~10^19 to 5 × 10^19 Pa s and a friction coefficient µ < 0.05.
The second part of the study investigates further constraints on the shallow and deep mantle heterogeneities causing plate motion by predicting the lithospheric stress field and topography and validating them against observations. Lithosphere stresses and dynamic topography are computed using the same modelling setup and rheological parameters, with prescribed plate motions. We validate our results against the World Stress Map 2016 (WSM2016) and the observed residual topography. Here we tested a number of upper mantle thermal-density structures. The one used to calculate plate motions is considered the reference thermal-density structure; it is derived from a heat flow model combined with a sea floor age model. In addition, we used three different thermal-density structures derived from global S-wave velocity models to show the influence of lateral density heterogeneities in the upper 300 km on the model predictions. A large portion of the total dynamic force generating stresses in the crust/lithosphere has its origin in the deep mantle, while topography is largely influenced by shallow heterogeneities. For example, there is hardly any difference between the stress orientation patterns predicted with and without consideration of the heterogeneities in the upper mantle density structure across North America, Australia, and North Africa. In areas of high altitude, however, the crustal contribution dominates the stress orientation relative to the deep mantle contribution.
This study explores the sensitivity of all the considered surface observables with regard to the model parameters, providing insights into the influence of asthenosphere and plate boundary rheology on plate motion as we test various thermal-density structures to predict stresses and topography.
According to the classical plume hypothesis, mantle plumes are localized upwellings of hot, buoyant material in the Earth’s mantle. They have a typical mushroom shape, consisting of a large plume head, which is associated with the formation of voluminous flood basalts (a Large Igneous Province), and a narrow plume tail, which generates a linear, age-progressive chain of volcanic edifices (a hotspot track) as the tectonic plate migrates over the relatively stationary plume. Both plume heads and tails reshape large areas of the Earth’s surface over many tens of millions of years.
However, not every plume has left an exemplary record that supports the classical hypothesis. The main objective of this thesis is therefore to study how specific hotspots have created the crustal thickness pattern attributed to their volcanic activities. Using regional geodynamic models, the main chapters of this thesis address the challenge of deciphering the three individual (and increasingly complex) Réunion, Iceland, and Kerguelen hotspot histories, especially focussing on the interactions between the respective plume and nearby spreading ridges.
For this purpose, the mantle convection code ASPECT is used to set up three-dimensional numerical models, which consider the specific local surroundings of each plume by prescribing time-dependent boundary conditions for temperature and mantle flow. The combination of reconstructed plate boundaries and plate motions, large-scale global flow velocities, an inhomogeneous lithosphere thickness distribution, and a dehydration rheology represents a novel setup for regional convection models.
The model results show the crustal thickness pattern produced by the plume, which is compared to present-day topographic structures, crustal thickness estimates, and age determinations of volcanic provinces associated with hotspot activity. Altogether, the model results agree well with surface observations. Moreover, the dynamic development of the plumes in the models provides explanations for the generation of smaller, yet characteristic volcanic features that were previously unexplained. Considering the present-day state of a model as a prediction for the current temperature distribution in the mantle, it can be compared not only to observations at the surface but also to structures in the Earth’s interior as imaged by seismic tomography.
More precisely, in the case of the Réunion hotspot, the model demonstrates how the distinctive gap between the Maldives and Chagos is generated by the combination of the ridge geometry and plume-ridge interaction. Further, the Rodrigues Ridge is formed as the surface expression of a long-distance sublithospheric flow channel between the upwelling plume and the closest ridge segment, confirming the long-standing hypothesis of Morgan (1978) for the first time in a dynamic context. The Réunion plume has been studied in connection with the seismological RHUM-RUM project, which has recently provided new seismic tomography images that yield an excellent match with the geodynamic model.
Regarding the Iceland plume, the numerical model shows how plume material may have accumulated in an east-west trending corridor of thin lithosphere across Greenland and resulted in simultaneous melt generation west and east of Greenland. This provides an explanation for the extremely widespread volcanic material attributed to magma production of the Iceland hotspot and demonstrates that the model setup is also able to explain more complicated hotspot histories. The Iceland model results also agree well with newly derived seismic tomographic images.
The Kerguelen hotspot has an extremely complex history, and previous studies concluded that the plume might be dismembered or influenced by solitary waves in its conduit in order to produce the reconstructed variable melt production rate. The geodynamic model, however, shows that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby mid-ocean ridges. Moreover, the Ninetyeast Ridge in the model is created by on-ridge activities while the Kerguelen plume is located beneath the Australian plate. This also contrasts with earlier studies, which described the Ninetyeast Ridge as the result of the Indian plate passing over the plume. Furthermore, the Amsterdam-Saint Paul Plateau in the model is the result of plume material flowing from the upwelling toward the Southeast Indian Ridge, whereas previous geochemical studies attributed that volcanic province to a separate deep plume.
In summary, the three case studies presented in this thesis consistently highlight the importance of plume-ridge interaction in order to reconstruct the overall volcanic hotspot record as well as specific smaller features attributed to a certain hotspot. They also demonstrate that it is not necessary to attribute highly complicated properties to a specific plume in order to account for complex observations. Thus, this thesis contributes to the general understanding of plume dynamics and extends the very specific knowledge about the Réunion, Iceland, and Kerguelen mantle plumes.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, since it is infeasible for humans to capture all of this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, studies on named entity recognition have a long history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and to identify their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task: it extracts semantic relations between named entities, e.g., the ownership relation between two companies.
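To make the three-stage pipeline concrete, here is a deliberately naive sketch; the mini knowledge base, the dictionary-based recognizer, and the single extraction pattern are all invented for illustration and are not the systems described in this thesis:

```python
# Toy named entity mining pipeline: recognition -> linking -> relation
# extraction. All entity names, KB identifiers, and patterns are made up.

KNOWLEDGE_BASE = {
    "Acme Corp": "KB:Q1",
    "Globex": "KB:Q2",
}

def recognize(text):
    """Stage 1: find entity mentions (here: naive dictionary lookup)."""
    return [name for name in KNOWLEDGE_BASE if name in text]

def link(mentions):
    """Stage 2: disambiguate/link each mention to a KB entry."""
    return {m: KNOWLEDGE_BASE[m] for m in mentions}

def extract_relations(text, linked):
    """Stage 3: extract relations between linked entities via a pattern."""
    relations = []
    for a in linked:
        for b in linked:
            if a != b and f"{a} acquired {b}" in text:
                relations.append((linked[a], "ownership_of", linked[b]))
    return relations

text = "Acme Corp acquired Globex last year."
mentions = recognize(text)
linked = link(mentions)
print(extract_relations(text, linked))  # [('KB:Q1', 'ownership_of', 'KB:Q2')]
```

Real systems replace each stage with statistical models, but the data flow between the stages is the same.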
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, and evaluation methodologies. Furthermore, we present two of our approaches, which focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
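The abstract describes BEL only at a high level; the following toy sketch illustrates the general idea of aggregating decisions from an ensemble of simple classifiers, each operating on a randomly sampled subset of the surrounding terms. The candidate entities, context terms, and overlap scoring are invented and are not BEL's actual features or classifiers:

```python
# Sketch of ensemble-based entity linking: many simple classifiers, each
# seeing a random subset of the context, vote on the target entity.
import random
from collections import Counter

def simple_classifier(context_sample, candidates):
    """Score each candidate by term overlap with the sampled context."""
    return max(candidates, key=lambda c: len(set(candidates[c]) & set(context_sample)))

def ensemble_link(context_terms, candidates, n_classifiers=15, sample_size=4, seed=0):
    rng = random.Random(seed)
    votes = Counter(
        simple_classifier(rng.sample(context_terms, sample_size), candidates)
        for _ in range(n_classifiers)
    )
    return votes.most_common(1)[0][0]  # aggregate by majority vote

context = ["bank", "river", "water", "fishing", "boat", "shore"]
candidates = {  # invented candidate entities with descriptive terms
    "Bank_(geography)": ["river", "water", "shore", "slope"],
    "Bank_(finance)": ["money", "loan", "account", "credit"],
}
print(ensemble_link(context, candidates))  # -> Bank_(geography)
```

Sampling different context subsets makes the individual decisions cheap while the aggregation keeps the result robust, which is the trade-off the abstract's efficiency claim rests on.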
For the task of relation extraction, we focus on extracting a specific group of difficult relation types, business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. By doing so, we also provide a solution for the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
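A minimal sketch of the general seed-based bootstrapping idea (not the thesis's holistic pattern identification method): patterns are harvested from sentences containing a seed pair, and the slot order of a pattern fixes the direction of the asymmetric relation. The sentences, company names, and the capitalized-token heuristic are all made up:

```python
# Seed-based bootstrapping for a directed relation such as ownership_of.
import re

sentences = [
    "Acme Corp acquired Globex in 2010.",
    "Initech acquired Hooli in 2014.",
    "Umbrella Inc acquired Stark Ltd last week.",
]
seeds = [("Acme Corp", "Globex")]  # (owner, owned) - order encodes direction

def learn_patterns(sentences, seeds):
    """Harvest the text between each seed pair as an extraction pattern."""
    patterns = set()
    for owner, owned in seeds:
        for s in sentences:
            if owner in s and owned in s:
                patterns.add(s.split(owner, 1)[1].split(owned, 1)[0].strip())
    return patterns

def apply_patterns(sentences, patterns):
    """Find new pairs; hypothetically, names are runs of capitalized tokens."""
    name = r"([A-Z]\w*(?: [A-Z]\w*)*)"
    pairs = set()
    for p in patterns:
        regex = re.compile(name + " " + re.escape(p) + " " + name)
        for s in sentences:
            for m in regex.finditer(s):
                pairs.add((m.group(1), m.group(2)))  # (owner, owned)
    return pairs

patterns = learn_patterns(sentences, seeds)
print(apply_patterns(sentences, patterns))
```

In a full system the harvested patterns would then be classified and filtered before application, which is where the reliability improvement described above comes in.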
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, such a graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, this graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or in the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived when that graph data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. At the same time, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. However, other kinds of discrimination networks exist that can perform better in time and space than Rete networks; currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
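The following toy example, far simpler than the generalized discrimination networks of this thesis, illustrates the basic principle behind materialized graph views and their incremental maintenance: matches of a two-edge path pattern are kept stored and updated by joining each newly inserted edge with the already stored partial matches, instead of recomputing all matches from scratch.

```python
# Toy incremental maintenance of a graph view for the path pattern
# x -> y -> z. The stored edges play the role of partial matches.

class PathView:
    def __init__(self):
        self.edges = set()    # partial matches (one-edge pattern)
        self.matches = set()  # full matches (x, y, z) of the path pattern

    def insert_edge(self, u, v):
        self.edges.add((u, v))
        # Join the new edge with stored partial matches on both sides,
        # rather than re-enumerating all two-edge paths in the graph.
        self.matches |= {(u, v, w) for (a, w) in self.edges if a == v}
        self.matches |= {(x, u, v) for (x, b) in self.edges if b == u}

view = PathView()
view.insert_edge("a", "b")
view.insert_edge("b", "c")
print(sorted(view.matches))  # [('a', 'b', 'c')]
```

Storing the partial matches is exactly what buys the fast update and costs the memory; the generalized network structures discussed above let the user steer that trade-off.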
The topic of this thesis is semantic search in the context of today's information management systems. These systems include intranets, Web 3.0 applications, and many web portals that contain information in heterogeneous formats and structures. On the one hand, they hold data in structured form; on the other hand, they hold documents whose content is related to these data. These documents, however, are usually only partially structured or entirely unstructured. Travel portals, for example, describe the period, destination, and price of a trip with structured data and provide further information in unstructured form, such as descriptions of the hotel, the destination, or excursions.
The focus of today's semantic search engines lies on finding knowledge either in structured form, also called fact search, or in semi-structured or unstructured form, which is usually referred to as semantic document search. A few search engines try to close the gap between these two approaches. Although they search structured and unstructured data at the same time, they either evaluate them largely independently of each other or severely restrict the search options, for example by supporting only certain question patterns. As a result, the information available in the system is not fully exploited, and connections between individual contents of the respective information systems, as well as complementary information, fail to reach the user.
To close this gap, this thesis develops and investigates a new hybrid semantic search approach that combines structured and semi- or unstructured content throughout the entire search process. This approach not only finds both facts and documents; it also exploits the connections that exist between the differently structured data in every phase of the search and incorporates them into the search results. If the answer to a query is not available entirely in structured form (as facts) or in unstructured form (as documents), this approach delivers a combination of the two. Taking differently structured content into account throughout the entire search process, however, places special demands on the search engine. It must be able to search facts and documents in dependence on each other, to combine them, and to rank the differently structured results appropriately. Furthermore, the complexity of the data must not be passed on to the end users: the presentation of the content must be understandable and easy to interpret, both when formulating the query and when presenting the results.
The central question of this thesis is whether, on a given data basis, a hybrid approach can answer search queries better than semantic document search and fact search on their own, or than a search that does not combine these approaches within the search process. The evaluations carried out from a system perspective and from a user perspective show that the hybrid semantic search solution developed in this thesis delivers better answers than the above-mentioned methods by combining structured and unstructured content in the search process, and thus offers advantages over previous approaches. A user survey makes clear that the hybrid semantic search is perceived as understandable and is preferred for heterogeneously structured datasets.
Background
For patients with severe aortic valve stenosis who carry a high surgical risk due to their age or multimorbidity, transcatheter aortic valve implantation (TAVI) has been established as a promising alternative to cardiac surgery. Explicit data on multidisciplinary cardiac rehabilitation after TAVI are not yet available. The aim of this thesis was to examine the effect of cardiac rehabilitation on exercise capacity, emotional status, quality of life, and frailty in patients after TAVI, and to identify predictors of changes in exercise capacity and quality of life.
Methods
Between 10/2013 and 07/2015, 136 patients (80.6 ± 5.0 years, 47.8% men) undergoing follow-up inpatient rehabilitation after TAVI were enrolled in three cardiac rehabilitation clinics. To assess the effect of cardiac rehabilitation, the following were recorded at the beginning and at the end of rehabilitation: the frailty index (a score consisting of the Barthel Index, Instrumental Activities of Daily Living, Mini Mental State Exam, Mini Nutritional Assessment, Timed Up and Go, and subjective deterioration of mobility), quality of life in the Short Form 12 (SF-12), functional exercise capacity in the 6-minute walk test (6MWT), and maximum exercise capacity in cycle ergometry. In addition, sociodemographic data (e.g., age and sex), comorbidities (e.g., chronic obstructive pulmonary disease, coronary artery disease, and cancer), cardiovascular risk factors, and NYHA class were documented. Predictors of changes in exercise capacity and quality of life were fitted using analyses of covariance.
Results
The maximum walking distance in the 6MWT increased by 56.3 ± 65.3 m (p < 0.001) and maximum exercise capacity in cycle ergometry by 8.0 ± 14.9 watts (p < 0.001). Furthermore, the SF-12 improved both on the physical component scale, by 2.5 ± 8.7 points (p = 0.001), and on the mental component scale, by 3.4 ± 10.2 points (p = 0.003). In the multivariate analysis, higher age and higher education were significantly associated with a smaller increase in the 6MWT, whereas better cognitive performance and obesity had a positive predictive value. Greater independence and better nutritional status positively influenced the change on the physical component scale of the SF-12, whereas better cognitive performance was a predictor of a smaller change. Furthermore, the respective baseline values of the physical and mental component scales of the SF-12 had an inverse influence on the changes on the same scale.
Conclusion
Multidisciplinary cardiac rehabilitation can improve exercise capacity and quality of life as well as reduce frailty in patients after transcatheter aortic valve implantation. Consequently, specific assessments for cardiac rehabilitation should be developed. Furthermore, it is necessary to initiate individualized therapy programs with particular attention to cognitive function and nutrition, in order to maintain or restore the independence of very elderly patients and to delay their need for long-term care.
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumptions and/or execution times exceed effective limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning, to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. In particular, they are now able to process datasets of real-world size, i.e., multiple gigabytes, with reasonable memory and time consumption.
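As a small illustration of the kind of metadata involved, the sketch below checks a single candidate functional dependency the naive way; the discovery algorithms proposed in work like this must do far better than testing every candidate exhaustively. The example relation is invented:

```python
# A functional dependency X -> Y holds iff no two tuples agree on the
# attributes X but disagree on the attributes Y.

def fd_holds(rows, lhs, rhs):
    """Validate the FD lhs -> rhs on a relation given as a list of dicts."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if seen.setdefault(key, val) != val:
            return False  # two tuples agree on lhs but differ on rhs
    return True

rows = [
    {"zip": "14469", "city": "Potsdam", "street": "A"},
    {"zip": "14469", "city": "Potsdam", "street": "B"},
    {"zip": "10115", "city": "Berlin",  "street": "A"},
]

print(fd_holds(rows, ["zip"], ["city"]))    # True: zip determines city here
print(fd_holds(rows, ["street"], ["city"])) # False: rows 1 and 3 violate it
```

The difficulty of discovery comes from the exponential number of candidate attribute combinations, which is why pruning and hybrid search strategies matter.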
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics, and they are also able to validate individual dependencies, but they lack real discovery features, even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the dataset itself) and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
This thesis, entitled "Eine Frage der Zeit. Wie Einflüsse individueller Merkmale auf Einkommen bei Frauen über ihre familiären Verpflichtungen vermittelt werden" (A question of time: how the effects of individual characteristics on women's income are mediated by their family obligations), investigates the heterogeneity of women's income outcomes. It foregrounds individual investments in family work as an explanatory factor and asks why some women take on many domestic obligations while others take on fewer. To this end, it draws on the women's individual human capital, their value orientations, and their individual occupational motivations in adolescence and adulthood. The analyzed data (from the LifE study) represent a long-term perspective from age 16 to age 45 of the women surveyed. In sum, the results show that the effect of family obligations on women's income outcomes in early and middle adulthood is mediated, as a time effect, through the time invested in paid work. The relevance of private routine work for the career success of women, and of mothers in particular, is thus a question of time. Furthermore, regarding individual influences on women's income, it can be shown that the greater time investments in their careers made by women with a high level of education can be explained, as an indirect-only mediation, solely through the redistribution of domestic work. Women are therefore winners of the educational expansion; at the same time, however, the educational expansion is also the story of the emergence of a work-family conflict for these very women, because the persistent forces, still virulent today, that ascribe family obligations to women collide with their increased occupational expectations and opportunities.
With its analytical results, the thesis makes an important contribution to explaining, from the private sphere, women's heterogeneous investments in their careers and their income outcomes.
Natural products and their derivatives have always been a source of drug leads. In particular, bacterial compounds have played an important role in drug development, for example in the field of antibiotics. A decrease in the discovery of novel leads from natural sources and the hope of finding new leads through the generation of large libraries of drug-like compounds by combinatorial chemistry aimed at specific molecular targets drove pharmaceutical companies away from research on natural products. However, recent technological advances in genetics, bioinformatics and analytical chemistry have revived interest in natural products. The ribosomally synthesized and post-translationally modified peptides (RiPPs) are a group of natural products generated by the action of post-translationally modifying enzymes on precursor peptides translated from mRNA by ribosomes. The great substrate promiscuity exhibited by many of the enzymes from RiPP biosynthetic pathways has led to the generation of hundreds of novel synthetic and semisynthetic variants, including variants carrying non-canonical amino acids (ncAAs). The microviridins are a family of RiPPs characterized by their atypical tricyclic structure, composed of lactone and lactam rings, and by their activity as serine protease inhibitors. The outline of their biosynthetic pathway has already been described; however, the lack of information on details such as the protease responsible for cleaving off the leader peptide from the cyclic core peptide has impeded the fast and cheap production of novel microviridin variants. In the present work, knowledge of leader peptide activation of enzymes from other RiPP families has been extrapolated to the microviridin family, making it possible to bypass the need for a leader peptide. This allowed the microviridin biosynthetic machinery to be exploited for the production of novel variants through the establishment of an efficient one-pot in vitro platform.
The relevance of this chemoenzymatic approach has been exemplified by the synthesis of novel potent serine protease inhibitors from both rationally-designed peptide libraries and bioinformatically predicted microviridins. Additionally, new structure-activity relationships (SARs) could be inferred by screening microviridin intermediates. The significance of this technique was further demonstrated by the simple incorporation of ncAAs into the microviridin scaffold.
Recognizing, understanding, and responding to quantities are essential skills for human beings. We can easily communicate quantities, and we are extremely efficient in adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols such as digits in number-related tasks. To solve tasks involving digits, we must rely on our previously learned internal number representations.
This thesis elaborates on the process of number comparison using noisy mental representations of numbers, on the interaction of number and size representations, and on how we use mental number representations strategically. To this end, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger and to respond with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and variance of response time for the full matrix of 72 digit pairs. In addition, the random walk model predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, departing from standard assumptions often made in random walk models, this account required asymmetric distributions of the step sizes of the induced random walks to explain this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework to investigate the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. Comparing the fits of models with linear and logarithmic mapping suggests that the logarithmic mapping is to be preferred.
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, the saccadic overshoot effect: participants responded with saccadic eye movements, and the amplitude of these saccadic responses decreased with numerical distance.
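The core idea of such a random walk account can be sketched as follows: noisy evidence accumulates toward one of two response bounds, with a drift that grows with the (here, logarithmically mapped) numerical distance. All parameter values are illustrative, not fitted estimates from the thesis:

```python
import math
import random

def simulate_trial(drift, threshold=10.0, step_sd=1.0, max_steps=100000):
    """Accumulate noisy evidence until a bound is crossed; return step count."""
    x = 0.0
    for t in range(1, max_steps + 1):
        x += random.gauss(drift, step_sd)
        if abs(x) >= threshold:
            return t
    return max_steps

random.seed(1)

def mean_rt(distance, n=2000):
    drift = 0.3 * math.log(1 + distance)  # logarithmic magnitude mapping
    return sum(simulate_trial(drift) for _ in range(n)) / n

rt_close, rt_far = mean_rt(1), mean_rt(8)
print(rt_close > rt_far)  # numerical distance effect: close pairs are slower
```

The simulation reproduces the numerical distance effect qualitatively: a smaller distance yields a weaker drift and therefore longer decision times on average.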
For the second study, an experimental design was developed that allows us to apply signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. An open question is whether the benefit in congruent (numerical magnitude – physical size) conditions reflects better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases induced by number magnitude. Signal detection theory is an ideal tool to distinguish between these two alternatives, as it provides two parameters: sensitivity and response bias. Changes in sensitivity reflect actual task performance due to real differences in perceptual processes, whereas changes in the response bias merely reflect strategic adjustments, such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the effect.
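The two parameters can be computed in a few lines. Below is a sketch of the standard signal detection calculation of sensitivity (d') and response bias (criterion c) from hit and false-alarm rates; the rates themselves are invented for illustration, not data from the study:

```python
from statistics import NormalDist

def sdt_params(hit_rate, fa_rate):
    """Sensitivity d' and response bias c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Congruent trials: higher hit rate at the same false-alarm rate, i.e. a
# genuine sensitivity gain rather than a mere shift of the response bias.
d_con, c_con = sdt_params(0.90, 0.10)
d_inc, c_inc = sdt_params(0.75, 0.10)
print(round(d_con, 2), round(d_inc, 2))  # 2.56 1.96
```

A pure response bias would shift c while leaving d' unchanged; a sensitivity gain, as reported above, shows up as a larger d' in congruent conditions.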
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied, while the local transition probability of the relevant attribute (parity) and the global occurrence probability of each stimulus were kept constant. Participants were quite sensitive in recognizing the underlying local transition probability of the irrelevant attribute. A gain in performance was observed for actual repetitions of the irrelevant attribute, relative to changes of the irrelevant attribute, in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
Proteins are molecules that are essential for life and carry out an enormous number of functions in organisms. To this end, they change their conformation and bind to other molecules. However, the interplay between conformational change and binding is not fully understood. In this work, this interplay is investigated with molecular dynamics (MD) simulations of the protein-peptide system Mdm2-PMI and by analysis of data from relaxation experiments.
The central task is to uncover the binding mechanism, which is described by the sequence of (partial) binding events and conformational change events, including their probabilities. In the simplest case, the binding mechanism is described by a two-step model: binding followed by conformational change, or conformational change followed by binding. In the general case, longer sequences with multiple conformational changes and partial binding events are possible, as are parallel pathways that differ in their sequences of events. The theory of Markov state models (MSMs) provides the framework in which all these cases can be modeled. For this purpose, MSMs are estimated in this work from MD data, and rate equation models, which are related to MSMs, are inferred from experimental relaxation data.
The MD simulation and Markov modeling of the PMI-Mdm2 system show that PMI and Mdm2 can bind via multiple pathways. A main result of this work is a dissociation rate on the order of one event per second, which was calculated using Markov modeling and is in agreement with experiment. So far, dissociation and transition rates of this magnitude have only been calculated with methods that speed up transitions by acting on the binding partners with time-dependent, external forces. The simulation technique developed in this work, in contrast, allows the estimation of dissociation rates from the combination of free energy calculation and direct MD simulation of the fast binding process. Two new statistical estimators, TRAM and TRAMMBAR, are developed to estimate an MSM from the joint data of both simulation types.
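The basic estimation step behind an MSM can be sketched in miniature: count transitions at a fixed lag time in a discretized trajectory and row-normalize to obtain a transition probability matrix. The three states and the toy trajectory below are invented; real MSMs of binding use many states, much longer data, and estimators such as the TRAM variants developed in the thesis:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition count matrix at the given lag time."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

dtraj = [0, 0, 1, 1, 1, 0, 2, 2, 1, 0, 0, 1]  # toy discretized trajectory
T = estimate_msm(dtraj, 3)
print(np.allclose(T.sum(axis=1), 1.0))  # each row is a probability distribution
```

Slow quantities such as dissociation rates are then read off from the eigenvalues and committor-type analyses of matrices like T, rather than observed directly in the simulation.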
In addition, a new analysis technique for time-series data from chemical relaxation experiments is developed in this work. It makes it possible to identify which of the two above-mentioned two-step mechanisms underlies the data. The new method is valid for a broader range of concentrations than previous methods and therefore allows the concentrations to be chosen such that the mechanism can be uniquely identified. It is successfully tested with data for the binding of recoverin to a rhodopsin kinase peptide.
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, an innovative new sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual's personality in start-up decisions, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
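The matching idea behind such an evaluation can be illustrated with a nearest-neighbor sketch on the propensity score. All scores and income outcomes below are invented; the thesis's covariates, matching details, and sensitivity analysis are not reproduced here:

```python
import numpy as np

# Hypothetical propensity scores and later incomes for treated (subsidized)
# and control (non-participating) unemployed individuals.
ps_treated = np.array([0.62, 0.48, 0.71])
y_treated = np.array([1200.0, 900.0, 1500.0])
ps_control = np.array([0.30, 0.50, 0.60, 0.70, 0.45])
y_control = np.array([700.0, 850.0, 1000.0, 1300.0, 800.0])

# Match each treated unit to the control unit with the closest score, then
# average the outcome differences over the treated (the ATT).
matches = [int(np.argmin(np.abs(ps_control - p))) for p in ps_treated]
att = float(np.mean(y_treated - y_control[matches]))
print(matches, att)  # [2, 1, 3] 150.0
```

The sensitivity analysis described above asks how estimates like this ATT shift when normally unobserved variables, such as personality measures, are added to or removed from the score model.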
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized, new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are limited with regard to employment growth, innovation activity, and investment. Further investigation of possible reasons for these differences shows that the differential business growth paths of subsidized founders in the longer run seem to be limited mainly by restricted access to capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis at the level of German regional labor markets shows that there is high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share of all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents an intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if its first-order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply whenever the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system of the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stress factor for the aquatic ecosystem. This study uniquely combines intensive monitoring, source fingerprinting and catchment modelling techniques to allow a comparison of the credibility and accuracy of each individual method. High-resolution discharge data were used in combination with daily suspended solid measurements to calculate the suspended sediment budget and to compare it with estimates of the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results of the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
Radionuclide sediment tracing, using Be-7, Cs-137 and Pb-210, was applied to differentiate the erosion sources of particles < 10 µm and showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contribution of the individual sub-catchments of the Kharaa to the suspended sediment load was assessed based on their variation in geochemical composition (e.g. in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Discriminant Function Analysis driven by a Genetic Algorithm, the Kruskal–Wallis H-test and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally showing higher contributions from the sub-catchments in the middle, rather than the upstream, portions of the study area.
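At the heart of such fingerprinting lies a linear un-mixing step: the tracer signature of the suspended sediment is modeled as a convex combination of the source signatures. The sketch below illustrates only this step; all tracer values are invented, and the thesis's actual statistical source discrimination is far more elaborate:

```python
import numpy as np

# Rows: tracers; columns: hypothetical sources (hillslope, riverbank, gully).
sources = np.array([
    [10.0, 30.0, 20.0],
    [ 5.0,  2.0,  8.0],
    [ 1.0,  4.0,  2.0],
])
mixture = np.array([25.0, 3.2, 3.2])  # tracer signature of the sediment sample

# Append the constraint that source proportions sum to one, then solve the
# resulting overdetermined system by least squares.
A = np.vstack([sources, np.ones(3)])
b = np.append(mixture, 1.0)
props, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(props, 2))  # ~20% hillslope, 70% riverbank, 10% gully
```

Real applications additionally enforce non-negativity, propagate tracer uncertainties, and select tracers with tests such as the Kruskal–Wallis H-test mentioned above.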
The results indicate that riverbank erosion generated by existing livestock grazing practices is the main cause of elevated fine sediment input. Actions towards the protection of the headwaters and the stabilization of the river banks within the middle reaches were identified as the highest priority. Deforestation by logging, as well as forest fires, should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources for particular heavy metals in the fluvial system.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Shortcomings in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing forces companies to miss out on revenue or to impair customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other, however, and are for that reason hard to match by a computer. Moreover, the number of comparisons required to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require considerable manual effort and experience.
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, but they also facilitate subsequent steps, for example by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
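The effect of partitioning (often called blocking) can be sketched in a few lines: records are compared only within partitions that share a key, shrinking the quadratic number of pair comparisons. The key used here, the first three letters of the name, is an illustrative manual choice, not one of the automatically derived keys described above:

```python
from itertools import combinations

records = [
    {"id": 1, "name": "Schmidt, Anna"},
    {"id": 2, "name": "Schmitt, Anna"},
    {"id": 3, "name": "Meyer, Jonas"},
    {"id": 4, "name": "Maier, Jonas"},
    {"id": 5, "name": "Schmidt, A."},
]

# Group records by partitioning key; compare only within each partition.
blocks = {}
for r in records:
    blocks.setdefault(r["name"][:3].lower(), []).append(r)

candidate_pairs = [p for block in blocks.values()
                   for p in combinations(block, 2)]
all_pairs = len(records) * (len(records) - 1) // 2
print(len(candidate_pairs), "of", all_pairs, "pairs compared")  # 3 of 10
```

Note the trade-off this toy key exhibits: "Meyer" and "Maier" end up in different partitions and are never compared, which is exactly why finding keys that are both efficient and effective is the hard part automated here.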
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for achieving consensus with less effort.
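One plausible cost-saving strategy, shown here purely as an illustration and not necessarily the one evaluated in the thesis, is to stop collecting worker votes on a candidate pair as soon as one label leads by a fixed margin:

```python
from collections import Counter

def consensus(votes, lead=2):
    """Return the winning label once it leads by `lead` votes, else None."""
    tally = Counter()
    for v in votes:
        tally[v] += 1
        ranked = tally.most_common(2)
        runner_up = ranked[1][1] if len(ranked) > 1 else 0
        if ranked[0][1] - runner_up >= lead:
            return ranked[0][0]
    return None

print(consensus(["dup", "dup"]))                # 'dup' after only two votes
print(consensus(["dup", "non", "dup", "dup"]))  # 'dup' once the lead reaches 2
print(consensus(["dup", "non"]))                # None: no consensus yet
```

Easy pairs thus settle after two votes, while ambiguous ones receive extra votes only as long as the workers disagree.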
The aim of the present work was the synthesis and characterization of anisotropic gold nanoparticles in a suitable polyelectrolyte-modified template phase. Its centerpiece is the selection of a suitable template phase for the synthesis of uniform and reproducible anisotropic gold nanoparticles with their resulting special properties. In the synthesis of the anisotropic gold nanoparticles, the focus was on using vesicles as the template phase; here, the influence of different structure-forming polymers (the strongly alternating maleamide copolymers PalH, PalPh, PalPhCarb and PalPhBisCarb with different conformations) and surfactants (the anionic surfactants SDS and AOT) was to be investigated under various synthesis and separation conditions.
The first part of the work showed that, at a pH of 9, PalPhBisCarb fulfils the conditions of a tube former for a morphological transformation from a vesicular phase into a tubular network structure and can thus be used as a template phase for the shape-controlled formation of nanoparticles.
The second part of the work demonstrated that the template phase PalPhBisCarb (pH 9, concentration 0.01 wt.%) with AOT as surfactant and PL90G as phospholipid (in a 1:1 ratio) is the most effective choice of template phase for the formation of anisotropic structures in a one-step process. At a constant synthesis temperature of 45 °C, the best results were obtained with a gold chloride concentration of 2 mM, a gold-to-template ratio of 3:1 and a synthesis time of 30 minutes. The yield of anisotropic structures was 52% (with a 19% share of triangular nanoplatelets). By raising the synthesis temperature, the yield could be increased to 56% (29%).
In the third part, time-dependent investigations showed that, in the presence of PalPhBisCarb, the formation of the energetically unfavored platelet structures is initiated at room temperature and reaches an optimum at 45 °C.
Kinetic investigations showed that the formation of triangular nanoplatelets during stepwise addition of the gold chloride precursor solution to the PalPhBisCarb-containing template phase can be controlled via the dosing rate of the vesicular template phase. Conversely, when the template phase is added to the gold chloride precursor solution at 45 °C, a similar, kinetically controlled process of nanotriangle formation takes place, with a maximum yield of triangular nanoplatelets of 29%.
The final chapter presents first experiments on separating the triangular nanoplatelets from the other geometries in the mixed nanoparticle solution by means of surfactant-induced depletion flocculation. Using AOT at a concentration of 0.015 M, a nanoplatelet yield of 99% was achieved, of which 72% had triangular geometry.