Refine
Has Fulltext
- yes (137)
Year of publication
- 2013 (137)
Document Type
- Doctoral Thesis (137)
Language
- English (89)
- German (47)
- Multiple languages (1)
Keywords
- Kinder (3)
- children (3)
- climate change (3)
- remote sensing (3)
- Adipositas (2)
- Arctic (2)
- Design Thinking (2)
- Eltern (2)
- Escherichia coli (2)
- Fernerkundung (2)
Institute
- Institut für Geowissenschaften (25)
- Institut für Physik und Astronomie (16)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (15)
- Institut für Biochemie und Biologie (12)
- Institut für Chemie (12)
- Institut für Ernährungswissenschaft (11)
- Department Psychologie (10)
- Wirtschaftswissenschaften (10)
- Department Linguistik (7)
- Institut für Informatik und Computational Science (7)
Leuchtkäfer & Orgelkoralle
(2013)
Glowing beetles and medusae, phosphorescent ocean waves, and corals solidifying into stone fascinated the naturalist Adelbert von Chamisso (1781–1838), who has hitherto been portrayed primarily as a poet. Even more intensively than to zoological and geological phenomena, he devoted himself to the scientia amabilis, the lovable science of plants. This versatile talent wrote his Reise um die Welt (1836), which to this day is regarded as one of the most stylistically accomplished and most readable travel accounts. The present study is devoted specifically to Chamisso's natural-history studies in the context of the three-year Rurik expedition and to the associated text production. Drawing on a comprehensive corpus of texts and materials, it brings questions from literary and cultural studies as well as from the history of science to bear on his work and answers them productively. Source material previously unnoticed by research on travel literature is brought to light, common theses are refuted, and sources by other crew members are examined comparatively. The study places the naturalist Chamisso in focus without eclipsing the poet, and addresses questions of the generation, networking, and representation of natural-history knowledge in texts, illustrations, and materials relating to the expedition. Overall, it is as innovative for literary studies and history as it is for the interdisciplinary history of knowledge.
One of the central questions in psycholinguistics is whether and how prosodic phrase boundaries are used to resolve syntactic ambiguities in sentence processing. The present work pursued two aims: first, to determine the effects of φ- and ι-boundaries on syntactic ambiguity resolution, and second, to establish how the prosodic correlates of the auditory input feed the phonetics-phonology mapping in order to attain a meaningful sentence interpretation.
With regard to the first aim, we investigated local syntactic ambiguities involving either φ- or ι-phrase boundaries in German and the structural preference that listeners have, based on the prosodic content. The experiments described in this work show that German listeners exploit both types of prosodic phrase boundaries to resolve local syntactic ambiguities, although their disambiguation is altered by the presence or absence of prosodic cues correlated with the corresponding boundary. Specifically, the perception data revealed that the phonetically measured prosodic correlates of each prosodic boundary, such as pitch accents, boundary tones, deaccentuation, and durational properties, do not contribute to ambiguity resolution in equal measure. Rather, listeners rely primarily on prefinal lengthening as a correlate of phrasing in the vicinity of φ-phrase boundaries, while at the level of the ι-phrase boundary, boundary tones serve as phrasal cues. In this way, the results of the present work supply the hitherto missing information on the individual contributions of prosodic correlates to listeners' disambiguation of syntactically ambiguous sentences in German. They further imply that the question of how German listeners resolve syntactic ambiguities cannot simply be attributed to the presence or absence of prosodic correlates. The interpretation of the phrasal structure rather depends on a more general picture of cohesion between prosodic correlates and prosodic boundary sizes.
With respect to the second aim, the processing models proposed in the present work describe a specific phonetics-phonology mapping in the vicinity of both phrase boundaries. It is assumed that auditory sentence processing proceeds in several successively organized steps, during which listeners transform overt phonetic forms into language-specific abstract surface forms. This process is referred to as the phonetics-phonology mapping in the present work. Perceptual evidence resulting from the experiments of the present work suggests that the phonetics-phonology mapping is guided by the boundary-related prosodic correlates mentioned above. The resulting abstract phonological structure is in turn subjected to the syntax-prosody mapping. The outcomes of the presented perception experiments are modeled in an Optimality-Theoretic framework. The proposed OT models are grounded in the assumption that single prosodic correlates are used by listeners as a signal to syntax in sentence processing. This is in line with studies arguing that the prosodic phrase structure determines the syntactic parse (Cutler et al., 1997; Warren et al., 1995; Pynte & Prieur, 1996; Snedeker & Trueswell, 2003; Kjelgaard & Speer, 1999), to name just a few.
Rhythm is the temporal and systematic organization of acoustic events in terms of prominence, timing, and grouping, helping to structure our most basic experiences, such as body movement, music, and speech. In speech, rhythm groups auditory events, e.g. sounds and pauses, into words, making their boundaries acoustically prominent and aiding word segmentation and recognition by the hearer. After word recognition, the hearer is able to retrieve word meaning from the mental lexicon, integrating it with information from other linguistic domains, such as semantics, syntax, and pragmatics, until comprehension is achieved. The importance of speech rhythm, however, is not restricted to word segmentation and recognition. Beyond the word level, rhythm continues to operate as an organizing device, interacting with different linguistic domains, such as syntax and semantics, and grouping words into larger prosodic constituents organized in a prosodic hierarchy. This dissertation investigates the function of speech rhythm as a sentence segmentation device during syntactic ambiguity processing, possible limitations on its use, e.g. in the context of second language processing, and its transferability as a cognitive skill to the music domain.
This cumulative dissertation on project didactics bears the title "Von der Konzeption zur Praxis: Zur Entwicklung der Projektdidaktik am Oberstufen-Kolleg Bielefeld und ihre Impulsgebung und Modellbildung für das deutsche Regelschulwesen" (From conception to practice: on the development of project didactics at the Oberstufen-Kolleg Bielefeld and its impulses and model-building for the German mainstream school system). The dissertation understands itself as an exemplary realization and implementation of project didactics for the mainstream school system. On the basis of 22 previously published papers and one monograph, the systematic first part uses five methodological approaches (educational history, thick description, action research, an empirical study at mainstream schools, and implementation research; see Chapter 1) across seven chapters (2–8) to present the development of project-based teaching in the Federal Republic of Germany; the concept of the project and the further development of the approach; the methodology, assessment, and organization of project-based teaching at the Oberstufen-Kolleg, the experimental school of the state of North Rhine-Westphalia, in engagement with general project didactics; and the forms and procedures of its tested implementation in the mainstream school system.
A concluding chapter (9) summarizes the results. The extensive appendix contains various publications on aspects of project didactics to which the systematic part refers.
The educational-history analysis (Chapter 2) examines the relationship between pedagogical theory and school practice, which are insufficiently connected both in the literature and in practice. Following a review of the well-researched conceptual history of pedagogical theory in the tradition of Dewey and Kilpatrick, a first analysis of the "practice history" of project-based teaching points to a research desideratum, also in order to relate project practice at the Oberstufen-Kolleg to that in mainstream schools. Six lines of development since 1975 were identified: start, crisis, and its resolution through opening and networking (1975–1990); didactic-methodological differentiation and the need for professionalization (from 1990); and school development and institutionalization (since the end of the 1990s).
Project-based teaching has existed at the Oberstufen-Kolleg since its founding in 1974 as a firmly established form of instruction (since 2002, two weeks twice a year), with the aim of testing and further developing project didactics for the mainstream school system. Important practice-oriented goals included working out a practicable concept, the educational value, and competencies in contrast to the conventional course (e.g. action- and application-oriented competencies), and determining the relationship to subject teaching (Chapter 3). The latter was developed using the subject of history as an example and illustrated through forms of interlinking (Chapter 6).
For the methodological dimension, too, the task was to advance general project didactics by distinguishing it from other methods of opening up school and instruction (Chapter 4). Action orientation was identified as the central methodological principle, and seven phases with their respective action steps were defined. Planning and role change in particular require special attention in order to achieve the self-directed activity of project participants. Various methodological "études" (e.g. group work, researching, conducting oneself in public), action-oriented preliminary forms, and project-oriented work are intended to help prepare for the full form of project-based teaching.
The assessment of projects (Chapter 5) makes different demands than the conventional course, because it encompasses different levels of assessment (e.g. the significance of the process, product evaluation, group assessment). To this end, assessment formats other than numerical grades have been developed at the Oberstufen-Kolleg, for example a "reflection report" as individual feedback between students and teachers and a "certificate" for outstanding achievements in a project.
Central to the development of project-based teaching, however, is the question of organization (Chapter 7). This requires a project organization group that supervises the form of instruction didactically and advises the registered projects in a hearing. The Oberstufen-Kolleg has thus organizationally implemented a mature "project culture". For an empirical study at six mainstream schools in East Westphalia, an ideal-typical list of characteristics of a school "project culture" was then created as a research instrument, which can also serve as a guideline for school development in the area of project learning in mainstream schools. For this implementation (Chapter 8), concepts and experiences from the Oberstufen-Kolleg were developed into in-school and external teacher-training formats as well as an exemplary training unit. In this way, the experimental school was able to provide impulses for the mainstream school system in numerous teacher-training courses.
Inhibition, attentional control, and causes of forgetting in working memory: a formal approach
(2013)
In many cognitive activities, the temporary maintenance and manipulation of mental objects is a necessary step toward reaching a cognitive goal. Working memory has been regarded as the process responsible for such cognitive activities. This thesis addresses the question of what limits working-memory capacity (WMC), a question that remains controversial (Barrouillet & Camos, 2009; Lewandowsky, Oberauer, & Brown, 2009). This study attempted to answer it by proposing that the dynamics between the causes of forgetting and the processes supporting the maintenance and manipulation of the memoranda are the key to understanding the limits of WMC.
Chapter 1 introduced key constructs and the strategy to examine the dynamics between inhibition, attentional control, and the causes of forgetting in working memory.
The study in Chapter 2 tested the performance of children, young adults, and old adults in a working-memory updating task with two conditions: one condition included only go steps, while the other included both go and no-go steps. The interference model (IM; Oberauer & Kliegl, 2006), which proposes interference-related mechanisms as the main cause of forgetting, was used to simultaneously fit the data of these age groups. In addition to the interference-related parameters reflecting interference by feature overwriting and interference by confusion, and in addition to the parameters reflecting the speed of processing, the study included a new parameter that captured the time for switching between go steps and no-go steps. The study indicated that children and young adults were less susceptible than old adults to interference by feature overwriting; children were the most susceptible to interference by confusion, followed by old adults and then by young adults; young adults showed the highest rate of processing, followed by children and then by old adults; and young adults were the fastest group in switching from go steps to no-go steps.
Chapter 3 examined the dynamics between causes of forgetting and the inhibition of a prepotent response in the context of three formal models of the limits of WMC: a resource model, a decay-based model, and three versions of the IM. The resource model was built on the assumption that a limited, shared source of activation for the maintenance and manipulation of objects underlies the limits of WMC. The decay model assumes that memory traces of working-memory objects decay over time if they are not reactivated via various maintenance mechanisms. The IM, already described, proposes that interference-related mechanisms explain the limits of WMC. In two experiments and in a reanalysis of the data of the second experiment, one version of the IM received the most statistical support from the data. This version of the IM proposes that interference by feature overwriting and interference by confusion are the main factors underlying the limits of WMC. In addition, the model suggests that experimental conditions involving the inhibition of a prepotent response reduce the speed of processing and promote the involuntary activation of irrelevant information in working memory.
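The contrast between the decay-based account and the interference account described above can be illustrated with a toy simulation. This is only a sketch under illustrative assumptions: the function names, parameter values, and functional forms are chosen for exposition and are not those of the fitted models in the thesis.

```python
import math

def decay_strength(initial, elapsed, rate=0.5):
    # Decay account (illustrative): a trace weakens exponentially
    # with unrefreshed time, regardless of what fills that time.
    return initial * math.exp(-rate * elapsed)

def interference_strength(initial, features_total, features_overwritten):
    # Feature-overwriting account (illustrative): a trace weakens in
    # proportion to how many of its features are claimed by other
    # memoranda, regardless of elapsed time.
    return initial * (1 - features_overwritten / features_total)

# Same retention interval, different predictors:
# decay cares only about time ...
print(round(decay_strength(1.0, elapsed=2.0), 3))   # → 0.368
# ... interference cares only about overlapping material.
print(round(interference_strength(1.0, features_total=10,
                                  features_overwritten=4), 3))  # → 0.6
```

The point of the sketch is that the two accounts make their predictions from different variables (time vs. feature overlap), which is what lets experiments like those in Chapter 3 adjudicate between them.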
Chapter 4 summarized Chapters 2 and 3, discussed their findings, and showed how this thesis has provided evidence for interference-related mechanisms as the main cause of forgetting while attempting to clarify the role of inhibition and attentional control in working memory. With the implementation of formal models and experimental manipulations in the framework of nonlinear mixed models, the data offered explanations of the causes of forgetting and of the role of inhibition in WMC at different levels: developmental effects, aging effects, effects related to experimental manipulations, and individual differences in these effects. Thus, the present approach affords a comprehensive view of a large number of factors limiting WMC.
This dissertation is about the factors that contribute to the surface forms of tones in connected speech in Akan, an African tone language spoken in Ghana. Akan has two level tones (low and high) as well as automatic and non-automatic downstep. Downstep is the major factor that influences the surface forms of tones. The thesis shows that downstep is caused by declination. It is argued that declination is an intonational property of Akan, which serves to signal coherence. A phonological representation using a high and a low register tone, associating with the left and right edge of an intonational phrase (IP), respectively, is proposed. Declination/downstep is modelled using a (phonetic) pitch implementation algorithm (Liberman & Pierrehumbert, 1984). An innovative application of the algorithm is presented, which naturally captures the relation between declination and downstep in Akan. Another important factor is the prosodic manifestation of sentence-level pragmatic meanings, such as sentence mode and focus. Regarding the former, the thesis shows that a post-lexical low tone, which associates with the right edge of an IP, signals interrogativity. Additionally, lexical tones in yes-no questions are realized in a higher pitch register, which does not lead to a reduction of declination. It is claimed that the higher register is not part of the phonological representation in Akan, but that it emerges at the phonetic level to compensate for the ‘unnatural’ form of the question morpheme and to satisfy the Frequency Code (Gussenhoven, 2002; 2004). An extension of Rialland’s (2007) typology in terms of a new category called “low tense” question prosody is proposed. Concerning focus marking, it is argued that the use of the morpho-syntactic focus-marking strategy is related to extra-grammatical factors, such as hearer expectation, discourse expectability (Zimmermann, 2007), and emphasis (Hartmann, 2008).
If a speaker of Akan wants to highlight a particular element in a sentence in situ, i.e. by means of prosody, the default prosodic structure is modified in such a way that the focused element forms its own phonological phrase (pP). If it is already contained in a pP, the boundary delimiting the focused element is enhanced (Féry, 2012). This restructuring/enhancement is accompanied by an interruption of the otherwise continuous melody due to the insertion of a pause and/or a glottal stop. Besides declination and intonation, raising of H tones applies in Akan. H raising is analyzed as a local anticipatory planning effect, employed at the phonetic level, which enhances the perceptual distance between low and high tones. Low tones are raised if they are wedged between two high tones. L raising is argued to be a local carryover effect (co-articulation). Further, it is demonstrated that global anticipatory raising takes place: Akan speakers anticipate the length of an IP. Preplanning (anticipatory raising) is argued to be an important process at the level of pitch implementation. It serves to ensure that declination can be maintained throughout the IP, which prevents pitch resetting.
The melody of an Akan sentence is largely determined by the choice of words. The inventory of post-lexical tones is small: it consists of post-lexical register tones, which trigger declination, and post-lexical intonational tones, which signal sentence type. The overall melodic shape is falling. At the local level, H raising and L raising occur. At the global level, initial low and high tones are realized higher if they occur in a long and/or complex sentence. This dissertation shows that many factors, emerging at different levels of the tone production process, contribute to the surface forms of tones in Akan.
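The pitch implementation algorithm cited above (Liberman & Pierrehumbert, 1984) can be sketched as an iterative scaling of each downstepped high-tone target toward a reference line, so the contour falls exponentially and asymptotes instead of reaching zero. The following is a minimal illustration only; the function name and the parameter values (reference level, scaling ratio) are hypothetical placeholders, not the values fitted for Akan in the thesis.

```python
def downstep_targets(first_high, reference=100.0, ratio=0.8, n=5):
    """Compute n successive downstepped H targets (in Hz).

    Each target is pulled a constant fraction of the way from the
    previous target toward the reference line, yielding an
    exponentially decaying, asymptoting contour.
    """
    targets = [first_high]
    for _ in range(n - 1):
        prev = targets[-1]
        targets.append(reference + ratio * (prev - reference))
    return targets

# Five successive downstepped highs starting at 200 Hz:
print([round(t, 1) for t in downstep_targets(200.0)])
# → [200.0, 180.0, 164.0, 151.2, 141.0]
```

Because the step size shrinks geometrically, the same mechanism can model both local downstep (step-to-step lowering) and the global declination trend within an IP, which is the relation the thesis exploits.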
Does a liberal canon of values actually entail generative self-determination, i.e. far-reaching parental freedom of action in eugenic measures, as advocates of a "liberal eugenics" assure us? This thesis discusses the role of the state and the scope of parental action in the genetic shaping of offspring within a liberal understanding of values.
The focus of these considerations is on measures of genetic enhancement.
In addition, the relationship of "liberal eugenics" to "authoritarian eugenics" is reexamined.
The investigation begins with an analysis of central liberal values and norms, such as freedom, autonomy, and justice, and their functions in "liberal eugenics". Strictly speaking, one can speak of "the" liberal eugenics only to a very limited extent; rather, there are variants of a "liberal eugenics".
Furthermore, this thesis examines and compares the historical development of "liberal" and "authoritarian eugenics", especially social Darwinism, particularly with regard to liberal values and norms and to generative self-determination.
The core of the thesis is the comparison of "liberal eugenics" with "liberal upbringing", since it is here that the fundamental tasks of parents, but also of the state, are analyzed and their relationship is discussed.
It emerges that no far-reaching generative self-determination can be derived from a liberal understanding of values; rather, narrow, state-controlled limits on eugenic measures for the good of the future person can be justified.
Moreover, the path to authoritarian eugenics was paved not by the abandonment of generative self-determination but rather by the transfer of the idea of progress to human beings themselves. Generative self-determination thereby also loses its function as a firewall against authoritarian eugenics. It is not the loss of generative self-determination but rather the idea of the perfection of human beings that must be viewed critically and ultimately rejected.
Without generative self-determination and the perfection of human beings, only a basic eugenics remains, one that safeguards human beings' capacity for development but not their enhancement.
Beyond that, the developmental possibilities of the future person must also be considered, i.e. a minimal potential for social integration must be present. Only if society genuinely has no means of integrating a person and offering them a possibility of development would eugenic measures be acceptable as a last resort.
The transformation process in Mongolia poses a major challenge, especially in the economic sphere. In the transition from a planned to a market economy, managers occupy a key position, since they have a substantial influence on the shaping of companies that are reorienting themselves. This thesis examines the relationship between managers and their employees against the background of neo-charismatic theoretical approaches and concludes that there are indications of transformational leadership. Group processes, the person of the leader, and traditional as well as socialist-socialized elements play central roles. There are also references to the concepts of authenticity and shared leadership.
Obesity has for some years been regarded as one of the most common chronic diseases of childhood and adolescence. Which factors lead to successful treatment of obesity in childhood and adolescence, however, is still not sufficiently understood. An important but so far largely neglected factor that may be decisive for the course of therapy is the subjective illness concept of the affected children. The most significant theoretical model describing the influence of individual illness representations on a person's regulatory process in dealing with illness is Howard Leventhal's Common Sense Model of Illness Representation (CSM). The aim of the present work was to capture the subjective illness concepts of obese children and to analyze their influence on the regulatory process. In a first study, a questionnaire for assessing subjective illness concepts was developed using data from 168 obese children aged 8 to 12 years. The results indicate that the questionnaire can be regarded as reliable and valid. With the help of this questionnaire it could be shown that obese children hold constructs about their illness that are stored in distinct dimensions. The initial illness concepts of obese children found here yield a homogeneous picture in line with expectations. In a second study, the subjective illness concepts of obese children, their coping strategies, and health- and illness-related criterion variables were then examined. The surveys took place before the start of inpatient rehabilitation (T1), at the end of rehabilitation (T2), and six months after the end of rehabilitation (T3). Data for all three measurement points are available for 107 children.
A relationship between illness concepts, coping strategies, and specific criterion variables in obese children could be demonstrated. The analysis of the underlying mechanisms showed that children's illness concepts, alongside indirect influences via coping strategies, can above all also influence the criterion variables directly. The influence of obese children's initial illness concepts was confirmed in both cross-sectional and longitudinal designs. In addition, manifold influences of changes in subjective illness concepts during therapy were found. The changes in illness concepts affect both the individual coping strategies at the end of rehabilitation in the medium term and the obesity-specific criterion variables of weight, diet, physical activity, and quality of life in the longer term. The findings strengthen the relevance and potential of the targeted modification of adaptive and maladaptive illness concepts within the inpatient treatment of childhood obesity. It was also confirmed that subjective illness concepts and their change during therapy can make a relevant contribution to predicting children's therapeutic success over a longer period.
Obesity is a chronic disease with considerable comorbidities and long-term consequences that is already widespread in childhood and adolescence. Various factors are involved in the etiology of this disorder. Diet is one of the main pillars to which reference is repeatedly made. The influence of parents on children's diet undisputedly plays a central role, with regard to genetic dispositions, but also because parents shape the living environment and act as role models in matters of diet. The present work aims to examine correspondences between parental and child diet and to test the extent to which processes of model learning account for these relationships. The basis is Albert Bandura's social-cognitive theory, with a focus on his account of observational or model learning. The relationships between parental and child diet were examined in a sample of 7- to 13-year-old obese children and their parents in relation to the conditions of model learning that had also been found in other studies. A high degree of similarity or a good relationship between model (mother or father) and learner (child) should accordingly moderate the strength of the relationship. A third aspect, derived from Bandura's account of the phases of model learning, was also included in the research model. The acquisition phase postulated by Bandura presupposes that the behavior to be learned can actually be observed. For this reason, the analysis of behavioral relationships should not be considered in isolation from the time that model and observer spend or have spent together. In addition, the child's perception of a parent as a role model was assessed and included as a moderator. Complete mother-father-child triads were included in the analyses.
The cross-sectional questionnaire data comprised 171 girls and 176 boys; a longitudinal follow-up seven months later comprised a total of 75 triads (including 38 girls). A positive relationship emerged between child and maternal diet, as well as between child and paternal diet. The correspondence between mother and child was greater than that between father and child. The moderating influence of relationship quality and role-model perception on the relationships between parental and child healthy eating was largely confirmed, as was the influence of time spent together, especially with respect to father-child relationships in problematic eating. The paternal influence, which is still often neglected in studies as well as in preventive or therapeutic programs and which received particular and equal attention in the present work, was strengthened by the inclusion of moderating variables. Addressing mothers and fathers equally is therefore an essential goal in the prevention and treatment of childhood obesity. Beyond the obesity context, too, parents should be sensitized to the importance of parental role-modeling in order to promote a healthy diet in their children.
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving content and sharing it with people who cannot be in the same location. They also lack digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate the enablers of and obstacles to the adoption of digital whiteboard systems. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from former research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I will present three studies on the use of Tele-Board with different user groups and foci. I will use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases it is not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer.
However, I found that these perceived benefits differ greatly between users and usage contexts. If a tool can be used in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries, with a constantly growing user base. Its use, advantages, and disadvantages will be described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I will present a detailed analysis of digital whiteboard use in different contexts, with design implications for future systems.
Zur Versorgung ausländischer Märkte bedienen sich Unternehmen unterschiedlicher Versorgungsformen. Die proximity-concentration trade-off-Literatur betrachtet die Wahl zwischen Export und Auslandsproduktion und erklärt die Entstehung von internationalem Handel und horizontalen ausländischen Direktinvestitionen. Das Standardmodell von Brainard (1993) integriert die Auslandsproduktion als alternative Versorgungsform zum Handel in ein allgemeines Gleichgewichtsmodell mit zwei Ländern, monopolistischer Konkurrenz, steigenden Skalenerträgen und Transportkosten. Im Gleichgewicht versorgen Unternehmen ausländische Märkte entweder durch Exporte oder eine Auslandsproduktion. Die real zu beobachtende Ko-Existenz von internationalem Handel und ausländischen Direktinvestitionen auf der Unternehmensebene kann mit diesem Modell nicht erklärt werden. Im Rahmen dieser Arbeit wird die Exportplattform (EP) als mögliche Antwort auf dieses Phänomen herangezogen. Eine Exportplattform ist eine Auslandsproduktion, durch die nicht nur der lokale Auslandsmarkt, sondern auch Drittländer versorgt werden. Im modelltheoretischen Teil dieser Arbeit wird ein partialanalytisches EP-Modell formuliert, dass auf Brainard (1993) aufbaut. Dabei wird ihr Modell um eine Mehr-Länder-Welt mit heterogener Verteilungsstruktur erweitert und die Versorgungsalternative der EP-Exporte nach dem Beispiel von Neary (2002) integriert. Durch die analytische Lösung des partiellen Gleichgewichts lässt sich die substitutive Beziehung zwischen Heimatexporten, Auslandsproduktion und EP-Exporten aufzeigen. Ferner kann die Wirkung der Versorgungskosten auf die Versorgungswahl analysiert werden. Dabei wird neben der analytischen Modellbeschreibung besonders auf die Gleichgewichtsbestimmung und die Existenz der Gleichgewichte eingegangen. Aufbauend auf den analytisch abzuleitenden Hypothesen wird das EP-Modell ferner einem empirischen Signifikanztest unterzogen. 
Using non-linear regression methods, the choice between EP exports and foreign production, between EP exports and home exports, and between EP exports and EP production is estimated separately. The estimation draws on data from the automotive industry, covering the regional passenger-car production and sales figures of all car manufacturers in Eastern Europe, Asia, and Oceania.
The acquisition of wh-questions is one part of children's syntactic development, which largely takes place within a child's first three years of life. Two movement operations play a central role here, concerning the placement of the interrogative pronoun in the first position of the wh-question and the placement of the verb in the second position. Three studies investigated, on the one hand, whether German-speaking children who cannot yet produce wh-questions are able to distinguish grammatical from ungrammatical wh-questions, and on the other hand, how typically developing and language-impaired German-speaking children perform in comprehending and correcting wh-questions of varying complexity (positive and negative wh-questions). The results point to early syntactic knowledge of wh-questions in language acquisition and thus support the assumption that child grammar is continuous with the standard language. Language-impaired children also do not appear to differ qualitatively from typically developing children in the acquisition of wh-questions; they merely realize wh-questions correctly at a later point. In both populations, a syntactic economy effect was observed, indicating that verb movement is mastered later than movement of the wh-element.
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents in order to understand more about how certain information sources play a role during pronoun resolution. I consider how both structural information and information provided by the prior discourse are used during online processing. This is investigated through several eye-tracking during reading experiments that are complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models; some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent’s features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008) based on work by Reuland (2001), which is a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference.
I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then go on to explore whether these findings could be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role the factor of recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al., 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent’s appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages, and I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model of pronoun resolution, however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
In recent years, steadily growing production capacities for bioplastics from renewable resources have been recorded. Despite large production capacities and a suitable property profile, starch is used only as hydrophilic thermoplastic starch (TPS) processed with plasticizers, in the form of blends with e.g. polyesters. The same applies to protein-based plastics. The aim of the present work is the development of starch-based bioplastics that can be processed thermoplastically without external plasticizers, are hydrophobic, and exhibit a mechanical property profile with potential for use as packaging materials. To broaden the raw-material basis for bioplastics, the concept developed here is also to be transferred to two industrially available protein types, zein and whey protein isolate (WPI). Fatty acid esters of starch were identified as a suitable material class. First, esterification with acid chlorides was compared with transesterification of fatty acid vinyl esters, from which the latter emerged as the more suitable method. By varying the reaction parameters, this method was optimized and applied to a series of fatty acid vinyl esters from butanoate to stearate, reaching DS values of up to 2.2-2.6. This enabled a systematic study varying both the esterified fatty acid and the degree of substitution (DS). All products with a DS of 1.5 or higher showed pronounced solubility in organic solvents, which allowed both the recording of NMR spectra and molar mass determination by size exclusion chromatography coupled with multi-angle laser light scattering (GPC-MALLS). Dynamic light scattering (DLS) was used to illustrate the solubility behaviour.
All products could be processed into films, with materials of DS 1.5-1.7 showing high tensile strengths (up to 42 MPa) and elastic moduli (up to 1390 MPa). Starch hexanoate with DS <2 and starch butanoate with DS >2 in particular had a mechanical property profile that, especially in terms of strength/stiffness, was comparable with packaging materials such as polyethylene (tensile strength: 15-32 MPa, elastic modulus: 300-1300 MPa). Tensile strength and elastic modulus decreased with increasing chain length of the esterified fatty acid; esters of longer-chain fatty acids (C16-C18) were brittle. Wide-angle X-ray scattering (WAXS) and infrared spectroscopy (ATR-FTIR) showed that the trend in strength can be attributed to an increasing distance between the starch chains in the material. Glass transitions dependent on DS and chain length were detected, and the crystalline structures of the long-chain fatty acids showed a melting peak. The hydrophobicity of the films was demonstrated by contact angles >95° against water. Blends with bio-based polyterpenes and with the zein acyl derivatives prepared in this work further improved the tensile strength and elastic modulus of highly substituted products. Thermoplastic processing by injection moulding was possible for products with both high and medium DS values without any addition of plasticizers, yielding homogeneous, transparent test bars. Hardness measurements again gave values comparable to polyethylene for starch hexanoate and starch butanoate. Selected products were processed into fibres by melt spinning. Homogeneous fibres were obtained in particular from highly substituted derivatives, which showed significantly higher tensile strengths than the corresponding cast films. Starch esters with medium DS could also be processed.
To transfer the concept to the proteins zein and WPI, various synthesis methods were first compared. Esterification with acid chlorides gave the highest degrees of substitution. With regard to good solubility in organic solvents, esterification with carbonyldiimidazole (CDI)-activated fatty acids in DMSO was preferred for WPI, and esterification with acid chlorides in pyridine for zein. Acylated WPI turned out to be hydrophobic but could not be processed thermoplastically without plasticizers; cast films showed brittle fracture behaviour. With the addition of bio-based oleic acid, the use of acylated WPI as a thermoplastic filler, e.g. in blends with starch esters, was demonstrated. In contrast, acylated zein showed glass transitions <100 °C combined with sufficient stability (150-200 °C). Zein oleate could be processed into a transparent cast film without plasticizers. All derivatives proved to be markedly hydrophobic. Zein oleate could also be melt-spun into thermoplastic fibres.
1. Part A – Theoretical foundations. The thesis begins by presenting the problem statement and the objectives of the work. It is clearly shown that the teaching of quality management (QM) has dealt only very inadequately with the social aspects of information and communication (I&C) and with organizational change. From these two subsections, the research questions are derived and the further structure of the thesis is constructed (Chapter 1). Given the problem statement, the second chapter opens the theoretical foundations with social systems theory. The decision in favour of social systems theory is justified. In connection with the social aspects of I&C, the relevant contributions of social systems theories are presented as individual components. These components are then assembled into a systems-theoretical I&C model (SEM) (Chapter 2). So that the two disciplines of QM and social systems theory can be connected, the third chapter presents the necessary and relevant content of QM. In the course of this presentation, the QM content is already linked to social systems theory in order to show how quality management systems (QMS) exist and operate through I&C (Chapter 3). The fourth chapter then combines the two disciplines of QM and social systems theory, resulting in a systems-theoretical QM model (SQM). This model explains the relationship between QM, I&C, and organizational change (Chapter 4). 2. Part B – Empirical study. For the empirical study, the general research design is derived in Chapter 5, followed by a presentation of the structure and sequence of the interviews and of a questionnaire (Chapter 5).
The sixth chapter explains the objectives, background, and methodology of the expert interviews with the quality management officers (QMB) and examines current QM practice with regard to the social aspects of I&C (Chapter 6). Chapter 7 explains the objectives, background, and methodology of the interviews with the best-practice (BP) companies (Chapter 7). Chapter 8 presents the cause and effect of the social aspects of I&C via corporate culture within a QMS (Chapter 8). Chapter 9 draws the conclusions from the empirical studies: the results are critically appraised, and further empirical research needs that were uncovered are identified (Chapter 9). 3. Part C – Conclusion. The final part of the thesis begins with the tenth chapter, deriving and justifying potential improvements and recommendations for QM practice (Chapter 10). The eleventh chapter answers the research questions and critically appraises the insights gained (Chapter 11). The twelfth chapter closes the thesis with an outlook on further research needs arising from its results (Chapter 12).
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for an investigation of the large-scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are simply identified as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues. The main advantages of X-ray cluster surveys are their excellent purity and completeness, and the fact that the X-ray observables are tightly correlated with mass, which is the most fundamental cluster parameter. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of the X-ray and optical properties. The survey is restricted to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS), so that the optical counterparts can be identified and their redshifts measured, which is mandatory for deriving their physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts, which passed the quality control process.
To measure the optical redshifts of the X-ray cluster candidates, I used three procedures: (i) cross-matching these candidates with the recent and largest optically selected cluster catalogues in the literature, which yielded the photometric redshifts of about a quarter of the X-ray cluster candidates; (ii) I developed a finding algorithm to search for overdensities of galaxies at the positions of the X-ray cluster candidates in photometric redshift space and to measure their redshifts from the SDSS-DR8 data, which provided the photometric redshifts of 530 groups/clusters; (iii) I developed an algorithm to identify the cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift, which provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts (0.03 ≤ z ≤ 0.77), which is the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). Among the cluster sample, about 75 percent are newly X-ray discovered groups/clusters and 40 percent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio. The spectral fitting procedure provided the X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems that have good quality X-ray data.
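The catalogue cross-matching in procedure (i) amounts to a nearest-neighbour search within a small angular radius. A minimal sketch in Python; the source names, coordinates, and the 0.01° matching radius below are illustrative assumptions, not the survey's actual matching parameters:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cross_match(xray_sources, optical_clusters, radius_deg=0.01):
    """For each X-ray candidate, find the nearest optical cluster within radius.

    xray_sources:     dict name -> (ra, dec)
    optical_clusters: dict name -> (ra, dec, z_phot)
    Returns dict: X-ray name -> (optical name, separation in deg, z_phot).
    """
    matches = {}
    for name, (ra, dec) in xray_sources.items():
        best = None
        for cname, (cra, cdec, z_phot) in optical_clusters.items():
            sep = angular_separation(ra, dec, cra, cdec)
            if sep <= radius_deg and (best is None or sep < best[1]):
                best = (cname, sep, z_phot)
        if best is not None:
            matches[name] = best
    return matches
```

A brute-force double loop is fine at the scale of ~1180 candidates; for much larger catalogues one would index one side spatially (e.g. with a k-d tree on unit vectors).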
For the full optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 – 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 – 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. The presently detected X-ray groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems, thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation, for the first time based on a large sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshifts, temperatures, and luminosities. The slope of the relation is consistent with published values for nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with cluster redshift. After excluding the low-luminosity groups, I found no significant changes of the slope and the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low-luminosity groups in the low-redshift subsample, its LX - T relation becomes steeper than the relation of the intermediate- and high-redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which were determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass).
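Because the LX - T relation is a power law, it is conventionally fitted as a straight line in log-log space, log10(LX) = a + b*log10(T/T0). A minimal pure-Python illustration using ordinary least squares; the regression methods actually used for cluster scaling relations typically also account for measurement errors on both axes and for intrinsic scatter, so this is only a sketch of the idea:

```python
import math

def fit_lx_t(T_keV, Lx, T0=4.0):
    """Ordinary least-squares fit of log10(Lx) = a + b * log10(T / T0).

    T_keV: X-ray temperatures in keV; Lx: luminosities (any fixed unit).
    Returns (a, b): normalisation at the pivot T0 and slope.
    Illustrative only; no error treatment or intrinsic scatter.
    """
    xs = [math.log10(T / T0) for T in T_keV]
    ys = [math.log10(L) for L in Lx]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```

With such a fit, a slope b of about 2 would match the self-similar expectation, and steeper measured slopes (as reported above) quantify the departure from it.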
Secondly, I computed the richness and the optical luminosity within R500 for a nearby subsample (z ≤ 0.42, with complete membership detection from the SDSS data) with X-ray temperatures measured in our survey. The relation between the estimated optical luminosity and the richness is also presented. Finally, the correlations between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, mass) are investigated.
LCST-type synthetic thermoresponsive polymers can reversibly respond to certain stimuli in aqueous media with a massive change of their physical state. When fluorophores that are sensitive to such changes are incorporated into the polymeric structure, the response can be translated into a fluorescence signal. Based on this idea, this thesis presents sensing schemes which transduce the stimuli-induced variations in the solubility of polymer chains with covalently bound fluorophores into a well-detectable fluorescence output. Benefiting from the principles of different photophysical phenomena, i.e. fluorescence resonance energy transfer and solvatochromism, such fluorescent copolymers enabled monitoring of stimuli such as the solution temperature and ionic strength, but also of association/disassociation mechanisms with other macromolecules or of biochemical binding events, through remarkable changes in their fluorescence properties. For instance, an aqueous ratiometric dual sensor for temperature and salts was developed, relying on the delicate supramolecular assembly of a thermoresponsive copolymer with a thiophene-based conjugated polyelectrolyte. Alternatively, by taking advantage of the sensitivity of solvatochromic fluorophores, an increase in solution temperature or the presence of analytes was signaled as an enhancement of the fluorescence intensity. Simultaneous use of the sensitivity of the chains towards temperature and a specific antibody allowed monitoring of more complex phenomena such as competitive binding of analytes. The use of different thermoresponsive polymers, namely poly(N-isopropylacrylamide) and poly(meth)acrylates bearing oligo(ethylene glycol) side chains, revealed that the responsive polymers differed widely in their ability to perform a particular sensing function.
In order to address questions regarding the impact of the chemical structure of the host polymer on the sensing performance, the macromolecular assembly behavior below and above the phase transition temperature was evaluated by a combination of fluorescence and light scattering methods. It was found that although the temperature-triggered changes in the macroscopic absorption characteristics were similar for these polymers, properties such as the degree of hydration or the extent of interchain aggregation differed substantially. Therefore, in addition to demonstrating strategies for fluorescence-based sensing with thermoresponsive polymers, this work highlights the influence of the chemical structure of the two popular thermoresponsive polymers on the fluorescence response. The results are fundamentally important for the rational choice of polymeric materials for a specific sensing strategy.
This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's Discourse Representation Theory (1993). For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms comprise J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement in classification quality.
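To illustrate the kind of task involved, here is a hypothetical sketch of discourse-givenness classification: a few form- and context-based features for a noun phrase, plus a hand-written rule baseline in the spirit of a rule learner like Ripper. The feature names and rules are invented for illustration and are not the thesis's actual feature set or models:

```python
def extract_features(np_tokens, prior_context):
    """Toy features for a noun phrase (list of tokens) given prior context tokens."""
    head = np_tokens[-1].lower()
    return {
        "definite": np_tokens[0].lower() in {"the", "this", "that", "these", "those"},
        "pronoun": len(np_tokens) == 1
                   and head in {"he", "she", "it", "they", "him", "her", "them"},
        "head_in_context": head in {t.lower() for t in prior_context},
    }

def classify_given(features):
    """Hand-written rule baseline: classify a noun phrase as 'given' or 'new'.

    Simplification for illustration: pronouns, previously mentioned heads,
    and definites count as given; everything else as new."""
    if features["pronoun"] or features["head_in_context"] or features["definite"]:
        return "given"
    return "new"
```

A learned model (decision tree, rule learner, or linear SVM) would induce such rules, or a weighted combination of such features, from the annotated corpora instead of hard-coding them.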
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are expected particularly in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate consideration of secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of the different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks is, where available, correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and could thus be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve the forecasts of off-fault aftershock locations, while the integration of ground shaking data could not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models.
This makes it possible to quantify the information gain when this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes introduced in part one is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of the subfault resolution and of the segment strike and dip is tested. Some of the tested slip models perform very well; in those cases almost no random slip model performs better. By contrast, for some of the published slip models, almost all random slip models perform better than the published one. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern is still reproducible. Different strike and dip values, however, strongly influence the results, depending on the standard deviation applied in the process of randomly selecting the strike and dip values.
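The ETAS model referred to above combines a constant background rate with triggering contributions from past events, each scaled by an exponential productivity law in magnitude and an Omori-Utsu temporal decay. A minimal temporal version can be sketched as follows; the parameter values are illustrative placeholders, not fitted values from the thesis:

```python
import math

def etas_intensity(t, events, mu=0.01, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity of a temporal ETAS model at time t.

    events: list of (t_i, m_i) pairs for past earthquakes with t_i < t.
    Each past event contributes a productivity term K * exp(alpha * (m_i - m0))
    multiplied by the Omori-Utsu decay (t - t_i + c) ** (-p); mu is the
    constant background rate. Spatial kernels (the part correlated with
    stress maps or fault geometry in the thesis) are omitted here.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

In the spatial variants compared in this work, each triggering term is additionally multiplied by a normalised spatial density, which is where Coulomb stress maps, fault-distance decay laws, or ShakeMap-based weights enter.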
The doctoral thesis "Ficcionalizar el referente. Violencia, Saber, Ficción y Utopía en El Primer Nueva Corónica y Buen Gobierno de Felipe Guamán Poma de Ayala" aims to explain violence, knowledge, fiction, and utopia in El Primer Nueva Corónica y Buen Gobierno. The study opens with the reception history of El Primer Nueva Corónica y Buen Gobierno in the twentieth century. The main criterion in compiling this record was to analyse outstanding studies of the Peruvian chronicle that have opened up new meanings from different disciplines. The reception history thus begins with an "essay of interpretation" from archaeology and extends to philology, which, supported by the strengthening of cultural studies and the questioning of "metanarratives" in the last three decades of the twentieth century, carried out an "act of decolonization" from historical and literary criticism. The concepts of violence, knowledge, fiction, and utopia have been constants throughout the process of reading and reflecting on the Peruvian chronicle. This study accordingly answers questions about the dimensions and spaces of violence in the chronicle. It also recognizes that the accounts of the end of the Andean world and of its recomposition correspond to the confluence of forms of knowledge and to their ubiquity. El Primer Nueva Corónica y Buen Gobierno contains scenes of "limit experience" that could have ended tragically, with the disappearance and death of the author or with the abandonment and loss of hope in his ideals. Nevertheless, the chronicle of Felipe Guamán Poma de Ayala concludes with a commitment to life and thus participates in a "knowledge of life" or a "knowing how to live", since it records life and a knowledge that has life as the central aim of its historical narrative.
The Peruvian chronicle thus contains a narrative logic that "undermines the stasis of an irrevocable fate" and formulates a denunciation of a desolate world with the desire to transform it. This produces friction between what the chronicler asserts and the fictional representation of the drawings; between the lived and the invented there opens a whole field of experimentation oscillating between plain diction and the fictional formula. The thesis identifies an eruptive flight from the textual referentiality of the Peruvian chronicle towards the creation of its own referentialities in a utopian sense. The dramatic testimony and the denunciation in voice and image are not distanced from the fantastic and legendary element characteristic of the first Latin American writings. The narrative devices of El Primer Nueva Corónica y Buen Gobierno create textual territorialities open to imagination and legend, close to the marvellous real. The indigenous chronicle was conceived while the "good government and justice" was being organized, in a confrontational environment, but also in a world that was new, nascent at a moment when America appeared as the idealization of the longed-for Platonic project of a happy nation, of a good government, a better nation, a utopia.
Information flows in EU policy-making are heavily dependent on personal networks, both within the Brussels sphere and reaching outside the narrow limits of the Belgian capital. These networks develop, for example, in the course of formal and informal meetings or at the sidelines of such meetings. A plethora of committees at European, transnational and regional level provides the basis for the establishment of pan-European networks. By studying affiliation to those committees, basic network structures can be uncovered. These affiliation network structures can then be used to predict EU information flows, assuming that certain positions within the network are advantageous for tapping into streams of information while others are too remote and peripheral to provide access to information early enough. This study has tested those assumptions for the case of the reform of the Common Fisheries Policy for the period after 2012. Through the analysis of an affiliation network based on participation in 10 different fisheries policy committees over two years (2009 and 2010), network data for an EU-wide network of about 1300 fisheries interest group representatives and more than 200 events was collected. The structure of this network showed a number of interesting patterns, such as, not surprisingly, a rather central role of Brussels-based committees, but also close relations of very specific interests to the Brussels cluster and stronger relations between geographically closer maritime regions. The analysis of information flows then focused on access to draft EU Commission documents containing the upcoming proposal for a new basic regulation of the Common Fisheries Policy. It was first documented that it would have been impossible to obtain this document officially and that personal networks were thus the most likely sources for fisheries policy actors to gain access to these "leaks" in early 2011.
A survey of a sample of 65 actors from the initial network supported these findings: Only a very small group had accessed the draft directly from the Commission. Most respondents who obtained access to the draft had received it from other actors, highlighting the networked flow of informal information in EU politics. Furthermore, the testing of the hypotheses connecting network positions and the level of informedness indicated that presence in or connections to the Brussels sphere had both advantages for overall access to the draft document and with regard to timing. Methodologically, challenges of both the network analysis and the analysis of information flows but also their relevance for the study of EU politics have been documented. In summary, this study has laid the foundation for a different way to study EU policy-making by connecting topical and methodological elements – such as affiliation network analysis and EU committee governance – which so far have not been considered together, thereby contributing in various ways to political science and EU studies.
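The affiliation (two-mode) network described above can be projected onto the actors, so that two representatives are tied whenever they sit on the same committee, with tie weights counting shared committees. A minimal sketch with invented data structures; real analyses would typically use a network library rather than this hand-rolled projection:

```python
from itertools import combinations
from collections import Counter

def project_affiliation(memberships):
    """Project a two-mode actor-committee network onto the actors.

    memberships: dict mapping committee name -> set of actor names.
    Returns a Counter mapping sorted actor pairs (a, b) to the number of
    committees they share; shared committee membership serves as a proxy
    for potential information-flow ties between representatives.
    """
    ties = Counter()
    for actors in memberships.values():
        for a, b in combinations(sorted(actors), 2):
            ties[(a, b)] += 1
    return ties
```

From such a projected network, standard centrality measures can then be computed to test whether actors closer to the central (e.g. Brussels-based) cluster obtained the leaked draft earlier.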
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots, and of cortex cells of non-inoculated roots (cor), were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. Mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares a high sequence similarity with a GRAS TF suggested to be involved in fungal colonization processes (MtRAM1). The function of MtGras8 was investigated by RNA interference (RNAi)-mediated gene silencing. AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) successfully silenced MtGras8, leading to reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots.
The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. The metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because contextual information in video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach for video metadata consisting of a context model and a disambiguation algorithm. The context model takes the characteristics of video metadata into account and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation.
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata pertaining to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the semantic exploration of videos.
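The confidence-ordered disambiguation described above can be sketched in a few lines. All entity names, candidate links and confidence values below are invented for illustration and do not reproduce the thesis's actual model:

```python
# Minimal sketch of confidence-ordered disambiguation: metadata items are
# processed from high to low confidence, and already-resolved entities
# form the context that guides the next, more ambiguous decision.

def disambiguate(item, context, candidates):
    """Pick the candidate entity with the most links into the context."""
    context_ids = {entity["id"] for entity in context}
    return max(candidates[item["text"]],
               key=lambda entity: len(entity["links"] & context_ids))

items = [
    {"text": "Berlin", "confidence": 0.9},  # reliable source, unambiguous
    {"text": "Wall",   "confidence": 0.4},  # noisy source, ambiguous
]
candidates = {  # hypothetical knowledge-base candidates per surface form
    "Berlin": [{"id": "dbpedia:Berlin", "links": {"dbpedia:Berlin_Wall"}}],
    "Wall":   [{"id": "dbpedia:Berlin_Wall", "links": {"dbpedia:Berlin"}},
               {"id": "dbpedia:Wall_Street", "links": set()}],
}

context = []
for item in sorted(items, key=lambda i: i["confidence"], reverse=True):
    context.append(disambiguate(item, context, candidates))
```

Resolving the high-confidence item first lets the ambiguous "Wall" be linked to the contextually related entity rather than to an unrelated candidate, which is the core intuition behind processing items in descending confidence order.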
Passive plant actuators have fascinated many researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), in which stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case in the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms up to four times in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. As a starting point, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but “follows” the structure as it deforms), it results in the cellular structure acquiring a “spontaneous” shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure itself. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic, non-convex, re-entrant tilings, large expansions can be achieved in each individual cell.
The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls’ Young’s modulus E and the internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. The potential of these pressurized structures as soft actuators is therefore outlined. This approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offer a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells tend to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local connectivity of the cells. This has repercussions at both the macroscopic (lattice) and microscopic (cell) scales. At the macroscopic scale, these non-convex lattices can undergo large anisotropic (similar to the diamond-shaped honeycomb) or perfectly isotropic principal expansions, large shearing deformations, or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape.
Depending on the lattice architecture, the microscopic deformations of the individual cells may or may not be equal, so that they either add up or mutually compensate, giving rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics or morphing structures.
This thesis addresses the changes in the German science and higher-education system, focusing on the “entrepreneurial mission” of universities. Attention is directed at the field of knowledge and technology transfer (KTT), which serves to trace the changes that have taken place within the German university system in recent years. The expectations placed on universities have shifted, with economic perspectives assuming an ever greater role. The thesis builds on the premises of neo-institutionalist organization theory, which is used to show how the expectations of external stakeholders find their way into higher-education institutions and affect their organizational design. The study follows an explorative, qualitative research design: a case study examines two universities as exemplary cases. The investigation provides answers to the questions of how KTT is implemented as a field of activity at German universities, which structures have emerged, and to what extent KTT has become institutionalized at universities. Several data-collection instruments are combined by way of triangulation, with expert interviews serving as the main analytical instrument. Beyond answering the research questions, the study aims to generate hypotheses that can be used in further research. In addition, recommendations are given for the implementation of KTT at German higher-education institutions. The thesis addresses both researchers and practitioners in the field of knowledge and technology transfer.
An important strand of research has investigated the question of how children acquire a morphological system using offline data from spontaneous or elicited child language. Most of these studies have found dissociations in how children apply regular and irregular inflection (Marcus et al. 1992, Weyerts & Clahsen 1994, Rothweiler & Clahsen 1993). These studies have considerably deepened our understanding of how linguistic knowledge is acquired and organised in the human mind. Their methodological procedures, however, do not involve measurements of how children process morphologically complex forms in real time. To date, little is known about how children process inflected word forms. The aim of this study is to investigate children’s processing of inflected words in a series of on-line reaction-time experiments. We used a cross-modal priming experiment to test for decompositional effects at the central level. We used a speeded production task and a lexical decision task to test for frequency effects at the access level in production and recognition. Children’s behaviour was compared to adults’ behaviour towards three participle types (-t participles, e.g. getanzt ‘danced’, vs. -n participles with stem change, e.g. gebrochen ‘broken’, vs. -n participles without stem change, e.g. geschlafen ‘slept’). For the central level, results indicate that -t participles but not -n participles have decomposed representations. For the access level, results indicate that -t participles are represented according to their morphemes and additionally as full forms, at least from the age of nine years onwards (Pinker 1999 and Clahsen et al. 2004). Further evidence suggested that -n participles are represented as full-form entries at the access level and that -n participles without stem change may encode morphological structure (cf. Clahsen et al. 2003). Our data also suggest that processing strategies for -t participles are applied differently in recognition and production.
These results provide evidence that children (within the age range tested) employ the same mechanisms for processing participles as adults. The child lexicon grows as children form additional full-form representations for -t participles on access level and elaborate their full-form lexical representations of -n participles on central level. These results are consistent with processing as explained in dual-system theories.
Intensive research over the past decades has led to a very detailed characterization of the mammalian taste system. Nevertheless, important questions have remained unanswered with the methods employed so far. One of these questions concerns the discrimination of bitter compounds. The number of substances that taste bitter to humans and trigger innate aversive behaviour in animals runs into the thousands. These substances differ greatly both in chemical structure and in their effect on the organism. While many bitter compounds are potent toxins, others are harmless in the amounts ingested with food, or even have beneficial effects on the body. Being able to distinguish between these groups would be advantageous for an animal, yet no such mechanism is known in mammals. The aim of this work was to investigate the processing of taste information in the first relay station of the taste pathway in the mouse brain, the nucleus tractus solitarii (NTS), with particular attention to the question of the discrimination of different bitter compounds. For this purpose, a new method for studying the taste system was established that avoids the disadvantages of previously available methods and combines their advantages. The Arc-catFISH method (cellular compartment analysis of temporal activity by fluorescent in situ hybridization), which allows the responses of large groups of neurons to two stimuli to be characterized, was applied to taste-processing cells in the NTS. In the course of this project, stimulus-induced Arc expression in the NTS was demonstrated for the first time. The first results revealed that Arc expression in the NTS occurs specifically after stimulation with bitter compounds and that the Arc-expressing neurons are located predominantly in the gustatory part of the NTS.
This indicates that Arc expression is a marker for bitter-processing gustatory neurons in the NTS. After two successive stimulations with bitter substances, overlapping but distinct populations of neurons were observed that responded differently to the three bitter substances used: cycloheximide, quinine hydrochloride and cucurbitacin I. These neurons are presumably involved in the control of defensive reflexes and could thus form the basis for divergent behaviour towards different bitter compounds.
Measuring the metabolite profile of plants can be a powerful phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo introduction of a stable or radioactive label into a plant without perturbing the system. To follow the carbon flow from a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. To allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is performed to gain information on carbon allocation from 13C sucrose. The method is tested by examining single leaves of one rosette in different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably.
In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different pattern of metabolite pool sizes across single leaves of one Arabidopsis rosette compared to plants grown at normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single leaf metabolite profiles of reil2 differ from those of the WT.
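The kind of "simple relative calculation" of 13C allocation mentioned above can be sketched as follows. All intensity values are invented and the exact formula used in the thesis is not reproduced here; this only illustrates the general principle of estimating 13C enrichment from mass isotopologue intensities:

```python
# Hedged sketch: the mean 13C enrichment of a metabolite is estimated from
# its mass isotopologue intensities M0..Mn and compared against an
# unlabelled (natural-abundance) control to obtain the labelling excess.

def carbon13_fraction(isotopologues):
    """Mean 13C enrichment: sum(i * M_i) / (n_carbons * sum(M_i))."""
    n_carbons = len(isotopologues) - 1
    weighted = sum(i * m for i, m in enumerate(isotopologues))
    return weighted / (n_carbons * sum(isotopologues))

# Fumaric acid has four carbons, hence isotopologues M0..M4.
fed     = [500, 120, 260, 80, 40]  # after 13C-sucrose feeding (invented)
control = [930, 45, 15, 7, 3]      # natural-abundance baseline (invented)

excess_13c = carbon13_fraction(fed) - carbon13_fraction(control)
```

Subtracting the control corrects for the roughly 1.1% natural abundance of 13C, so the excess reflects carbon actually allocated from the fed 13C sucrose.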
When we read a text, we obtain information at different levels of representation from abstract symbols. A reader’s ultimate aim is the extraction of the meaning of the words and the text. The research on eye movements in reading covers a broad range of psychological systems, ranging from low-level perceptual and motor processes to high-level cognition. Reading in skilled readers proceeds highly automatically, but is at the same time a complex phenomenon of interacting subprocesses. The study of eye movements during reading offers the possibility to investigate cognition via behavioral measures during the exercise of an everyday task. The process of reading is not limited to the directly fixated (or foveal) word but also extends to surrounding (or parafoveal) words, particularly the word to the right of the gaze position. This process may be unconscious, but parafoveal information is necessary for efficient reading. There is an ongoing debate on whether processing of the upcoming word encompasses word meaning (or semantics) or only superficial features. To deepen our knowledge of how the meaning of one word helps the processing of another, seven experiments were conducted. In these studies, words were exchanged during reading. The degree of relatedness between the word to the right of the currently fixated one and the word subsequently fixated was experimentally manipulated. Furthermore, the time course of the parafoveal extraction of meaning was investigated with two different approaches, an experimental one and a statistical one. As a major finding, fixation times were consistently lower when a semantically related word had been presented than when an unrelated word had been present. Introducing an experimental technique that allows controlling the duration for which words are available, the time course of processing and integrating meaning was evaluated. Results indicated both facilitation and inhibition due to relatedness between the meanings of words.
In a more natural reading situation, the effectiveness of the processing of parafoveal words was sometimes time-dependent and substantially increased with shorter distances between the gaze position and the word. Findings are discussed with respect to theories of eye-movement control. In summary, the results are more compatible with models of distributed word processing. The discussions moreover extend to language differences and technical issues of reading research.
Landslides are one of the biggest natural hazards in Georgia, a mountainous country in the Caucasus. So far, no systematic monitoring and analysis of the dynamics of landslides in Georgia has been carried out. Especially since landslides are triggered by extrinsic processes, analysing landslides together with precipitation and earthquakes is challenging. In this thesis I describe the advantages and limits of remote sensing for detecting and better understanding the nature of landslides in Georgia. The thesis is written in cumulative form, comprising a general introduction, three manuscripts, and a summary and outlook chapter. In the present work, I measure the surface displacement due to active landslides with different interferometric synthetic aperture radar (InSAR) methods. Slow landslides (several cm per year) are well detectable with two-pass interferometry. At the same time, extremely slow landslides (several mm per year) could be detected only with time-series InSAR techniques. I exemplify the success of InSAR techniques by showing hitherto unknown landslides located in the central part of Georgia. Both the landslide extent and the displacement rate are quantified. Further, to determine the possible depth and position of potential sliding planes, inverse models were developed. Inverse modeling searches for the source parameters that can reproduce the observed displacement distribution. I also empirically estimate the volume of the investigated landslide using displacement distributions derived from InSAR combined with morphology from aerial photography. I adapted a volume formula for our case, and also combined available seismicity and precipitation data to analyze potential triggering factors. A governing question was: What causes the landslide acceleration observed in the InSAR data? The investigated area (central Georgia) is seismically highly active.
As an additional product of the InSAR data analysis, a deformation area associated with the 7th September Mw=6.0 earthquake was found. Evidence of surface ruptures directly associated with the earthquake could not be found in the field; however, during and after the earthquake new landslides were observed. The thesis highlights that deformation measured by InSAR may help to map areas prone to landslides triggered by earthquakes, potentially providing a technique that is relevant for country-wide landslide monitoring, especially as new satellite sensors will emerge in the coming years.
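The sensitivity argument behind two-pass interferometry versus time-series InSAR can be illustrated with the standard phase-to-displacement relation. The wavelength below is an assumed C-band value, and sign conventions differ between processing chains:

```python
# Back-of-the-envelope illustration of two-pass InSAR sensitivity: one full
# fringe of unwrapped interferometric phase (2*pi) corresponds to half a
# radar wavelength of line-of-sight displacement.
import math

WAVELENGTH_M = 0.056  # assumed C-band radar wavelength in metres

def los_displacement(unwrapped_phase_rad):
    """Line-of-sight displacement for a given unwrapped phase change."""
    return unwrapped_phase_rad * WAVELENGTH_M / (4 * math.pi)

# One fringe maps to lambda/2 = 2.8 cm, which is why cm-per-year landslide
# motion is visible in a single interferogram while mm-per-year rates
# require stacking many acquisitions in a time series.
one_fringe_m = los_displacement(2 * math.pi)
```

This is only the geometric conversion; real processing additionally requires removal of topographic and atmospheric phase contributions before displacements can be interpreted.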
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared by a template process, for which monodisperse silica particles were first vertically deposited onto glass slides. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. In a second step, the template was embedded in a matrix consisting of biocompatible, thermoresponsive hydrogels. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystals as a template. The space in between the template particles was filled with the monomer solution, and the hydrogel was cured via UV polymerisation. The particles were chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems were denoted inverse hydrogel opals. A pore diameter of several hundred nanometres as well as interconnections between the pores should facilitate the diffusion of larger (bio)molecules, which has so far been a challenge in such systems. The copolymer composition was chosen so that the hydrogel collapses above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a coupling site for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay.
When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, and the binding event was thereby visualised. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the distance between the crystal planes, which is responsible for the colour of the reflection. These findings open up the possibility of creating sensor materials for further biomolecules in the size range of avidin.
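The link between plane spacing and reflection colour follows Bragg's law, which at normal incidence reduces to a simple product. The plane spacing and effective refractive index below are assumed values for illustration, not measurements from the thesis:

```python
# Illustrative Bragg estimate at normal incidence: the reflected wavelength
# of an opal scales with the spacing of its (111) crystal planes,
# lambda = 2 * d111 * n_eff, so swelling of the pores after analyte
# binding shifts the reflection towards longer wavelengths.

def bragg_wavelength_nm(d111_nm, n_eff):
    """Reflected wavelength of an opal at normal incidence."""
    return 2 * d111_nm * n_eff

before_nm = bragg_wavelength_nm(180.0, 1.35)  # swollen hydrogel state
after_nm  = bragg_wavelength_nm(200.0, 1.35)  # pores expanded after binding
shift_nm  = after_nm - before_nm
```

Even a modest change in plane spacing therefore produces a colour shift of tens of nanometres, large enough to be read out optically.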
Numerical simulations of galaxy formation and observational Galactic astronomy are two fields of research that study the same objects from different perspectives. Simulations try to understand galaxies like our Milky Way from an evolutionary point of view, while observers try to disentangle the current structure and the building blocks of our Galaxy. Thanks to great advances in computational power as well as in massive stellar surveys, we are now able to compare resolved stellar populations in simulations and in observations. In this thesis we use a number of approaches to relate the results of the two fields to each other. The major observational data set we refer to in this work comes from the Radial Velocity Experiment (RAVE), a massive spectroscopic stellar survey that observed almost half a million stars in the Galaxy. In a first study we use three different models of the Galaxy to generate synthetic stellar surveys that can be directly compared to the RAVE data. To do this we evaluate the RAVE selection function in great detail. Among the Galaxy models is the widely used Besancon model, which performs well when individual parameter distributions are considered but fails when we study chemodynamic correlations. The other two models are based on distributions of mass particles instead of analytical distribution functions. This is the first time that such models are converted to the space of observables and compared to a stellar survey. We show that these models can be competitive with, and in some aspects superior to, analytic models because of their self-consistent dynamic history. In the case of a full cosmological simulation of disk galaxy formation, we can recover features in the synthetic survey that relate to the known issues of the model and hence prove that our technique is sensitive to the global structure of the model. We argue that the next generation of cosmological galaxy formation simulations will deliver valuable models for our Galaxy.
Testing these models with our approach will provide a direct connection between stellar Galactic astronomy and physical cosmology. In the second part of the thesis we use a sample of high-velocity halo stars from the RAVE data to estimate the Galactic escape speed and the virial mass of the Milky Way. In the course of this study, cosmological simulations of galaxy formation again play a crucial role: here we use them to calibrate and extensively test our analysis technique. We find the local Galactic escape speed to be 533 (+54/-41) km/s (90% confidence). Combining this result with a simple mass model of the Galaxy, we then construct an estimate of the virial mass of the Galaxy. For the mass profile of the dark matter halo we use two extreme models, a pure Navarro, Frenk & White (NFW) profile and an adiabatically contracted NFW profile. When we apply statistics on the concentration parameter of these profiles taken from large dissipationless cosmological simulations, we obtain an estimate of the virial mass that is almost independent of the choice of halo profile. For the mass M_340 enclosed within R_340 = 180 kpc we find 1.3 (+0.4/-0.3) x 10^12 M_sun. This value is in very good agreement with a number of other mass estimates in the literature that are based on independent data sets and analysis techniques. In the last part of this thesis we investigate a new possible channel for generating the population of hypervelocity stars (HVSs) observed in the stellar halo. Commonly, it is assumed that the velocities of these stars originate from an interaction with the super-massive black hole in the Galactic center. It was recently suggested that stars stripped off a disrupted satellite galaxy could reach similar velocities and leave the Galaxy. Here we study in detail the kinematics of tidal debris stars to investigate the probability that the observed sample of HVSs could partly originate from such a galaxy collision.
We use a suite of N-body simulations following the encounter of a satellite galaxy with its Milky Way-type host galaxy. We quantify the typical pattern formed by the debris stars in angular and phase space and develop a simple model that predicts the kinematics of stripped-off stars. We show that the distribution of orbital energies in the tidal debris has a typical form that can be described quite accurately by a simple function. The main parameters determining the maximum energy kick a tidal debris star can receive are the initial mass of the satellite and, to a lesser extent, its orbit. The main contributors to an unbound stellar population created in this way are massive satellites (M_sat > 10^9 M_sun). In the light of our results, the probability that the observed HVS population is significantly contaminated by tidal debris stars appears small.
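The quoted escape speed can be sanity-checked against the quoted mass with elementary dynamics. This is only a rough cross-check, not the thesis's halo-model analysis, and the solar Galactocentric radius used below is an assumed value:

```python
# Under a simple point-mass approximation, the local escape speed bounds
# the mass enclosed within the Sun's Galactocentric radius from below:
# M > v_esc^2 * r / (2 * G).

G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC   = 3.086e19   # kiloparsec, m

v_esc = 533e3      # m/s, local escape speed estimated in the study
r_sun = 8.3 * KPC  # assumed solar Galactocentric radius

m_enclosed_msun = v_esc**2 * r_sun / (2 * G) / M_SUN
```

The bound comes out to a few times 10^11 solar masses, comfortably below the quoted virial mass of 1.3 x 10^12 M_sun, as expected since most of the halo mass lies well outside the solar circle.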
Challenging Khmer citizenship: minorities, the state, and the international community in Cambodia
(2013)
The idea of a distinctly ‘liberal’ form of multiculturalism has emerged in the theory and practice of Western democracies and the international community has become actively engaged in its global dissemination via international norms and organizations. This thesis investigates the internationalization of minority rights, by exploring state-minority relations in Cambodia, in light of Will Kymlicka’s theory of multicultural citizenship. Based on extensive empirical research, the analysis explores the situation and aspirations of Cambodia’s ethnic Vietnamese, highland peoples, Muslim Cham, ethnic Chinese and Lao and the relationships between these groups and the state. All Cambodian regimes since independence have defined citizenship with reference to the ethnicity of the Khmer majority and have – often violently – enforced this conception through the assimilation of highland peoples and the Cham and the exclusion of ethnic Vietnamese and Chinese. Cambodia’s current constitution, too, defines citizenship ethnically. State-sponsored Khmerization systematically privileges members of the majority culture and marginalizes minority members politically, economically and socially. The thesis investigates various international initiatives aimed at promoting application of minority rights norms in Cambodia. It demonstrates that these initiatives have largely failed to accomplish a greater degree of compliance with international norms in practice. This failure can be explained by a number of factors, among them Cambodia’s neo-patrimonial political system, the geo-political fears of a ‘minoritized’ Khmer majority, the absence of effective regional security institutions, the lack of minority access to political decision-making, the significant differences between international and Cambodian conceptions of modern statehood and citizenship and the emergence of China as Cambodia’s most important bilateral donor and investor.
Based on this analysis, the dissertation develops recommendations for a sequenced approach to minority rights promotion, with pragmatic, less ambitious shorter-term measures that work progressively towards the achievement of international norms in the longer term.
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why can a software system be designed and realized which supports the stakeholders in doing their work. To capture and structure the requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure the consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes, as used in other disciplines such as design, on the other hand, allow designers to validate and iterate concepts and requirements with stakeholders at low cost. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios.
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells are sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal with a dominating peak around 180 degrees. In approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed by a factor of two on average. Based on the experimentally observed values of mean run time and rotational diffusion, we present a model that describes the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, by an extended version of the model, also the negative dip in the directional autocorrelation function as observed in the experiments.
The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% as compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with the motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that a majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, in which bacteria near a solid boundary were shown to swim on circular trajectories, an effect which can be attributed to a wall-induced torque. The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow-field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments.
Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the growth in the number of cells in the microchannel under nutrient-rich conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 hours and 50 minutes, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes during which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells to the swimming phase is most likely the result of an individual adaptation process, in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
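To illustrate the kind of run-reverse random walk with alternating speeds described in this abstract, the following sketch simulates a 2D walker and estimates its mean square displacement (MSD). It is not the thesis' model; all parameter values (run time, speeds, reversal probability) are illustrative stand-ins, chosen only to be in the ballpark of the quoted statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_run_reverse(n_steps=2000, dt=0.1, tau_run=1.0,
                         v_fast=32.0, v_slow=16.0, p_reverse=0.6):
    """Sketch of a 2D run-reverse random walker with alternating speeds.

    Hypothetical parameters: speeds in micrometers/s, mean run time tau_run
    in s, and a 60% chance that a turn is a reversal (turning angle ~180 deg).
    """
    pos = np.zeros((n_steps, 2))
    angle = rng.uniform(0, 2 * np.pi)
    speed = v_fast
    for i in range(1, n_steps):
        # A turn occurs with probability dt / tau_run (Poissonian run times).
        if rng.random() < dt / tau_run:
            if rng.random() < p_reverse:
                angle += np.pi                                 # reversal...
                speed = v_slow if speed == v_fast else v_fast  # ...with speed switch
            else:
                angle = rng.uniform(0, 2 * np.pi)              # ordinary reorientation
        pos[i] = pos[i - 1] + speed * dt * np.array([np.cos(angle), np.sin(angle)])
    return pos

def msd(pos, lag):
    """Mean square displacement at a given time lag (in steps)."""
    d = pos[lag:] - pos[:-lag]
    return np.mean(np.sum(d ** 2, axis=1))

traj = simulate_run_reverse()
# Diffusive spreading: the MSD grows with the time lag.
assert msd(traj, 400) > msd(traj, 40) > 0
```

Repeating the simulation with a single constant intermediate speed and comparing the long-time MSDs is one way to probe numerically the analytical claim that alternating speeds enhance spreading.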
Logging and large earthquakes are disturbances that may significantly affect hydrological and erosional processes and process rates, although in decisively different ways. Despite numerous studies that have documented the impacts of both deforestation and earthquakes on water and sediment fluxes, a number of details regarding the timing and type of de- and reforestation, seismic impacts on subsurface water fluxes, or the overall geomorphic work involved have remained unresolved. The main objective of this thesis is to address these shortcomings and to better understand and compare the hydrological and erosional process responses to such natural and man-made disturbances. To this end, south-central Chile provides an excellent natural laboratory owing to its high seismicity and the ongoing conversion of land into highly productive plantation forests. In this dissertation I combine paired catchment experiments, data analysis techniques, and physics-based modelling to investigate: 1) the effect of plantation forests on water resources, 2) the source and sink behavior of timber harvest areas in terms of overland flow generation and sediment fluxes, 3) geomorphic work and its efficiency as a function of seasonal logging, 4) possible hydrologic responses of the saturated zone to the 2010 Maule earthquake and 5) responses of the vadose zone to this earthquake. Re 1) In order to quantify the hydrologic impact of plantation forests, it is fundamental to first establish their water balances. I show that tree species is not significant in this regard, i.e. Pinus radiata and Eucalyptus globulus do not trigger decisively different hydrologic responses. Instead, water consumption is more sensitive to soil-water supply under the local hydro-climatic conditions. Re 2) Contradictory opinions exist about whether timber harvest areas (THAs) generate or capture overland flow and sediment.
Although THAs contribute significantly to hydrology and sediment transport because of their spatial extent, little is known about the hydrological and erosional processes occurring on them. I show that THAs may act as both sources and sinks for overland flow, which in turn intensifies surface erosion. Above a rainfall intensity of ~20 mm/h, which corresponds to <10% of all rainfall, THAs may generate runoff, whereas below that threshold they remain sinks. The overall contribution of Hortonian runoff is thus secondary considering the local rainfall regime. The bulk of both runoff and sediment is generated by Dunne (saturation-excess) overland flow. I also show that logging may increase infiltrability on THAs, which may cause an initial decrease in streamflow followed by an increase after the groundwater storage has been refilled. Re 3) I present changes in frequency-magnitude distributions following seasonal logging by applying Quantile Regression Forests in hitherto unprecedented detail. It is clearly the season that controls the hydro-geomorphic work efficiency of clear cutting. Logging, particularly dry-season logging, caused a shift of work efficiency away from flashy events towards more frequent, moderate rainfall-runoff events. The sediment transport is dominated by Dunne overland flow, which is consistent with physics-based modelling using WASA-SED. Re 4) It is well accepted that earthquakes may affect hydrological processes in the saturated zone. Assuming such flow conditions, consolidation of saturated saprolitic material is one possible response. Consolidation raises the hydraulic gradients, which may explain the observed increase in discharge following earthquakes. The squeezed-out water saturates the soil, which in turn increases the water accessible for plant transpiration. Post-seismic enhanced transpiration is reflected in the intensification of diurnal cycling.
Re 5) Assuming unsaturated conditions, I present the first evidence that the vadose zone may also respond to seismic waves by releasing pore water, which in turn feeds groundwater reservoirs. In this way, water tables along the valley bottoms are elevated, thus providing additional water resources to the riparian vegetation. By inverse modelling, the transient increase in transpiration is found to be 30-60%. Based on the data available, neither hypothesis can be conclusively tested. Finally, when comparing the hydrological and erosional effects of the Maule earthquake with the impact of planting exotic plantation forests, the overall observed earthquake effects are comparably small and limited to short time scales.
This thesis is concerned with so-called non-canonical, or unintegrated, subordinate clauses. These clauses are characterized by the fact that they cannot be clearly described as coordinated or subordinated by means of the usual criteria (constituent status, verb-final position). The phenomenon of non-canonical subordinate clauses has been discussed in linguistics in general since the late seventies (Davison 1979) and, at the latest with Fabricius-Hansen (1992), has also arrived within German linguistics. Besides the mere identification of non-canonical clause complexes, a much-noted issue here has usually been the construction of a classification capturing at least some non-canonical complexes, as can be seen, for instance, in Fabricius-Hansen (1992) and Reis (1997). The aim of this study is to provide an exhaustive classification of the clause types in question. To this end, all potential subordination features are first examined in detail, with the help of corpus data, since most previous studies on this topic presuppose the same set of features as given. It will turn out that only a small number of features are truly suited to provide unambiguous evidence about the quality of clause linkage. The taxonomy of German subordinate clauses subsequently set up will, finally, make do with postulating a single non-canonical clause class. Moreover, it is also able to capture the numerous exceptional cases. Concretely, this means that even subordinate clauses which, due to certain properties, behave partly idiosyncratically can simply be incorporated into the proposed classification. In this context I will further show how a subordinate clause classification can also do justice to so-called secondary subordination features, even though these do not behave uniformly across the individual subordinate clause classes. Finally, I will provide a theoretical modelling of the previously postulated taxonomy which, on the basis of HPSG, is able to capture all possible subordinate clause types by means of feature inheritance.
The professionalized communication of complex entities such as states and nations, which shifts its treatment of political questions into the spheres of image and influence, cannot, against the background of growing competition, bypass the significance of reputation. For beyond its economic importance, reputation, as medium- or long-term public standing that endows its bearers with the power to define and to persuade, legitimizes positions of power and authority. In a mediatized society, communication with the public becomes increasingly important both for the acquisition and maintenance of reputation and for its withdrawal. An increasing tendency towards scandalization, a characteristic of the media society, plays a role here; it renders reputation more fragile and is considered the most effective mechanism for the withdrawal of reputation, as the example of the Danish "cartoon affair" illustrates. In a communicatively fast-moving world of increasingly freely available information, terms such as public diplomacy, nation branding, country branding, and place branding circulate through the foreign ministries of the global village; their common denominator is, first of all, outward-directed communication. But already the question of sender and addressee, of actor and recipient, of messages and target groups points to the complexity of a country's communication. The aim of this study is to identify, within the communication efforts of countries, factors that can lastingly influence image and reputation. The following questions stand at the center of the investigation: How are countries perceived and how is reputation formed? Can a country's reference groups be described by stakeholder theory (Freeman 1984), and if so, what consequences does such a consideration of stakeholder and addressee groups have for a country's communication? Which aspects of the theoretical approaches can be regarded as central to the communication of countries? And finally: can reflection through practice, using Switzerland as a case study, confirm the relevance of the aspects identified or supplement them with further aspects, and can success factors be identified and suitable instruments for reputation communication be demonstrated on this basis?
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we will focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we will understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as at every point in time systems can join and leave the ensemble. This, together with the fact that the cooperation among the constituent systems is not necessarily observable, means that we will consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and essentially relies on collaborations and roles as an abstraction layer above the components. This will allow us to describe SoS at an architectural level. We will also give a formal semantics for our modelling approach which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e., the adding and removing of components and services, is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Organic semiconductors combine the benefits of organic materials, i.e., low-cost production, mechanical flexibility, light weight, and robustness, with the fundamental semiconductor properties of light absorption, emission, and electrical conductivity. This class of materials has several advantages over conventional inorganic semiconductors that have led, for instance, to the commercialization of organic light-emitting diodes, which can nowadays be found in the displays of TVs and smartphones. Moreover, organic semiconductors will possibly lead to new electronic applications which rely on the unique mechanical and electrical properties of these materials. In order to push the development and the success of organic semiconductors forward, it is essential to understand the fundamental processes in these materials. This thesis concentrates on understanding how the charge transport in thiophene-based semiconductor layers depends on the layer morphology and how the charge transport properties can be intentionally modified by doping these layers with a strong electron acceptor. By means of optical spectroscopy, the layer morphologies of poly(3-hexylthiophene), P3HT, P3HT-fullerene bulk heterojunction blends, and oligomeric polyquaterthiophene, oligo-PQT-12, are studied as a function of temperature, molecular weight, and processing conditions. The analyses rely on the decomposition of the absorption contributions from the ordered and the disordered parts of the layers. The ordered-phase spectra are analyzed using Spano’s model. The analysis shows that the fraction of aggregated chains and the interconnectivity of these domains are fundamental to a high charge carrier mobility. In P3HT layers, such structures can be grown from long, high-molecular-weight P3HT chains.
Low- and medium-molecular-weight P3HT layers also contain a significant amount of chain aggregates with high intragrain mobility; however, intergranular connectivity and, therefore, efficient macroscopic charge transport are absent. In P3HT-fullerene blend layers, a highly crystalline morphology that favors the hole transport and the solar cell efficiency can be induced by annealing procedures and the choice of a high-boiling-point processing solvent. Based on scanning near-field and polarization optical microscopy, the morphology of oligo-PQT-12 layers is found to be highly crystalline, which explains the rather high field-effect mobility in this material as compared to low-molecular-weight polythiophene fractions. On the other hand, crystalline dislocations and grain boundaries are identified which clearly limit the charge carrier mobility in oligo-PQT-12 layers. The charge transport properties of organic semiconductors can be widely tuned by molecular doping. Indeed, molecular doping is a key to highly efficient organic light-emitting diodes and solar cells. Despite this vital role, it is still not understood how mobile charge carriers are induced into the bulk semiconductor upon the doping process. This thesis contains a detailed study of the doping mechanism and the electrical properties of P3HT layers which have been p-doped by the strong molecular acceptor tetrafluorotetracyanoquinodimethane, F4TCNQ. The density of doping-induced mobile holes, their mobility, and the electrical conductivity are characterized in a broad range of acceptor concentrations. A long-standing debate on the nature of the charge transfer between P3HT and F4TCNQ is resolved by showing that almost every F4TCNQ acceptor undergoes a full-electron charge transfer with a P3HT site. However, only 5% of these charge transfer pairs can dissociate and induce a mobile hole into P3HT which contributes to electrical conduction.
Moreover, it is shown that the left-behind F4TCNQ ions broaden the density-of-states distribution for the doping-induced mobile holes, which is due to the long-range Coulomb attraction in low-permittivity organic semiconductors.
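The quantitative relation behind this doping study can be sketched with the textbook conductivity formula sigma = q · p · mu: if almost every acceptor ionizes but only a fraction of the charge-transfer pairs dissociates, only that fraction contributes mobile holes. The numbers below are hypothetical placeholders, not values from the thesis.

```python
# Sketch: doping-induced conductivity, assuming full electron transfer for
# each acceptor but only a 5% dissociation yield of mobile holes, as the
# abstract reports for P3HT:F4TCNQ. All numeric inputs are illustrative.
q = 1.602e-19      # elementary charge, C
n_acceptor = 1e26  # acceptor density, m^-3 (hypothetical)
eta = 0.05         # fraction of charge-transfer pairs yielding a mobile hole
mu = 1e-8          # hole mobility, m^2 / (V s) (hypothetical)

p_mobile = eta * n_acceptor  # mobile hole density, m^-3
sigma = q * p_mobile * mu    # conductivity sigma = q * p * mu, in S/m
assert sigma > 0
```

The point of the sketch is that the measured conductivity constrains only the product of hole density and mobility; separating the two, as done in the thesis, requires characterizing them independently.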
Small eye movements during fixation : the case of postsaccadic fixation and preparatory influences
(2013)
Describing human eye movement behavior as an alternating sequence of saccades and fixations turns out to be an oversimplification because the eyes continue to move during fixation. Small-amplitude saccades (e.g., microsaccades) are typically observed 1-2 times per second during fixation. Research on microsaccades came in two waves. Early studies on microsaccades were dominated by the question whether microsaccades affect visual perception, and by studies on the role of microsaccades in the process of fixation control. The lack of evidence for a unique role of microsaccades led to a very critical view of the importance of microsaccades. Over the last years, microsaccades have moved into focus again, revealing many interactions with perception, oculomotor control, and cognition, as well as intriguing new insights into the neurophysiological implementation of microsaccades. In contrast to early studies on microsaccades, recent findings on microsaccades were accompanied by the development of models of microsaccade generation. While the exact generating mechanisms vary between the models, they share the assumption that microsaccades are generated in a topographically organized saccade motor map that includes a representation for small-amplitude saccades in the center of the map (with its neurophysiological implementation in the rostral pole of the superior colliculus). In the present thesis I criticize the fact that models of microsaccade generation are exclusively based on results obtained during prolonged presaccadic fixation. I argue that microsaccades should also be studied in a more natural situation, namely the fixation following large saccadic eye movements. Studying postsaccadic fixation offers a new window to falsify models that aim to account for the generation of small eye movements. I demonstrate that error signals (visual and extra-retinal), as well as non-error signals like target eccentricity, influence the characteristics of small-amplitude eye movements.
These findings require a modification of a model introduced by Rolfs, Kliegl and Engbert (2008) in order to account for the generation of small-amplitude saccades during postsaccadic fixation. Moreover, I present a promising type of survival analysis that allowed me to examine time-dependent influences on postsaccadic eye movements. In addition, I examined the interplay of postsaccadic eye movements and postsaccadic location judgments, highlighting the need to include postsaccadic eye movements as a covariate in the analyses of location judgments in the presented paradigm. As a second goal, I tested model predictions concerning preparatory influences on microsaccade generation during presaccadic fixation. The observation that the preparatory set significantly influenced the microsaccade rate supports the critical model assumption that increased fixation-related activity results in a larger number of microsaccades. In the present thesis I thus identify important influences on the generation of small-amplitude saccades during fixation. These eye movements constitute a rich oculomotor behavior which still poses many research questions. Certainly, small-amplitude saccades represent an interesting source of information and will continue to influence future studies on perception and cognition.
The Arctic tundra, covering approx. 5.5% of the Earth’s land surface, is one of the last ecosystems remaining closest to its untouched condition. Remote sensing is able to provide information at regular time intervals and large spatial scales on the structure and function of Arctic ecosystems. However, almost all natural surfaces exhibit individual anisotropic reflectance behaviors, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites as a basis for the extraction of vegetation parameters and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, an understanding of BRDF effects in Arctic tundra is essential for the retrieval of high-quality, consistent, and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system which nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS).
The outcomes of the field spectro-radiometric measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) show that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results especially reveal separation possibilities between the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all the different tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weak or nonexistent green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for understanding the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed even approximately isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the highest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm).
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can differ significantly from the observed BRDF effects in the original reflectance data. However, the results further show that the NDVI can minimize view-angle effects owing to the contrary spectro-directional effects in the red and NIR bands. For the researched tundra plant communities, the overall difference of the off-nadir NDVI values compared to the nadir value increases with increasing sensor viewing angles, but on average never exceeds 10%. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly lead to an alteration of the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small, but nonetheless significant, differences in surface reflectance, it is important to include spectro-directional reflectance characteristics in the algorithm development for remote sensing products.
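The partial robustness of the NDVI to view-angle effects follows directly from its ratio form. The sketch below uses the standard NDVI definition with hypothetical tundra reflectance values (not measurements from the thesis) and shows that a view-angle effect that scales both bands by the same factor cancels exactly; real BRDF effects differ between the red and NIR bands, which is why residual view-angle differences (up to ~10% here) remain.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical nadir reflectances for a tundra surface (illustrative values).
red_nadir, nir_nadir = 0.05, 0.30

# A purely multiplicative view-angle effect identical in both bands
# cancels out in the ratio:
f = 1.2  # hypothetical off-nadir brightening factor
assert abs(ndvi(f * nir_nadir, f * red_nadir) - ndvi(nir_nadir, red_nadir)) < 1e-12
```

Because measured BRDF effects are generally stronger in the VIS than in the NIR, the two bands do not scale identically in practice, so the cancellation is only approximate for real off-nadir observations.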
The supercapacitor is one of the most important energy storage devices, as its construction allows for addressing many of the drawbacks related to batteries, but the low energy density of current systems is a major issue. In this doctoral dissertation, with a view to attaining high-energy-density supercapacitor systems that can be comparable to batteries, new heteroatom-containing carbons in the form of particles and three-dimensional films were investigated. A nitrogen-containing material, acrodam, was chosen as the carbon precursor due to its low cost, high carbonization yield, and oligomerizability, among other properties. The carbon particles were prepared from acrodam together with caesium acetate as a meltable flux agent and exhibited excellent properties in hydroquinone-loaded sulphuric acid electrolyte, with high energy densities (up to 133.0 Wh kg–1) and sufficient cycle stabilities. These properties are already comparable to those of batteries. In addition, conductive three-dimensional carbon films were fabricated using acrodam oligomer as the precursor by the inexpensive spin-coating method. The films were found to be homogeneous, flat, and void- and crack-free, and high conductivities (up to 334 S cm–1) could be obtained at a carbonization temperature of 1000 °C. Furthermore, a porous three-dimensional carbon film could be formed using an organic template at the first attempt. This finding demonstrates the film’s potential for various applications such as supercapacitor electrodes; the essential absence of contact resistance within the network should contribute to effective electron transport within the electrode. The progress made in this dissertation opens a new route to further enhancing the energy density of supercapacitors, as well as to other applications exceeding current performance.
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature, as well as lacking knowledge about their driving forces and potential effects, makes their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations, and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, carrying the risk of underestimating the hazard, with disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic procedures, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis, the Bayesian network approach is applied to different natural hazard domains (i.e., seismic hazard, flood damage, and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes.
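The factorization of the joint distribution described above can be sketched with a toy three-node network. The variable names and probabilities are hypothetical and serve only to illustrate the decomposition into local conditional distributions; they are not taken from the thesis.

```python
# Toy Bayesian network: Rain -> Flood <- SaturatedSoil (all variables binary).
# Each node carries a conditional distribution given its parents; the joint
# distribution is the product of these local distributions.
p_rain = {True: 0.3, False: 0.7}
p_soil = {True: 0.4, False: 0.6}
# P(flood = True | rain, soil)
p_flood = {
    (True, True): 0.9, (True, False): 0.5,
    (False, True): 0.3, (False, False): 0.05,
}

def joint(rain, soil, flood):
    """P(rain, soil, flood) = P(rain) * P(soil) * P(flood | rain, soil)."""
    pf = p_flood[(rain, soil)]
    return p_rain[rain] * p_soil[soil] * (pf if flood else 1.0 - pf)

# A valid factorization must sum to 1 over all joint assignments.
total = sum(joint(r, s, f)
            for r in (True, False) for s in (True, False) for f in (True, False))
```

The same decomposition underlies the parameter learning discussed below: each node's conditional probability table can be estimated separately from the data.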
The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typical problems are the handling of continuous variables, incomplete observations, and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, which treats the discretization as a random variable as well, is developed for automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. When incomplete observations must be handled on top of this, the computational burden grows: iterative procedures for missing-value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable. The discretization learned for this variable does not always have the resolution required for good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimation or mixtures of truncated exponential functions.
All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only to other natural hazard assessments, but to a variety of domains struggling with uncertainty.
Geological faults can act as aquifers, as barriers, or as mixed conductive/sealing fluid systems. Faults can therefore significantly influence groundwater flow in the subsurface, which in turn can cause pronounced changes in the deep thermal field. Groundwater dynamics and temperature changes are, in turn, decisive factors for geothermal energy exploration. This study investigated the influence of faults on the fluid system and the thermal field in the subsurface. It explored the physical processes that control fluid behaviour and the temperature distribution in faults and in the surrounding rocks. For this purpose, 3D finite-element simulations of coupled fluid and heat transport were carried out for synthetic as well as real model scenarios at different scales. A small-scale synthetic model was developed to systematically investigate the influence of an obliquely dipping fault through stepwise variation of its hydraulic aperture and permeability. An inverse linear relationship was found, showing that the fluid velocity in the fault decreases by ~1e-01 m/s each time the fault aperture is increased by one order of magnitude. A high permeability contrast between fault and surrounding matrix favours fluid advection towards the fault and leads to pronounced pressure and temperature changes within and around the fault. At a low permeability contrast between fault and surrounding rock, by contrast, no fluid flow takes place in the fault, and the hydrostatic pressure field as well as the temperature field remain unchanged. Based on the synthetic modelling results, the influence of faults was analysed at a larger scale using a more complex (real) geological system.
This is a 3D model of the Groß Schönebeck geothermal site, located about 40 km north of Berlin. The integration of one permeable and three impermeable major faults revealed influences of varying strength on fluid circulation and on the temperature and pressure fields. The modelled convective circulation in the permeable fault strongly alters the thermal field (by up to 15 K). In the low-permeability faults, heat is transported exclusively by diffusion. This conductive heat transport does not affect the thermal field, but causes local changes in the hydrostatic pressure field. To investigate the influence of large fault zones with kilometre-scale vertical offset on the geothermal field at the basin scale, coupled fluid and heat transport simulations were carried out for a 3D structural model of the Brandenburg region (Noack et al. 2010; 2013). Different geological scenarios were modelled with respect to fault permeability, of which two end-member models were evaluated. The results showed that the impermeable faults influence fluid flow only locally. Since they act as hydraulic barriers, fluid flow is deflected at very low velocities along the faults within a zone of ~1 km on either side. The modelled local changes in the groundwater circulation system have no observable effect on the temperature field. Permeable fault zones, by contrast, generate a pronounced thermal signature within a zone of influence of ~2.4-8.8 km at -1000 m depth and ~6-12 km at -3000 m depth. This thermal signature, in which colder and warmer temperature domains alternate, is caused by upward- and downward-directed fluid flow within the fault, which is essentially driven by existing gradients in hydraulic head.
All studies have shown that faults have a considerable influence on fluid and heat flow. The permeability of the fault and of the surrounding geological layers, as well as the specific geological setting, turned out to be decisive factors for the different heat transport mechanisms that can develop in faults. The temperature changes caused by permeable faults can be local but large, as can the changes of the fluid system induced by hydraulically conductive and non-conductive faults. Finally, the simulations for the differently scaled models showed that the results cannot be transferred from one scale to another, and that each geological setting must be considered separately with respect to its configuration and scale. In conclusion, this study demonstrated that incorporating faults into 3D finite-element models for the simulation of coupled fluid and heat transport at different scales is feasible. Since this type of numerical simulation integrates both the geological structure of the subsurface and the physical processes operating within it, it can make a valuable contribution by complementing field- and laboratory-based investigations.
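The controlling role of the permeability contrast between fault and host rock can be sketched with Darcy's law. All values below are illustrative assumptions, not the model parameters used in the thesis.

```python
# Darcy's law: volumetric flux q = (k / mu) * |grad(p)|  [m/s]
# Illustrative values only.
MU_WATER = 1.0e-3        # dynamic viscosity of water, Pa*s

def darcy_flux(permeability_m2, pressure_gradient_pa_per_m):
    """Magnitude of the Darcy flux for a given permeability and driving gradient."""
    return (permeability_m2 / MU_WATER) * pressure_gradient_pa_per_m

grad_p = 1.0e3           # assumed non-hydrostatic pressure gradient, Pa/m
q_fault  = darcy_flux(1.0e-12, grad_p)   # permeable fault zone
q_matrix = darcy_flux(1.0e-16, grad_p)   # low-permeability host rock

# A four-order-of-magnitude permeability contrast translates directly into a
# four-order-of-magnitude contrast in flow velocity, which is why permeable
# faults focus flow and can perturb the thermal field.
contrast = q_fault / q_matrix
```

This linear dependence of flux on permeability is the reason why the fault-to-matrix permeability contrast, rather than the absolute permeability alone, governs whether a fault channels flow.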
In this work, thermosensitive hydrogels with tunable thermo-mechanical properties were synthesized. Generally, the thermal transition of thermosensitive hydrogels is based on either a lower critical solution temperature (LCST) or a critical micelle concentration/temperature (CMC/CMT). The temperature-dependent transition from sol to gel with a large volume change is seen in the former type of thermosensitive hydrogel and is negligible in CMC/CMT-dependent systems. The change in volume leads to exclusion of water molecules, resulting in shrinking and stiffening of the system above the transition temperature. This volume change can be undesirable when cells are to be incorporated in the system. Gelation in the latter case is mainly driven by micelle formation above the transition temperature and further colloidal packing of the micelles around the gelation temperature. As gelation mainly depends on the concentration of the polymer, such a system can undergo fast dissolution upon addition of solvent. Here, it was envisioned to realize a thermosensitive gel based on two components: one responsible for a change in mechanical properties through the formation of reversible netpoints upon heating without volume change, and a second component conferring degradability on demand. As the first component, an ABA triblock copolymer with thermosensitive properties (here: poly(ethylene glycol)-b-poly(propylene glycol)-b-poly(ethylene glycol), PEPE), whose sol-gel transition on the molecular level is based on micellization and colloidal jamming of the formed micelles, was chosen, while biopolymers were employed as the additional macromolecular component crosslinking the formed micelles. The synthesis of the hydrogels was performed in two ways: either by physical mixing of compounds showing electrostatic interactions, or by covalent coupling of the components.
Biopolymers (here: the polysaccharides hyaluronic acid (HA), chondroitin sulphate, or pectin, as well as the protein gelatin) were employed as additional macromolecular crosslinkers to simultaneously incorporate enzyme responsiveness into the systems. In order to obtain strong ionic/electrostatic interactions between PEPE and the polysaccharides, PEPE was aminated to yield predominantly mono- or di-substituted PEPEs. Systems based on aminated PEPE physically mixed with HA showed enhanced mechanical properties, such as elastic modulus (G′) and viscous modulus (G′′), and a decrease of the gelation temperature (Tgel) compared to PEPE at the same concentration. Furthermore, by varying the amount of aminated PEPE in the composition, the Tgel of the system could be tailored to 27-36 °C. Physical mixtures of HA with di-amino PEPE (HA·di-PEPE) showed higher elastic moduli G′ and stability towards dissolution compared to physical mixtures of HA with mono-amino PEPE (HA·mono-PEPE). This indicates a strong influence of the electrostatic interaction between the –COOH groups of HA and the –NH2 groups of PEPE. The physical properties of HA·di-PEPE compare favourably with those of the human vitreous body: the systems are highly transparent and have a comparable refractive index and viscosity. Therefore, this material was tested for a potential biological application and was shown to be non-cytotoxic in eluate and direct-contact tests. The materials will be investigated in further studies as vitreous body substitutes. In addition, enzymatic degradation of these hydrogels was performed using hyaluronidase to specifically degrade the HA. During the degradation of these hydrogels, an increase in Tgel was observed along with a decrease in the mechanical properties. The aminated PEPEs were further utilised for covalent coupling to pectin and chondroitin sulphate using EDC as a coupling agent.
Here, it was possible to adjust the Tgel (28-33 °C) by varying the grafting density of PEPE on the biopolymer. Grafting PEPE to pectin enhanced the thermal stability of the hydrogel. The Pec-g-PEPE hydrogels were degradable by enzymes, with a slight increase in Tgel and a decrease in G′ over the degradation time. The covalent coupling of aminated PEPE to HA was performed with DMTMM as a coupling agent. This method of coupling proved more efficient than the EDC-mediated coupling. Moreover, the purification of the final product was performed by ultrafiltration, which efficiently removed unreacted PEPE from the final product, something not sufficiently achieved by dialysis. Interestingly, the final products of these reactions were in a gel state and showed enhanced mechanical properties at very low concentrations (2.5 wt%) near body temperature. In these hydrogels, the resulting increase in mechanical properties was due to the combined effect of micelle packing (physical interactions) by PEPE and covalent netpoints between PEPE and HA. PEPE alone, or physical mixtures of the same components, did not show thermosensitive behaviour at concentrations below 16 wt%. These thermosensitive hydrogels also showed on-demand solubilisation by enzymatic degradation. The concept of thermosensitivity was introduced into 3D-architectured porous hydrogels by covalently grafting PEPE to gelatin and crosslinking with LDI. Here, the grafted PEPE led to a decrease in helix formation in the gelatin chains, and after fixing the gelatin chains by crosslinking, the system showed an enhancement of the mechanical properties upon heating (34-42 °C) that was reversible upon cooling. A possible explanation for the reversible changes in mechanical properties is the strong physical interactions between micelles formed by the PEPE covalently linked to gelatin.
Above the transition temperature, the local properties were evaluated by AFM indentation of the pore walls, in which an increase in elastic modulus (E) at higher temperature (37 °C) was observed. The water uptake of these thermosensitive architectured porous hydrogels was also influenced by PEPE and by temperature (25 °C and 37 °C), showing lower water uptake at the higher temperature and vice versa. In addition, owing to the lower water uptake at high temperature, the rate of hydrolytic degradation of these systems was decreased compared to pure gelatin architectured porous hydrogels. Such temperature-sensitive architectured porous hydrogels could be important for, e.g., stem cell culturing, cell differentiation, and guided cell migration. Altogether, it was demonstrated that the crosslinking of micelles by a macromolecular crosslinker increases the shear moduli, viscosity, and stability towards dissolution of CMC-based gels. This effect could likewise be realized by covalent or non-covalent mechanisms, such as micelle interactions, physical interactions of gelatin chains, and physical interactions between gelatin chains and micelles. Moreover, the covalent grafting of PEPE creates additional netpoints, which also influence the mechanical properties of thermosensitive architectured porous hydrogels. Overall, the physical and chemical interactions and the reversible physical interactions in such thermosensitive architectured porous hydrogels provide control over the mechanical properties of this complex system. Hydrogels showing a change of mechanical properties without a sol-gel transition or volume change are especially interesting for further studies of cell proliferation and differentiation.
The subject of the present work is to determine the discourses associated with the term "Design Thinking" and to work out their topics, concepts, and references. This objective arises from the multiple contradictions and ambiguities that characterize current uses of the term Design Thinking and hamper its coherent use in science and business. This work is intended to contribute to a fundamental understanding of "Design Thinking" in its different discourse contexts and to create a solid basis of argumentation for future uses of the term.
The surface heat flow (qs) is paramount for modeling the thermal structure of the lithosphere. Changes in qs over a distinct lithospheric unit normally directly reflect changes in the crustal composition and thus in the radiogenic heat budget (e.g., Rudnick et al., 1998; Förster and Förster, 2000; Mareschal and Jaupart, 2004; Perry et al., 2006; Hasterok and Chapman, 2011, and references therein) or, less commonly, changes in the mantle heat flow (e.g., Pollack and Chapman, 1977). Knowledge of this physical property is therefore of great interest for both academic research and the energy industry. The present study focuses on the qs of central and southern Israel as part of the Sinai Microplate (SM). Having formed during Oligocene-to-Miocene rifting and break-up of the African and Arabian plates, the SM is characterized by a young and complex tectonic history. Because thermal diffusion through the lithosphere takes on the order of several tens of millions of years (e.g., Fowler, 1990), qs values of the area reflect conditions of pre-Oligocene times. The thermal structure of the lithosphere beneath the SM in general, and south-central Israel in particular, has remained poorly understood. To address this problem, the two parameters needed for the qs determination were investigated. Temperature measurements were made in ten pre-existing oil and water exploration wells, and the thermal conductivity of 240 drill-core and outcrop samples was measured in the laboratory. Thermal conductivity is the sensitive parameter in this determination. Laboratory measurements were performed on both dry and water-saturated samples, which is labor- and time-consuming. An alternative is to measure the thermal conductivity in the dry state and convert it to a saturated value using mean-model approaches.
The availability of a voluminous and diverse dataset of thermal conductivity values in this study allowed (1), in connection with the temperature gradient, the calculation of new reliable qs values and their use in modeling the thermal pattern of the crust in south-central Israel prior to the young tectonic events, and (2), in connection with comparable datasets, an assessment of the quality of different mean-model approaches for the indirect determination of the bulk thermal conductivity (BTC) of rocks. The reliability of numerically derived BTC values varies between the different mean models and is also strongly dependent on sample lithology. Yet correction algorithms may significantly reduce the mismatch between measured and calculated conductivity values for the different mean models. Furthermore, the dataset allowed the derivation of lithotype-specific conversion equations to calculate the water-saturated BTC directly from dry-measured BTC and porosity (e.g., well-log-derived porosity) without recourse to any mean model, thus providing a suitable tool for fast analysis of large datasets. The results of the study indicate that qs in the study area is significantly higher than previously assumed. The newly presented qs values range between 50 and 62 mW m⁻². A weak trend of decreasing heat flow can be identified from east to west (55-50 mW m⁻²), and an increase from the Dead Sea Basin to the south (55-62 mW m⁻²). The observed range can be explained by variation in the composition (heat production) of the upper crust, accompanied by more systematic spatial changes in its thickness. The new qs data can then be used, in conjunction with petrophysical data and information on the structure and composition of the lithosphere, to adjust a model of the pre-Oligocene thermal state of the crust in south-central Israel. A 2-D steady-state temperature model was calculated along an E-W traverse based on the DESIRE seismic profile (Mechie et al., 2009).
The model comprises the entire lithosphere down to the lithosphere-asthenosphere boundary (LAB), incorporating the most recent knowledge of the lithosphere in pre-Oligocene time, i.e., prior to the onset of rifting and plume-related lithospheric thermal perturbations. The adjustment of modeled to measured qs allows conclusions about the pre-Oligocene LAB depth. The best-fitting and thus most likely depth is 150 km, which is consistent with estimates made in comparable regions of the Arabian Shield. This constitutes the first modelled pre-Oligocene LAB depth and provides important clues to the thermal state of the lithosphere before rifting, which in turn is vital for a better understanding of the (thermo)dynamic processes associated with lithosphere extension and continental break-up.
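The two ingredients of the qs determination discussed above, a saturated bulk thermal conductivity and a temperature gradient, combine through Fourier's law. The sketch below uses the common geometric-mean mixing model for the dry-to-saturated conversion; all numbers are illustrative assumptions, not the thesis measurements.

```python
# Surface heat flow from Fourier's law, q_s = lambda_bulk * dT/dz, with the
# water-saturated bulk thermal conductivity (BTC) estimated via the widely
# used geometric-mean mixing model. Illustrative values only.
LAMBDA_WATER = 0.6          # thermal conductivity of water, W/(m K)

def saturated_conductivity(lambda_matrix, porosity):
    """Geometric-mean model: lambda_sat = lambda_matrix^(1-phi) * lambda_water^phi."""
    return lambda_matrix ** (1.0 - porosity) * LAMBDA_WATER ** porosity

def surface_heat_flow(lambda_bulk, temp_gradient_k_per_km):
    """Heat flow in mW/m^2: (W/(m K)) * (K/km) yields mW/m^2 directly."""
    return lambda_bulk * temp_gradient_k_per_km

lam = saturated_conductivity(3.0, 0.10)   # e.g. a porous sedimentary rock
qs = surface_heat_flow(lam, 22.0)         # assumed geothermal gradient of 22 K/km
```

With these assumed inputs the sketch yields a qs of roughly 56 mW m⁻², illustrating how conductivity and gradient of plausible magnitudes combine to values in the range reported for the study area.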
The expansion of the renal tubulointerstitium due to an accumulation of cellular components and extracellular matrix is a characteristic feature of chronic kidney disease (CKD) and leads to progression of the disease towards end-stage renal failure. Fibroblast proliferation and the transformation of fibroblasts into the secretory myofibroblast phenotype are key events in this process. Signalling processes that lead to the induction of myofibroblasts are being actively investigated in order to identify anti-fibrotic therapeutic approaches. The anti-inflammatory protein annexin A1 and its receptor, formyl peptide receptor 2 (FPR2), have been linked to the regulation of fibroblast activity in various organ systems, but their expression and function in renal fibrotic diseases have not yet been investigated. The aim of the current study was therefore to investigate renal annexin A1 and FPR2 expression in an animal model of chronic renal failure, and to characterize the functional role of annexin A1 in the regulation of the fibroblast phenotype and of fibroblast synthetic activity. To this end, newborn Sprague-Dawley rats were treated either with vehicle or with an angiotensin II type 1 receptor antagonist during the first two weeks of life and were kept without further intervention up to an age of 11 months (CKD rats). The regulation and localization of annexin A1 and FPR2 were assessed by real-time PCR and immunohistochemistry. Annexin A1- and FPR2-expressing cells were further characterized by double immunofluorescence staining with antibodies against endothelial cells (rat endothelial cell antigen), macrophages (CD68), fibroblasts (CD73), and myofibroblasts (alpha-smooth muscle actin (α-sma)).
Cell culture studies were carried out on immortalized renal cortical fibroblasts from wild-type and annexin A1-deficient mice, as well as on established human and murine renal fibroblasts. Overexpression of annexin A1 was achieved by stable transfection. The expression of annexin A1, α-sma, and collagen 1α1 was assessed by real-time PCR, western blot, and immunohistochemistry. Secretion of the annexin A1 protein was examined by western blot after TCA precipitation of the cell culture supernatant. As expected, the CKD rats showed a reduced number of nephrons with marked glomerular hypertrophy. The tubulointerstitial space was expanded by fibrillar collagen, activated fibroblasts, and inflammatory cells. In parallel, the mRNA expression of annexin A1 and transforming growth factor beta (TGF-β) was significantly increased. Localization of annexin A1 by double immunofluorescence identified a large number of CD73-positive cortical fibroblasts and a subpopulation of macrophages as annexin A1-positive. The amount of annexin A1 in myofibroblasts and renal endothelia was low. FPR2 was detected in the majority of renal fibroblasts, in myofibroblasts, in a subpopulation of macrophages, and in renal epithelial cells. Treatment of the murine fibroblasts with the pro-fibrotic cytokine TGF-β led to a parallel increase in α-sma, collagen 1α1, and annexin A1 biosynthesis, and to increased secretion of annexin A1. Overexpression of annexin A1 in murine fibroblasts reduced the extent of TGF-β-induced α-sma and collagen 1α1 biosynthesis. Fibroblasts from annexin A1-deficient mice showed a pronounced myofibroblast phenotype with increased expression of α-sma and collagen 1α1.
Der Einsatz eines Peptidantagonisten des FPR2 (WRW4) resultierte in einer Stimulation der α-sma-Biosynthese, was die Vermutung nahe legte, dass Annexin A1 FPR2-vermittelt anti-fibrotische Effekte hat. Zusammenfassend zeigen diese Ergebnisse, dass renale kortikale Fibroblasten eine Hauptquelle des Annexin A1 im renalen Interstitium und einen Ansatzpunkt für Annexin A1-Signalwege in der Niere darstellen. Das Annexin A1/FPR2-System könnte daher eine wichtige Rolle in der Kontrolle des Fibroblasten Phänotyp und der Fibroblasten Aktivität spielen und daher einen neuen Ansatz für die anti-fibrotischen pharmakologischen Strategien in der Behandlung des CKD darstellen.