The cytoskeleton is an essential component of living cells. It is composed of different types of protein filaments that form complex, dynamically rearranging, and interconnected networks. The cytoskeleton serves a multitude of cellular functions that depend on the cellular context. In animal cells, the cytoskeleton prominently shapes the cell's mechanical properties and movement. In plant cells, by contrast, the presence of a rigid cell wall and their larger size highlight the role of the cytoskeleton in long-distance intracellular transport. As it provides the basis for cell growth and biomass production, cytoskeletal transport in plant cells is of direct environmental and economic relevance. However, while knowledge about the molecular details of cytoskeletal transport is growing rapidly, the organizational principles that shape these processes at the whole-cell level remain elusive.
This thesis is devoted to the following question: How does the complex architecture of the plant cytoskeleton relate to its transport functionality? The answer requires a systems level perspective of plant cytoskeletal structure and transport. To this end, I combined state-of-the-art confocal microscopy, quantitative digital image analysis, and mathematically powerful, intuitively accessible graph-theoretical approaches.
This thesis summarizes five of my publications that shed light on the plant cytoskeleton as a transportation network: (1) I developed network-based frameworks for accurate, automated quantification of cytoskeletal structures, applicable in, e.g., genetic or chemical screens; (2) I showed that the actin cytoskeleton displays properties of efficient transport networks, hinting at its biological design principles; (3) Using multi-objective optimization, I demonstrated that different plant cell types sustain cytoskeletal networks with cell-type specific and near-optimal organization; (4) By investigating actual transport of organelles through the cell, I showed that properties of the actin cytoskeleton are predictive of organelle flow and provided quantitative evidence for a coordination of transport at a cellular level; (5) I devised a robust, optimization-based method to identify individual cytoskeletal filaments from a given network representation, allowing the investigation of single filament properties in the network context. The developed methods were made publicly available as open-source software tools.
Altogether, my findings and proposed frameworks provide quantitative, system-level insights into intracellular transport in living cells. Although I focus on the plant cytoskeleton, the established combination of experimental and theoretical approaches is readily applicable to other organisms. While detailed molecular studies remain necessary, only a complementary, systemic perspective, as presented here, enables both an understanding of cytoskeletal function in its evolutionary context and its future technological control and utilization.
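To make the graph-theoretical perspective on cytoskeletal transport more tangible, the following sketch (in Python, with an invented toy graph) treats filament crossings and endpoints as nodes and filament segments as edges weighted by their length, and computes the global efficiency, i.e. the mean inverse shortest-path distance, one common proxy for how well a network supports transport. The specific graph and the choice of efficiency measure are illustrative assumptions, not the exact pipeline of the publications summarized above.

```python
import heapq
from collections import defaultdict

# Invented toy network: nodes are filament crossings/endpoints from a
# segmented image, edge weights are filament segment lengths.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 1.5)]
graph = defaultdict(list)
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

def shortest_paths(src):
    """Dijkstra: shortest weighted distances from src to all nodes."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def global_efficiency():
    """Mean inverse shortest-path distance over ordered node pairs."""
    nodes = list(graph)
    total = sum(1.0 / d for s in nodes
                for t, d in shortest_paths(s).items() if t != s)
    return total / (len(nodes) * (len(nodes) - 1))

eff = global_efficiency()
```

An efficiency close to 1 means most node pairs are connected by short paths; comparing such scores across cell types or perturbations is the kind of analysis that network-based quantification frameworks enable.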
Among the bloom-forming and potentially harmful cyanobacteria, the genus Microcystis represents one of the most diverse taxa, at the genomic as well as the morphological and secondary-metabolite levels. Microcystis communities are composed of a variety of diversified strains. This study focuses on potential interactions between Microcystis representatives and the roles of secondary metabolites in these interaction processes.
The role of secondary metabolites as signaling molecules in the investigated interactions is demonstrated using the example of the prevalent hepatotoxin microcystin. The extracellular and intracellular roles of microcystin are tested in microarray-based transcriptomic approaches. While an extracellular effect of microcystin on Microcystis transcription is confirmed and connected to a specific gene cluster of another secondary metabolite, intracellular microcystin is related to several pathways of primary metabolism. A clear correlation between a microcystin knockout and the SigE-mediated regulation of carbon metabolism is found. Based on the acquired transcriptional data, a model is proposed that postulates a regulating effect of microcystin on transcriptional regulators such as the alternative sigma factor SigE, which in turn plays an essential role in sugar catabolism and redox-state regulation.
To simulate community conditions as found in the field, Microcystis colonies are isolated from eutrophic lakes near Potsdam, Germany, and established as stably growing cultures under laboratory conditions. In co-habitation simulations, the recently isolated field strain FS2 is shown to specifically induce nearly immediate aggregation reactions in the axenic lab strain Microcystis aeruginosa PCC 7806. In transcriptional studies via microarrays, the expression program induced in PCC 7806 after aggregation induction is shown to involve the reorganization of cell envelope structures, a highly altered nutrient uptake balance, and the reorientation of the aggregating cells towards heterotrophic carbon utilization, e.g. via glycolysis. These transcriptional changes are discussed as mechanisms of niche adaptation and acclimation to prevent competition for resources.
This cumulative dissertation contains four self-contained articles related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can be scientifically justified and legitimated on theoretical and empirical grounds from an economics point of view. The first two articles (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) enter into one particular aspect of the debate on the justification and legitimisation of EU regional policy. They analyse, theoretically and empirically, whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU inhibit internal migration, which is one of the key mechanisms for achieving convergence among the nations in the single European market. It becomes clear that European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper rather than promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is ambiguous: different empirical investigations find that EU structural funds both hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily and unambiguously answered on empirical grounds. This finding is unsatisfying but in line with previous theoretical and empirical literature.
That is why I take a step back and reconsider the theoretical starting point of the thesis, which took neoclassical market failure theory for granted as the basis for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy as well as the policy recommendations for EU regional policymakers deduced from neoclassical market failure theory. The article elucidates that neoclassical market failure is a normative concept, which justifies and legitimates EU regional policy based on a political and thus subjective goal or value judgement. It can therefore neither be used to give a scientifically positive explanation of the structural funds nor to obtain objective and practically applicable policy instruments. Given this critique, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate for providing a realistic understanding of one of the largest policies conducted by the EU. However, this means neither that evolutionary economic theory can be unreservedly seen as a panacea for positively explaining EU regional policy nor that it can be used to derive objective policy instruments for EU regional policymakers. This issue is discussed in the fourth article (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”), which reconsiders the explanation of economic policy from an evolutionary economics perspective.
It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can neither theoretically nor empirically be justified and legitimated from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because these theories can neither scientifically explain the emergence and development of this policy nor are they appropriate to derive objective and scientific policy instruments for EU regional policymakers.
En route towards advanced catalyst materials for the electrocatalytic water splitting reaction
(2016)
The thesis at hand deals with the development of new types of catalysts based on pristine metals and ceramic materials and their application in the electrocatalytic water splitting reaction. To bring this technology to life, cost-efficient, stable, and efficient catalysts are urgently needed. To this end, the preparation of Mn-, N-, S-, P-, and C-containing nickel materials has been investigated, together with the theoretical and electrochemical elucidation of their activity towards the hydrogen (and oxygen) evolution reaction. The Sabatier principle has been used as the principal guideline for the successful tuning of catalytic sites. Furthermore, two pathways have been pursued to improve the electrocatalytic performance: first, the direct improvement of intrinsic properties through appropriate material selection, and second, the increase of the surface area of the catalytic material and thereby the number of active sites. By bringing materials with optimized hydrogen adsorption free energy onto high-surface-area supports, catalytic performances approaching the gold standard of noble metals became feasible. Despite the variety of applied synthesis strategies (wet chemistry in organic solvents, ionothermal reaction, gas phase reaction), one goal has been systematically pursued: to understand the driving mechanism of the growth. Moreover, a deeper understanding of the inherent properties and kinetic parameters of the catalytic materials has been gained.
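The role of the Sabatier principle as a guideline can be illustrated with a minimal numerical sketch: hydrogen evolution activity is highest when the hydrogen adsorption free energy dG_H is close to zero, and drops when hydrogen binds too strongly or too weakly. The exponential volcano-type relation and the example dG_H values below are simplifying assumptions for illustration only, not fitted to the materials studied in the thesis.

```python
import math

KT = 0.0257  # thermal energy in eV at room temperature

def relative_activity(dg_h_ev):
    """Volcano-type relation (illustrative): activity falls off
    exponentially with the magnitude of the hydrogen adsorption
    free energy dG_H, peaking at dG_H = 0."""
    return math.exp(-abs(dg_h_ev) / KT)

# A Pt-like, near-optimal binder vs. a weakly binding material
# (example dG_H values are invented for illustration).
a_good = relative_activity(-0.09)
a_poor = relative_activity(0.5)
```

Under this simplified picture, tuning a catalytic site means moving its dG_H towards zero, which is the design logic behind screening Mn-, N-, S-, P-, and C-containing nickel materials.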
The convergence of development and security since the early 1990s is regarded by parts of the expert community as a key feature of an increasing orientation of German development policy towards self-interest after the end of the East-West conflict. The starting point of the present study was skepticism towards this diagnosis of a shift in German development policy away from moral justifications and towards national interest politics since the early 1990s. This skepticism rests on the assumption that previous criticism of a possible securitization of development policy overemphasizes the role of self-interested motives as an explanatory factor while paying too little attention to ideational structures, and their possible change, as a constitutive factor of political processes. The research question is accordingly: Can German development policy, in light of the linkage of development and security, be interpreted as increasingly interest-driven, and has a fundamental policy change thus taken place?
Theoretically, the study builds on and advances constructivist research on development and security. To derive its theoretical position, it draws on constructivist thinking in International Relations theory, foregrounding those approaches that carry out the constructivist turn not only ontologically but also epistemologically and pay particular attention to the role of language. Empirically, the linkage of development and security in German state development policy is examined through interpretations of this linkage in agenda-setting and policy formulation. The period of analysis covers the years in office of the SPD politician Heidemarie Wieczorek-Zeul as Federal Minister for Economic Cooperation and Development, 1998-2009. The data corpus for agenda-setting and policy formulation comprises more than 50 speeches by members of the Federal Government as well as selected official policy documents containing relevant passages. The exemplary examination of institutionalization in light of the linkages of development and security draws on further primary and secondary sources.
The empirical analysis makes clear that different interpretations of the linkage between development and security can be traced in German state development policy over the 1998-2009 period. Particularly noteworthy is the diffuse variety of constructions of the concept of security. The empirical study also traces partly considerable differences between the linkages of development and security at the inter-ministerial level on the one hand and the development-policy level on the other. The exemplary discussion of milestones of institutionalized development policy likewise confirms these variances, which were made visible by the nuanced analysis of linguistic constructions. Based on the empirical finding of variance and variability in the patterns of justification for the linkages of development and security, conclusions can now be drawn with regard to the research question: Is German development policy, in light of the linkage of development and security, increasingly oriented towards self-interest?
In Wieczorek-Zeul's early years in office, normative aspects such as justice and peace played an important role in connection with the emergence of the topic of peace and security. Policy formulation was shaped above all by the challenges of globalization, which formed the starting point for the "global structural policy" coined by Wieczorek-Zeul. An orientation towards self-interest in the realist sense seems present only where "our" interest in securing prosperity is concerned: development-policy peacebuilding and crisis prevention serve to reduce the economic costs of wars and contribute to preventing migration that endangers prosperity. The concept of security invoked foregrounds the human security of populations in developing and transition countries. After 9/11, the linguistic constructions shift away from "our prosperity" and "peace worldwide" towards "our security". Articulated self-interest with reference to security gains dominance over moral justifications. This development can be traced above all in the inter-ministerial interpretations of the relationship between development and security. Despite this inter-ministerial shift, the linkage of development and security at the level of the Federal Ministry for Economic Cooperation and Development (BMZ), which leads German development policy, can still be interpreted as predominantly obligation-oriented.
Only with the Grand Coalition from 2005 onwards can a more comprehensive reinterpretation of the linkage between development and security be assumed: prosperity and security in the world are now equally articulated as being "in our interest" and can be regarded as on a par with the international obligation to secure peace.
In light of the theoretical interpretation, these empirical results yield a more nuanced picture than previous research, with its mostly one-sided focus on an increasing interest orientation, has assumed. Ideational references have always been present as a shaping factor of German development policy; they have, however, changed over time. The theoretical contribution of the study and its policy relevance lie on several levels. First, the differentiated examination and interpretation of German development policy in light of the linkages of development and security enriches research on the securitization of development policy and advances its theoretical premises. Second, the study contributes to research on German development policy: this research, often interested in implementation and practice, is enriched by the theoretical engagement with the interpretation of German development policy. This contribution results concretely from combining theoretical considerations from security studies, from the constructivist strand of International Relations (IR) theory, and conceptual considerations from policy research.
When realizing a programming language as a VM, implementing behavior as part of the VM, as a primitive, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are longer. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language; and if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance on the one hand and ease of implementation, reuse, and changeability on the other might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM, a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that due to the tracing just-in-time compiler, the performance gap has narrowed by one order of magnitude, to within a single order of magnitude.
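For readers unfamiliar with BitBlt: its core is a nested copy loop over a rectangular block of pixels. A hosted-language flavour of such a primitive can be sketched in plain Python (this is an illustrative stand-in, not the actual RSqueak/VM Smalltalk, RPython, or C implementation); it is exactly this kind of hot loop that a tracing JIT can compile down to efficient machine code.

```python
def bitblt_copy(src, dst, src_x, src_y, dst_x, dst_y, w, h):
    """Copy a w x h block of pixels from src to dst.
    Both images are row-major lists of rows of pixel values."""
    for row in range(h):
        src_row = src[src_y + row]
        dst_row = dst[dst_y + row]
        for col in range(w):
            dst_row[dst_x + col] = src_row[src_x + col]

# Toy example: copy a 2x2 block from position (1, 1) of a 4x4 source
# image into the top-left corner of an empty 4x4 destination.
src = [[10 * r + c for c in range(4)] for r in range(4)]
dst = [[0] * 4 for _ in range(4)]
bitblt_copy(src, dst, src_x=1, src_y=1, dst_x=0, dst_y=0, w=2, h=2)
```

A real BitBlt additionally handles clipping, pixel depths, and combination rules, but the performance-critical inner loop has this shape in all three compared implementations.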
High energy intake through fats is a major factor in the development of obesity, which has led to worldwide efforts to reduce fat intake. Fat-reduced foods, however, despite continuous improvement, do not reach the palatability of their originals. The traditional view that the attractiveness of fats is determined solely by texture, odor, appearance, and post-ingestive effects is now being complemented by the concept of gustatory perception. In rodents, lipids were shown to be detected independently of the aforementioned properties; fatty acids released by lingual lipases act as gustatory stimuli, and fatty acid sensors are expressed in taste cells. The data for humans, however, proved very limited; the aim of the present work was therefore to investigate the molecular and histological prerequisites for gustatory fat perception in humans.
First, human taste tissue was examined for the expression of fatty acid sensors using RT-PCR and immunohistochemical methods, and expressing cells were characterized and quantified in co-staining experiments. Expression was demonstrated for fatty-acid-sensitive receptors whose agonists cover the entire spectrum of short- to long-chain fatty acids (GPR43, GPR84, GPR120, CD36, KCNA5). Unambiguous detection at the protein level was achieved for GPR120, a receptor specialized for long-chain fatty acids, in type I and type III taste cells of the circumvallate papillae. About 85% of these GPR120-expressing cells contained none of the selected receptors for the taste qualities sweet (TAS1R2/3), umami (TAS1R1/3), or bitter (TAS2R38). Human taste papillae thus harbor not only at least one fatty acid sensor but possibly also a specific, fatty-acid-sensitive cell population. Further RT-PCR experiments and in situ hybridization were carried out to clarify whether lipases exist in the von Ebner glands (VEG) that can release free fatty acids from triglycerides as gustatory stimuli. While the lipase F (LIPF) found in rodents was not expressed, the closely related lipases K, M, and N were expressed in the serous cells of the VEG. In silico analyses of the secondary and tertiary structures showed high similarity to LIPF but also revealed differences in the binding pockets of the enzymes, pointing to a differentiated substrate spectrum. The presence of a specific signal peptide makes secretion of the lipases into the saliva bathing the taste pores likely, and with it the provision of fatty acids as stimuli for fatty acid sensors.
The transmission of the signal evoked by these stimuli from taste cells to gustatory nerve fibers via P2X receptor multimers was examined in brief-access preference tests in the mouse as a model organism, following prior intervention with a P2X3/P2X2/3-specific antagonist. Neither the perception of a fatty acid solution nor that of a sugar-containing control solution was impaired, whereas the perception of a bitter solution was reduced. Based on these results, an involvement of the P2X3 homomer or the P2X2/3 heteromer is unlikely, whereas that of the P2X2 homomer, and thus of the gustatory nerve fibers, cannot be excluded.
The results of this work point to the fulfillment of basic prerequisites for gustatory fat(ty acid) perception and contribute to the understanding of sensory fat perception and the regulation of fat intake. Knowledge of the regulation of these mechanisms provides a basis for elucidating the causes of, and thereby combating, obesity and associated diseases.
Background
It has been demonstrated that core strength training is an effective means to enhance trunk muscle strength (TMS) and proxies of physical fitness in youth. Of note, cross-sectional studies revealed that the inclusion of unstable elements in core strengthening exercises produced increases in trunk muscle activity and thus provided potential extra training stimuli for performance enhancement. Thus, utilizing unstable surfaces during core strength training may produce even larger performance gains. However, the effects of core strength training using unstable surfaces are unresolved in youth. This randomized controlled study specifically investigated the effects of core strength training performed on stable surfaces (CSTS) compared to unstable surfaces (CSTU) on physical fitness in school-aged children.
Methods
Twenty-seven (14 girls, 13 boys) healthy subjects (mean age: 14 ± 1 years, age range: 13–15 years) were randomly assigned to a CSTS (n = 13) or a CSTU (n = 14) group. Both training programs lasted 6 weeks (2 sessions/week) and included frontal, dorsal, and lateral core exercises. During CSTU, these exercises were conducted on unstable surfaces (e.g., TOGU© DYNAIR CUSSIONS, THERA-BAND© STABILITY TRAINER).
Results
Significant main effects of Time (pre vs. post) were observed for the TMS tests (8-22%, f = 0.47-0.76), the jumping sideways test (4-5%, f = 1.07), and the Y balance test (2-3%, f = 0.46-0.49). Trends towards significance were found for the standing long jump test (1-3%, f = 0.39) and the stand-and-reach test (0-2%, f = 0.39). We could not detect any significant main effects of Group. Significant Time x Group interactions were detected for the stand-and-reach test in favour of the CSTU group (2%, f = 0.54).
Conclusions
Core strength training resulted in significant increases in proxies of physical fitness in adolescents. However, CSTU as compared to CSTS had only limited additional effects (i.e., stand-and-reach test). Consequently, if the goal of training is to enhance physical fitness, then CSTU has limited advantages over CSTS.
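The effect sizes f reported in the Results are Cohen's f. As a reminder of how such values relate to ANOVA output, the sketch below converts a partial eta squared into f via f = sqrt(eta2 / (1 - eta2)); the example eta2 value is invented for illustration and not taken from the study.

```python
import math

def cohens_f(partial_eta_sq):
    """Cohen's f from partial eta squared: f = sqrt(eta2 / (1 - eta2))."""
    return math.sqrt(partial_eta_sq / (1.0 - partial_eta_sq))

# An (invented) partial eta squared of 0.18 corresponds to f of roughly
# 0.47, the lower bound reported for the TMS tests above. By convention,
# f >= 0.40 is considered a large effect.
f_val = cohens_f(0.18)
```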
The link between cognitive scripts for consensual sexual interactions and attitudes towards sexual coercion was studied in 524 Polish high school students. We proposed that risky sexual scripts, containing risk elements linked to sexual aggression, would be associated with attitudes condoning sexual coercion. Pornography use and religiosity were included as predictors of participants’ risky sexual scripts and attitudes towards sexual coercion. Risky sexual scripts were linked to attitudes condoning sexual coercion. Pornography use was indirectly linked to attitudes condoning sexual coercion via risky sexual scripts. Religiosity showed a positive direct link with attitudes towards sexual coercion, but a negative indirect link through risky sexual scripts. The results are discussed regarding the significance of risky sexual scripts, pornography use, and religiosity in understanding attitudes towards sexual coercion as well as their implications for preventing sexually aggressive behaviour.
Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task-demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment: when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.’s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM model provides a better fit than the PSM. We interpret the results as lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM. 
More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
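The model-selection step described above can be sketched generically. The snippet below compares candidate models by AIC, computed from each model's maximized log-likelihood and parameter count (lower AIC is better). AIC is one standard criterion among several, and all numbers here are invented placeholders, not the authors' actual fits or their fitting procedure.

```python
def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_lik

# Hypothetical maximized log-likelihoods and parameter counts for the
# three kinds of models discussed above (values invented for illustration).
candidates = {
    "no-underspecification": aic(-1200.0, 5),
    "PSM": aic(-1210.0, 6),
    "NSM": aic(-1205.0, 4),
}
best = min(candidates, key=candidates.get)
```

With these placeholder numbers the no-underspecification model wins and NSM beats PSM, mirroring the qualitative ordering reported above; with real data the ordering is of course an empirical question.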
This study addressed the role of reading motivation as a potential determinant of losses or gains in reading competence over six weeks of summer vacation (SV). Based on a sample of 223 third-grade elementary students, structural equation analyses showed that intrinsic reading motivation before SV contributed positively to both word and sentence comprehension after SV when controlling for comprehension performance before SV. These effects were mediated by reading amount. Extrinsic reading motivation did not show significant associations with end-of-summer comprehension scores. Taken together, the findings suggest that intrinsic reading motivation facilitates students’ development of reading comprehension over SV.
School-based career guidance fails to prepare adolescents for the choice of a training company. It addresses only the choice of occupation, even though the decision for in-company vocational training always presupposes a decision for a training company as well. This choice of company is central to training satisfaction and training success. Given the mismatch in the training market, the topic is highly relevant.
For what reasons do adolescents choose a training company? This thesis examines this question prospectively, in narrative individual interviews with 52 students in grades 9 and 10 of various school types, and retrospectively, in four multiply embedded multiple-case studies with 17 trainees from four companies and eight occupations, in Brandenburg and Berlin in each case. Theoretically, the thesis approaches the topic via psychological, sociological, economic, and interdisciplinary theories of occupational choice, the operational model of company choice, and the newly developed model of training choice as a decision process, which unites the two choice components, company and occupation.
Three central findings characterize the results of this thesis:
1. Adolescents do engage with the choice of a training company and consider above all emotional reasons, which vary from person to person.
2. The most important reasons for choosing a training company are the personal impression, substantive solidity, location, working atmosphere, contacts within the company, future prospects, and pay.
3. Adolescents with an intermediate school-leaving certificate pay particular attention to the prospects after the end of training.
The few other studies on the choice of a training company do not address the most frequently cited reason, the personal impression. They also reach inconsistent conclusions as to the group for which the reason of future prospects is particularly relevant. Additional studies are needed to verify these results, to examine their statistical distribution in larger population groups, and to develop a robust, holistic theory of training choice.
From the contents:
- Human rights, statehood, and the symbolic character of human rights violations
- Harmful traditional and cultural practices in the human rights discourse since CEDAW: an approach to a contested concept
- The protection of human rights defenders: recent developments
Today, as a result of Resolution 1325 (2000), the topics of women and peace are closely linked at the level of United Nations security policy. What legal and practical consequences has this development had for the work of the United Nations itself on the one hand and for the member states on the other, and what is the state of their implementation? The study traces the WPS agenda and discusses the related activities of the United Nations. Germany's implementation measures are then examined and evaluated.
Jahresbericht 2015
(2016)
Contents:
1. General overview
2. Organizational structure of the MenschenRechtsZentrum
2.1 Members of the MenschenRechtsZentrum
2.2 Scientific advisory board of the MenschenRechtsZentrum
2.3 Friends' association
3. Activities in the reporting period
3.1 Research and scholarly events
3.2 Doctorates
3.3 Courses taught
3.4 Publications: new releases 2015
3.5 Scholarly talks, lectures, expert discussions, etc.
Editorial (Dr. Roswitha Lohwaßer) ; Media education in Potsdam (Cornelia Brückner) ; The mobile classroom (Cornelia Brückner) ; Teaching and learning English on tablet computers (Susanne Gnädig, Manuela Pohl) ; Now we are doing something with media, too (Nadine Zülow) ; Classical philologists on a MedLeh course (Peggy Wittich) ; New media in Sachunterricht (Dr. phil. Ksenia Hintze) ; History Reloaded (Prof. Dr. Monika Fenn, Christopher Brandt) ; Digital media in sports didactics (Ludwig Zimmermann) ; Basic academic competencies (Michael Konarski) ; Keen to experiment (Prof. Dr. Andreas Borowski, Dr. Uta Magdans, Jirka Müller) ; Theme-centering and interaction on field trips (Dr. Antje Schneider, Bastian Schulz, Maik Wienecke) ; The informatics learning lab (Mareen Przybylla) ; Digital learning loops (Dr. rer. pol. Benjamin Apelojg) ; Media competence (Ilka Goetz) ; Learning better with digital media (Dr. Barbara Eckardt)
Over the past decades, rapid and constant advances have brought GNSS technology to the point of monitoring transient ground motions with mm to cm accuracy in real time. As a result, the potential of using real-time GNSS for natural hazard prediction and early warning has been exploited intensively in recent years, e.g., for monitoring landslides and volcanic eruptions. Of particular note, compared with traditional seismic instruments, GNSS neither saturates nor tilts when retrieving co-seismic displacements, which makes it especially valuable for earthquake and earthquake-induced tsunami early warning. In this thesis, we focus on the application of real-time GNSS to fast seismic source inversion and tsunami early warning.
Firstly, we present a new approach to obtaining precise co-seismic displacements using cost-effective single-frequency receivers. With regard to high-precision positioning, the main obstacle for a single-frequency GPS receiver is the ionospheric delay. Considering that over a few minutes the change of the ionospheric delay is almost linear, we constructed a linear model for each satellite to predict the ionospheric delay. The effectiveness of this method has been validated by an outdoor experiment and by data from the 2011 Tohoku event, which confirms the feasibility of using dense GPS networks for geohazard early warning at an affordable cost.
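The per-satellite linear prediction described above can be sketched as follows (an illustrative reconstruction, not the thesis code; the function name, the fitting window, and the numbers in the example are assumptions):

```python
import numpy as np

def predict_ionospheric_delay(epochs, delays, t_future):
    """Fit a straight line to recent ionospheric-delay estimates for one
    satellite and extrapolate it to a future epoch.

    epochs   -- times (s) of the past delay estimates (a few minutes)
    delays   -- ionospheric-delay estimates (m) at those epochs
    t_future -- epoch (s) at which the delay is to be predicted
    """
    # Over a few minutes the delay change is approximately linear,
    # so a first-order polynomial fit is sufficient.
    slope, intercept = np.polyfit(epochs, delays, 1)
    return slope * t_future + intercept

# Invented example: a delay drifting at 1 mm/s over a 3-minute window
t = np.arange(0.0, 180.0, 30.0)
d = 5.0 + 0.001 * t
predicted = predict_ionospheric_delay(t, d, 240.0)
```

In practice the fit would be refreshed continuously with a sliding window, so that the prediction is only ever extrapolated a short time beyond the last estimate.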
Secondly, we extended temporal point positioning from GPS-only to GPS/GLONASS and assessed the potential benefits of multi-GNSS for co-seismic displacement determination. Outdoor experiments reveal that when observations are conducted in an adverse environment, adding a couple of GLONASS satellites can provide more reliable results. The case study of the 2015 Illapel Mw 8.3 earthquake shows that the biases between co-seismic displacements derived from GPS-only and GPS/GLONASS vary from station to station and can reach 2 cm in the horizontal direction and almost 3 cm in the vertical direction. Furthermore, slips inverted from GPS/GLONASS co-seismic displacements using a layered crust structure on a curved plane are shallower and larger for the Illapel event.
Thirdly, we tested different inversion tools and discussed the uncertainties of using real-time GNSS for tsunami early warning. Specifically, centroid moment tensor inversion, uniform slip inversion using a single Okada fault, and distributed slip inversion in a layered crust on a curved plane were conducted using co-seismic displacements recorded during the 2014 Pisagua earthquake. While the inversion results give a similar magnitude and rupture center, there are significant differences in the depth, strike, dip, and rake angles, which lead to different tsunami propagation scenarios. Even so, the resulting tsunami forecasts along the Chilean coast are close to each other for all three models.
Finally, based on the fact that the positioning performance of BDS is now equivalent to that of GPS in the Asia-Pacific area and that the Manila subduction zone has been identified as a zone of potential tsunami hazard, we propose a conceptual BDS/GPS network for tsunami early warning in the South China Sea. Numerical simulations with two earthquakes (Mw 8.0 and Mw 7.5) and the induced tsunamis demonstrate the viability of this network. In addition, the advantage of BDS/GPS over a single GNSS system for source inversion grows with decreasing earthquake magnitude.
Bitter taste warns the organism of potentially spoiled or toxic food and is thus an important control mechanism. In the mouse, the initial detection of the numerous bitter compounds is carried out by 35 bitter taste receptors (Tas2rs) located in the tongue tissue. The taste information is then relayed from the tongue via the peripheral nervous system (PNS) to the central nervous system (CNS), where it is processed. This processing of taste information has not yet been fully elucidated. Recent studies point to an expression of Tas2rs also in the PNS and CNS along the taste pathway. Little is known so far about the occurrence and functions of these receptors and receptor cells in the nervous system.
In this work, Tas2r expression was examined in several mouse models, Tas2r-expressing cells were identified, and their functions in the transmission of taste information were analyzed. Expression analyses by qRT-PCR demonstrated the expression of 25 of the 35 known bitter taste receptors in the central nervous system of the mouse. The expression patterns in the PNS and CNS moreover allow inferences about functions in different parts of the nervous system. Based on the results of the expression analyses, it was possible to visualize strongly expressed Tas2rs in different cell types by in situ hybridization. Furthermore, immunohistochemical stainings using a genetically modified mouse model confirmed the results of the expression analyses. Taking the Tas2r131 receptor as an example, they showed expression of Tas2rs in cholinergic, dopaminergic, GABAergic, noradrenergic, and glycinergically innervated projection neurons as well as in interneurons. The results of this work therefore show for the first time the occurrence of Tas2rs in different neuronal cell types across large parts of the CNS, suggesting that Tas2r-expressing cells potentially serve multiple functions. Behavioral experiments in genetically modified mice were used to examine the possible function of Tas2r131-expressing neurons (Tas2r131 neurons) in taste perception. The results point to an involvement of Tas2r131 neurons in the signal transmission and processing of taste information for a subset of bitter substances. The analyses further show that Tas2r131 neurons are not involved in the perception of other bitter compounds or of taste stimuli of other qualities (sweet, umami, sour, salty).
A specific "Tas2r131 bitter taste pathway", with signaling routes and processing areas that are partly independent of and partly overlapping with other potential "bitter pathways", provides a possible cellular basis for discriminating between bitter compounds. The hypothesis of a potential discrimination of bitter compounds developed in this work should therefore be tested in follow-up studies by establishing a behavioral test with mice.
Kain im Fegefeuer
(2016)
Z,E-dienes are a frequently occurring structural motif in natural products. For this reason, the straightforward preparation of this structural unit is of great interest in organic chemistry.
The first aim of the present work was therefore the further development of the ring-closing metathesis/base-induced ring-opening/esterification sequence (RBRV sequence) for the synthesis of ethyl (2Z,4E)-dienecarboxylates starting from butenoates. To this end, the RBRV sequence was first optimized. This three-step sequence could be carried out in a one-pot procedure. The ring-closing metathesis succeeded with a catalyst loading of 1 mol% of the second-generation GRUBBS catalyst in dichloromethane. NaHMDS was used for the base-induced ring opening of the β,γ-unsaturated δ-valerolactone. The alkylation of the carboxylate species was achieved with the MEERWEIN reagent. The applicability of the sequence was demonstrated for various substrates.
The extension of the method to α-substituted butenoates was subject to severe limitations. Access could be realized for α-hydroxy derivatives. When the RBRV sequence was applied to the α-substituted butenoates, it was found that they could be converted only in moderate yields and, moreover, did not react selectively to the (2E,4E)-configured α-substituted diene esters.
The use of enynes under the standard conditions of the RBRV sequence was not successful. Only after modification of the sequence (higher catalyst loading, change of solvent) could the [3]dendralene products be obtained in low yields.
In the second part of the work, the use of ethyl (2Z,4E)-dienecarboxylates in the total synthesis of natural products was investigated. For this purpose, the transformation possibilities of the esters were first examined. It could be shown that ethyl (2Z,4E)-dienecarboxylates are particularly suitable for the synthesis of (2Z,4E)-aldehydes and for constructing the (3Z,5E)-dien-1-yne structure.
Based on these results, the RBRV sequence was then employed in total synthesis. First, the (2Z,4E)-diene ester microsphaerodiolin was prepared via three different routes in its first total synthesis. Subsequently, six different polyacetylenes with a (3Z,5E)-dien-1-yne unit were prepared. The key steps in their synthesis were always the RBRV sequence for constructing the Z,E-diene unit, the transformation of the ester into a terminal alkyne, and the CADIOT-CHODKIEWICZ coupling for assembling unsymmetrical polyynes. All six polyacetylenes were synthesized for the first time in a total synthesis. Three polyacetylenes were prepared in enantiomerically pure form starting from (S)-butanetriol. On the basis of their optical rotations, the assignment of the absolute configuration of the natural products made by YAO and co-workers could be revised.
Grenzräume – Grenzbewegungen
(2016)
This collected volume brings together the contributions of the 12th and 13th meetings of the Junges Forum Slavistische Literaturwissenschaft (JFSL) in Basel 2013 and in Frankfurt (Oder) and Słubice 2014. Under the thematic headings Grenzräume – Grenzbewegungen (border spaces – border movements), it presents insights into the work of early-career researchers in German-language Slavic literary and cultural studies.
In this thesis, sentence processing was investigated using a psychophysiological measure known as pupillometry as well as event-related potentials (ERP). The scope of the thesis was broad, investigating the processing of several different movement constructions with native speakers of English and second-language learners of English, as well as word order and case marking in German-speaking adults and children. Pupillometry and ERP allowed us to test competing linguistic theories and to use novel methodologies to investigate the processing of word order. In doing so, we also aimed to establish pupillometry as an effective way to investigate the processing of word order, thus broadening the methodological spectrum.
The aim of the present work is to investigate reward-dependent (instrumental) learning and decision-making processes at the behavioral and neural level in healthy participants as a function of chronic stress experience (assessed with the Lifetime Stress Inventory, Holmes and Rahe 1962) and cognitive variables (divided into a fluid and a crystallized intelligence component). The first step is to secure the construct validity between the pairs of terms model-free ~ habitual and model-based ~ goal-directed, which have so far often been used synonymously. Building on this, the differential and interactional influence of chronic stress experience and cognitive variables on decision processes (instrumental learning) and their neural correlate in the ventral striatum (VS) is examined and presented. Finally, the relevance of the investigated reward-dependent learning processes for the development and maintenance of alcohol dependence is discussed together with further influencing variables in a review paper.
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
It has long been agreed by formal and functional researchers (primarily on the basis of English data) that contrastive topic marking (namely, marking a constituent as a contrastive topic via the B-accent/rising intonation contour) requires the co-occurrence of focus marking via the A-accent/falling intonation contour (see Sturgeon 2006, and references therein). However, this consensus has recently been disputed by new findings indicating the occurrence of utterances with only a B-accent, dubbed lone contrastive topic (Büring 2003, Constant 2014). In this paper, I argue, based on data from Vietnamese, that the presence of lone contrastive topic is only apparent, and that the focus that co-occurs with the seemingly lone contrastive topic is a verum focus.
As a consequence of demographic developments and the financial constraints of many federal states, most of the territorial states of the Federal Republic of Germany have for some years again been undergoing far-reaching administrative reforms at the municipal level. Administrative-structure, functional, and district-territorial reforms attempt to increase administrative effectiveness and to adapt district-level task and territorial structures to the changed framework conditions. In the view of many reformers, the intended gain in effectiveness, which is expected to yield cost savings and synergy effects in particular, can be achieved above all by a marked enlargement of the administrative areas.
Besides the goal of increasing the capability of local administrative structures, which is usually at the center of the reform projects, it is equally necessary to preserve the legitimacy of local action through democracy and civic participation. The legislator therefore faces the task of taking both objectives into account in a reform process and of sizing the administrative units in such a way that a balanced relationship between efficiency and closeness to citizens emerges within them.
Starting from this problem, the theses and assumptions associated with the ruling of the Constitutional Court of Mecklenburg-Western Pomerania (LVerfG M-V) of 26 July 2007 in the context of efficiency and participation are taken up and transferred to the district-territorial reform in the Free State of Saxony. Specifically, the effects of the territorial redrawing of the Saxon districts on the exercise of voluntary office in local politics are for the first time also examined on a broad empirical basis.
The present work examines the politics of central bank independence (CBI) using the example of Turkey. The focus is on theoretical and empirical questions and problems that arise in connection with CBI and that are discussed with reference to Turkish monetary policy. A central aim of the work is to examine whether and to what extent the Turkish central bank can actually be classified as independent and depoliticized after attaining de jure institutional independence. To answer this research question, the institutional conditions, the goals, and the rules governing Turkish monetary policy are clarified. It is then examined empirically whether the monetary policy practice of the CBRT follows the officially prescribed set of rules. The main thesis of this work is that the formal independence of the CBRT and rule-oriented monetary policy must not be equated with a depoliticization of monetary policy in Turkey. As an alternative, the present study proposes examining the institutional status of the CBRT as one of relative autonomy. Even a de jure independent central bank cannot insulate itself from political interventions, as the case of Turkey will show.
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
Theses
(2016)
Thesen
(2016)
Тезисы
(2016)
Тезисы
(2016)
Тезисы
(2016)
The current study investigates to what extent masked morphological priming is modulated by language-particular properties, specifically by the writing system. We present results from two masked priming experiments investigating the processing of complex Japanese words written in less common (moraic) scripts. In Experiment 1, participants performed lexical decisions on target verbs; these were preceded by primes which were either (i) a past-tense form of the same verb, (ii) a stem-related form with the epenthetic vowel -i, (iii) a semantically related form, or (iv) a phonologically related form. Significant priming effects were obtained for prime types (i), (ii), and (iii), but not for (iv). This pattern of results differs from previous findings on languages with alphabetic scripts, which found reliable masked priming effects for morphologically related prime/target pairs of type (i), but not for non-affixal and semantically related primes of types (ii) and (iii). In Experiment 2, we measured priming effects for prime/target pairs which are neither morphologically, semantically, phonologically nor, as presented in their moraic scripts, orthographically related, but which, in their commonly written form, share the same kanji (logograms adopted from Chinese). The results showed a significant priming effect, with faster lexical-decision times for kanji-related prime/target pairs relative to unrelated ones. We conclude that affix stripping is insufficient to account for masked morphological priming effects across languages, and that language-particular properties (in the case of Japanese, the writing system) affect the processing of (morphologically) complex words.
Species can adjust their traits in response to selection which may strongly influence species coexistence. Nevertheless, current theory mainly assumes distinct and time-invariant trait values. We examined the combined effects of the range and the speed of trait adaptation on species coexistence using an innovative multispecies predator–prey model. It allows for temporal trait changes of all predator and prey species and thus simultaneous coadaptation within and among trophic levels. We show that very small or slow trait adaptation did not facilitate coexistence because the stabilizing niche differences were not sufficient to offset the fitness differences. In contrast, sufficiently large and fast trait adaptation jointly promoted stable or neutrally stable species coexistence. Continuous trait adjustments in response to selection enabled a temporally variable convergence and divergence of species traits; that is, species became temporally more similar (neutral theory) or dissimilar (niche theory) depending on the selection pressure, resulting over time in a balance between niche differences stabilizing coexistence and fitness differences promoting competitive exclusion. Furthermore, coadaptation allowed prey and predator species to cluster into different functional groups. This equalized the fitness of similar species while maintaining sufficient niche differences among functionally different species delaying or preventing competitive exclusion. In contrast to previous studies, the emergent feedback between biomass and trait dynamics enabled supersaturated coexistence for a broad range of potential trait adaptation and parameters. We conclude that accounting for trait adaptation may explain stable and supersaturated species coexistence for a broad range of environmental conditions in natural systems when the absence of such adaptive changes would preclude it. 
Small trait changes, coincident with those that may occur within many natural populations, greatly enlarged the number of coexisting species.
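As a toy illustration of the kind of trait dynamics described above (not the study's multispecies model), a single prey trait can be made to follow its local fitness gradient while biomass evolves; all parameter values below are invented:

```python
def simulate(T=2000.0, dt=0.01, h=0.05):
    """Euler integration of a minimal predator-prey model in which the
    prey trait x trades off growth rate against vulnerability, and x
    follows the local per-capita fitness gradient at speed h."""
    N, P, x = 1.0, 0.5, 0.5   # prey biomass, predator biomass, prey trait
    eps = 1e-4                # step for the numerical fitness gradient
    for _ in range(int(T / dt)):
        def fitness(xx):
            r = 1.0 + 0.5 * xx      # trait raises intrinsic growth ...
            a = 0.5 + 1.0 * xx      # ... but also the predator attack rate
            return r * (1.0 - N) - a * P
        dN = N * fitness(x)
        dP = P * (0.5 * (0.5 + 1.0 * x) * N - 0.3)  # conversion 0.5, mortality 0.3
        dx = h * (fitness(x + eps) - fitness(x - eps)) / (2.0 * eps)
        N = max(N + dt * dN, 1e-9)
        P = max(P + dt * dP, 1e-9)
        x = min(max(x + dt * dx, 0.0), 1.0)  # trait bounded in [0, 1]
    return N, P, x

# short run for illustration
N, P, x = simulate(T=200.0)
```

The parameter h plays the role of the speed of trait adaptation discussed in the abstract: for h near zero the trait is effectively fixed, while larger h lets the trait track the changing selection pressure set by the current biomasses.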
Recombination of free charge is a key process limiting the performance of solar cells. For low mobility materials, such as organic semiconductors, the kinetics of non-geminate recombination (NGR) is strongly linked to the motion of charges. As these materials possess significant disorder, thermalization of photogenerated carriers in the inhomogeneously broadened density of state distribution is an unavoidable process. Despite its general importance, knowledge about the kinetics of NGR in complete organic solar cells is rather limited. We employ time delayed collection field (TDCF) experiments to study the recombination of photogenerated charge in the high-performance polymer:fullerene blend PCDTBT:PCBM. NGR in the bulk of this amorphous blend is shown to be highly dispersive, with a continuous reduction of the recombination coefficient throughout the entire time scale, until all charge carriers have either been extracted or recombined. Rapid, contact-mediated recombination is identified as an additional loss channel, which, if not properly taken into account, would erroneously suggest a pronounced field dependence of charge generation. These findings are in stark contrast to the results of TDCF experiments on photovoltaic devices made from ordered blends, such as P3HT:PCBM, where non-dispersive recombination was proven to dominate the charge carrier dynamics under application relevant conditions.
Two experiments examined how individuals respond to a restriction presented within an approach versus an avoidance frame. In Study 1, working on a problem-solving task, participants were initially free to choose their strategy, but for a second task were told to change their strategy. The message to change was embedded in either an approach or avoidance frame. When confronted with an avoidance compared to an approach frame, the participants’ reactance toward the request was greater and, in turn, led to impaired performance. The role of reactance as a response to threat to freedom was explicitly examined in Study 2, in which participants evaluated a potential change in policy affecting their program of study herein explicitly varying whether a restriction was present or absent and whether the message was embedded in an approach versus avoidance frame. When communicated with an avoidance frame and as a restriction, participants showed the highest resistance in terms of reactance, message agreement and evaluation of the communicator. The difference in agreement with the change was mediated by reactance only when a restriction was present. Overall, avoidance goal frames were associated with more resistance to change on different levels of experience (reactance, performance, and person perception). Reactance mediated the effect of goal frame on other outcomes only when a restriction was present.
This study addresses the relationship between myth and modernity in the literary work (poems, short stories, novels, and chronicles) of the Chilean author Rosamel del Valle (Curacaví, 1901 – Santiago de Chile, 1965). Across his texts there is a tension between a poetic project based on a mythical vision of the world and a historical context that privileges more rationalist positions, marginalizing the poetic and the mythical. Already in the nineteenth century, modernity and the associated phenomena of modernization displaced poetry as a discourse, and the poet as a person, into a deficient position within society, a displacement that continued into the twentieth century. Because of this conflict, Rosamel del Valle questions in his work the scope of his own postulates, aesthetic as well as vital. This entails a vacillation between the reaffirmation of his poetic-vital project and the awareness of failure. For this reason the Chilean poet's work contains a Lebenswissen (knowledge for living) that conceives of poetry as a privileged form of life. However, given the difficulties of the historical conditions, this Lebenswissen can also be understood as an ÜberLebenswissen (knowledge for survival) (Ette).
The first part of the text studies del Valle's mythical conception of poetry and analyzes the different levels at which myth appears in his literary work: as thought, as language, and as traditional narrative. The identification of poetry with myth reveals the main characteristics of del Valle's poetics: an ontological conception that distinguishes between a visible and an invisible dimension of reality; the mystical tendency of poetry; a cyclical conception of time that grounds a relationship with the discourses of memory and death, as well as the idea of a utopian past that could be revived in the present through poetry; and, in addition, the figure of the woman as a symbol of love and of poetry.
The second part investigates the relationship and the consequences of this "mythical poetry" in the context of modernity. It addresses in particular the effect on Rosamel del Valle's poetics of the Entzauberung der Welt (Weber's disenchantment of the world) as a specific experience of the epoch. To this end, his impressions of New York stand out above all. This city, in which he lived and worked between 1946 and 1963, becomes in his texts a place where the "poetic dwelling of man" in modernity would be possible.
Metal-containing ionic liquids (ILs) are of interest for a variety of technical applications, e.g., particle synthesis and materials with magnetic or thermochromic properties. In this paper we report the synthesis of, and two structures for, some new tetrabromidocuprates(II) with several "onium" cations, in comparison to the results of electron paramagnetic resonance (EPR) spectroscopic analyses. The sterically demanding cations were used to separate the paramagnetic Cu(II) ions for EPR measurements. The EPR hyperfine structure in the spectra of these new compounds is not resolved, due to the line broadening resulting from magnetic exchange between the still incompletely separated paramagnetic Cu(II) centres. For the majority of compounds, the principal values (g∥ and g⊥) of the g tensors could be determined, and information on the structural changes in the [CuBr4]2- anions can be obtained. The complexes have high potential, e.g., as precursors for the synthesis of copper bromide particles and as catalytically active or paramagnetic ionic liquids.
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters to material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley-equation commonly used to describe current-voltage (JV)-curves, as it assumes a high electrical conductivity of the charge transporting material. Here, an analytical expression for the JV-curves of organic solar cells is derived based on a previously published analytical model. This expression, bearing a similar functional dependence as the Shockley-equation, delivers a new figure of merit α to express the balance between free charge recombination and extraction in low mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
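For reference, the classical Shockley expression mentioned above can be written as follows (standard textbook notation with a photogenerated current term; the paper's modified expression and the exact definition of α are not reproduced here):

```latex
% Classical Shockley JV-characteristic extended by a photogenerated
% current J_gen; J_0: dark saturation current, n: ideality factor,
% q: elementary charge, k_B T: thermal energy.
J(V) = J_0\left[\exp\!\left(\frac{qV}{n k_B T}\right) - 1\right] - J_\mathrm{gen}
```

Because this form implicitly assumes that transport in the absorber is not limiting, its parameters lose their usual meaning for low-mobility organic layers, which is the motivation for the modified expression described in the abstract.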
Dynamics of mantle plumes
(2016)
Mantle plumes are a link between different scales in the Earth’s mantle: They are an important part of large-scale mantle convection, transporting material and heat from the core-mantle boundary to the surface, but also affect processes on a smaller scale, such as melt generation and transport and surface magmatism. When they reach the base of the lithosphere, they cause massive magmatism associated with the generation of large igneous provinces, and they can be related to mass extinction events (Wignall, 2001) and continental breakup (White and McKenzie, 1989).
Thus, mantle plumes have been the subject of many previous numerical modelling studies (e.g. Farnetani and Richards, 1995; d’Acremont et al., 2003; Lin and van Keken, 2005; Sobolev et al., 2011; Ballmer et al., 2013). However, complex mechanisms, such as the development and implications of chemical heterogeneities in plumes, their interaction with mid-ocean ridges and global mantle flow, and melt ascent from the source region to the surface are still not very well understood; and disagreements between observations and the predictions of classical plume models have led to a challenge of the plume concept in general (Czamanske et al., 1998; Anderson, 2000; Foulger, 2011). Hence, there is a need for more sophisticated models that can explain the underlying physics, assess which properties and processes are important, explain how they cause the observations visible at the Earth’s surface and provide a link between the different scales.
In this work, integrated plume models are developed that investigate the effect of dense recycled oceanic crust on the development of mantle plumes, plume–ridge interaction under the influence of global mantle flow, and melting and melt migration in the form of two-phase flow.
The presented analysis of these models leads to a new, updated picture of mantle plumes: Models considering a realistic depth-dependent density of recycled oceanic crust and peridotitic mantle material show that plumes with excess temperatures of up to 300 K can transport up to 15% of recycled oceanic crust through the whole mantle. However, due to the high density of recycled crust, plumes can only advance to the base of the lithosphere directly if they have high excess temperatures, high plume volumes and the lowermost mantle is subadiabatic, or plumes rise from the top or edges of thermo-chemical piles. They might only cause minor surface uplift, and instead of the classical head–tail structure, these low-buoyancy plumes are predicted to be broad features in the lower mantle with much less pronounced plume heads. They can form a variety of shapes and regimes, including primary plumes directly advancing to the base of the lithosphere, stagnating plumes, secondary plumes rising from the core–mantle boundary or a pool of eclogitic material in the upper mantle and failing plumes. In the upper mantle, plumes are tilted and deflected by global mantle flow, and the shape, size and stability of the melting region is influenced by the distance from nearby plate boundaries, the speed of the overlying plate and the movement of the plume tail arriving from the lower mantle. Furthermore, the structure of the lithosphere controls where hot material is accumulated and melt is generated. In addition to melting in the plume tail at the plume arrival position, hot plume material flows upwards towards opening rifts, towards mid-ocean ridges and towards other regions of thinner lithosphere, where it produces additional melt due to decompression. This leads to the generation of either broad ridges of thickened magmatic crust or the separation into multiple thinner lines of sea mount chains at the surface. 
Once melt is generated within the plume, it influences the plume's dynamics by lowering its viscosity and density, and as the melt rises its volume increases by up to 20% due to decompression. Melt tends to accumulate at the top of the plume head, forming diapirs and initiating small-scale convection when the plume reaches the base of the lithosphere. Together with the unstable, high-density material produced by freezing of melt, this provides an efficient mechanism for thinning the lithosphere above plume heads.
In summary, this thesis shows that mantle plumes are more complex than previously considered, and linking the scales and coupling the physics of different processes occurring in mantle plumes can provide insights into how mantle plumes are influenced by chemical heterogeneities, interact with the lithosphere and global mantle flow, and are affected by melting and melt migration. Including these complexities in geodynamic models shows that plumes can also have broad plume tails, might produce only negligible surface uplift, can generate one or several volcanic island chains in interaction with a mid–ocean ridge, and can magmatically thin the lithosphere.
Mujeres de apocalipsis
(2016)
Mujeres del Apocalipsis proposes new gender-based readings of the pious women who inhabited New Spain in the eighteenth century. The study is based on a corpus drawn from the records of their Inquisition trials, many of them still unpublished. These women gained freedom and attained partial autonomy in the colonial world. Reading these records reveals the tactical strategies through which the beatas negotiated a new way of being a woman in that era.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic “Operating the Cloud”. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. HPI's Future SOC Lab is therefore an ideal environment to host this event, which is also supported by BITKOM.
On the occasion of this workshop we called for submissions of research papers and practitioners' reports. “Operating the Cloud” aims to be a platform for productive discussions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
These proceedings publish the results of the third HPI cloud symposium “Operating the Cloud” 2015. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2016.
Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
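One ingredient named above, the skew Brownian motion, admits a simple exact draw in the special case of a single barrier, no drift, and a start at the barrier: by the Itô-McKean construction, the marginal at time t is then a half-Gaussian magnitude with a random sign that is positive with probability equal to the skewness parameter. The sketch below illustrates only this special case, not the paper's retrospective rejection-sampling algorithm for two semipermeable barriers with constant drift; names and parameters are illustrative.

```python
import math
import random

def skew_bm_at_barrier(t, alpha, rng):
    """Exact draw of X_t for a driftless skew Brownian motion started at
    its single barrier 0: half-Gaussian magnitude, sign + with prob. alpha.
    Illustrative special case only."""
    magnitude = abs(rng.gauss(0.0, math.sqrt(t)))
    return magnitude if rng.random() < alpha else -magnitude
```

A Monte Carlo check over many draws recovers P(X_t > 0) ≈ alpha and E[|X_t|] = sqrt(2t/π), which is what makes such explicit marginals usable inside a rejection-sampling scheme.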
This thesis examines leadership behavior in the public sector as well as the factors influencing this behavior. To this end, a taxonomy consisting of six meta-categories of leadership behavior was developed. The meta-categories comprise task, relations, change, external, ethics, and casework orientation. An analysis of survey data collected for this thesis from employees and lower-level managers of three public agencies shows that this taxonomy is well suited to capturing the reality of leadership in public administration.
A descriptive analysis of the data further shows a relatively large gap between the managers' self-assessments and the assessments by their employees. This gap is particularly pronounced for relations and change orientation.
The descriptive analysis is followed by an analysis of factors influencing leadership behavior. These factors fall into four categories: characteristics and traits of the managers, expectations and interest of superiors, characteristics and attitudes of followers, and management instruments and framework conditions.
An analysis using hierarchical linear models shows that leadership behavior is influenced above all by the managers' motivation to lead and management orientation, the followers' public-interest orientation and type of task, as well as strategic selection of managers and performance measurement by managers against concrete targets.
The results complement the literature on leadership behavior in the public sector with the perspective of influencing factors and, through the taxonomy employed, additionally contribute to the theoretical discussion of leadership behavior in public management research. Beyond that, the findings offer administrative practice guidance on relevant influencing factors and point to considerable differences between self- and other-perceptions of leadership behavior.
Universities provide welfare-enhancing services to society, in particular by educating students, generating new knowledge through research, and transferring knowledge and technology into the economy and society. These services are made possible by largely public funding, which is readily called into question, and not only in times of economic crisis and austerity. Policymakers as well as universities are therefore well advised to legitimize these allocations anew again and again. The present study examines the socio-economic effects of the University of Potsdam and thus closes existing information gaps. The authors show that indirect and unexpected effects can also play a major role in a university's impact on the economy and prosperity.
Ingo Schwarz: Opfer für die Wissenschaften „in dem Drange wichtiger öffentlicher Begebenheiten“ Briefe von Alexander von Humboldt an Friedrich Wilhelm III., 1806
Reinhard Andress: Eine Bitte an Thomas Jefferson um Tabaksamen und Tabak: ein unveröffentlichter Brief Alexander von Humboldts
Ottmar Ette: Naturaleza y cultura: perspectivas científico-vitales de la ciencia de Humboldt
Miguel Ángel Puig-Samper, Elisa Garrido: The presentation of the results of Alexander von Humboldt's voyage to Carlos IV
Thomas Schmuck: Humboldt in Goethes Bibliothek
Bärbel Rott: Alexander von Humboldt brachte Guano nach Europa - mit ungeahnten globalen Folgen
Background: The efficiency of multiplex editing in plants by the RNA-guided Cas9 system is limited by the efficient introduction of its components into the genome and by their activity. The possibility of introducing large-fragment deletions with the RNA-guided Cas9 tool provides the potential to study the function of any DNA region of interest in its ‘endogenous’ environment.
Results: Here, an RNA-guided Cas9 system was optimized to enable efficient multiplex editing in Arabidopsis thaliana. We demonstrate the flexibility of our system for knocking out multiple genes and for generating heritable large-fragment deletions in the genome. As a proof of concept, the function of part of the second intron of the flower development gene AGAMOUS in Arabidopsis was studied by generating a Cas9-free mutant plant line in which part of this intron was removed from the genome. Further analysis revealed that deletion of this intron fragment results in a 40% decrease in AGAMOUS gene expression without changing the splicing of the gene, which indicates that this regulatory region functions as an activator of AGAMOUS expression.
Conclusions: Our modified RNA-guided Cas9 system offers a versatile tool for the functional dissection of coding and non-coding DNA sequences in plants.
Walking while concurrently performing cognitive and/or motor interference tasks is the norm rather than the exception during everyday life and there is evidence from behavioral studies that it negatively affects human locomotion. However, there is hardly any information available regarding the underlying neural correlates of single- and dual-task walking. We had 12 young adults (23.8 ± 2.8 years) walk while concurrently performing a cognitive interference (CI) or a motor interference (MI) task. Simultaneously, neural activation in frontal, central, and parietal brain areas was registered using a mobile EEG system. Results showed that the MI task but not the CI task affected walking performance in terms of significantly decreased gait velocity and stride length and significantly increased stride time and tempo-spatial variability. Average activity in alpha and beta frequencies was significantly modulated during both CI and MI walking conditions in frontal and central brain regions, indicating an increased cognitive load during dual-task walking. Our results suggest that impaired motor performance during dual-task walking is mirrored in neural activation patterns of the brain. This finding is in line with established cognitive theories arguing that dual-task situations overstrain cognitive capabilities resulting in motor performance decrements.
Naturaleza y cultura
(2016)
This essay revolves around the inextricable link between nature and culture and the 'non-naturalness' of the former, a product of millennia of human intervention subsumed under the term 'Anthropocene'. The French philosophers Bruno Latour and Philippe Descola, albeit by different paths, have highlighted the importance of this nexus for securing human survival: Bruno Latour centers his reflections on the politics of nature, while Philippe Descola stresses the ecological character of nature and culture. Both, however, leave aside the literatures of the world and their capacity to preserve the diverse designs for knowing how to live together with nature, along with notions of sustainability. Also noteworthy is the inspiration Descola finds in the figure of the great polymath Alexander von Humboldt, who already in the nineteenth century attested to the inextricable relation between nature and culture in countless testimonies, among them the Chimborazo, which, as a global picture, is key to understanding that nature has always been culture and that culture is unimaginable without nature.
Regionale Unterschiede der Inanspruchnahme von Präventionsleistungen in der ambulanten Versorgung
(2016)
The aim of this study was to analyze regional differences in the uptake of secondary preventive services in Germany at the district level. It was intended to close a gap in German research by including not only individual factors but also ecological factors through a multilevel approach. At the ecological level, the effects of regional social deprivation, urbanization, and the density of office-based physicians were analyzed. Variables at the individual level were sex and health status.
Three different databases were linked for the study. Data from INKAR for all 402 districts were used to compute regional social deprivation and urbanization. The Federal Register of Physicians provided the data basis for determining physician density. The claims data of all Associations of Statutory Health Insurance Physicians (Kassenärztliche Vereinigungen) under § 295 SGB V supplied the figures for the uptake of the specific preventive services as well as for sex and health status. This made it possible to conduct a full census of all statutorily insured persons between 50 and 55 years of age who consulted a physician in 2013 (N = 6.6 million). The independent variables regional social deprivation and urbanization as well as the control variable health status were constructed by means of factor analysis. To analyze the regional differences, a hierarchical multivariate regression was performed.
About 80% of all secondary preventive services were used by women. Poorer health status was associated with a higher rate of uptake. The results point to regional differences that vary by sex, with the independent variables showing only small effects. Contrary to the hypothesis, higher regional social deprivation was associated with higher uptake among both men and women. Urbanity was positively associated with uptake among men and negatively among women. The interaction of the two variables had no effect for men but a negative effect for women. Physician density was excluded from the final statistical model because the variable exhibited multicollinearity.
Existing theories are unable to explain these results, as they contradict previous research findings. Additional calculations suggest the conclusion that the prevailing East-West differences confounded the results. Taking the patients' age into account, it can be surmised that socialization into the uptake of secondary preventive services in the GDR influences health behavior to this day. However, further research is needed to better understand the reasons for the regional differences in the uptake of secondary preventive services.
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms' business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today's "move fast and break things" economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or to the creation of entirely new BMs.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications drawn are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
Dependency Resolution Difficulty Increases with Distance in Persian Separable Complex Predicates
(2016)
Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb either renders it no more predictable or more predictable, leading respectively to a prediction of no effect of distance or a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element—a noun in the current study—and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects. 
As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
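The entropy measure mentioned above can be computed directly from the counts of verb completions produced in the offline task. A minimal sketch follows; the function name and the example completions are invented for illustration and are not the study's materials.

```python
import math
from collections import Counter

def completion_entropy(completions):
    """Shannon entropy (in bits) of the verb-completion distribution from
    a sentence completion task. Higher entropy means the upcoming verb is
    less predictable at that point in the sentence."""
    counts = Counter(completions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical data: a strongly predictable CP context (one dominant
# light-verb completion) vs. a weakly predictable simple noun-verb
# context (completions spread over several verbs).
strong = ["kard"] * 9 + ["dasht"]
weak = ["did", "khord", "raft", "avard"]
```

For instance, four equiprobable completions give exactly 2 bits, while a single dominant completion pushes the entropy toward 0, so `completion_entropy(strong)` is well below `completion_entropy(weak)` under these invented counts.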
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex, which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., bearing the property that the composition of any two neighbouring operators has order less than two.
Different systems for habitual versus goal-directed control are thought to underlie human decision-making. Working memory is known to shape these decision-making systems and their interplay, and is known to support goal-directed decision making even under stress. Here, we investigated if and how decision systems are differentially influenced by breaks filled with diverse everyday life activities known to modulate working memory performance. We used a within-subject design where young adults listened to music and played a video game during breaks interleaved with trials of a sequential two-step Markov decision task, designed to assess habitual as well as goal-directed decision making. Based on a neurocomputational model of task performance, we observed that for individuals with a rather limited working memory capacity video gaming as compared to music reduced reliance on the goal-directed decision-making system, while a rather large working memory capacity prevented such a decline. Our findings suggest differential effects of everyday activities on key decision-making processes.
We tested the influence of two light intensities [40 and 300 μmol PAR/(m²·s)] on the fatty acid composition of three distinct lipid classes in four freshwater phytoplankton species. We chose species of different taxonomic classes in order to detect potentially similar reaction characteristics that might also be present in natural phytoplankton communities. From samples of the bacillariophyte Asterionella formosa, the chrysophyte Chromulina sp., the cryptophyte Cryptomonas ovata and the zygnematophyte Cosmarium botrytis we first separated glycolipids (monogalactosyldiacylglycerol, digalactosyldiacylglycerol, and sulfoquinovosyldiacylglycerol), phospholipids (phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylserine) as well as non-polar lipids (triacylglycerols), before analyzing the fatty acid composition of each lipid class. High variation in the fatty acid composition existed among different species. Individual fatty acid compositions differed in their reaction to changing light intensities in the four species. Although no generalizations could be made for species across taxonomic classes, individual species showed clear but small responses in their ecologically relevant omega-3 and omega-6 polyunsaturated fatty acids (PUFA) in terms of proportions and of per tissue carbon quotas. Knowledge on how lipids like fatty acids change with environmental or culture conditions is of great interest in ecological food web studies, aquaculture, and biotechnology, since algal lipids are the most important sources of omega-3 long-chain PUFA for aquatic and terrestrial consumers, including humans.
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.
The gravitational field of a laser pulse of finite lifetime is investigated in the framework of linearized gravity. Although the effects are very small, they may be of fundamental physical interest. It is shown that the gravitational field of a linearly polarized light pulse is modulated as the norm of the corresponding electric field strength, while no modulations arise for circular polarization. In general, the gravitational field is independent of the polarization direction. It is shown that all physical effects are confined to spherical shells expanding with the speed of light, and that these shells are imprints of the spacetime events representing emission and absorption of the pulse. Nearby test particles at rest are attracted towards the pulse trajectory by the gravitational field due to the emission of the pulse, and they are repelled from the pulse trajectory by the gravitational field due to its absorption. Examples are given for the size of the attractive effect. It is recovered that massless test particles do not experience any physical effect if they are co-propagating with the pulse, and that the acceleration of massless test particles counter-propagating with respect to the pulse is four times stronger than for massive particles at rest. The similarities between the gravitational effect of a laser pulse and Newtonian gravity in two dimensions are pointed out. The spacetime curvature close to the pulse is compared to that induced by gravitational waves from astronomical sources.
Children’s interpretations of sentences containing focus particles do not seem adult-like until school age. This study investigates how German 4-year-old children comprehend sentences with the focus particle ‘nur’ (only) by using different tasks and controlling for the impact of general cognitive abilities on performance measures. Two sentence types with ‘only’ in either pre-subject or pre-object position were presented. Eye gaze data and verbal responses were collected via the visual world paradigm combined with a sentence-picture verification task. While the eye tracking data revealed an adult-like pattern of focus particle processing, the sentence-picture verification replicated previous findings of poor comprehension, especially for ‘only’ in pre-subject position. A second study focused on the impact of general cognitive abilities on the outcomes of the verification task. Working memory was related to children’s performance in both sentence types whereas inhibitory control was selectively related to the number of errors for sentences with ‘only’ in pre-subject position. These results suggest that children at the age of 4 years have the linguistic competence to correctly interpret sentences with focus particles, which–depending on specific task demands–may be masked by immature general cognitive abilities.
Recent research has indicated that university students sometimes use caffeine pills for neuroenhancement (NE; non-medical use of psychoactive substances or technology to produce a subjective enhancement in psychological functioning and experience), especially during exam preparation. In our factorial survey experiment, we manipulated the evidence participants were given about the prevalence of NE amongst peers and measured the resulting effects on the psychological predictors included in the Prototype-Willingness Model of risk behavior. Two hundred and thirty-one university students were randomized to a high prevalence condition (read faked research results overstating usage of caffeine pills amongst peers by a factor of 5; 50%), low prevalence condition (half the estimated prevalence; 5%) or control condition (no information about peer prevalence). Structural equation modeling confirmed that our participants’ willingness and intention to use caffeine pills in the next exam period could be explained by their past use of neuroenhancers, attitude to NE and subjective norm about use of caffeine pills whilst image of the typical user was a much less important factor. Provision of inaccurate information about prevalence reduced the predictive power of attitude with respect to willingness by 40-45%. This may be because receiving information about peer prevalence which does not fit with their perception of the social norm causes people to question their attitude. Prevalence information might exert a deterrent effect on NE via the attitude-willingness association. We argue that research into NE and deterrence of associated risk behaviors should be informed by psychological theory.
Filling the Silence
(2016)
In a self-paced reading experiment, we investigated the processing of sluicing constructions (“sluices”) whose antecedent contained a known garden-path structure in German. Results showed decreased processing times for sluices with garden-path antecedents as well as a disadvantage for antecedents with non-canonical word order downstream from the ellipsis site. A post-hoc analysis showed the garden-path advantage also to be present in the region right before the ellipsis site. While no existing account of ellipsis processing explicitly predicted the results, we argue that they are best captured by combining a local antecedent mismatch effect with memory trace reactivation through reanalysis.
Background
Overweight and obesity are increasing health problems that are not restricted to adults only. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on the foot function across maturation is lacking. Decreased foot function with disproportional loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait of normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10,382 children aged one to twelve years were enrolled in the study. Finally, 7,575 children (m/f: n = 3,630/3,945; 7.0 ± 2.9 yr; 1.23 ± 0.19 m; 26.6 ± 10.6 kg; BMI: 17.1 ± 2.4 kg/m²) were included for (complete case) data analysis. Children were categorized as normal-weight (≥3rd and <90th percentile; n = 6,458), overweight (≥90th and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system, which is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data were analyzed descriptively (mean ± SD) followed by ANOVA/Welch test (according to homogeneity of variances: yes/no) for group differences by BMI category (normal-weight, overweight, obese) and for each age group from 1 to 12 yr (post-hoc Tukey-Kramer/Dunnett's C; α = 0.05).
Results
Mean walking velocity was 0.95 +/- 0.25 m/s, with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force time integral in overweight and obese children (p < 0.001). Obese children showed 1.48-fold (1-year-olds) to 3.49-fold (10-year-olds) higher midfoot loading (FTI) compared with normal-weight children.
Conclusion
Additional body mass leads to higher overall load, with a disproportionate impact on the midfoot area and the longitudinal foot arch, showing characteristic foot loading patterns. The feet of one- and two-year-old children are already significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading with a potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
In the past, floods were managed mainly by flood control mechanisms. The focus was on the reduction of flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long term and continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, where the synthetic discharge time series is used as input into flood impact models and flood risk is directly derived from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against observed data available for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of ‘derived flood risk based on continuous simulations’. To this end, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual time series of climate data of 100 x 100 years was generated and served as input to RFM, providing 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analyses, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk in which the spatial variability of occurrence probability is respected. Additionally, data and methods are consistent, and catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes such as storage effects, flood attenuation, channel–floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and consequently a large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as in traditional approaches, are thus bypassed.
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, reinsurance aspects or other questions where spatially consistent, large-scale assessments are required.
In this dissertation, an electric field-assisted method was developed and applied to achieve immobilization and alignment of biomolecules on metal electrodes in a simple one-step experiment. Neither modifications of the biomolecule nor of the electrodes were needed. The two major electrokinetic effects that lead to molecule motion in the chosen electrode configurations were identified as dielectrophoresis and AC electroosmotic flow. To minimize AC electroosmotic flow, a new 3D electrode configuration was designed. Thus, the influence of experimental parameters on the dielectrophoretic force and the associated molecule movement could be studied. Permanent immobilization of proteins was examined and quantified absolutely using an atomic force microscope. By measuring the volumes of the immobilized protein deposits, the maximal number of proteins contained therein was calculated. This was possible since the proteins adhered to the tungsten electrodes even after the electric field was switched off. The permanent immobilization of functional proteins on surfaces or electrodes is one crucial prerequisite for the fabrication of biosensors.
Furthermore, the biofunctionality of the proteins must be retained after immobilization. Due to the chemical or physical modifications on the proteins caused by immobilization, their biofunctionality is sometimes hampered. The activity of dielectrophoretically immobilized proteins, however, was proven here for an enzyme for the first time. The enzyme horseradish peroxidase was used exemplarily, and its activity was demonstrated with the oxidation of dihydrorhodamine 123, a non-fluorescent precursor of the fluorescence dye rhodamine 123.
Molecular alignment and immobilization - reversible and permanent - were achieved under the influence of inhomogeneous AC electric fields. For orientational investigations, a fluorescence microscope setup, a reliable experimental procedure and an evaluation protocol were developed and validated using self-made control samples of aligned acridine orange molecules in a liquid crystal.
Lambda-DNA strands were stretched and aligned temporarily between adjacent interdigitated electrodes, and the orientation of PicoGreen molecules, which intercalate into the DNA strands, was determined. Similarly, the aligned immobilization of enhanced Green Fluorescent Protein was demonstrated exploiting the protein's fluorescence and structural properties. For this protein, the angle of the chromophore with respect to the protein's geometrical axis was determined in good agreement with X-ray crystallographic data. Permanent immobilization with simultaneous alignment of the proteins was achieved along the edges, tips and on the surface of interdigitated electrodes. This was the first demonstration of aligned immobilization of proteins by electric fields.
Thus, the presented electric field-assisted immobilization method is promising with regard to enhanced antibody binding capacities and enzymatic activities, which is a requirement for industrial biosensor production, as well as for general interaction studies of proteins.
Udmurt as an OV language
(2016)
This is the first study to investigate Hubert Haider's (2000, 2010, 2013, 2014) proposed systematic differences between OV and VO languages in a family other than Germanic. Its aim is to gather evidence on whether basic word order is predictive of further properties of a language. The languages under investigation are the Finno-Ugric languages Udmurt (as an OV language) and Finnish (as a VO language). Counter to Kayne (1994), Haider proposes that the structure of a sentence with a head-final VP is fundamentally different from that of a sentence with a head-initial VP; e.g., OV languages do not exhibit a VP-shell structure, and they do not employ a TP layer with a structural subject position. Haider's proposed structural differences are said to result in the following empirically testable differences:
(a) VP: the availability of VP-internal adverbial intervention and scrambling only in OV-VPs;
(b) subjects: the lack of certain subject-object asymmetries in OV languages, i.e., lack of the subject condition and lack of superiority effects;
(c) V-complexes: the availability of partial predicate fronting only in OV languages; different orderings between selecting and selected verbs; the intervention of non-verbal material between verbs only in VO languages;
(d) V-particles: differences in the distribution of resultative phrases and verb particles.
Udmurt and Finnish behave in line with Haider's predictions with regard to the status of the subject, with regard to the order of selecting and selected verbs, and with regard to the availability of partial predicate fronting. Moreover, Udmurt allows for adverbial intervention and scrambling, as predicted, whereas the status of these properties in Finnish could not be reliably determined due to obligatory V-to-T. There is also counterevidence to Haider's predictions: Udmurt allows for non-verbal material between verbs, and the distribution of resultative phrases and verb particles is essentially as free as the distribution of adverbial phrases in both Finno-Ugric languages. As such, Haider's theory is not falsified by the data from Udmurt and Finnish (except for his theory on verb particles), but it is also not fully supported by the data.
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of an appropriate model depends on the outcome and the research objectives: linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and all cases (all members of the cohort who develop the event of interest during follow-up).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
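For orientation on the binary-covariate logistic model described above: in the simpler full-cohort setting (not the case-cohort estimator analyzed in the thesis), the MLE has a closed form in terms of the 2x2 table of counts. A minimal sketch with hypothetical counts:

```python
import math

def logistic_mle_binary(n00, n01, n10, n11):
    """Closed-form full-cohort MLE for logit P(Y=1|X=x) = alpha + beta*x,
    with binary covariate X and binary outcome Y.

    n_xy = number of subjects with X=x and Y=y. alpha is the log-odds of
    Y=1 given X=0; beta is the log odds ratio. (Illustrative only: under
    case-cohort sampling the likelihood and variance differ.)
    """
    alpha = math.log(n01 / n00)                       # log odds for X=0
    beta = math.log((n11 / n10) / (n01 / n00))        # log odds ratio
    return alpha, beta

# Hypothetical table: 40/10 non-cases/cases with X=0, 20/30 with X=1.
alpha, beta = logistic_mle_binary(40, 10, 20, 30)
# alpha = log(0.25), beta = log(6)
```

The slope estimate equals the familiar log odds ratio from the 2x2 table, which is why the binary-covariate case is a natural starting point before extending to discrete covariates.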
Then the MLE in logistic regression with a discrete covariate under the case-cohort design is studied, extending the approach of the binary covariate model. By proving asymptotic normality of the estimators, standard errors can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
The logistic regression model is adequate when the binary outcome is available for all subjects over a fixed time interval. In practice, however, observations in clinical trials are frequently collected over different time periods, and subjects may drop out or be lost to follow-up for other reasons. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection, or when the event of interest has not occurred for an individual by the end of the study. Such observations are called censored observations. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed the model in terms of the hazard function. The Cox model is assumed to be
λ(t|x) = λ0(t) exp(β^Tx)
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates and β is a p-dimensional vector of coefficients.
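The hazard specification above translates directly into code; the constant baseline hazard and the parameter values below are purely illustrative:

```python
import math

def cox_hazard(t, x, beta, baseline):
    """Proportional-hazards rate lambda(t|x) = lambda0(t) * exp(beta^T x).

    t: time, x: covariate vector, beta: coefficient vector,
    baseline: callable returning the baseline hazard lambda0(t).
    """
    linpred = sum(b * xi for b, xi in zip(beta, x))
    return baseline(t) * math.exp(linpred)

# Illustrative use: constant baseline hazard 0.1 and one covariate with
# coefficient log(2), so x=1 doubles the hazard relative to x=0.
rate = cox_hazard(1.0, x=[1.0], beta=[math.log(2.0)], baseline=lambda t: 0.1)
# rate = 0.2
```

Note that the ratio of hazards for two covariate vectors does not depend on t or on λ0; this proportionality is what makes the partial likelihood free of the baseline hazard.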
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results of Andersen and Gill (1982) for the Cox model. In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and shown in examples. In a sensitivity analysis, the efficiency of given covariates is calculated, and efficiencies are determined for neighborhoods of the exponential models. It turns out that, for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, an extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector tends, under the null hypothesis, to a Chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a particular alternative; it gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100–1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187–220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1–11, 1986.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of n but also in the explicit dependence of the constant factor on the noise variance and the radius of the source condition set.
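Tikhonov (ridge) regularization is one member of the spectral regularization family studied in this setting. A discretized sketch, with a hypothetical Gaussian smoothing operator standing in for A, a fixed grid instead of random design, and all parameter values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
grid = np.linspace(0.0, 1.0, n)

# Hypothetical compact forward operator A: row-normalized Gaussian blur.
s = 0.05
A = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * s ** 2))
A /= A.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * grid)                # the unknown function f
y = A @ f_true + 0.01 * rng.standard_normal(n)   # noisy observations of Af

# Spectral regularization, Tikhonov variant:
#   f_lam = (A^T A + lam * I)^{-1} A^T y
lam = 1e-3
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
```

Other spectral methods (Landweber iteration, spectral cut-off) differ only in the filter applied to the singular values of A; the rates in the abstract cover this whole class under source conditions on f.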
Exhaustivity
(2016)
The dissertation proposes an answer to the question of how to model exhaustive inferences and what the meaning of the linguistic material that triggers these inferences is. In particular, it deals with the semantics of exclusive particles, clefts, and progressive aspect in Ga, an under-researched language spoken in Ghana. Based on new data coming from the author’s original fieldwork in Accra, the thesis points to a previously unattested variation in the semantics of exclusives in a cross-linguistic perspective, analyzes the connections between exhaustive interpretation triggered by clefts and the aspectual interpretation of the sentence, and identifies a cross-categorial definite determiner. By that it sheds new light on several exhaustivity-related phenomena in both the nominal and the verbal domain and shows that both domains are closely connected.
DNA origami nanostructures are a versatile tool to arrange metal nanostructures and other chemical entities with nanometer precision. In this way gold nanoparticle dimers with defined distance can be constructed, which can be exploited as novel substrates for surface enhanced Raman scattering (SERS). We have optimized the size, composition and arrangement of Au/Ag nanoparticles to create intense SERS hot spots, with Raman enhancement up to 10^10, which is sufficient to detect single molecules by Raman scattering. This is demonstrated using single dye molecules (TAMRA and Cy3) placed into the center of the nanoparticle dimers. In conjunction with the DNA origami nanostructures novel SERS substrates are created, which can in the future be applied to the SERS analysis of more complex biomolecular targets, whose position and conformation within the SERS hot spot can be precisely controlled.
LifE
(2016)
The LifE study (Lebensverläufe ins fortgeschrittene Erwachsenenalter; pathways from late childhood into mid-adulthood) is one of the few studies worldwide that traces life courses from age 12 to age 45, covering a span of more than 30 years so far. Its central question concerns the conditions of productive coping with life in young and middle adulthood, which also raises the question of the risks of unsuccessful life management. Of particular interest is which resources, related to social background, personality and social relationships, contribute to successful coping with life. How do domain-specific areas such as partnership and family, employment, or identity develop and change over the life course? Which factors have predictive power over several decades, and which are only of temporary importance?
The first part of this report presents the design and conception of the LifE study. The second part examines participation behavior from the start of the youth study in 1979, through the first follow-up in 2002, to the most recent wave in 2012. Attrition is unavoidable over an observation period of more than 30 years. A key challenge for long-running panel studies is locating participants again and motivating them to take part over decades; the reasons for attrition are therefore examined in particular.
To regard a studied age cohort as representative of a birth-year cohort, it is necessary to check whether, and to what extent, the persons who participated in a study over such a long period constitute a markedly selective group. For this purpose, selected sociodemographic characteristics of the 2012 sample (1,359 participants) are compared with the corresponding distributions in the SOEP 2012 and the Mikrozensus 2012. This external validation makes possible biases in the study's database visible.
In the course of this work, PEO-based block copolymers of different charge and high molecular weight were synthesized by living free radical polymerization. The polymers are easy to prepare on the gram scale. They strongly influence both the nucleation and the dissolution of calcium phosphate. Nevertheless, the presence of positive groups (cations, ampholytes and betaines) does not appear to have a dramatic influence on nucleation.
Thus, polymers carrying positive charges cause the same retention effect as those containing exclusively anionic groups. However, the use of the cationic, ampholytic and betainic copolymers results in a precipitate morphology different from that obtained with the anionic ones. This trend continues in the stabilization of an HAP surface: purely anionic copolymers have a stronger stabilizing effect than those containing positive charges. Incubation of human dental enamel with anionic copolymers showed that biofilm formation is reduced compared with an untreated tooth surface. All of this makes these polymers interesting additives for dental care products.
In addition, polymer brushes based on these purely anionic copolymers were prepared, likewise by living free radical polymerization. These brushes strongly influence the crystal phase and form AB-type CHAP, the material that also occurs in bone and teeth. Initial cytotoxicity tests indicate the great potential of these polymer brushes for coatings in medical technology.
The aim of this work is the evaluation of the geothermal potential of Luxembourg. The approach consists in a joint interpretation of different types of information necessary for a first rather qualitative assessment of deep geothermal reservoirs in Luxembourg and the adjoining regions in the surrounding countries of Belgium, France and Germany. For the identification of geothermal reservoirs by exploration, geological, thermal, hydrogeological and structural data are necessary. Until recently, however, reliable information about the thermal field and the regional geology, and thus about potential geothermal reservoirs, was lacking. Before a proper evaluation of the geothermal potential can be performed, a comprehensive survey of the geology and an assessment of the thermal field are required.
As a first step, the geology and basin structure of the Mesozoic Trier–Luxembourg Basin (TLB) are reviewed and updated using recently published information on the geology and structures as well as borehole data available in Luxembourg and the adjoining regions. A Bouguer map is used to gain insight into the depth, morphology and structures of the Variscan basement buried beneath the Trier–Luxembourg Basin. The geological section of the old Cessange borehole is reinterpreted and provides, in combination with the available borehole data, consistent information for the production of isopach maps, which visualize the synsedimentary evolution of the Trier–Luxembourg Basin. Complementary basin-wide cross sections illustrate the evolution and structure of the Trier–Luxembourg Basin. The knowledge gained does not support the old concept of the Weilerbach Mulde. The basin-wide cross sections, as well as the structural and sedimentological observations in the Trier–Luxembourg Basin, suggest that the latter probably formed above a zone of weakness related to a buried Rotliegend graben. The inferred graben structure, designated the SE-Luxembourg Graben (SELG), is located in direct southwestern continuation of the Wittlicher Rotliegend-Senke.
The lack of deep boreholes and of subsurface temperature prognoses at depth is circumvented by using thermal modelling to infer the geothermal resource at depth. For this approach, profound structural, geological and petrophysical input data are required. Conceptual geological cross sections encompassing the entire crust are constructed and further simplified and extended to lithospheric scale for their utilization as thermal models. The 2-D steady-state and conductive models are parameterized by means of measured petrophysical properties including thermal conductivity, radiogenic heat production and density. A surface heat flow of 75 ± 7 (2δ) mW m–2 for verification of the thermal models could be determined in the area. The models are further constrained by the geophysically estimated depth of the lithosphere–asthenosphere boundary (LAB), defined by the 1300 °C isotherm. A LAB depth of 100 km, as seismically derived for the Ardennes, provides the best fit with the measured surface heat flow. The resulting mantle heat flow amounts to ∼40 mW m–2. Modelled temperatures are in the range of 120–125 °C at 5 km depth and of 600–650 °C at the crust/mantle discontinuity (Moho). Possible thermal consequences of the 10–20 Ma old Eifel plume, which apparently caused upwelling of the asthenospheric mantle to 50–60 km depth, were modelled in a steady-state thermal scenario, resulting in a surface heat flow of at least 91 mW m–2 (for a plume top at 60 km) in the Eifel region. Available surface heat-flow values are significantly lower (65–80 mW m–2) and indicate that the plume-related heating has not yet fully reached the surface.
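In one dimension with uniform thermal conductivity k and radiogenic heat production A, the steady-state purely conductive temperature profile underlying such models follows the classical relation T(z) = T0 + (q0/k) z - A z²/(2k). A minimal sketch; the parameter values are illustrative round numbers, not the calibrated values of this study:

```python
def geotherm(z, T0=10.0, q0=0.075, k=3.0, A=1.0e-6):
    """1-D steady-state conductive temperature (degC) at depth z (m).

    T0: surface temperature (degC), q0: surface heat flow (W/m^2),
    k: thermal conductivity (W/m/K), A: radiogenic heat production (W/m^3).
    All defaults are hypothetical values chosen only for illustration.
    """
    return T0 + (q0 / k) * z - A * z ** 2 / (2.0 * k)

# Temperature at 5 km depth for these illustrative parameters:
t5 = geotherm(5000.0)  # about 131 degC
```

The quadratic term shows why radiogenic heat production matters increasingly with depth: crustal heat generation bends the geotherm away from the linear profile implied by the surface heat flow alone.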
Once conceptual geological models are established and the thermal regime is assessed, the geothermal potential of Luxembourg and the surrounding areas is evaluated by additional consideration of the hydrogeology, the stress field and tectonically active regions. On the one hand, low-enthalpy hydrothermal reservoirs in Mesozoic reservoirs in the Trier–Luxembourg Embayment (TLE) are considered. On the other hand, petrothermal reservoirs in the Lower Devonian basement of the Ardennes and Eifel regions are considered for exploitation by Enhanced/Engineered Geothermal Systems (EGS). Among the Mesozoic aquifers, the Buntsandstein aquifer characterized by temperatures of up to 50 °C is a suitable hydrothermal reservoir that may be exploited by means of heat pumps or provide direct heat for various applications. The most promising area is the zone of the SE–Luxembourg Graben. The aquifer is warmest underneath the upper Alzette River valley and the limestone plateau in Lorraine, where the Buntsandstein aquifer lies below a thick Mesozoic cover. At the base of an inferred Rotliegend graben in the same area, temperatures of up to 75 °C are expected. However, geological and hydraulic conditions are uncertain. In the Lower Devonian basement, thick sandstone-/quartzite-rich formations with temperatures >90 °C are expected at depths >3.5 km and likely offer the possibility of direct heat use. The setting of the Südeifel (South Eifel) region, including the Müllerthal region near Echternach, as a tectonically active zone may offer the possibility of deep hydrothermal reservoirs in the fractured Lower Devonian basement. Based on the recent findings about the structure of the Trier–Luxembourg Basin, the new concept presents the Müllerthal–Südeifel Depression (MSD) as a Cenozoic structure that remains tectonically active and subsiding, and therefore is relevant for geothermal exploration. 
Beyond direct use of geothermal heat, the expected modest temperatures at 5 km depth (about 120 °C) and increased permeability by EGS in the quartzite-rich Lochkovian could prospectively enable combined geothermal heat production and power generation in Luxembourg and the western realm of the Eifel region.
The present volume is the outcome of the 3rd Week of Russian Law at the Faculty of Law of the University of Potsdam. Renowned scholars of the Kutafin Moscow State Law University gave lectures on international law, constitutional and citizenship law, civil law, corporate and company law, financial law and banking law. Russian law has been in transition since the end of the Soviet Union. The contributions testify to the high standard of jurisprudence at the Kutafin Moscow State Law University. The Weeks of Russian Law help to make the law of the Russian Federation known in Germany and to open it up for comparative legal discussion.
The aim of this paper is to bring together two areas which are of great importance for the study of overdetermined boundary value problems. The first area is homological algebra which is the main tool in constructing the formal theory of overdetermined problems. And the second area is the global calculus of pseudodifferential operators which allows one to develop explicit analysis.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
Grenzräume – Grenzbewegungen
(2016)
This collected volume brings together the contributions to the 12th and 13th meetings of the Junges Forum Slavistische Literaturwissenschaft (JFSL), held in Basel in 2013 and in Frankfurt (Oder) and Słubice in 2014. Under the thematic headings Grenzräume – Grenzbewegungen (border spaces – border movements), it offers insights into the work of early-career researchers in German-language Slavic literary and cultural studies.