In the second half of the 18th century, patriotism was one of the central themes of the bourgeois classes in Enlightenment Europe. Within the borders of the Old Swiss Confederacy, the Helvetic Society stands out: a society that debated topics such as liberty, education, virtue and, indeed, love of the fatherland over a period of almost forty years.
The present study subjects the written records of these debates, and other Swiss writings of the period, to a vocabulary analysis, a novel form of text analysis that permits an approximation of the language usage of this epoch. The focus lies on the vocabulary of love of the fatherland, which was used in quite different ways in the individual texts. This form of analysis makes it possible to formulate first answers to central questions in the history of ideas, for instance whether an author used in a passage a very specific word, a well-defined concept, that another author deliberately avoided in the same context.
The vocabulary analysis developed in this study also makes it possible to ask whether the concepts of patriotism and love of the fatherland, as used in current scholarship, do justice to the intentions of the contemporary authors. The method of vocabulary analysis thus places an instrument in the historian's hands for probing the intentions of an individual author of a past epoch in a specific way, through comparison with other authors.
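The core of such a vocabulary analysis, comparing which tracked terms one author uses and another avoids, can be sketched in a few lines. The mini-corpus and the term list below are hypothetical stand-ins, not the actual sources or vocabulary of the study:

```python
from collections import Counter
import re

# Hypothetical mini-corpus: illustrative sentences, not the study's sources.
texts = {
    "author_a": "Die Liebe zum Vaterland ist die erste Tugend des Buergers.",
    "author_b": "Freiheit und Bildung sind die Grundlagen der Eidgenossenschaft.",
}
# Illustrative tracked vocabulary (lower-cased word stems).
vocabulary = ["vaterland", "patriotismus", "freiheit", "tugend", "bildung"]

def vocabulary_profile(text, vocabulary):
    """Count how often each tracked stem occurs in a text."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter()
    for term in vocabulary:
        counts[term] = sum(1 for tok in tokens if tok.startswith(term))
    return counts

profiles = {name: vocabulary_profile(t, vocabulary) for name, t in texts.items()}

# Terms one author uses that the other does not use at all:
only_a = {t for t in vocabulary
          if profiles["author_a"][t] > 0 and profiles["author_b"][t] == 0}
```

On this toy corpus, `only_a` contains the fatherland vocabulary that author A uses and author B avoids; a real analysis would of course work on the full texts and a historically grounded term list.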
The cell interior is a highly packed environment in which biological macromolecules evolve and function. This crowded medium affects many biological processes, such as protein-protein binding, gene regulation, and protein folding. Biochemical reactions that take place under such crowded conditions therefore differ from those in dilute test-tube conditions, and considerable effort has been invested in understanding these differences.
In this work, we combine different computational tools to disentangle the effects of molecular crowding on biochemical processes. First, we propose a lattice model to study the implications of molecular crowding for enzymatic reactions. We provide a detailed picture of how crowding affects binding and unbinding events and how the separate effects of crowding on the binding equilibrium act together. We then implement a lattice model to study the effects of molecular crowding on facilitated diffusion. We find that obstacles on the DNA impair facilitated diffusion; however, the extent of this effect depends on how dynamic the obstacles on the DNA are. For the scenario in which crowders are present only in the bulk solution, we find that under some conditions the presence of crowding agents can enhance specific DNA binding. Finally, we use structure-based techniques to examine the impact of crowders on the folding of a protein. We find that polymeric crowders have stronger effects on protein stability than spherical crowders, and that the strength of this effect increases as the polymeric crowders become longer. The methods we propose here are general and can also be applied to more complicated systems.
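A toy version of such a lattice approach can illustrate the basic excluded-volume effect that crowders exert on a diffusing particle. The sketch below is a generic illustration of the idea, not the actual models developed in the thesis; lattice size, obstacle density and step counts are arbitrary assumptions:

```python
import random

def lattice_walk(n_steps, crowding_fraction, size=20, seed=1):
    """Random walk of a tracer on a periodic square lattice whose sites are
    partly occupied by immobile crowders; returns the fraction of proposed
    moves that are accepted (moves onto occupied sites are rejected)."""
    rng = random.Random(seed)
    blocked = {(rng.randrange(size), rng.randrange(size))
               for _ in range(int(crowding_fraction * size * size))}
    x, y = 0, 0
    blocked.discard((0, 0))  # make sure the tracer starts on a free site
    accepted = 0
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = (x + dx) % size, (y + dy) % size
        if (nx, ny) not in blocked:  # excluded volume: only empty sites
            x, y = nx, ny
            accepted += 1
    return accepted / n_steps

free = lattice_walk(5000, 0.0)    # empty lattice: every move succeeds
crowded = lattice_walk(5000, 0.4) # 40% occupancy slows the tracer down
```

The drop in move-acceptance rate with increasing obstacle density is the simplest signature of crowding-hindered diffusion; the thesis's models additionally track binding and unbinding events on top of such dynamics.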
Changes in extratropical storm track activity and their implications for extreme weather events
(2016)
This empirical study examines the scientification of physiotherapy and its relevance for professional practice in Germany. Scientification is understood here as the processes of scientific discipline formation and academization. The relevance for practice is expressed in efforts to transform physiotherapy from an occupation into a profession.
Starting from theory-of-science approaches to discipline formation, academization and professionalization, as well as the relevant state of physiotherapy research, the study turns to the empirical analysis of this subject from the perspectives of historical and present-day processes of scientific formation.
The central research questions of the present study are:
On what theoretical basis, and within what understanding of the relationship between theory and practice, are the subject matters implicit in physiotherapy constituted?
And: Is there a theoretical foundation, in the form of theories and models, from which research-methodological and theory-of-science approaches can be justified, and to what extent does this reveal the potential for the formation of a scientific discipline?
How does the science relate to professional practice, and vice versa?
Empirical access to the subject was gained in two ways:
1. an analysis of professional journals to capture the historicity of scientification,
2. expert interviews to capture its contextuality.
The present study sees itself as a contribution to the scientific discourse in physiotherapy. Its aim is to make an empirically robust statement about the success of discipline formation and of the academization process in Germany, and to relate this to the field of practice. The analysis of the discipline's history is empirically relevant here; this history in turn defines the path to a present-day societal positioning, which is also analyzed. The analyses reconstruct the emancipation of physiotherapy in Germany from an auxiliary healthcare occupation to an independent profession, focusing on processes of discipline formation and academization.
The results are manifold and show that German physiotherapy is, not least through academization, on the way to becoming both a science and a profession. However, the parallelism of theory building and practical action, with hardly any demonstrable interlocking of the two levels, leads to the conclusion that the processes examined need not necessarily result in scientifically emancipatory success.
It is the intention of this study to contribute to further rethinking and innovation in the microcredit business, which stands at a turning point: after around 40 years of practice, it is in danger of failing as a tool for economic development and becoming instead a doubtful finance product with a random scope. So far, a positive impact of microfinance on the improvement of the lives of the poor could not be confirmed. Over-indebtedness of borrowers due to the predominance of consumption microcredits has become a widespread problem. Furthermore, a rising number of abusive and commercially excessive practices have been reported.
In fact, the microfinance sector appears to suffer from a major underlying deficit: there is no coherent and transparent understanding of its meaning and objectives, so that microfinance providers worldwide follow their own approaches, which tend to differ considerably from one another.
In this sense the study aims at consolidating the multi-faceted and often confusingly different microcredit profiles that exist today. Subsequently, the microfinance spectrum is narrowed to one clear-cut objective: away from the mere monetary transactions to poor people to which it has gradually been reduced, and back towards a tool for economic development as originally envisaged by its pioneers.
Hence, the fundamental research question of this study is whether, and under which conditions, microfinance may attain a positive economic impact leading to an improvement in the lives of the poor.
The study is structured in five parts: the three main parts (II.-IV.) are framed by an introduction (I.) and a conclusion (V.). In part II, the microfinance sector is analysed critically with the aim of identifying the persisting challenges as well as their root causes. In the third part, the perspective shifts to the macroeconomic level in order to learn about the potential of, and requirements for, small-scale finance to enhance economic development, particularly within the economic context of less developed countries. Consolidating these insights, part IV elaborates the elements of a new concept of microfinance whose objective is the economic development of its borrowers.
Microfinance is a rather sensitive business whose great fundamental idea is easily corrupted and whose recipients, owing to their limited financial knowledge, are predestined victims of abuse. It therefore needs to be practised responsibly, and according to clear-cut definitions of its meaning and objectives that all institutions active in the sector should be committed to comply with. This is especially relevant as demand for microfinance services is expected to rise further in the coming years. For example, the recent refugee migration towards Europe entails a vast potential for microfinance to enable these people to make a new start in economic life. This goes to show that microfinance may no longer be associated mainly with a less developed economic context, but will gain importance as a financial instrument in the developed economies, too.
Among the bloom-forming and potentially harmful cyanobacteria, the genus Microcystis represents one of the most diverse taxa, at the genomic as well as the morphological and secondary-metabolite levels. Microcystis communities are composed of a variety of diversified strains. The focus of this study lies on potential interactions between Microcystis representatives and on the roles of secondary metabolites in these interaction processes.
The role of secondary metabolites as signaling molecules in the investigated interactions is demonstrated using the prevalent hepatotoxin microcystin as an example. The extracellular and intracellular roles of microcystin are tested in microarray-based transcriptomic approaches. While an extracellular effect of microcystin on Microcystis transcription is confirmed and connected to a specific gene cluster of another secondary metabolite, intracellularly occurring microcystin is related to several pathways of primary metabolism. A clear correlation between a microcystin knockout and the SigE-mediated regulation of carbon metabolism is found. Based on the acquired transcriptional data, a model is proposed that postulates a regulating effect of microcystin on transcriptional regulators such as the alternative sigma factor SigE, which in turn plays an essential role in sugar catabolism and redox-state regulation.
To simulate community conditions as found in the field, Microcystis colonies were isolated from eutrophic lakes near Potsdam, Germany, and established as stably growing cultures under laboratory conditions. In co-habitation simulations, the recently isolated field strain FS2 is shown to specifically induce nearly immediate aggregation reactions in the axenic lab strain Microcystis aeruginosa PCC 7806. Transcriptional studies via microarrays show that the expression program induced in PCC 7806 after aggregation involves the reorganization of cell-envelope structures, a markedly altered nutrient-uptake balance and a reorientation of the aggregating cells towards heterotrophic carbon utilization, e.g. via glycolysis. These transcriptional changes are discussed as mechanisms of niche adaptation and acclimation that prevent competition for resources.
The biogenic amine serotonin (5-hydroxytryptamine, 5-HT) acts as an important chemical messenger in a wide range of organisms. The signal conveyed by 5-HT is perceived by specific receptors and translated into a cellular response. These 5-HT receptors belong predominantly to the family of G-protein-coupled receptors (GPCRs). The honeybee Apis mellifera, not least because of its eusocial lifestyle, offers many starting points for investigating the functions of the serotonergic system in insects. Four 5-HT receptor subtypes have already been described and characterized molecularly and pharmacologically in A. mellifera: Am5-HT1A, Am5-HT2α, Am5-HT2β and Am5-HT7. The aim of this work was to investigate tissue-specific as well as age- and time-of-day-dependent expression patterns of the 5-HT receptor subtypes, in order to contribute to a comprehensive understanding of the serotonergic system of the honeybee and to provide a basis for developing hypotheses about possible physiological functions.
The expression of the 5-HT receptor genes was measured in the central nervous system as well as in parts of the digestive, excretory and salivary-gland systems. The investigated 5-HT receptor subtypes were shown to be generally widespread in the honeybee organism. Interestingly, the tissues examined differed in the mRNA expression patterns of the receptors: while, for example, Am5-ht1A and Am5-ht7 were expressed more strongly than Am5-ht2α and Am5-ht2β in the brain, the reverse pattern was found in gut tissue.
It was already known that the Am5-ht2 genes undergo alternative splicing, giving rise to the truncated mRNA variants Am5-ht2αΔIII and Am5-ht2βΔII. The resulting proteins cannot act as functional GPCRs. It was shown that these truncated splice variants are nevertheless expressed ubiquitously in the honeybee. Remarkably, similarities across tissues between the expression patterns of the splice variants and their corresponding full-length variants were found, pointing to functions of the truncated variants in vivo.
With regard to the largely age-dependent division of labor in A. mellifera, the expression of the 5-HT receptor subtypes was compared in the brains of workers of different ages and different social roles. While none of the four 5-HT receptor subtypes showed age-dependent differences in expression at the mRNA level, a higher concentration of the Am5-HT1A protein was found in the brains of older animals. This points to a post-transcriptional regulation of 5-HT1A receptor expression, which could be connected with the division of labor.
Diurnal changes both in the expression of the 5-HT receptor subtypes and in the level of the biogenic amine 5-HT itself were examined. While the 5-HT titer in the brains of workers kept under natural conditions showed no time-of-day-dependent changes, the mRNA expression of Am5-ht2α and Am5-ht2β oscillated periodically, increasing during the day and decreasing during the night. This regulation is driven by external factors and is not due to an endogenous circadian rhythm, as was shown by repeating the expression measurements on the brains of bees kept under constant laboratory conditions.
Furthermore, the involvement of the serotonergic system in controlling aspects of the circadian locomotor activity rhythm was investigated in behavioral experiments. Workers fed with 5-HT showed a longer period of the activity rhythm under constant conditions than control animals, pointing to an influence of 5-HT on the modulation of the synchronization of the internal clock.
The present results contribute substantially to a deeper understanding of the serotonergic system of the honeybee and offer starting points for further studies on the function of 5-HT in connection with the modulation of physiological processes, division of labor and circadian rhythms.
The self-image and public image of management consulting, which promises applicants, employees and clients alike a knowledge edge through a concentration of the best minds, appear to constitute a success factor for consultant and client alike. Compared with other industries, the consultant's career is accompanied by a strong formalization based on competencies and development paths. Attracting and developing talent are core tasks of human resource management, which is regarded as a success factor precisely because of its competency-based instruments and formalistic structures. The author's analysis starts with the human resource management of consulting firms. What is striking at first are the similar structures and instruments for identifying and developing talent that are characteristic of the entire industry. For professional service firms in particular, the employee is the decisive economic factor in service delivery: the client judges the firm's performance through contact with the employee, who is largely responsible for service delivery and quality assurance. The analysis therefore focuses on the human resource management of consulting firms as part of the professional service firms and examines it against the background of systems-theoretical considerations. Cornerstones of the system appear in particular in the industry's typical recruiting strategies, its formalistic performance appraisal, its comparatively steep career paths and its above-average salaries. Has management consulting made the qualification and development of its employees a success factor? The author analyzes whether human resource management and its procedures justifiably count as a success factor of the industry, which factors influence the industry's economic success, and where the benefit analysis of consulting services reaches its limits.
The history of the state parliaments (Landtage) in the Soviet occupation zone (SBZ) and the early GDR has largely been forgotten. Although the prevailing research opinion has so far assigned them no more than the role of a footnote in regional history, these parliaments were in fact of considerable importance for the post-war period.
Using the Landtage of Brandenburg and Thuringia as examples, the present study examines the transformation of these representative assemblies from their beginnings in 1946 to their dissolution in 1952. In the spirit of comparative regional history, the parliaments are not only placed within the political and administrative framework set by the occupying power; their structural commonalities and particularities are also examined. Attention is further directed at the transformation of the CDU and LDP parliamentary groups: while these initially insisted on independence and equal rights, they were quickly subjected to a process of political conformity and finally of elimination, at the end of which stood complete subordination to the will of the ruling unity party. The publication is thus intended as a contribution to a better understanding of the imposition of dictatorship in the SBZ/GDR at the state level.
The research underlying this thesis aimed to develop new melt-processable acrylonitrile copolymers, which were then to be formed into man-made fibers via a melt-spinning process and finally converted into carbon fibers. To this end, exploratory investigations were first carried out on various acrylonitrile copolymers from solution polymerization. These investigations showed that electrostatic interactions are better suited than steric shielding to achieve meltability below the decomposition temperature of polyacrylonitrile. Of the many copolymers examined, those with methoxyethyl acrylate (MEA) proved most effective. For these copolymers, the copolymerization parameters were determined and the basic kinetics of the solution polymerization were investigated. The MEA copolymers were formed into fibers by melt spinning and then characterized; the influence of various parameters, such as molar mass, on fiber properties and fiber production was also examined. Finally, a heterophase polymerization process for producing AN/MEA copolymers was developed, which further improved the material properties. A suitable process was developed to suppress the thermoplastic properties of the fibers, and the conversion to carbon fibers was then carried out.
The humid tropics are the region with the highest rate of land-cover change worldwide. Especially prevalent is the deforestation of old-growth tropical forests to create space for cattle pastures and soybean fields.
The regional water cycle is influenced by vegetation cover in various ways; evapotranspiration in particular contributes considerably to the water vapor content of the lower atmosphere. Besides active transpiration by plants, evaporation from wetted plant surfaces, known as interception loss, is an important source of water vapor. Changes in interception loss due to land-cover change, and the related consequences for the regional water cycle in the humid tropics of Latin America, are the research focus of my thesis. (1) In an experimental setup I assess differences in interception loss between an old-growth tropical forest and a soybean plantation. (2) In a modeling study, I examine the interception losses of these two vegetation types, compared to a younger secondary forest, using the Gash interception model, including an uncertainty analysis for the estimation of the necessary model parameters. (3) Studying the water balance of a 192-km² catchment, I disentangle the influences of land-cover change and climatic factors on interception loss.
The three research sites in my thesis represent a currently typical spectrum of land-cover changes in Latin America. In the first example I study the consequences of deforesting transitional forest, which forms the transition from the Brazilian tree savanna (cerrado) to tropical rain forest, for the establishment of soybean fields in the southern Amazon basin. The second study site is a young secondary forest within the “Agua Salud” project area in Panama, as an example of the reforestation of former pastures. The third study site is the Cirí Grande river catchment, which comprises a mixture of young and old forests as well as pastures, a composition typical of the southern sub-catchments of the Panama Canal.
The experimental approach consists of the indirect estimation of interception loss by measuring throughfall and stemflow. For the first experimental study I measured both throughfall and stemflow manually. Measurements of the leaf area index of the two land covers show no distinct differences; hence leaf area cannot explain the differences in measured interception loss. The considerably higher interception loss at the soybean field is attributed to a possible underestimation of stemflow, but also to the stronger ventilation within the well-structured plant rows, which causes higher evaporation rates. This situation holds only for the two months of the rainy season when the soybean plants are fully developed. In the annual balance, evapotranspiration at the soybean site is clearly lower than at the forest site, accelerating the development of fast runoff components and consequently discharge. In the medium term, a reduction of water availability in the study area can be expected.
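The indirect estimate described above rests on a simple water balance: interception loss is the share of gross precipitation that neither reaches the ground as throughfall nor runs down the stems. A minimal sketch, with illustrative numbers rather than the study's actual measurements:

```python
def interception_loss(gross_precip_mm, throughfall_mm, stemflow_mm):
    """Indirect estimate of canopy interception loss from the water balance:
    rainfall that neither reaches the ground as throughfall nor runs down
    the stems must have evaporated from wetted plant surfaces."""
    return gross_precip_mm - throughfall_mm - stemflow_mm

# Illustrative numbers only, not measurements from the study:
loss = interception_loss(100.0, 82.0, 3.0)  # 15.0 mm, i.e. 15 % of rainfall
```

An underestimation of stemflow, as suspected for the soybean field, directly inflates this residual, which is why the measurement of all three terms matters.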
For the modeling study, throughfall in a young secondary forest was sampled automatically. The resulting temporally high-resolution dataset allows different precipitation and interception events to be distinguished. The core of this study is the sensitivity and uncertainty analysis of the Gash interception model parameters and the consequences for its results. Canopy storage capacity plays a key role in model and parameter uncertainty: with increasing storage capacity, the uncertainty in parameter delineation also increases. The evaporation rate, as the driving component of the interception process, carries the largest parameter uncertainty in this context. Depending on the selected method of parameter estimation, parameter values may vary tremendously.
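The role of the canopy storage capacity and the evaporation rate can be made concrete with a per-storm sketch of the Gash model. This is a simplified rendering of the original (1979) formulation, with free-throughfall and trunk terms omitted; the parameter values are illustrative assumptions, not the ones derived in the study:

```python
import math

def gash_interception(storm_mm, S=1.2, E=0.2, R=2.0):
    """Per-storm interception loss after a simplified Gash (1979) model.
    S: canopy storage capacity (mm); E, R: mean evaporation and rainfall
    rates (mm/h) during the storm. Free throughfall and trunk terms are
    omitted, and all parameter values are illustrative assumptions."""
    # Gross rainfall needed to saturate the canopy
    P_sat = -(R * S / E) * math.log(1.0 - E / R)
    if storm_mm < P_sat:
        return storm_mm  # canopy never saturates: all rain is intercepted
    # wetting-up loss + evaporation while saturated + stored water that
    # evaporates after the rain ceases
    return (P_sat - S) + (E / R) * (storm_mm - P_sat) + S

small = gash_interception(0.5)   # below saturation: fully intercepted
large = gash_interception(20.0)  # large storm: loss grows with E/R
```

The sketch shows why the parameters interact: S sets the saturation threshold P_sat, while the ratio E/R controls how much of a large storm evaporates, so uncertainty in the evaporation rate propagates directly into the modeled loss.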
In the third study, I analyze the influence of interception loss on the water balance of the Cirí Grande catchment, incorporating the interlinked effects of temperature, precipitation and changes of the land-use mosaic using the SWAT (Soil and Water Assessment Tool) model. Constructing several land-cover scenarios, I assess their influence on the catchment's discharge. The results show that land-cover change exerts only a small influence on annual discharge in the Cirí Grande catchment, whereas an increase in temperature markedly influences evapotranspiration. The temperature-induced increase in transpiration and interception loss offsets the simultaneous increase in annual precipitation, such that the resulting changes in annual discharge are negligible.
The results of the three studies show the considerable effect of land cover on interception. However, the magnitude of this effect can be masked by changes in local conditions, especially by an increase in temperature. Hence, the results cannot be transferred easily between the different study sites. For modeling purposes, this means that measurements of vegetation characteristics as well as interception loss at the respective sites are indispensable.
Subcultures creating culture
(2016)
The purpose of this work is to apply the methods of textual semiotics to subcultures, in particular to the little-known glam subculture. Subcultures were the main research field of the Birmingham Centre for Contemporary Cultural Studies, known for its interdisciplinary approach and for its focus on the creative aspects of subculture. Hebdige, in particular, introduced many semiotic elements into his work, such as aberrant decoding after Eco and cultural creativity via bricolage after Lévi-Strauss. His definition of subculture as symbolic resistance has been criticized by subsequent post-subcultural researchers for its abstractness and lack of cohesion.
Semiotics was eventually expelled from the set of tools used in sociology for the analysis of subcultures. Nowadays, studies on subcultures have a strong ethnographic focus; owing to terminological proliferation and a descriptive approach, it is difficult to compare them on a common basis.
Textual semiotics, through the concept of the semiosphere developed by Lotman, makes it possible to return to Hebdige's intuitions, organizing the semiotic elements already present in his work into a wider system of interpretation. The semiosphere offers a coherent theoretical horizon as a basis for further analysis, and a new methodological perspective focused on the cultural dimension. In this thesis, the work of Lotman is applied to the study of a subculture for the first time.
Understanding the role of natural climate variability under the pressure of human induced changes of climate and landscapes, is crucial to improve future projections and adaption strategies. This doctoral thesis aims to reconstruct Holocene climate and environmental changes in NE Germany based on annually laminated lake sediments. The work contributes to the ICLEA project (Integrated CLimate and Landscape Evolution Analyses). ICLEA intends to compare multiple high-resolution proxy records with independent chronologies from the N central European lowlands, in order to disentangle the impact of climate change and human land use on landscape development during the Lateglacial and Holocene. In this respect, two study sites in NE Germany are investigated in this doctoral project, Lake Tiefer See and palaeolake Wukenfurche. While both sediment records are studied with a combination of high-resolution sediment microfacies and geochemical analyses (e.g. µ-XRF, carbon geochemistry and stable isotopes), detailed proxy understanding mainly focused on the continuous 7.7 m long sediment core from Lake Tiefer See covering the last ~6000 years. Three main objectives are pursued at Lake Tiefer See: (1) to perform a reliable and independent chronology, (2) to establish microfacies and geochemical proxies as indicators for climate and environmental changes, and (3) to trace the effects of climate variability and human activity on sediment deposition.
Addressing the first aim, a reliable chronology of Lake Tiefer See is compiled by using a multiple-dating concept. Varve counting and tephra findings form the chronological framework for the last ~6000 years. The good agreement with independent radiocarbon dates of terrestrial plant remains verifies the robustness of the age model. The resulting reliable and independent chronology of Lake Tiefer See and, additionally, the identification of nine tephras provide a valuable base for detailed comparison and synchronization of the Lake Tiefer See data set with other climate records. The sediment profile of Lake Tiefer See exhibits striking alternations between well-varved and non-varved sediment intervals. The combination of microfacies, geochemical and microfossil (i.e. Cladocera and diatom) analyses indicates that these changes of varve preservation are caused by variations of lake circulation in Lake Tiefer See. An exception is the well-varved sediment deposited since AD 1924, which is mainly influenced by human-induced lake eutrophication. Well-varved intervals before the 20th century are considered to reflect phases of reduced lake circulation and, consequently, stronger anoxic conditions. Instead, non-varved intervals indicate increased lake circulation in Lake Tiefer See, leading to more oxygenated conditions at the lake ground. Furthermore, lake circulation is not only influencing sediment deposition, but also geochemical processes in the lake. As, for example, the proxy meaning of δ13COM varies in time in response to changes of the oxygen regime in the lake hypolinion. During reduced lake circulation and stronger anoxic conditions δ13COM is influenced by microbial carbon cycling. In contrast, organic matter degradation controls δ13COM during phases of intensified lake circulation and more oxygenated conditions. The varve preservation indicates an increasing trend of lake circulation at Lake Tiefer See after ~4000 cal a BP. 
This trend is superimposed by decadal to centennial scale variability of lake circulation intensity. Comparison to other records in Central Europe suggests that the long-term trend is probably related to gradual changes in Northern Hemisphere orbital forcing, which induced colder and windier conditions in Central Europe and, therefore, reinforced lake circulation. Decadal to centennial scale periods of increased lake circulation coincide with settlement phases at Lake Tiefer See, as inferred from pollen data of the same sediment record. Deforestation reduced the wind shelter of the lake, which probably increased the sensitivity of lake circulation to wind stress. However, results of this thesis also suggest that several of these phases of increased lake circulation are additionally reinforced by climate changes. A first indication is provided by the comparison to the Baltic Sea record, which shows striking correspondence between major non-varved intervals at Lake Tiefer See and bioturbated sediments in the Baltic Sea. Furthermore, a preliminary comparison to the ICLEA study site Lake Czechowskie (N central Poland) shows a coincidence of at least three phases of increased lake circulation in both lakes, which concur with periods of known climate changes (2.8 ka event, ’Migration Period’ and ’Little Ice Age’). These results suggest an additional over-regional climate forcing also on short term increased of lake circulation in Lake Tiefer See.
In summary, the results of this thesis suggest that lake circulation at Lake Tiefer See is driven by a combination of long-term and short-term climate changes as well as by anthropogenic deforestation phases. Furthermore, lake circulation drives geochemical cycles in the lake, affecting the meaning of proxy data. Therefore, the work presented here expands the knowledge of climate and environmental variability in NE Germany. Furthermore, the integration of the Lake Tiefer See multi-proxy record in a regional comparison with another ICLEA site, Lake Czechowskie, made it possible to better decipher climate changes and human impact on the lake system. These first results suggest a large potential for further detailed regional comparisons to better understand palaeoclimate dynamics in N central Europe.
Implementation of a plasmodesmata gatekeeper system, and its effect on intercellular transport
(2016)
Der Klimawandel
(2016)
What is justice? What would just rules look like for the catastrophes and suffering that climate change triggers, or will trigger? These are often unjust because they frequently hit hardest those who have contributed least to climate change.
But what exactly do we mean by the term 'climate change'? And can it really affect human beings directly? A brief scientific overview clarifies the most important questions here.
Since this is a philosophical work, it must first be clarified whether humans can be the cause of something like global warming at all. Robert Spaemann's thesis is that humans, through their free will, can change the course of the world with their individual actions. Hans Jonas adds that this capacity makes us responsible for the intended and unintended consequences of our actions.
This establishes, from a scientific perspective (Part 1 of the work) and a philosophical perspective (beginning of Part 2), that humans are most probably the cause of climate change and that this causation has moral consequences for them.
A philosophical concept of justice is developed from Kant's legal and moral philosophy, because it is the only one that can grant human beings a right to have rights at all. This right springs from the human being's transcendental capacity for freedom, which is why the right to have rights belongs to everyone absolutely and at all times. At the same time, Kant's philosophy in turn culminates in the idea of freedom, in that justice only exists when all human beings can be equally free.
What does that mean in concrete terms? How could justice actually be realised? The realisation follows two basic directions. John Rawls and Stefan Gosepath, among others, deal extensively with procedural justice, which means finding just procedures that regulate social coexistence. The guiding principle here is above all a right of co-determination for all, so that in principle all citizens give themselves their own laws and thus act freely.
With regard to climate change, the second direction comes to the fore: distributive justice. Material goods must be distributed in such a way that, despite empirical differences, all people are recognised as moral subjects and can be free.
But are these philosophical conclusions not far too abstract to be applied to a problem as elusive and global as climate change? What, then, could climate justice be?
There are many principles of justice that claim to offer a just basis for climate problems, such as the polluter-pays principle, the ability-to-pay principle, or the grandfathering principle, under which the main emitters may continue to emit the most (this principle has guided international negotiations to date).
The aim of this work is to find out how climate problems can be solved so that universal human rights are established and secured for all people under all circumstances, enabling them to act freely and morally.
The conclusion of this work is that Kant's concept of justice could be enforced through a combination of the right to subsistence emissions, the Greenhouse Development Rights principle (GDR principle), and international statehood.
The right to subsistence emissions gives every person the right to consume enough energy, and to produce the associated emissions, to lead a life of human dignity. The GDR principle calculates each country's (or even each world citizen's) share of the global responsibility for climate protection by adding its historical emissions (climate debt) to its current financial capacity (capacity to take responsibility). The implementation of international bodies is defended because climate change is a global, cross-border problem whose effects and responsibilities have global dimensions.
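The GDR calculation summarised above, combining historical emissions (climate debt) with current financial capacity, can be illustrated with a toy burden-sharing index. The weighting scheme, function name and all figures below are hypothetical and only show the structure of such an index; they are not taken from the thesis or from the actual GDR framework.

```python
# Hypothetical sketch of a GDR-style burden-sharing index: a country's
# share of the global mitigation burden combines its share of historical
# emissions with its share of financial capacity. Weights and figures
# are invented for illustration.

def gdr_share(countries, w_responsibility=0.5, w_capacity=0.5):
    """Return each country's fraction of the global mitigation burden."""
    total_emissions = sum(c["historical_emissions"] for c in countries.values())
    total_capacity = sum(c["capacity"] for c in countries.values())
    shares = {}
    for name, c in countries.items():
        responsibility = c["historical_emissions"] / total_emissions
        capacity = c["capacity"] / total_capacity
        shares[name] = w_responsibility * responsibility + w_capacity * capacity
    return shares

# Hypothetical data: cumulative emissions (Gt CO2) and financial
# capacity (trillion USD above a development threshold).
countries = {
    "A": {"historical_emissions": 400.0, "capacity": 18.0},
    "B": {"historical_emissions": 200.0, "capacity": 12.0},
    "C": {"historical_emissions": 50.0, "capacity": 2.0},
}
shares = gdr_share(countries)   # fractions sum to 1 by construction
```

By construction, a country with both a large climate debt and a large capacity carries the largest share, which is the distributive intuition the abstract describes.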
A compelling argument for almost all climate protection measures is that they exhibit synergies with other societal areas, such as health and poverty reduction, in which the enforcement of our human rights is also still being fought for.
Is this proposed solution not completely utopian?
This proposal poses a great challenge to the international community, but it would be the only just solution to our climate problems. Furthermore, the work holds on to the Kantian principle of action that the perpetual striving towards ideal goals is the best realisation of them by human, fallible beings.
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival, adapting production as necessary for different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments. Malfunction in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the work that follows uses a common mathematical modeling framework based on Markov chain models to test hypotheses, predict system dynamics or elucidate network topology. Our work lies in the intersection between mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for validation and discovery of biological phenomena.
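The abstract does not specify the particular models used; as a generic illustration of the Markov-chain framework it mentions, a minimal birth-death model of mRNA production and degradation can be simulated with the Gillespie algorithm. All rate constants below are hypothetical.

```python
import random

# Generic textbook illustration (not a model from the dissertation):
# mRNA is transcribed at constant rate k_tx and degraded at rate
# k_deg per molecule; the copy number evolves as a Markov jump process.

def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=200.0, seed=1):
    """Simulate one trajectory and return the final mRNA copy number."""
    random.seed(seed)
    t, n = 0.0, 0
    while t < t_end:
        rate_total = k_tx + k_deg * n        # total event rate
        t += random.expovariate(rate_total)  # exponential waiting time
        if random.random() < k_tx / rate_total:
            n += 1                           # transcription event
        else:
            n -= 1                           # degradation event
    return n

# At steady state the mean copy number approaches k_tx / k_deg = 20.
final_n = gillespie_mrna()
```

Averaging such trajectories against measured mRNA distributions is one standard way a Markov-chain model is confronted with experimental data.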
Over the last decades, the world's population has been growing at an ever faster rate, resulting in increased urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, with an increasing tendency. The growth of cities results in a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn results in high surface runoff during high-intensity storms and causes the problem of accelerated soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it gives rise to the contamination of aquatic bodies, reduction of groundwater recharge and increased land degradation, and also results in damage to urban infrastructure, including drainage systems, houses and roads. Understanding the problem of water erosion in urban settings is essential for the sustainable planning and management of cities prone to water erosion. However, despite the large body of scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for dryland urban environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. The study used a multidisciplinary approach to assess the problem of water erosion. This included an in-depth literature review of current research approaches and challenges of urban erosion, a field survey method for quantifying the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews using semi-structured questionnaires to analyse stakeholders' perceptions of urban erosion.
The review revealed that around 64% of the studies reviewed were conducted in the developed world, and very little research was carried out in regions with extreme climates, including drylands. Furthermore, the applied methods for erosion quantification and monitoring do not account for typical urban features and are not specific to urban areas. The reviewed literature also lacked aspects aimed at addressing the issues of climate change and policies regarding erosion in cities. In a field study, the spatial extent and severity of erosion in the dryland city of Windhoek were quantified; the results show that nearly 56% of the city is affected by water erosion, with signs of accelerated erosion in the form of rills and gullies occurring mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) revealed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varied greatly across the city.
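The Universal Soil Loss Equation used as the conventional benchmark above estimates long-term average annual soil loss as the product of empirical factors, A = R · K · LS · C · P; none of these factors captures typical urban features such as sealed surfaces, which is consistent with the poor fit the study reports. A minimal sketch with hypothetical factor values:

```python
# Universal Soil Loss Equation (USLE): A = R * K * LS * C * P.
# Factor values below are hypothetical, for illustration only.

def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) from rainfall erosivity R, soil
    erodibility K, slope length-steepness LS, cover-management C and
    support-practice P factors."""
    return R * K * LS * C * P

# Example: a bare plot versus a vegetated plot on the same slope.
bare = usle_soil_loss(R=300, K=0.3, LS=1.2, C=0.9, P=1.0)
vegetated = usle_soil_loss(R=300, K=0.3, LS=1.2, C=0.05, P=1.0)
```

The multiplicative form makes the cover factor C dominate the contrast between bare and vegetated ground, mirroring the role of vegetation cover identified in the field survey.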
The study also analysed stakeholders' perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders using semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions of the causes and seriousness of erosion damage, and their attitudes towards responsibility for urban erosion. The results indicated little awareness of the erosion process as a phenomenon; awareness centred instead on erosion damage and the factors contributing to it. About 69% of the stakeholders rated erosion damage as moderate to very serious. However, there were notable disparities between the private householder and public authority groups. The study further found that the stakeholders have no clear understanding of their responsibilities for managing control measures and paying for damage. The private householders and local authority sectors each pointed to the other as responsible for paying for erosion damage and for putting up prevention measures. This reluctance to take responsibility could create a predicament for affected areas, specifically the informal settlements, where land management is not carried out by the local authority and land is not owned by the occupants.
The study concluded that combating urban erosion requires understanding the diverse dynamics through which urbanisation aggravates erosion at different scales. Accordingly, the study suggests an urgent need for urban-specific approaches that aim at: (a) incorporating the diverse socio-economic and environmental aspects influencing erosion, (b) scientifically improving the natural cycles that govern water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) making use of high-resolution satellite images to improve the methods adopted for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
Physical hydrogels are currently attracting increasing interest as cell substrates, since viscoelasticity, or stress relaxation, is an important parameter in mechanotransduction that has so far been neglected. In this work, multifunctional polyurethanes were designed that form physical hydrogels via a novel gelation mechanism. In water, the anionic polyurethanes spontaneously form aggregates, which are kept in solution by electrostatic repulsion. Rapid gelation can then be achieved by charge screening, which allows aggregation to proceed and a network to form. This can be accomplished by adding various acids or salts, so that both acidic (pH 4-5) and pH-neutral hydrogels can be obtained. Whereas conventional polyurethane-based hydrogels are usually produced from toxic isocyanate-containing prepolymers, the physical gelation mechanism described here is suitable for in situ applications in sensitive environments. Both the stiffness and the stress relaxation of the hydrogels can be tuned independently over a wide range. Moreover, the hydrogels exhibit excellent stress recovery.
The empirical record of the early 21st century shows more authoritarian regimes than were expected at the end of the 20th century. Current research on authoritarianism attempts to explain the persistence of this regime type with reference to political institutions; in doing so, political actors who do not belong to the centre of power are left out of consideration.
The present project examines the role and function of political opposition in authoritarian regimes. It proceeds from the assumption that a significant characteristic of authoritarian regimes manifests itself in the opposition. The actor-centred project belongs to qualitatively oriented political science; it links Juan Linz's concept of authoritarianism with classical approaches from research on opposition and makes these theories usable for current research on authoritarianism.
The elite-oriented typology of opposition developed here is applied to the example of Kenya in the period 1990-2005. The opposition groups are located within the institutional structure of authoritarian regimes, and their political action is analysed along the dimensions of action status, action conviction and action strategy. Taking into account historically grown regional and cultural specifics, it is assumed that general, cross-regional statements about opposition in authoritarian regimes can be made: no single type of opposition can bring about a change of rule on its own. Change or persistence of rule depends on the dominance of certain types of opposition within the web of opposition, together with the simultaneous weakness of other types.
Through the conceptual engagement with opposition and its empirical exploration, the project aims to make a substantial contribution to the necessary debate on authoritarian regimes in the 21st century.
In the debate on how to govern sustainable development, a central question concerns the interaction between knowledge about sustainability and policy developments. Discourses on what constitutes sustainable development conflict on some of the most basic issues, including the proper definitions, instruments and indicators of what should be 'developed' or 'sustained'. Whereas earlier research on the role of (scientific) knowledge in policy adopted a rationalist-positivist view of knowledge as the basis for 'evidence-based policy making', recent literature on knowledge creation and transfer processes has instead pointed towards aspects of knowledge-policy 'co-production' (Jasanoff 2004). It is highlighted that knowledge utilisation is not just a matter of the quality of the knowledge as such, but a question of which knowledge fits with the institutional context and dominant power structures. Just as knowledge supports and justifies certain policy, policy can produce and stabilise certain knowledge. Moreover, rather than viewing knowledge-policy interaction as a linear and uni-directional model, this conceptualisation assumes a policy process that is more anarchic and unpredictable, something Cohen, March and Olsen (1972) have famously termed the 'garbage-can model'.
The present dissertation focuses on the interplay between knowledge and policy in sustainability governance. It takes stock of the practice of 'Management by Objectives and Results' (MBOR: Lundqvist 2004), whereby policy actors define sustainable development goals (based on certain knowledge) and are expected to let these definitions guide policy developments as well as evaluate whether sustainability improves or not. As such a knowledge-policy instrument, Sustainability Indicators (SI:s) help both (subjectively) construct 'social meaning' about sustainability and (objectively) influence policy and measure its success. The different articles in this cumulative dissertation analyse the development, implementation and policy support (personal and institutional) of Sustainability Indicators as an instrument for MBOR in a variety of settings. More specifically, the articles centre on the question of how sustainability definitions and measurement tools on the one hand (knowledge), and policy instruments and political power structures on the other, are co-produced.
A first article examines the normative foundations of popular international SI:s and country rankings. Combining theoretical (constructivist) analysis with factor analysis, it analyses how the input variable structure of SI:s is related to different sustainability paradigms, producing a different output in terms of which countries (developed versus developing) are most highly ranked. Such a theoretical input-output analysis points towards a potential problem of SI:s becoming a sort of 'circular argumentation constructs'. The article thus highlights on a quantitative basis what others have noted qualitatively – that different definitions and interpretations of sustainability influence indicator output to the point of contradiction. The normative aspects of SI:s thereby do not merely concern the question of which indicators to use for what purposes, but also the more fundamental question of how normative and political bias is intrinsically part of the measurement instrument as such. The study argues that, although no indicator can be expected to tell the sustainability 'truth-out-there', a theoretical localization of indicators – and of the input variable structure – may help facilitate interpretation of SI output and the choice of which indicators to use for what (policy or academic) purpose.
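The circularity the article points to can be made concrete with a toy example: the same country data produce opposite rankings depending on how the input variables are weighted. The indicator names, weights and figures below are invented for illustration and are not from the dissertation.

```python
# Toy illustration of input-variable structure driving indicator output:
# the same hypothetical country data yield opposite rankings under an
# "ecological" versus an "economic" weighting scheme.

countries = {
    # (ecological score, economic welfare score), both on a 0-100 scale
    "industrialised": (30, 90),
    "developing":     (80, 25),
}

def index_score(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(s * w for s, w in zip(scores, weights))

eco_weighted = {c: index_score(s, (0.8, 0.2)) for c, s in countries.items()}
gdp_weighted = {c: index_score(s, (0.2, 0.8)) for c, s in countries.items()}
# Under the ecological weighting the developing country ranks first;
# under the economic weighting the industrialised country does.
```

The weighting scheme thus encodes a sustainability paradigm before any data are measured, which is the sense in which the index can become a circular argumentation construct.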
A second article examines the co-production of knowledge and policy in German sustainability governance. It focuses on the German sustainability strategy 'Perspektiven für Deutschland' (2002), a strategy that stands out both in an international comparison of national sustainability strategies and among German government policy strategies because of its relative stability over five consecutive government constellations, its rather high status and its increasingly coercive nature. The study analyses what impact the sustainability strategy has had on the policy process between 2002 and 2015, in terms of defining problems and shaping policy processes. Contrasting rationalist and constructivist perspectives on the role of knowledge in policy, two factors, namely the level of (scientific and political) consensus about policy goals and the 'contextual fit' of problem definitions, are found to explain how different aspects of the strategy are used. Moreover, the study argues that SI:s are part of a continuous process of 'structuring' in which indicator, user and context factors together help structure the sustainability challenge in such a way that it becomes more manageable for government policy.
A third article examines how 31 European countries have built supportive institutions of MBOR between 1992 and 2012. In particular during the 1990s and early 2000s, much hope was put into the institutionalisation of Environmental Policy Integration (EPI) as a way to overcome sectoral thinking in sustainability policy making and integrate issues of environmental sustainability into all government policy. However, despite high political backing (UN, EU, OECD), implementation of EPI seems to differ widely among countries. The study is a quantitative longitudinal cross-country comparison of how countries' 'EPI architectures' have developed over time. Moreover, it asks which 'EPI architectures' seem to be more effective in producing more 'stringent' sustainability policy.
The introduction of columnar in-memory databases, along with hardware evolution, has made the execution of transactional and analytical enterprise application workloads on a single system both feasible and viable. Yet, we argue that executing analytical aggregate queries directly on the transactional data can decrease the overall system performance. Despite the aggregation capabilities of columnar in-memory databases, the direct access to records of a materialized aggregate is always more efficient than aggregating on the fly. The traditional approach to materialized aggregates, however, introduces significant overhead in terms of materialized view selection, maintenance, and exploitation. When this overhead is handled by the application, it increases the application complexity, and can slow down the transactional throughput of inserts, updates, and deletes.
In this thesis, we motivate, propose, and evaluate the aggregate cache, a materialized aggregate engine in the main-delta architecture of a columnar in-memory database that provides efficient means to handle costly aggregate queries of enterprise applications. For our design, we leverage the specifics of the main-delta architecture that separates a table into a main and delta partition. The central concept is to only cache the partial aggregate query result as defined on the main partition of a table, because the main partition is relatively stable as records are only inserted into the delta partition. We contribute by proposing incremental aggregate maintenance and query compensation techniques for mixed workloads of enterprise applications. In addition, we introduce aggregate profit metrics that increase the likelihood of persisting the most profitable aggregates in the aggregate cache.
Query compensation and maintenance of materialized aggregates based on joins of multiple tables is expensive due to the partitioned tables in the main-delta architecture. Our analysis of enterprise applications has revealed several data schema and workload patterns. This includes the observation that transactional data is persisted in header and item tables, whereas in many cases, the insertion of related header and item records is executed in a single database transaction. We contribute by proposing an approach to transport these application object semantics to the database system and optimize the query processing using the aggregate cache by applying partition pruning and predicate pushdown techniques.
For the experimental evaluation, we propose the FICO benchmark, which is based on data from a productive ERP system with extracted mixed workloads. Our evaluation reveals that the aggregate cache can accelerate the execution of aggregate queries by up to a factor of 60, whereas the speedup depends strongly on the number of aggregated records in the main and delta partitions. In mixed workloads, the proposed aggregate maintenance and query compensation techniques perform up to an order of magnitude better than traditional materialized aggregate maintenance approaches. The introduced aggregate profit metrics outperform existing cost-based metrics by up to 20%. Lastly, the join pruning and predicate pushdown techniques can accelerate query execution in the aggregate cache in the presence of multiple partitioned tables by up to an order of magnitude.
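The central caching idea described above — a materialized partial aggregate over the stable main partition, compensated on the fly with the small insert-only delta partition — can be sketched as follows. This is an illustrative toy in Python, not the thesis implementation inside the database engine; the class, its names and the single SUM aggregate are assumptions for the sketch.

```python
# Toy sketch of the aggregate cache idea: cache the partial SUM over
# the stable main partition; answer queries by compensating the cached
# value with the small delta partition that receives all inserts.

class AggregateCache:
    def __init__(self, main_rows):
        self.main = main_rows      # stable main partition: (key, value) rows
        self.delta = []            # insert-only delta partition
        self._cache = {}           # cached partial SUM per group key

    def insert(self, key, value):
        self.delta.append((key, value))   # inserts only touch the delta

    def sum_by_key(self, key):
        if key not in self._cache:        # materialise main aggregate once
            self._cache[key] = sum(v for k, v in self.main if k == key)
        # compensate with the (small) delta partition on the fly
        return self._cache[key] + sum(v for k, v in self.delta if k == key)

    def merge_delta(self):
        """Delta merge: move delta rows into main, invalidate the cache."""
        self.main += self.delta
        self.delta = []
        self._cache = {}

table = AggregateCache([("2016", 10), ("2016", 5), ("2015", 7)])
table.insert("2016", 3)
total = table.sum_by_key("2016")   # 15 cached + 3 compensated = 18
```

Because the main partition only changes at a delta merge, the cached partial aggregate stays valid between merges; only the delta scan grows with new inserts, which is why the speedup depends on the delta size.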
In the first part of my work I have investigated the ageing properties of the first passage time distributions in a one-dimensional subdiffusive continuous time random walk with power law distributed waiting times of the form $\psi(\tau) \sim \tau^{-1-\alpha}$ with $0<\alpha<1$ and $1<\alpha<2$. The age or ageing time $t_a$ is the time span from the start of the stochastic process to the start of the observation of this process (at $t=0$). I have calculated the results for a single target and two targets, also including the biased case, where the walker is driven towards the boundary by a constant force. I have furthermore refined the previously derived results for the non-ageing case and investigated the changes that occur when the walk is performed in a discrete quenched energy landscape, where the waiting times are fixed for every site. The results include the exact Laplace space densities and infinite (converging) series as exact results in the time space. The main results are the dominating long time power law behavior regimes, which depend on the ageing time. For the case of unbiased subdiffusion ($\alpha < 1$) in the presence of one target, I find three different dominant terms for ranges of $t$ separated by $t_a$ and another crossover time $t^{\star}$, which depends on $t_a$ as well as on the anomalous exponent $\alpha$ and the anomalous diffusion coefficient $K_{\alpha}$. In all three regimes ($t \ll t_a$, $t_a \ll t \ll t^{\star}$, $t \gg t^{\star}$) one finds power law decay with exponents depending on $\alpha$. The middle regime only exists for $t_a \ll t^{\star}$. The dominant terms in the first two regimes (ageing regimes) come from the probability distribution of the forward waiting time, the time one has to wait for the stochastic process to make the first step during the observation. 
When the observation time is larger than the second crossover time $t^{\star}$, the first passage time density does not show ageing and the non-ageing first passage time dominates. The power law exponents in the respective regimes are $-\alpha$ for strong ageing, $-1-\alpha$ in the intermediate regime, and $-1-\alpha/2$ in the final non-ageing regime. A similar split into three regimes can be found for $1<\alpha<2$, only with a different second crossover time $t^{\star}$. In this regime the diffusion is normal but also age-dependent. For the diffusion in quenched energy landscapes one cannot detect ageing. The first passage time density shows a quenched power law $\sim t^{-(1+2\alpha)/(1+\alpha)}$. For diffusion between two target sites and the biased diffusion towards a target only two scaling regimes emerge, separated by the ageing time. In the ageing case $t \ll t_a$ the forward waiting time is again dominant with power law exponent $-\alpha$, while the non-ageing power law $-1-\alpha$ is found for all times $t \gg t_a$. An intermediate regime does not exist. The bias and the confinement have similar effects on the first passage time density. For quenched diffusion, the biased case is interesting, as the bias reduces correlations due to revisiting of the same waiting time. As a result, CTRW like behavior is observed, including ageing. Extensive computer simulations support my findings.
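The role of the forward waiting time described above can be illustrated with a small simulation sketch. As simplifying assumptions (not the thesis model): the walker is taken to remain at its initial position during the ageing period, so that ageing enters solely through the forward waiting time, and a reflecting wall bounds the walk, loosely mimicking the confined two-target setting; all parameters are illustrative.

```python
import random

# Sketch of an aged first passage process with Pareto waiting times,
# psi(tau) ~ tau^(-1-alpha), alpha < 1. Ageing enters only via the
# forward waiting time (the remainder of the waiting period that
# straddles the start of observation), the quantity identified above
# as dominating the ageing regimes.

def forward_waiting_time(alpha, t_a, rng):
    """Remaining part of the waiting period interrupted at t = t_a."""
    t = 0.0
    while True:
        wait = rng.paretovariate(alpha)   # heavy-tailed waiting time
        if t + wait >= t_a:
            return t + wait - t_a
        t += wait

def aged_first_passage(alpha=0.5, t_a=100.0, start=5, target=0,
                       wall=10, seed=None):
    rng = random.Random(seed)
    fpt = forward_waiting_time(alpha, t_a, rng)
    pos = start
    while True:
        pos = min(pos + rng.choice((-1, 1)), wall)  # lazy reflection at wall
        if pos == target:
            return fpt
        fpt += rng.paretovariate(alpha)

# Sample of aged first passage times (illustrative parameters).
times = [aged_first_passage(seed=s) for s in range(200)]
```

Increasing `t_a` stretches the forward waiting time and hence shifts weight into the ageing regime $t \ll t_a$, which is the qualitative effect derived analytically in the thesis.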
The second part of my research was done on the subject of ageing Scher-Montroll transport, which is in part closely related to the first passage densities. It describes the electrical current in an amorphous material. I have investigated the effect of the width of a given initial distribution of charge carriers on the transport coefficients, as well as the ageing effect on the emerging power law regimes and on a constant initial regime. While a spread-out initial distribution has little impact on the Scher-Montroll current, ageing alters the behavior drastically. Instead of the two classical power laws one finds four current regimes, up to three of which can appear in a single experiment. The dominant power laws differ for $t \ll t_a, t_c$, $t_a \ll t \ll t_c$, $t_c \ll t \ll t_a$, and $t \gg t_a,t_c$. Here, $t_c$ is the crossover time of the non-aged Scher-Montroll current. For strongly aged systems one can observe a constant current in the first regime, while the others are dominated by decaying power laws with exponents $\alpha -1$, $-\alpha$, and $-1-\alpha$. The ageing regimes are the first and third, while the classical regimes are the second and fourth. I have verified the theory using numerical integration of the exact integrals and applied the new results to experimental data.
In the third part I considered a single file of subdiffusing particles in an energy landscape. Every occupied site of the landscape acts as a boundary: a particle attempting to jump to an occupied site is immediately reflected back to its previous site. I have analysed the effects of single-file diffusion in a quenched landscape compared to an annealed landscape, and I have related these results to the number of steps and related quantities. The diffusion changes from ultraslow logarithmic diffusion in the annealed or CTRW case to subdiffusion with an anomalous exponent $\alpha/(1+\alpha)$ in the quenched landscape. The behavior is caused by the forward waiting time, which changes drastically from the quenched to the annealed case. Single-file effects in the quenched landscape are even more complicated to treat in the ensemble average, since the diffusion in individual landscapes shows extremely diverse behavior. Extensive simulations support my theoretical arguments, which consider mainly the long time evolution of the mean square displacement of a bulk particle.
A majority of studies have documented reduced ankle muscle activity, particularly of the peroneus longus muscle (PL), in patients with functional ankle instability (FI). It is well established that foot orthoses as well as sensorimotor training have a positive effect on ankle muscle activity in healthy individuals and in those with lower limb overuse injuries or flat arched feet (reduced reaction time through sensorimotor exercises; increased ankle muscle amplitude through orthoses use). However, the acute and long-term influence of foot orthoses on ankle muscle activity in individuals with FI is unknown.
AIMS: The present thesis addressed (1a) acute and (1b) long-term effects of foot orthoses compared to sensorimotor training on ankle muscle activity in patients with FI. (2) Further, it was investigated whether the orthosis intervention group demonstrates higher ankle muscle activity with additional short-term use of a measurement in-shoe orthosis (compared to short-term use of the "shoe only") after the intervention. (3) As a prerequisite, it was evaluated whether ankle muscle activity can be tested reliably and (4) whether this differs between healthy individuals and those with FI.
METHODS: Three intervention groups (orthosis group [OG], sensorimotor training group [SMTG], control group [CG]), each consisting of both healthy individuals and those with FI, underwent one longitudinal investigation (randomised controlled trial). Throughout 6 weeks of intervention, OG wore an in-shoe orthosis with a specific "PL stimulation module", whereas SMTG conducted home-based exercises. CG served to measure test-retest reliability of ankle muscle activity (PL, M. tibialis anterior [TA] and M. gastrocnemius medialis [GM]). Pre- and post-intervention, ankle muscle activity (EMG amplitude) was recorded during "normal" unperturbed (NW) and perturbed walking (PW) on a split-belt treadmill (stimulus 200 ms after initial heel contact [IC]) as well as during side cutting (SC), each while wearing "shoes only" and additional measurement in-shoe orthoses (randomised order). Normalised RMS values (relative to 100% MVC; mean±SD) were calculated pre- (100-50 ms) and post-IC (200-400 ms).
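The EMG amplitude measure described in the methods — a windowed root mean square normalised to the maximum voluntary contraction (MVC) — can be sketched as follows; all signal values below are hypothetical.

```python
import math

# Sketch of the normalised EMG amplitude measure: RMS of the signal in
# a time window, expressed as a percentage of the RMS recorded during a
# maximum voluntary contraction (MVC). Signal values are hypothetical.

def rms(signal):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def normalised_rms(window, mvc_window):
    """Windowed EMG amplitude as a percentage of MVC."""
    return 100.0 * rms(window) / rms(mvc_window)

emg_pre_ic = [0.12, -0.15, 0.10, -0.08, 0.14]   # pre-activity window (mV)
emg_mvc    = [0.60, -0.55, 0.58, -0.62, 0.59]   # MVC reference (mV)
activity_pct = normalised_rms(emg_pre_ic, emg_mvc)
```

Normalising to MVC makes amplitudes comparable across muscles, sessions and participants, which is why pre- and reflex-activity are reported in % MVC above.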
RESULTS: (3) Test-retest reliability showed a wide range of values in both healthy individuals and those with FI. (4) Compared to healthy individuals, patients with FI demonstrated lower PL pre-activity during SC, but higher PL pre-activity for NW and PW. (1a) Acute orthosis use did not influence ankle muscle activity. (1b) For most conditions, sensorimotor training was more effective in individuals with FI than the long-term orthotic intervention (increased: PL and GM pre-activity and TA reflex activity for NW; PL pre-activity and TA, PL and GM reflex activity for SC; PL reflex activity for PW). However, prolonged orthosis use was more beneficial in terms of an increase in GM pre-activity during SC. For some conditions, the long-term orthosis intervention was as effective as sensorimotor training for individuals with FI (increased: PL pre-activity for PW, TA pre-activity for SC, PL and GM reflex activity for NW). Prolonged orthosis use was also advantageous in healthy individuals (increased: PL and GM pre-activity for NW and PW, PL pre-activity for SC, TA and PL reflex activity for NW, PL and GM reflex activity for PW). (2) The orthosis intervention group did not present higher ankle muscle activity with the additional short-term use of a measurement in-shoe orthosis at re-test after the intervention.
CONCLUSION: The high variation in reproducibility reflects the physiological variability of muscle activity during gait and is therefore deemed acceptable. The main findings confirm the presence of sensorimotor long-term effects of specific foot orthoses in healthy individuals (primary preventive effect) and in those with FI (therapeutic effect). The neuromuscular compensatory feedback as well as anticipatory feedforward adaptation mechanisms to prolonged orthosis use, specifically of the PL muscle, underpin the key role of the PL in providing essential dynamic ankle joint stability. Owing to its advantages over sensorimotor training (positive subjective feedback in terms of comfort, time and cost effectiveness), long-term foot orthosis use can be recommended as an applicable therapy alternative in the treatment of FI. The long-term effect of foot orthoses in a population with FI must be validated in larger samples with longer follow-up periods to substantiate the generalisability of the present outcomes.
The impact of soil microbiota on plant species performance and diversity in semi-natural grasslands
(2016)
Over the past decades, rapid and constant advances have brought GNSS technology to the point where it can monitor transient ground motions with mm to cm accuracy in real time. As a result, the potential of real-time GNSS for natural hazard prediction and early warning has been explored intensively in recent years, e.g., for monitoring landslides and volcanic eruptions. Of particular note, in contrast to traditional seismic instruments, GNSS neither saturates nor tilts when retrieving co-seismic displacements, which makes it especially valuable for earthquake and earthquake-induced tsunami early warning. In this thesis, we focus on the application of real-time GNSS to fast seismic source inversion and tsunami early warning.
Firstly, we present a new approach to obtain precise co-seismic displacements using cost-effective single-frequency receivers. As is well known, the main obstacle to high-precision positioning with single-frequency GPS receivers is the ionospheric delay. Considering that the ionospheric delay changes almost linearly over a few minutes, we constructed a linear model for each satellite to predict it. The effectiveness of this method has been validated with an outdoor experiment and with data from the 2011 Tohoku event, confirming the feasibility of using dense GPS networks for geo-hazard early warning at an affordable cost.
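The per-satellite linear prediction can be sketched as a least-squares straight-line fit over a short window of past delay estimates, extrapolated to the next epoch. This is only an illustrative sketch of the idea, not the thesis's implementation; the function name and the numbers are assumptions.

```python
import numpy as np

def predict_iono_delay(epochs, delays, t_future):
    """Fit a straight line to recent ionospheric delay estimates of a single
    satellite and extrapolate to a future epoch.  Over a few minutes the
    delay varies almost linearly, so a degree-1 fit is sufficient.

    epochs   -- times of past delay estimates (s)
    delays   -- ionospheric delays at those epochs (m)
    t_future -- epoch to predict for (s)
    """
    slope, intercept = np.polyfit(epochs, delays, deg=1)
    return slope * t_future + intercept

# Example: delay estimates every 30 s that drift by 1 cm per second.
epochs = np.array([0.0, 30.0, 60.0, 90.0])
delays = 2.0 + 0.01 * epochs
predicted = predict_iono_delay(epochs, delays, t_future=120.0)
```

In practice the fit window would slide forward with time so that the linearity assumption holds only over the most recent few minutes of data.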
Secondly, we extended temporal point positioning from GPS-only to GPS/GLONASS and assessed the potential benefits of multi-GNSS for co-seismic displacement determination. Outdoor experiments reveal that when observations are conducted in an adverse environment, adding a couple of GLONASS satellites can provide more reliable results. The case study of the 2015 Illapel Mw 8.3 earthquake shows that the biases between co-seismic displacements derived from GPS-only and GPS/GLONASS vary from station to station and can reach 2 cm in the horizontal and almost 3 cm in the vertical direction. Furthermore, the slip distributions inverted from GPS/GLONASS co-seismic displacements using a layered crust structure on a curved plane are shallower and larger for the Illapel event.
Thirdly, we tested different inversion tools and discussed the uncertainties of using real-time GNSS for tsunami early warning. Specifically, a centroid moment tensor inversion, a uniform slip inversion using a single Okada fault, and a distributed slip inversion in a layered crust on a curved plane were conducted using co-seismic displacements recorded during the 2014 Pisagua earthquake. While the inversions yield similar magnitudes and rupture centers, there are significant differences in depth, strike, dip and rake angles, which lead to different tsunami propagation scenarios. Even so, the resulting tsunami forecasts along the Chilean coast are similar for all three models.
Finally, given that the positioning performance of BDS is now equivalent to that of GPS in the Asia-Pacific region and that the Manila subduction zone has been identified as a zone of potential tsunami hazard, we suggest a conceptual BDS/GPS network for tsunami early warning in the South China Sea. Numerical simulations with two earthquakes (Mw 8.0 and Mw 7.5) and the induced tsunamis demonstrate the viability of this network. In addition, the advantage of BDS/GPS over a single GNSS system for source inversion grows with decreasing earthquake magnitude.
The high energy intake from fats is a major factor in the development of obesity, which has led to worldwide efforts to reduce fat intake. However, despite continued refinement, fat-reduced foods do not match the palatability of their originals. The traditional view that the attractiveness of fats is determined solely by texture, odor, appearance and post-ingestive effects is now being complemented by the concept of gustatory perception. In rodents, it has been shown that lipids are detected independently of the aforementioned properties, that fatty acids released by lingual lipases act as gustatory stimuli, and that fatty acid sensors are expressed in taste cells. The data for humans, however, proved to be very limited; the aim of the present work was therefore to investigate the molecular and histological prerequisites for gustatory fat perception in humans.
First, human taste tissue was examined for the expression of fatty acid sensors by RT-PCR and immunohistochemical methods, and the expressing cells were characterized and quantified in co-staining experiments. Expression of fatty-acid-sensitive receptors was demonstrated whose agonists cover the entire spectrum of short- to long-chain fatty acids (GPR43, GPR84, GPR120, CD36, KCNA5). Unambiguous detection of the protein was achieved for GPR120, a receptor specialized for long-chain fatty acids, in type I and type III taste cells of the circumvallate papillae. About 85% of these GPR120-expressing cells contained none of the selected receptors for the taste qualities sweet (TAS1R2/3), umami (TAS1R1/3) or bitter (TAS2R38). Thus, human taste papillae contain not only at least one sensor but possibly also a specific, fatty-acid-sensitive cell population. Further RT-PCR experiments and in situ hybridization studies were carried out to clarify whether lipases exist in the von Ebner salivary glands (VEG) that can release free fatty acids from triglycerides as gustatory stimuli. No expression of lipase F (LIPF), the lipase found in rodents, was detected, but the closely related lipases K, M and N were expressed in the serous cells of the VEG. In silico analyses of the secondary and tertiary structures showed high similarity to LIPF, but also revealed differences in the binding pockets of the enzymes, pointing to a differentiated substrate spectrum. The presence of a specific signal peptide makes secretion of these lipases into the saliva bathing the taste pores likely, and thus also the provision of fatty acids as stimuli for fatty acid sensors.
The transmission of the signal evoked by these stimuli from taste cells to gustatory nerve fibers via P2X receptor multimers was examined in short-term preference tests in the mouse as a model organism, following prior intervention with a P2X3/P2X2/3-specific antagonist. Neither the perception of a fatty acid solution nor that of a sugar-containing control solution was impaired, whereas the perception of a bitter solution was reduced. Based on these results, an involvement of the P2X3 homomer or the P2X2/3 heteromer is unlikely, but an involvement of the P2X2 homomer, and hence of gustatory nerve fibers, cannot be excluded.
The results of this work indicate that basic prerequisites for gustatory fat (fatty acid) perception are fulfilled in humans and contribute to the understanding of sensory fat perception and the regulation of fat intake. Knowledge of these regulatory mechanisms provides a basis for elucidating the causes of obesity and associated diseases, and thus for combating them.
School-based career-choice preparation fails to prepare adolescents for choosing a training company. It addresses only the choice of occupation, although the decision for in-company vocational training always also presupposes the choice of a training company. This choice of company is central to satisfaction with, and success in, vocational training. Given the mismatch in the apprenticeship market, the topic is highly relevant.
For what reasons do adolescents choose a training company? The present work investigates this question prospectively, in narrative individual interviews with 52 students in grades 9 and 10 at different school types, and retrospectively, in four multiply embedded multiple-case studies with 17 apprentices from four companies and eight occupations, in each case in Brandenburg and Berlin. Theoretically, the work approaches the topic via psychological, sociological, economic and interdisciplinary theories of career choice, the operational model of company choice, and the model of training choice as a decision process newly developed here, which unites the two choice components of company and occupation.
Three central findings characterize the result of the present work:
1. Adolescents do engage with the choice of training company and consider above all emotional reasons. These vary from person to person.
2. The most important reasons for choosing a training company are the personal impression, the substantive soundness of the training, the location, the working atmosphere, contacts within the company, future prospects and pay.
3. Adolescents with an intermediate school-leaving certificate pay particular attention to the prospects after the end of training.
The few other studies on the decision for a training company do not address the most frequently cited reason, the personal impression. They also reach inconsistent conclusions as to which group of people the reason of future prospects is particularly relevant for. Additional studies are needed to verify these results, to examine their statistical distribution in larger population groups, and to develop a robust, holistic theory of training choice.
The horse is a fascinating animal, symbolizing power, beauty, strength and grace. Among all domesticated animal species, the horse has had the largest impact on the course of human history, owing to its importance for warfare and transportation. Studying the process of horse domestication contributes to our knowledge of the history of horses and even of our own species.
Research based on molecular methods has increasingly focused on the genetic basis of horse domestication. Mitochondrial DNA (mtDNA) analyses of modern and ancient horses have detected immense maternal diversity, probably because many mares contributed to the domestic population. However, mtDNA does not provide an informative phylogeographic structure. In contrast, Y chromosome analyses have displayed almost complete uniformity in modern stallions but relatively high diversity in a few ancient horses. Further molecular markers that are well suited to inferring the domestication history of horses, or the genetic and phenotypic changes during this process, are loci associated with phenotypic traits.
This doctoral thesis consists of three parts, for which I analyzed various single nucleotide polymorphisms (SNPs) associated with coat color, locomotion or Y chromosomal variation in horses. These SNPs were genotyped in 350 ancient horses from the Chalcolithic (5,000 BC) to the Middle Ages (11th century). The samples range from China to the Iberian Peninsula and Iceland. Applying multiplexed next-generation sequencing (NGS), I sequenced short amplicons covering the relevant positions: i) eight coat-color-associated mutations in six genes to deduce the coat color phenotype; ii) the so-called ’Gait-keeper’ SNP in the DMRT3 gene to screen for the ability to amble; iii) 16 SNPs previously detected in ancient horses to infer the corresponding Y chromosome haplotype. Based on these data, I investigated the occurrence and frequencies of the alleles underlying the respective phenotypes, as well as Y chromosome haplotypes, at different times and in different regions. In addition, selection coefficients for several Y chromosome lineages and phenotypes were estimated.
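The abstract does not specify how the selection coefficients were estimated. As one textbook-style illustration only (not the thesis's method), under deterministic genic (additive) selection the allele frequency obeys logit(p_t) = logit(p_0) + s·t, so a point estimate of s can be read off from two frequencies t generations apart; the names and numbers below are hypothetical.

```python
import math

def selection_coefficient(p0, pt, generations):
    """Point estimate of the selection coefficient s under deterministic
    genic (additive) selection, using logit(p_t) = logit(p_0) + s * t."""
    logit = lambda p: math.log(p / (1.0 - p))
    return (logit(pt) - logit(p0)) / generations

# Hypothetical example: an allele rising from 10% to 30% over 50 generations.
s = selection_coefficient(0.1, 0.3, 50)  # positive s -> allele was favoured
```

Real ancient-DNA analyses additionally have to account for genetic drift and sampling noise, which is why likelihood-based estimators over full time series are usually preferred.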
Concerning coat color differences in ancient horses, my work constitutes the most comprehensive study to date. I detected an increase in chestnut horses in the Middle Ages as well as differential selection for spotted and solid phenotypes over time, which reflects changing human preferences.
With regard to ambling horses, the corresponding allele was present in medieval English and Icelandic horses. Based on these results, I argue that Norse settlers, who frequently invaded parts of Britain, brought ambling individuals from the British Isles to Iceland, so the British Isles can be regarded as the origin of this trait. Moreover, these settlers appear to have selected for ambling in Icelandic horses.
Regarding the third trait, paternal diversity, my findings represent the largest ancient dataset of Y chromosome variation in non-humans. I demonstrated the existence of several Y chromosome haplotypes in early domestic horses. The decline of Y chromosome variation coincides first with the movement of nomadic peoples from the Eurasian steppes and later with different breeding practices in the Roman period.
In conclusion, positive selection was estimated for several phenotypes and lineages in different regions or times, which indicates that these were preferred by humans. Furthermore, I was able to infer the distribution and dispersal of horses in association with human movements and actions, thereby gaining a better understanding of the influence of people on the changing appearance and genetic diversity of domestic horses. My results also emphasize the close relationship between ancient genetics and archaeology or history, and show that well-founded conclusions can be reached only when they are combined.