The aim of this thesis is to investigate structures in the Earth's outer core and to draw conclusions about the resulting consequences for geodynamic models. The first part addresses the core-phase caustic B by means of a cumulative amplitude-distance curve. To this end, the absolute amplitudes of the PKP phases are determined in the distance range from 142° to 147° and compared with the amplitudes of synthetic seismograms. The data are broadband recordings of the German Regional Seismic Network (GRSN) and the Gräfenberg array (GRF). The waveforms used are filtered in the WWSSN-SP frequency band. The database consists of four deep-focus earthquakes of the New Hebrides (Vanuatu) subduction zone and four nuclear explosions conducted on the Mururoa and Fangataufa atolls in the South Pacific. Seen from the regional network, both regions lie at an epicentral distance of about 145°. The use of a homogeneously instrumented network of detectors and the application of station and magnitude corrections remove most of the scatter in the amplitude values, also in comparison with studies of long-period amplitudes near the core-phase caustic (Häge, 1981). A further reason for the low scatter is the exclusive use of events with short, impulsive source time functions. Only this low scatter of the amplitude values makes an interpretation of the data possible. The theoretical amplitude curves of the investigated Earth models show a similar shape in the region of caustic B. All calculations use a single attenuation model for the P and S waves, composed of the Q values of the models CIT112 and PREM. The amplitudes computed with this Q model lie slightly above the measured amplitudes. This need not be taken into account, since the cumulative amplitude-distance curve is evaluated by the position of its maximum on the distance axis; consequently, no alternative Q model is developed. With respect to the position of the caustic maximum, the investigated Earth models fall into two categories. One group consists of the models IASP91 and 1066B, whose maxima lie at 144.6° and 144.7°. The second group comprises AK135, PREM and SP6, with maxima at 145.1° and 145.2° (SP6). The measured amplitude curve has its maximum at 145°. All distances refer to a source depth of 200 km; the caustic distance for a surface focus is 0.454° larger than the values given. The maxima of the models AK135 and PREM thus lie only 0.1° away from that of the measured cumulative amplitude curve. The construction of a dedicated model is therefore dispensed with, since it would yield only a marginally improved amplitude curve. The result of this investigation is a measured cumulative amplitude-distance curve for caustic B. The curve fixes the position of caustic B for short-period data to within ±0.15° and thus determines which Earth models are particularly suited to describing the amplitudes in the distance range of caustic B. The Earth models AK135 and PREM, complemented by a single Q model, reproduce the course of the amplitudes best.
Since the amplitude curves of both models lie close together, they are to be regarded as equivalent. In the second part of the thesis, the structure of the transition zone to the inner core is investigated by means of the spectral decay of the phase PKP(BC)diff at point C of the travel-time curve. The physical process of diffraction is responsible for the strong decrease of the amplitudes of this phase. Diffraction affects the decay of different frequency components of the seismic signal in different ways; interpreting this behaviour requires the calculation of decay spectra. The attenuation of the PKP(BC)diff signal is determined for eight frequencies between 6.4 s and 1.25 Hz and displayed as a spectrum. The shape of the decay spectrum is characteristic of the velocity structure directly above the inner core boundary (ICB). The earthquakes whose core phases are recorded at the regional network as diffracted core phases BCdiff lie at distances beyond 150°. The earthquake sources of the Tonga-Fiji subduction zone, whose broadband recordings are used, are located at this distance. The evaluation of uncorrected waveforms yields decay spectra that cannot be reconciled with plausible Earth models; the data are therefore subjected to a spectral station correction determined specifically for this purpose. The analysis begins with a test of known Earth models with different velocity structures above the ICB. The investigated models include PREM, IASP91, AK135Q, PREM2, SP6, OICM2 and a variant of PREM. The investigation shows that models with a reduced velocity gradient above the ICB agree better with the measured data than models without this transition zone. To verify this hypothesis, an Earth model without a reduced gradient above the ICB (PREM) is supplemented by a series of different velocity profiles in this region, and their synthetic seismograms are computed. The result is two variants of PREM whose frequency analysis agrees well with the data. The decay spectrum of the Earth model PD47, which has a negative gradient in a 380 km thick layer, closely resembles the measured spectra. Nevertheless, it cannot be regarded as a realistic model, since its point C lies at too great a distance; moreover, the resulting too-short differential travel time between PKP(AB) and PKP(DF), or PKIKP, would have to be compensated by a larger change of the velocity structure in the inner core. The model PD27a, which does not have these drawbacks, is therefore favoured. PD27a has a 150 km thick layer of constant velocity above the ICB. This type of velocity profile is consistent with the geodynamic model according to which light elements are enriched above the ICB, which is regarded as the driver of convection in the outer core.
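As a worked example of the depth correction quoted above (restating only numbers given in the abstract):

```latex
\Delta_{\text{caustic}}^{h=0} \;=\; \Delta_{\text{caustic}}^{h=200\,\text{km}} + 0.454^{\circ},
\qquad \text{e.g. AK135: } 145.1^{\circ} + 0.454^{\circ} \approx 145.6^{\circ},
\qquad \text{measured: } 145.0^{\circ} + 0.454^{\circ} \approx 145.5^{\circ}.
```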
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional arts world and originally anti-capitalist mindset of the movement makes it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework in order to find a balance between culture and business while ensuring that the cultural mission remains the top priority.
Flood polders are part of the flood risk management strategy for many lowland rivers. They are used for the controlled storage of flood water so as to lower peak discharges of large floods. Consequently, the flood hazard in adjacent and downstream river reaches is decreased in the case of flood polder utilisation. Flood polders are usually dry storage reservoirs that are typically characterised by agricultural activities or other land use of low economic and ecological vulnerability. The objective of this thesis is to analyse hydraulic, environmental and economic impacts of the utilisation of flood polders in order to draw conclusions for their management. For this purpose, hydrodynamic and water quality modelling as well as an economic vulnerability assessment are employed in two study areas on the Middle Elbe River in Germany. One study area is an existing flood polder system on the tributary Havel, which was put into operation during the Elbe flood in summer 2002. The second study area is a planned flood polder, which is currently in the early planning stages. Furthermore, numerical models of different spatial dimensionality, ranging from zero- to two-dimensional, are applied in order to evaluate their suitability for hydrodynamic and water quality simulations of flood polders in regard to performance and modelling effort. The thesis concludes with overall recommendations on the management of flood polders, including operational schemes and land use. In view of future changes in flood frequency and further increasing values of private and public assets in flood-prone areas, flood polders may be effective and flexible technical flood protection measures that contribute to a successful flood risk management for large lowland rivers.
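To illustrate the zero-dimensional end of the model hierarchy mentioned above, the following is a minimal sketch of level-pool routing for a flood polder; all variable names and parameter values are hypothetical and not taken from the thesis:

```python
import numpy as np

# Minimal 0D (level-pool) routing sketch for a flood polder:
# storage changes with inflow minus outflow, dS/dt = Q_in(t) - Q_out(S).
dt = 3600.0                                      # time step [s]
t = np.arange(0.0, 96 * 3600.0, dt)              # four simulated days
q_in = 400.0 * np.exp(-((t - 36e3) / 2e4) ** 2)  # synthetic inflow wave [m^3/s]

area = 2.0e7                        # polder surface area [m^2]
weir_coeff, weir_crest = 50.0, 0.5  # outflow rating parameters (hypothetical)

stage = np.zeros_like(t)            # water level in the polder [m]
for i in range(1, len(t)):
    head = max(stage[i - 1] - weir_crest, 0.0)
    q_out = weir_coeff * head ** 1.5             # free overflow over a weir
    stage[i] = stage[i - 1] + dt * (q_in[i - 1] - q_out) / area

print(f"peak polder stage: {stage.max():.2f} m")
```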
Fluids in the Earth's crust can move by creating and flowing through fractures, in a process called 'hydraulic fracturing'. The tip-line of such fluid-filled fractures grows at locations where stress is larger than the strength of the rock. Where the tip stress vanishes, the fracture closes and the fluid-front retreats. If stress gradients exist on the fracture's walls, induced by fluid/rock density contrasts or topographic stresses, the fracture develops an asymmetric shape and growth, allowing the contained batch of fluid to propagate through the crust.
The state-of-the-art analytical and numerical methods to simulate fluid-filled fracture propagation are two-dimensional (2D). In this work I extend these to three dimensions (3D). In my analytical method, I approximate the propagating 3D fracture as a penny-shaped crack that is influenced by both an internal pressure and stress gradients. In addition, I develop a numerical method to model propagation where curved fractures can be simulated as a mesh of triangular dislocations, with the displacement of faces computed using the displacement discontinuity method. I devise a rapid technique to approximate stress intensity and use this to calculate the advance of the tip-line. My 3D models can be applied to arbitrary stresses, topographic and crack shapes, whilst retaining short computation times.
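As a minimal illustration of the analytical starting point, the following sketch evaluates the textbook stress-intensity formula for a penny-shaped crack under uniform internal net pressure, K_I = 2p√(a/π), and compares it against a fracture toughness to decide whether the tip-line advances; all numbers are hypothetical, and this is not the thesis's full model:

```python
import numpy as np

def k1_penny(pressure, radius):
    """Mode-I stress intensity of a penny-shaped crack of given radius
    under uniform internal net pressure: K_I = 2 * p * sqrt(a / pi)."""
    return 2.0 * pressure * np.sqrt(radius / np.pi)

# Hypothetical values: 1 MPa fluid overpressure, 100 m crack radius,
# and a rock fracture toughness of 1 MPa*sqrt(m).
p, a, k_c = 1.0e6, 100.0, 1.0e6
k1 = k1_penny(p, a)
print(f"K_I = {k1 / 1e6:.1f} MPa*sqrt(m):",
      "tip-line advances" if k1 > k_c else "crack is stable")
```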
I cross-validate my analytical and numerical methods and apply them to various natural and man-made settings, to gain additional insights into the movements of hydraulic fractures such as magmatic dikes and fluid injections in rock. In particular, I calculate the 'volumetric tipping point', which, once exceeded, allows a fluid-filled fracture to propagate in a 'self-sustaining' manner. I discuss the implications this has for hydro-fracturing in industrial operations. I also present two studies combining physical models that define fluid-filled fracture trajectories and Bayesian statistical techniques. In these studies I show that the stress history of the volcanic edifice defines the location of eruptive vents at volcanoes. Retrieval of the ratio of topographic to remote stresses allows for forecasting of probable future vent locations. Finally, I address the mechanics of 3D propagating dykes and sills in volcanic regions. I focus on Sierra Negra volcano in the Galápagos islands, where in 2018, a large sill propagated with an extremely curved trajectory. Using a 3D analysis, I find that shallow horizontal intrusions are highly sensitive to topographic and buoyancy stress gradients, as well as the effects of the free surface.
The Earth’s magnetic field (EMF) is generated by convection in the electrically conducting, liquid, iron-rich outer core, modified by the Earth’s rotation. A drastic manifestation of the dynamics of this fluid body is the occurrence of geomagnetic field reversals in the Earth’s history, but also of geomagnetic excursions, which are more frequent features of otherwise stable polarity chrons but are often poorly constrained in the geological record. To better understand the origin of the field, we need to know how the field has varied on different geological timescales. This includes not only information about changes in the ancient field’s direction but also about the absolute intensity (palaeointensity) and the age; this palaeointensity record is needed for compiling a full-vector description of the field. Palaeomagnetic and palaeointensity studies on lava flows allow insights to be gained into the evolution of the EMF through time and space. However, constraining the EMF evolution over different geological timescales remains a difficult objective due to the paucity of available palaeointensity data. One new alternative approach in palaeointensity studies is the recently proposed multispecimen parallel differential pTRM (MS) method, which has potentially several advantages over the commonly used Thellier method, because it is in theory independent of magnetic domain state, less prone to biasing effects such as thermal alteration, and significantly faster to perform in the laboratory. A study of highly active volcanic regions, such as the Trans-Mexican Volcanic Belt, seems promising when attempting a full-vector reconstruction or when looking for field excursions. One aim of this thesis was to gain new information about the occurrence and global validity of geomagnetic excursions from the Brunhes and Matuyama chrons. For this purpose, some 75 lava flows from within the Trans-Mexican Volcanic Belt were sampled for palaeomagnetic analyses. The scatter of virtual geomagnetic poles from lavas younger than 1.7 Ma was used for estimating palaeosecular variation and was found to be consistent with the latitude-dependent Model G and other high-quality palaeomagnetic data from Mexico. The palaeomagnetic mean vectors of 56 lavas were correlated to the Geomagnetic Polarity Timescale, supplemented with information on geomagnetic excursions. On the grounds of their associated radioisotopic ages, four lavas were tentatively correlated with known excursions from marine records. Two lava flows dating from the Brunhes Chron were associated with the Big Lost and Delta/Stage 17 excursions, respectively. Of two further flows dating from the Matuyama Chron, one was associated with either the Santa Rosa or Kamikatsura excursion, while the other could have been emplaced during the Gilsa excursion. The most significant outcome was the finding that both Brunhes excursional flows display nearly fully reversed directions that deviate by almost 180° from the expected normal polarity direction. This observation could indicate that in particular the Big Lost and Delta/Stage 17 excursions may represent further short periods during which the field completed a full reversal for a short time, as was previously found for other older cryptochrons or tiny wiggles. Another focus of this thesis was on estimating the feasibility of the new MS method for routine palaeointensity determination.
This was accomplished by applying the MS method to samples from 11 historical lava flows from Mexico and Iceland for which the actual field intensity was either known from contemporary observatory data or deduced from magnetic field models. Comparing observed with expected intensity values made it possible to test the accuracy of the MS method. It was found that the majority of palaeointensity estimates obtained with the MS method were very close to, or indistinguishable within the range of uncertainty from, the expected values. However, a general trend towards an overestimate of the palaeointensity was also observed, which, on the grounds of corroborating rock magnetic analyses, was associated with multidomain material. This observation was taken as first evidence that the MS method is not entirely independent of magnetic domain state, as was originally claimed. A second experiment, in which a modification of the most widely used Thellier method was applied to sister samples from 5 Icelandic flows, revealed that the modified Thellier method produced more accurate and statistically better defined palaeointensities than the MS method. Nevertheless, from these first results, the MS method appeared to be a viable alternative for future palaeointensity studies. Subsequently, it was attempted to corroborate the directional record from Mexican lavas with palaeointensity data. It was possible to acquire palaeointensity estimates for 32 out of 51 investigated lava flows. These new results revealed that the new MS palaeointensities for Mexico are, with a high degree of statistical significance, around 30% higher than expected. The generally high palaeointensities seem to corroborate the results obtained from historical lava flows in this study and other previous studies on synthetic samples, where domain-state effects were found to cause overestimates of up to 30% in the palaeointensity determined with the MS method. The primary process that leads to this overestimate is assigned to an asymmetry in the demagnetisation and remagnetisation process. Yet this overestimate is expected to be no larger than what might be expected from Thellier experiments performed on samples with a given degree of multidomain behaviour.
Although educational content in electronic form is increasing dramatically, its usage in an educational environment is poor, mainly because there is too much unreliable, redundant, and irrelevant information. Finding appropriate answers is a rather difficult task that relies on the user to filter the pertinent information from the noise. Turning knowledge bases like the online tele-TASK archive into useful educational resources requires identifying correct, reliable, and "machine-understandable" information, as well as developing simple but efficient search tools with the ability to reason over this information. Our vision is to create an E-Librarian Service, which is able to retrieve multimedia resources from a knowledge base in a more efficient way than by browsing through an index or by using a simple keyword search. In our E-Librarian Service, the user can enter his question in a very simple and human way: in natural language (NL). Our premise is that more pertinent results would be retrieved if the search engine understood the sense of the user's query. The returned results are then logical consequences of an inference rather than of keyword matching. Our E-Librarian Service does not return the answer to the user's question, but it retrieves the most pertinent document(s), in which the user finds the answer to his/her question. Among all the documents that have some common information with the user query, our E-Librarian Service identifies the most pertinent match(es), keeping in mind that the user expects an exhaustive answer while preferring a concise answer with little or no information overhead. Also, our E-Librarian Service always proposes a solution to the user, even if the system concludes that there is no exhaustive answer. Our E-Librarian Service was implemented prototypically in three different educational tools. A first prototype is CHESt (Computer History Expert System); it has a knowledge base with 300 multimedia clips that cover the main events in computer history. A second prototype is MatES (Mathematics Expert System); it has a knowledge base with 115 clips that cover the topic of fractions in mathematics for secondary school w.r.t. the official school programme. All clips were recorded mainly by pupils. The third and most advanced prototype is the "Lecture Butler's E-Librarian Service"; it has a Web service interface to respect a service-oriented architecture (SOA) and was developed in the context of the Web-University project at the Hasso-Plattner-Institute (HPI). Two major experiments in an educational environment - at the Lycée Technique Esch/Alzette in Luxembourg - were conducted to test the pertinence and reliability of our E-Librarian Service as a complement to traditional courses. The first experiment (in 2005) was conducted with CHESt in different classes and covered a single lesson. The second experiment (in 2006) covered a period of 6 weeks of intensive use of MatES in one class. There was no classical mathematics lesson where the teacher gave explanations; instead, the students had to learn in an autonomous and exploratory way. They had to ask the E-Librarian Service questions just as they would ask a human teacher.
Over the last decades, the world’s population has been growing at an ever faster rate, resulting in increased urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, with an increasing tendency. The growth of cities results in a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn leads to high surface runoff during high-intensity storms and causes accelerated soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it gives rise to the contamination of water bodies, reduced groundwater recharge and increased land degradation, and also damages urban infrastructure, including drainage systems, houses and roads. Understanding the problem of water erosion in urban settings is essential for the sustainable planning and management of cities prone to water erosion. However, despite the vast body of scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for urban dryland environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. The study used a multidisciplinary approach to assess the problem of water erosion. This included an in-depth literature review on current research approaches and challenges of urban erosion, a field survey method for quantifying the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews using semi-structured questionnaires to analyse the perceptions of stakeholders on urban erosion.
The review revealed that around 64% of the reviewed studies were conducted in the developed world, and very little research was carried out in regions with extreme climate, including dryland regions. Furthermore, the applied methods for erosion quantification and monitoring do not account for typical urban features and are not specific to urban areas. The reviewed literature also lacked aspects aimed at addressing the issues of climate change and policies regarding erosion in cities. In a field study, the spatial extent and severity of erosion in the dryland city of Windhoek were quantified; the results show that nearly 56% of the city is affected by water erosion, with signs of accelerated erosion in the form of rills and gullies occurring mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) showed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varied widely across the city.
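For reference, the Universal Soil Loss Equation used in the comparison combines its factors multiplicatively; this is the standard textbook form, not a formulation specific to this study:

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
```

where A is the mean annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length and steepness factor, C the cover management factor and P the support practice factor.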
The study also analysed stakeholders' perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders using semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions with regard to the causes and the seriousness of erosion damages, and their attitudes towards the responsibilities for urban erosion. The results indicated that stakeholders are less aware of the erosion process itself than of erosion damages and the factors contributing to them. About 69% of the stakeholders considered erosion damages to range from moderate to very serious. However, there were notable disparities between the private householder and public authority groups. The study further found that the stakeholders have no clear understanding of their responsibilities for managing control measures and paying for damages. The private householders and local authority sectors each held the other responsible for paying for erosion damage and for putting up prevention measures. This reluctance to take responsibility could create a predicament for affected areas, specifically in the informal settlements, where land management is not carried out by the local authority and land is not owned by the occupants.
The study concluded that, in order to combat urban erosion, it is crucial to understand the diverse dynamics aggravating the erosion process at the different scales of urbanisation. Accordingly, the study suggests that there is an urgent need for the development of urban-specific approaches that aim at: (a) incorporating the diverse socio-economic and environmental aspects influencing erosion, (b) scientifically improving the natural cycles that govern water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) making use of high-resolution satellite images to improve the adopted methods for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
The present thesis introduces an iterative expert-based Bayesian approach for assessing greenhouse gas (GHG) emissions from the 2030 German new vehicle fleet and quantifying the impacts of their main drivers. A first set of expert interviews was carried out to identify technologies which may help to lower car GHG emissions and to quantify their emission reduction potentials. Moreover, experts were asked for their probability assessments that the different technologies will be widely adopted, as well as for important prerequisites that could foster or hamper their adoption. Drawing on the results of these expert interviews, a Bayesian Belief Network (BBN) has been built which explicitly models three vehicle types: Internal Combustion Engine Vehicles (which include mild and full Hybrid Electric Vehicles), Plug-In Hybrid Electric Vehicles, and Battery Electric Vehicles. The conditional dependencies of twelve central variables within the BBN - battery energy, fuel and electricity consumption, relative costs, and sales shares of the vehicle types - were quantified by experts from German car manufacturers in a second series of interviews. Each of the seven second-round interviews yields an expert's individually specified BBN. The BBNs have been run for different hypothetical 2030 scenarios which differ, e.g., in regard to battery development, regulation, and fuel and electricity GHG intensities. The present thesis delivers results both in regard to the subject of the investigation and in regard to its method. On the subject level, it has been found that the different experts expect 2030 German new car fleet emissions to be at 50 to 65% of 2008 new fleet emissions under the baseline scenario. They can be further reduced to 40 to 50% of the emissions of the 2008 fleet through a combination of a higher share of renewables in the electricity mix, a larger share of biofuels in the fuel mix, and a stricter regulation of car CO2 emissions in the European Union. Technically, 2030 German new car fleet GHG emissions can be reduced to a minimum of 18 to 44% of 2008 emissions, a development which cannot be triggered by any combination of measures modeled in the BBN alone but needs further commitment. Out of a wealth of existing BBNs, few have been specified by individual experts through elicitation, and to my knowledge, none of them has been employed for analyzing perspectives for the future. On the level of methods, this work shows that expert-based BBNs are a valuable tool for making experts' expectations for the future explicit and amenable to the analysis of different hypothetical scenarios. BBNs can also be employed for quantifying the impacts of main drivers. They have been demonstrated to be a valuable tool for iterative stakeholder-based science approaches.
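The following toy sketch shows how such an expert-specified BBN and a scenario query can be expressed with the pgmpy library; the variables, states and probabilities below are made up for illustration, whereas the thesis's networks contain twelve central variables and three vehicle types:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy network: two binary drivers -> BEV sales share -> fleet emissions.
model = BayesianNetwork([("BatteryProgress", "BEVShare"),
                         ("Regulation", "BEVShare"),
                         ("BEVShare", "FleetEmissions")])

cpd_batt = TabularCPD("BatteryProgress", 2, [[0.5], [0.5]])
cpd_reg = TabularCPD("Regulation", 2, [[0.5], [0.5]])
cpd_share = TabularCPD("BEVShare", 2,
                       # P(high BEV share) rises with battery progress and regulation
                       [[0.9, 0.7, 0.6, 0.2],   # row: low share
                        [0.1, 0.3, 0.4, 0.8]],  # row: high share
                       evidence=["BatteryProgress", "Regulation"],
                       evidence_card=[2, 2])
cpd_em = TabularCPD("FleetEmissions", 2,
                    [[0.3, 0.8],   # row: low emissions, given low/high BEV share
                     [0.7, 0.2]],  # row: high emissions
                    evidence=["BEVShare"], evidence_card=[2])
model.add_cpds(cpd_batt, cpd_reg, cpd_share, cpd_em)
assert model.check_model()

# Scenario query: fast battery development under strict regulation.
posterior = VariableElimination(model).query(
    ["FleetEmissions"], evidence={"BatteryProgress": 1, "Regulation": 1})
print(posterior)
```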
An exploration of activity and therapist preferences and their predictors in German-speaking samples
(2023)
According to current definitions of evidence-based practice, patients’ preferences play an important role in psychotherapeutic processes and outcomes. However, whereas a significant body of research has investigated preferences regarding specific treatments, research on preferred activities or therapist characteristics is rare; it has investigated heterogeneous aspects with inconclusive results, lacked validated assessment tools, and neglected relevant preferences, their predictors, and the perspective of mental health professionals. The three studies of this dissertation therefore aimed to address the most fundamental drawbacks of current preference research by providing a validated questionnaire, focusing efforts on activity and therapist preferences, and adding the preferences of psychotherapy trainees. To this end, Paper I reports the translation and validation of the 18-item Cooper-Norcross Inventory of Preference (C-NIP) in a broad, heterogeneous sample of N = 969 laypeople, resulting in good to acceptable reliabilities and first evidence of validity; the original factor structure, however, was not replicated. Paper II assesses the activity preferences of psychotherapists in training using the C-NIP and compares them with the initial laypeople sample. There were significant differences between the samples, with trainees preferring a more patient-directed, emotionally intense and confrontational approach than laypeople. CBT trainees preferred a more therapist-directed, present-focused, challenging and less emotionally intense approach than psychodynamic or psychoanalytic trainees. Paper III explores therapist preferences and tests predictors of specific preference choices. For most characteristics, more than half of the participants did not have specific preferences. Results pointed towards congruency effects (i.e., a preference for similar characteristics), especially for members of marginalized groups. The dissertation provides both researchers and practitioners with a validated questionnaire, shows potentially obstructive differences between patients and therapists, and underlines the importance of therapist characteristics for marginalized groups, thereby laying the foundation for future applications and implementations in research and practice.
The need to develop sustainable resource management strategies for semi-arid and arid rangelands is acute, as non-adapted grazing strategies lead to irreversible environmental problems such as desertification and the associated loss of economic support to society. In such vulnerable ecosystems, successful implementation of sustainable management strategies depends on a well-founded understanding of the processes at different scales that underlie the complex system dynamics. There is ample evidence that, in contrast to traditional sectoral approaches, only interdisciplinary research works for resolving problems in conservation and natural resource management. In this thesis I combined a range of modeling approaches that integrate different disciplines and spatial scales in order to contribute to basic guidelines for the sustainable management of semi-arid and arid rangelands. Since water availability and livestock management are seen as the most potent determinants of the dynamics of semi-arid and arid ecosystems, I focused on (i) the interaction of ecological and hydrological processes and (ii) the effect of farming strategies. First, I developed a grid-based, small-scale model simulating vegetation dynamics and interlinked hydrological processes. The simulation results suggest that ecohydrological interactions gain importance in rangelands with ascending slope, where vegetation cover serves to obstruct run-off and decreases evaporation from the soil. Disturbances like overgrazing influence these positive feedback mechanisms by affecting vegetation cover and composition. In the second part, I present a modeling approach that can transfer and integrate ecological information from the small-scale vegetation model to the landscape scale, which is most relevant for the conservation of biodiversity and the sustainable management of natural resources. I combined techniques of stochastic modeling with remotely sensed data and GIS to investigate to what extent spatial interactions, like the movement of surface water by run-off in water-limited environments, affect ecosystem functioning at the landscape scale. My simulation experiments show that overgrazing decreases the number of vegetation patches that act as hydrological sinks, and run-off increases. The results of both simulation models imply that different vegetation types should be regarded not only as providers of forage production but also as regulators of ecosystem functioning. Vegetation patches with good cover of perennial vegetation are capable of catching and conserving surface run-off from degraded surrounding areas. Water loss downstream out of the simulated system is thereby prevented, and efficient use of water resources is guaranteed at all times. This consequence also applies to commercial rotational grazing strategies for semi-arid and arid rangelands with ascending slope, where non-degraded paddocks act as hydrological sinks. Finally, with the help of an integrated ecological-economic modeling approach, I analyzed the relevance of farmers’ ecological knowledge for the long-term functioning of semi-arid and arid grazing systems under current and future climatic conditions. The modeling approach consists of an ecological and an economic module and combines the relevant processes on either level. Again, vegetation dynamics and forage productivity are derived from the small-scale vegetation model. I showed that sustainable management of semi-arid and arid rangelands relies strongly on the farmers’ knowledge of how the ecosystem works.
Furthermore, my simulation results indicate that the projected lower annual rainfall due to climate change, in combination with non-adapted grazing strategies, adds an additional layer of risk to these ecosystems, which are already prone to land degradation. All simulation models focus on the most essential factors and ignore specific details. Therefore, even though all simulation models are parameterized for a specific dwarf shrub savanna in arid southern Namibia, the conclusions drawn are applicable to semi-arid and arid rangelands in general.
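A toy sketch of the core ecohydrological feedback described above, with hypothetical parameters and a rule far simpler than the thesis's grid-based model: run-off moves downslope from cell to cell, vegetated cells intercept part of it, and reduced cover increases the water lost from the system:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 50                             # cells along a uniform slope, top to bottom
cover = rng.uniform(0.0, 1.0, n_cells)   # perennial vegetation cover per cell

def route(rainfall, cover):
    """Route a rainfall pulse downslope; vegetated cells intercept run-on."""
    runoff = 0.0
    for c in cover:
        water = rainfall + runoff            # local rain plus run-on from upslope
        absorbed = water * (0.2 + 0.6 * c)   # cover obstructs run-off (toy rule)
        runoff = water - absorbed
    return runoff                            # water lost at the bottom of the slope

print(f"intact cover: {route(20.0, cover):5.1f} mm leaves the slope")
print(f"overgrazed:   {route(20.0, 0.3 * cover):5.1f} mm leaves the slope")
```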
Surface displacement at volcanic edifices is related to subsurface processes associated with magma movements, fluid transfers within the volcanic edifice and gravity-driven deformation processes. Understanding the associated ground displacements is important for the assessment of volcanic hazards. For example, volcanic unrest is often preceded by surface uplift, caused by magma intrusion, and followed by subsidence after the withdrawal of magma. Continuous monitoring of the surface displacement at volcanoes might therefore allow upcoming eruptions to be forecast to some extent. In geophysics, the measured surface displacements allow the parameters of possible deformation sources to be estimated through analytical or numerical modeling; this is one way to improve the understanding of subsurface processes acting at volcanoes. Although the monitoring of volcanoes has significantly improved in the last decades (in terms of technical advancements and the number of monitored volcanoes), the forecasting of volcanic eruptions remains puzzling. In this work I contribute towards the understanding of the subsurface processes at volcanoes and thus to the improvement of volcano eruption forecasting. I have investigated the displacement field of Llaima volcano in Chile and of Tendürek volcano in eastern Turkey using interferometric synthetic aperture radar (InSAR). Through modeling of the deformation sources with the extracted displacement data, it was possible to gain insights into potential subsurface processes occurring at these two volcanoes, which had barely been studied before. The two volcanoes, although of very different origin, composition and geometry, both show a complexity of interacting deformation sources. At Llaima volcano, the InSAR technique was difficult to apply due to the large decorrelation of the radar signal between image acquisitions. I developed a model-based unwrapping scheme, which allows the production of reliable displacement maps at the volcano, and used these maps for deformation source modeling. The modeling results show significant differences between pre- and post-eruptive magmatic deformation source parameters. I therefore conjecture that two magma chambers exist below Llaima volcano: a post-eruptive deep one and a shallow one possibly due to the pre-eruptive ascent of magma. Similar reservoir depths at Llaima have been confirmed by independent petrologic studies. These reservoirs are interpreted to be temporally coupled. At Tendürek volcano I have found long-term subsidence of the volcanic edifice, which can be described by a large, magmatic, sill-like source that is subject to cooling contraction. The displacement data, in conjunction with high-resolution optical images, reveal arcuate fractures at the eastern and western flanks of the volcano. These are most likely the surface expressions of concentric ring-faults around the volcanic edifice that show low magnitudes of slip over a long time. This might be an alternative mechanism for the development of large caldera structures, which have so far been assumed to be generated during large catastrophic collapse events. To investigate the potential subsurface geometry and relation of the two proposed interacting sources at Tendürek, a sill-like magmatic source and ring-faults, I performed a more sophisticated numerical modeling approach.
The optimum source geometries show that the size of the sill-like source was overestimated in the simple models and that it is difficult to determine the dip angle of the ring-faults from surface displacement data alone. However, considering physical and geological criteria, a combination of outward-dipping reverse faults in the west and inward-dipping normal faults in the east seems most likely. Consequently, the underground structure at Tendürek volcano consists of a small, sill-like, contracting, magmatic source below the western summit crater that causes trapdoor-like faulting along the ring-faults around the volcanic edifice. The magmatic source and the ring-faults are therefore also interpreted to be temporally coupled. In addition, a method for data reduction has been improved. The modeling of subsurface deformation sources requires only a relatively small number of well-distributed InSAR observations at the Earth's surface; satellite radar images, however, consist of several million such observations. The large amount of data therefore needs to be reduced by several orders of magnitude for source modeling, to save computation time and increase model flexibility. I have introduced a model-based subsampling approach, in particular for heterogeneously distributed observations. It allows a fast calculation of the data error variance-covariance matrix, also supports the modeling of time-dependent displacement data and is therefore an alternative to existing methods.
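For illustration, here is a sketch of the classic Mogi point source, a standard textbook deformation model that maps a volume change at depth to surface displacement in an elastic half-space; the thesis's sill-like and ring-fault sources are considerably more elaborate, and the numbers below are hypothetical:

```python
import numpy as np

def mogi_surface_displacement(r, depth, d_volume, nu=0.25):
    """Radial and vertical surface displacement of a Mogi point source
    with volume change d_volume at the given depth (elastic half-space)."""
    c = (1.0 - nu) * d_volume / np.pi
    R3 = (r ** 2 + depth ** 2) ** 1.5
    return c * r / R3, c * depth / R3

# Hypothetical deflation: -1e6 m^3 at 5 km depth (cooling contraction).
r = np.linspace(0.0, 20e3, 5)
u_r, u_z = mogi_surface_displacement(r, 5e3, -1.0e6)
for ri, uzi in zip(r, u_z):
    print(f"r = {ri / 1e3:4.0f} km   u_z = {uzi * 100:6.2f} cm")
```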
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on some general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy requirements analogous to those of the original ones, now on a corresponding higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another one at the conical exit to infinity. Concerning the conical exit to infinity, however, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.
Outflows of gas, often in the form of highly collimated jets, are a ubiquitous phenomenon in the birth of new stars. Emission from shock-excited molecular hydrogen at near-infrared wavelengths is a signature of their existence and is generally well observable even in embedded outflows that are obscured at optical wavelengths. This thesis presents the results of an unbiased, sensitive, wide-field search for such protostellar outflows in the v=1-0 S(1) line of molecular hydrogen at a wavelength of 2.12 µm. The survey covers an area of about one square degree in the Orion A giant molecular cloud. Additional data from a wide wavelength range are used to identify the sources of the outflows. The aim of this work is to obtain a sample of outflows that is as free of selection effects as possible, in order to determine the typical properties of protostellar outflows and their evolution, and to investigate the feedback of the outflows on the surrounding cloud. The first result is that outflows are indeed very common in star-forming regions: more than 70 jet candidates are identified. Most show a very irregular morphology rather than regular or symmetric structures; this is attributed to the turbulent, clumpy medium into which the jets propagate. The orientations of the jets are randomly distributed; in particular, there is no preferred alignment of the jets parallel to the large-scale magnetic field of the cloud. This suggests that the rotation and symmetry axis of a protostellar system is determined by random turbulent motion in the cloud. Possible outflow sources are identified for 49 jets; for these, the evolutionary stage and bolometric luminosity are estimated. Jet length and H2 luminosity evolve together with the outflow source. Starting from zero, the jets quickly expand to lengths of a few parsec and then slowly shrink again. They are very luminous at first, but the H2 brightness decreases in the course of protostellar evolution. The evolution of length and H2 luminosity can essentially be explained by a mass outflow rate that is initially very high and then declines, which implies a gas accretion rate onto the protostar that is initially very high and then declines (accretion and ejection are closely linked!). The decrease in jet length requires a continuously acting deceleration of the jets. A simple model of the simultaneous evolution of a protostar, its circumstellar environment and its outflow (Smith 2000) can reproduce the measured H2 and bolometric luminosities of the jets and their sources, under the assumption that the strong accretion activity at the beginning of protostellar evolution is associated with a disproportionately high mass outflow rate. In the survey area, 125 dense molecular cloud cores are known (Tatematsu et al. 1993). Jets (and hence stars) form in quiescent cloud cores, i.e. those with a low ratio of internal kinetic energy to gravitational potential energy; these are the cloud cores of higher mass. On average, the cloud cores with jets have larger line widths than those without jets.
This is because they are preferentially found in the more massive cloud cores, which generally have larger line widths. There is no evidence for stronger internal motions in cloud cores with jets that could have been generated by an interaction of the jets with the cores. As predicted by theory, there is a relation between the line width of the cloud cores and the H2 luminosity of the jets when jets from Class 0 and Class I protostars are considered separately; Class 0 jets are more luminous than Class I jets, which again points to a time-dependent accretion rate with an early peak and subsequent decline. Finally, the feedback of the jet population on a molecular cloud is considered under the assumption of strict forward momentum conservation. On the scale of an entire giant molecular cloud and on the scales of molecular cloud cores, the jets cannot supply enough momentum to replenish the decaying turbulence. On the intermediate scale of molecular clumps, however, with sizes of a few parsec and masses of a few hundred solar masses, the jets deliver enough momentum in a sufficiently short time to keep the turbulence "alive" and can thus help to stabilise a clump against collapse.
Illiteracy and Participation
(2015)
From an education-theoretical and socially critical perspective, learning presents itself as social action within societally mediated conditions, with their possibilities as well as their limitations. With a nationwide share of 14% of the working-age population, or 7.5 million functionally illiterate adults in Germany, functional illiteracy is not only a matter of education policy and practice but also a phenomenon requiring scientific investigation. Numerous studies address this topic and offer points of departure for the present study. Target-group research, for example, shows that the main target groups of men, older people and the educationally disadvantaged are not adequately reached or won as participants; participant research documents course interruptions and drop-outs.
Why illiterate adults nevertheless begin to learn (again) to read and write in adulthood, after having acquired a wide range of coping strategies through which the phenomenon escapes direct visibility, has so far been investigated neither from the perspective of educational theory nor from that of learning theory. The present adult education study empirically identifies precisely these occasions for learning.
As a heuristic, the study draws on a subject-theoretical framework that is particularly suited to making reasons for learning visible in the context of socially embedded biographies. Research on learning within this model of reasons must rely on a methodology that can bring out the subject's perspective, contexts of meaning and typical structures of sense. A qualitative research design based on individual case studies is therefore chosen: data are collected through problem-centred interviews, analysed within the research strategy of grounded theory, and condensed into an empirically grounded typology. This design allows typical occasions for learning to be reconstructed and, as a result, a subject-related theory of middle range to be developed.
From the present meaning-and-reasons analysis, five types of reasons for learning could be empirically differentiated. They move within the tension between an orientation towards participation and contradictoriness, and their complexity is represented by three key categories: the space of meaning, reflection on social embeddedness and competencies, and learning, i.e. the experience of the discrepancy between wanting to read and being able to read. The spectrum of types ranges from resigned, participation-securing learning, in which safeguarding a threatened status quo is paramount and the world is experienced as unalterable, to multilayered, participation-expanding learning, which aims at extending one's own possibilities for action and shows the most extensive reflection on social embeddedness and competencies. Functionally illiterate adults justify their learning and non-learning against the background of their social situation, their limitations and their possibilities: learning written language acquires meaning only in the context of social participation and its reflection.
By situating the reasons for learning of functionally illiterate adults within, first, discourses on educational disadvantage through processes of exclusion; second, the learning-theoretical significance of processes of inclusion; and third, the international theoretical approach of transformative learning through the integration of the category of reflection, the study extends approaches in educational and learning theory. This work connects literacy research and adult education research and integrates them into their respective discourses. Further connections and applications in educational research are conceivable: a longitudinal investigation of reasons for learning, for example, could make transformation processes reconstructable and thus contribute to research on educational processes. In educational practice, the types of reasons for learning can serve to recruit participants and can also be the starting point for reflexive learning-support concepts that articulate reasons for learning and address social embeddedness, thereby supporting learning processes.
The present thesis deals with the characterisation of seismicity on the basis of earthquake catalogues. New data analysis methods are developed that are intended to reveal whether the seismic dynamics are governed by a stochastic or a deterministic process and what follows from this for the predictability of strong earthquakes. It is shown that seismically active regions are frequently characterised by nonlinear determinism, which at least opens up the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursor phenomenon of strong earthquakes. A new method is presented that allows a systematic spatio-temporal mapping of periods of seismic quiescence; statistical significance is determined using the concept of surrogate data. As a result, clear correlations between periods of seismic quiescence and strong earthquakes are obtained. Nevertheless, the significance is not high enough to permit a prediction in the sense of a statement about the location, time and magnitude of an expected main shock.
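A minimal sketch of the surrogate-data concept used for the significance test, assuming a generic, evenly sampled activity series and phase-randomized surrogates; the thesis's actual discriminating statistics are not reproduced here:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized Fourier phases."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phases[0] = 0.0                    # keep the mean unchanged
    if n % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist component real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

# Significance test: compare a discriminating statistic on the data
# against its distribution over an ensemble of surrogates.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1024))   # placeholder for a seismicity-rate series
statistic = lambda s: np.mean(np.abs(np.diff(s)))
surrogate_stats = [statistic(phase_randomized_surrogate(x, rng))
                   for _ in range(200)]
p_value = np.mean(np.array(surrogate_stats) <= statistic(x))
print(f"empirical p-value: {p_value:.3f}")
```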
A deeper understanding of the development and function of striated muscle requires consideration of the proteins involved in building the myofibrils, the contractile organelles. The present work deals with myomesin, a protein of the sarcomeric M-band. First, the cDNA of human myomesin was completely cloned and sequenced, and the full size of the amino-terminal head domain was subsequently determined. It could be shown that myomesin binds to myosin in vitro via domains 1 and 12, while the muscle-specific isoform of creatine kinase binds to domains 7 and 8. Stimulation and inhibition experiments demonstrate that myomesin is phosphorylated in vivo by protein kinase A at serine 618 and that this phosphorylation can be stimulated by activation of beta2-adrenergic receptors. In muscle tissue samples from patients suffering from hypertrophic cardiomyopathy, a genetic heart muscle disease, a newly generated phosphorylation-dependent antibody revealed a reduced amount of phosphorylated myomesin; possible causes are discussed. Myomesin forms dimers, as shown by yeast genetic and biochemical experiments. The dimerisation of myomesin could play a central role in the incorporation of myosin filaments into the nascent myofibril. Based on the data obtained, an improved model of the central M-band was constructed.
The aim of the present work was to develop an SNP genotyping method based on PCR products immobilised on microarrays. A fibre-optic affinity sensor and a flow-through biochip scanner with integrated fluorescence detection were used for the analysis. A fluorescent oligonucleotide probe was hybridised to the immobilised analyte (PCR products), and the dissociation of the probe was then monitored under flow. Wild-type and mutant DNA were discriminated by kinetic evaluation of the dissociation curves and by analysis of the fluorescence intensity. The experiments with the fibre-optic affinity sensor showed that DNA-DNA hybrids of both oligonucleotides and PCR products exhibit a characteristic dissociation behaviour, with mismatched hybrids dissociating significantly faster than perfectly matched hybrids. This difference in rate can be quantified by comparing the respective kinetic rate constants kD. Since the coupling of the analyte to the chip surface and the hybridisation and dissociation parameters were essential for the development of the method, the parameters for optimal spotting and immobilisation of PCR products were determined. The affinity coupling of biotinylated PCR products to streptavidin, avidin and NeutrAvidin surfaces was tested, as was the covalent binding of phosphorylated amplicons using the EDC/methylimidazole method. The best results, both in spot shape and homogeneity and in signal-to-noise ratio, were achieved on NeutrAvidin surfaces. To establish the microarray genotyping method by kinetic analysis after a hybridisation experiment, the probe length, buffer system, spotting concentration of the analyte and temperature were optimised. The analysis system allowed PCR products at a concentration of 250 ng/µl in a HEPES-EDTA-NaCl buffer to be spotted onto NeutrAvidin-coated glass slides. In the subsequent hybridisation and dissociation experiments at 30 °C, homozygous wild-type, homozygous mutant and heterozygous DNA could be discriminated, as demonstrated with oligonucleotide hybrids. In a group of 24 homozygous patients, a polymorphism in the SULT1A1 gene was analysed. The genotype of the samples was identified both by kinetic evaluation and by analysis of the fluorescence intensity. The results were validated against the reference method, restriction site analysis (PCR-RFLP). Only one genotype was determined incorrectly; the accuracy was 96%. In a group of 44 patients, the genotype of an SNP in the adiponectin promoter region was investigated. After comparison of the results with those of a reference method, only 14 of the investigated genotypes could be confirmed; the main cause of this insufficient accuracy was the poor signal-to-noise ratio. In summary, the analysis system developed in this work is suitable for genotyping single-point mutations and can reliably analyse homozygous patient samples; in principle, this is also possible for heterozygous DNA. Since, to the best of current knowledge, no SNP analysis method based on immobilised PCR products has yet been published, the procedure developed here represents an alternative to previously known microarray methods.
The reverse format of the method proves to be particularly advantageous. The approach presented here is a less expensive and lower-dimensional solution for questions, for example in nutritional science, where typically a medium number of patients must be examined for only a few SNPs. If further development of the hardware or further optimization succeeds in improving the signal-to-noise ratio and thus achieving the discrimination of heterozygous DNA, this method could in the future be used for the analysis of medium-sized patient groups as an alternative to other genotyping methods.
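The kinetic discrimination step lends itself to a small numerical sketch. The following Python snippet is a minimal illustration with synthetic data, not the instrument software: it fits a first-order decay model to a fluorescence dissociation curve to estimate the rate constant kD; all variable names and parameter values are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def dissociation(t, f0, kd, c):
        # First-order dissociation: fluorescence decays towards an offset c
        return f0 * np.exp(-kd * t) + c

    # Synthetic example curve, e.g. a mismatched hybrid with a faster kD
    t = np.linspace(0, 600, 120)                       # seconds
    rng = np.random.default_rng(0)
    signal = dissociation(t, 1.0, 0.012, 0.1) + rng.normal(0, 0.01, t.size)

    (f0, kd, c), _ = curve_fit(dissociation, t, signal, p0=(1.0, 0.01, 0.0))
    print(f"estimated kD = {kd:.4f} 1/s")
    # Comparing kD between spots discriminates matched from mismatched hybrids.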
Metabolic systems tend to exhibit steady states that can be measured in terms of their concentrations and fluxes. These measurements can be regarded as a phenotypic representation of all the complex interactions and regulatory mechanisms taking place in the underlying metabolic network. Such interactions determine the system's response to external perturbations and are responsible, for example, for its asymptotic stability or for oscillatory trajectories around the steady state. However, determining these perturbation responses in the absence of fully specified kinetic models remains an important challenge of computational systems biology. Structural kinetic modeling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a parameterised representation of the system's Jacobian matrix in which the model parameters encode information about the enzyme-metabolite interactions. Stability criteria can be derived by generating a large number of structural kinetic models (SK-models) with randomly sampled parameter sets and evaluating the resulting Jacobian matrices. The parameter space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Because the sampled parameters are equivalent to the elasticities used in metabolic control analysis (MCA), the results are easy to interpret biologically. In this project, the SKM framework was extended by several novel methodological improvements. These improvements were evaluated in a simulation study using a set of small example pathways with simple Michaelis-Menten rate laws. Afterwards, a detailed analysis of the dynamic properties of the neuronal TCA cycle was performed in order to demonstrate how the new insights obtained in this work could be used for the study of complex metabolic systems. The first improvement was achieved by examining the biological feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, the findings showed that the majority of sampled SK-models would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion was formulated that rejects such infeasible models, and applying this criterion changed the conclusions of the SKM experiment. The second improvement of this work was the application of supervised machine-learning approaches to the analysis of SKM experiments. So far, SKM experiments have focused on the detection of individual enzymes in order to identify single reactions important for maintaining stability or for generating oscillatory trajectories. In this work, this approach was extended by demonstrating how SKM enables the detection of ensembles of enzymes or metabolites that act together in an orchestrated manner to coordinate the pathway's response to perturbations. In doing so, stable and unstable states served as class labels, and classifiers were trained to detect elasticity regions associated with stability and instability. Classification was performed using decision trees and relevance vector machines (RVMs). The decision trees produced good classification accuracy in terms of model bias and generalizability. RVMs outperformed decision trees when applied to small models, but encountered severe problems when applied to larger systems because of their high runtime requirements.
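The Monte Carlo core of SKM can be illustrated in a few lines. The sketch below is a toy example, not the thesis implementation: it samples normalized elasticities for a minimal two-metabolite chain, assembles the parameterised Jacobian and classifies each sample as stable or unstable from the largest real part of its eigenvalues; the pathway, the sampling ranges and the steady-state values are all assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy chain: v1 -> S1 -> S2 -> (reactions v1, v2, v3)
    N = np.array([[1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])            # stoichiometric matrix
    v, S = 1.0, np.array([1.0, 1.0])            # assumed steady-state flux and pools

    n, n_stable = 20000, 0
    for _ in range(n):
        theta = np.zeros((3, 2))                # normalized elasticities
        theta[1, 0] = rng.uniform(-1, 1)        # v2 vs S1; negative = substrate inhibition
        theta[2, 1] = rng.uniform(0, 1)         # v3 vs S2 (Michaelis-Menten-like)
        L = N * v / S[:, None]                  # the Lambda matrix of SKM
        J = L @ theta                           # parameterised Jacobian
        if np.linalg.eigvals(J).real.max() < 0:
            n_stable += 1
    print(f"stable fraction: {n_stable / n:.3f}")  # ~0.5: substrate inhibition destabilizes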
The decision tree rulesets were analysed statistically and individually in order to explore the role of individual enzymes or metabolites in controlling the system's trajectories around steady states. The third improvement of this work was the establishment of a relationship between the SKM framework and the related field of MCA. In particular, it was shown how the sampled elasticities can be converted to flux control coefficients, which were then investigated for their predictive information content in classifier training. After evaluation on the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle with respect to the intrinsic mechanisms responsible for their stability or instability. The findings showed that several elasticities are jointly coordinated to control stability and that the main sources of potential instability were mutations in the enzyme alpha-ketoglutarate dehydrogenase.
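The elasticity-to-control-coefficient conversion follows standard metabolic control analysis. In unscaled form, assuming a full-rank stoichiometric matrix N and an invertible product Nε (a textbook simplification, not necessarily the exact formulation used in the thesis), the steady-state condition N v(S) = 0 yields

\[
C^{S} = -\,(N\varepsilon)^{-1} N, \qquad
C^{J} = I + \varepsilon\, C^{S} = I - \varepsilon\,(N\varepsilon)^{-1} N,
\]

where ε = ∂v/∂S is the elasticity matrix evaluated at the steady state. Each sampled SK-model thus yields one realization of the flux control coefficients that can be fed into classifier training.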
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students of computer science who intend to obtain a bachelor's degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to help reduce the dropout and shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final school grades. This leads to a high percentage of students studying subjects they did not choose. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. Moreover, according to statistics from previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between the freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), each with the opposite gender majority.
The second investigation took place at the department of computer science of the University of Potsdam in Germany and analyses the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between the students of these fields, who usually take some joint courses, is important for identifying which changes to the teaching methods are necessary to address these different groups of students. It was a two-stage study using two questionnaires; the main one is based on the Index of Learning Styles Questionnaire of B. A. Solomon and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in learning-style preferences between male and female students of the different fields of study, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely into the difficulties, issues, apprehensions, and expectations of freshman students of computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing the field of study, or changing the university. The research followed the same sample of students (with business informatics students being the majority) through more than three semesters. Difficulties and issues during the studies were documented, as were the students' attitudes, apprehensions, and expectations. The opinions of some professors and lecturers, and their solutions to some of the students' problems, were also documented. Many participants had apprehensions and difficulties, especially with informatics subjects. Some business informatics participants began to think about changing the university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (the ones they had started or the new ones they had changed to) without leaving the higher education system.
Water management and environmental protection are vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers. They are mainly headwater catchments with smaller groundwater storage. Local characteristics are more important at this scale and can increase vulnerability. This thesis jointly evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and the similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) What are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increase of groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge, (ii) increase of water storage with regulated outflow by reservoirs, lakes and wetland water management, and (iii) measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer), for which regional low flow patterns have to be considered during planning. The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff. As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s depending on the scenario. This demonstrates that numerous such small decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to the daily discharge of 37 catchments between 1991 and 2006. Flows decreased more in southeast Brandenburg, in line with the meteorological forcing. Low flow risk was highest in a region east of Berlin because of the intersection of a more continental climate with the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for the annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r²=0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and of the last year increased model performance (r²=0.49, including all four predictors).
Model performance was higher for catchments with low yield and more damped runoff. In catchments with high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk, as well as catchments with a considerable decrease in flows in southeast Brandenburg, have the highest demand for adaptation. Measures increasing groundwater recharge are to be preferred. Catchments with high low flow risk showed relatively deep and declining groundwater heads, allowing increased groundwater recharge in recharge areas at higher altitudes away from the streams. Low flows are expected to stay low or to decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered in the management and planning of measures that serve purposes beyond sustaining minimum runoff.
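The iterative predictor selection described above can be sketched with standard tools. The snippet below is a schematic with synthetic data and hypothetical predictor columns, not the thesis data or code; it fits a non-linear support vector regression of annual 30-day minimum runoff on cumulated meteorological predictors and reports r².

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    n_years = 42
    # Hypothetical predictors (all values invented):
    X = np.column_stack([
        rng.normal(2400, 200, n_years),   # PET sum, previous 48 months (mm)
        rng.normal(180, 30, n_years),     # PET sum, previous 3 months (mm)
        rng.normal(140, 40, n_years),     # precipitation, previous 3 months (mm)
        rng.normal(560, 90, n_years),     # precipitation, previous year (mm)
    ])
    # Synthetic response: low flows fall with long-term PET, rise with rainfall
    y = 10 - 0.003 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.5, n_years)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    print(f"r² on training data: {model.score(X, y):.2f}")
    # Iterative selection would add predictors one at a time, keeping those
    # that raise the (cross-validated) r².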
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, including aftershock triggering as well as induced seismicity, are important for seismic hazard assessment because of the destructive power of earthquakes. One approach to studying earthquake triggering and the associated interactions is the use of statistical earthquake models, which are based on knowledge of the basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of the triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of aftershocks with respect to their mainshocks, the spatial correlation between aftershock occurrence and fault deformation, and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for the contributions of background and triggered seismicity. For my specific purposes, I developed two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences.
In the case of the seismic moment release of aftershocks, I focus on the ratio of their cumulative seismic moment release with respect to the mainshocks. Specifically, I investigate this ratio as a function of the focal mechanism of the mainshock and estimate an effective magnitude that represents the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock magnitude and the largest aftershock magnitude). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
To analyze spatial variations of the triggering parameters, my second approach focuses on the aftershock occurrence triggered by large mainshocks and on the distribution of the aftershock parameters and their spatial correlation with the coseismic/postseismic slip and interseismic locking. To invert for the aftershock parameters, I improved the modified ETAS (m-ETAS) model, which is able to take the spatial extent of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might not only be related to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimates. Thus, my coauthors and I applied a modification of the ETAS model that is able to take time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is the testing of the developed models using observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use a global seismicity catalog. For the spatial distribution of the triggering parameters I exploit two aftershock sequences, those of the Mw 8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models from different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) The seismic moment of aftershocks relative to their mainshocks depends on the static stress changes and is maximal for normal, intermediate for thrust, and minimal for strike-slip stress regimes; the RETAS model shows a good correspondence with these observations.
(2) The spatial distribution of the aftershock parameters obtained by the m-ETAS model shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling and b-values. The aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and with interseismically locked areas. These correlations might be related to the stress level or to spatial variations in material properties.
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, this method helps to identify aseismic sources.
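The triggering laws at the heart of ETAS-type simulations are compact enough to sketch. The toy generator below is an illustration, not any of the models developed in the thesis: it draws the direct aftershocks of a single mainshock with Gutenberg-Richter magnitudes and Omori-Utsu waiting times; all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    b, p, c = 1.0, 1.2, 0.01          # GR b-value, Omori exponent and offset (days)
    K, alpha, Mc = 0.1, 1.0, 3.0      # productivity parameters, completeness magnitude
    M_main = 7.0

    # Expected number of direct aftershocks (productivity law)
    n_expected = K * 10 ** (alpha * (M_main - Mc))
    n = rng.poisson(n_expected)

    u = rng.uniform(size=n)
    times = c * ((1 - u) ** (1 / (1 - p)) - 1)        # inverse Omori-Utsu CDF, p > 1
    mags = Mc - np.log10(rng.uniform(size=n)) / b     # Gutenberg-Richter sampling

    print(f"{n} aftershocks, largest M{mags.max():.1f}, "
          f"median delay {np.median(times):.2f} days")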
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without an intermediate representation.
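Metrics of this kind are commonly derived from the eigenvalues of each point's local neighbourhood covariance. The sketch below uses common textbook definitions (the exact formulas in the thesis may differ) and computes planarity and verticality for a random placeholder point cloud via a k-nearest-neighbour search.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(4)
    pts = rng.uniform(0, 10, size=(1000, 3))       # placeholder point cloud
    k = 16
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)                  # k nearest neighbours per point

    planarity = np.empty(len(pts))
    verticality = np.empty(len(pts))
    for i, nb in enumerate(idx):
        cov = np.cov(pts[nb].T)
        w, v = np.linalg.eigh(cov)                 # eigenvalues in ascending order
        l3, l2, l1 = w                             # so that l1 >= l2 >= l3
        planarity[i] = (l2 - l3) / l1 if l1 > 0 else 0.0
        normal = v[:, 0]                           # eigenvector of smallest eigenvalue
        verticality[i] = 1.0 - abs(normal[2])      # 0 = horizontal surface

    print(f"mean planarity {planarity.mean():.2f}, "
          f"mean verticality {verticality.mean():.2f}")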
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
River reaches protected by dikes exhibit high damage potential due to strong value accumulation in the hinterland areas. While providing an efficient protection against low magnitude flood events, dikes may fail under the load of extreme water levels and long flood durations. Hazard and risk assessments for river reaches protected by dikes have not adequately considered the fluvial inundation processes up to now. Particularly, the processes of dike failures and their influence on the hinterland inundation and flood wave propagation lack comprehensive consideration. This study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime. These are: (1) 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the 1D and 2D coupled models, the dependence between hydraulic load at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). The 2D storage cell model driven by the breach outflow boundary conditions computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The model was developed and tested on a ca. 91 km heavily diked river reach on the German part of the Elbe River between gauges Torgau and Vockerode. The reach is characterised by low slope and fairly flat extended hinterland areas. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100, 200, 500, 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. In the disaggregated display mode, the dike hazard maps indicate the failure probabilities for each considered breach mechanism. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. Finally, scenarios of polder deployment for the extreme floods with T = 200, 500, 1000 were simulated with IHAM. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions incorporating effects of technical flood protection measures. 
With its major outputs in form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
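The probabilistic core of such a Monte Carlo coupling can be illustrated with a toy fragility-curve experiment. The sketch below is purely schematic, not the IHAM code: each run samples a peak water level and triggers a breach wherever a uniform random number falls below the failure probability of a dike section; the curve shape and all parameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(5)

    def p_fail(water_level, crest=5.0, spread=0.3):
        # Toy fragility curve: failure probability rises smoothly near the crest
        return 1.0 / (1.0 + np.exp(-(water_level - crest) / spread))

    n_runs, n_sections = 10000, 20
    breaches = 0
    for _ in range(n_runs):
        peak = rng.gumbel(4.2, 0.4)               # sampled peak water level (m)
        fails = rng.uniform(size=n_sections) < p_fail(peak)
        breaches += fails.any()
    print(f"probability of at least one breach: {breaches / n_runs:.3f}")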
Recent high-throughput technologies enable the acquisition of a variety of complementary data and imply regulatory networks on the systems biology level. A common approach to the reconstruction of such networks is cluster analysis, which is based on a similarity measure. We use the information-theoretic concept of mutual information, which was originally defined for discrete data, as a measure of similarity and propose an extension to a commonly applied algorithm for its calculation from continuous biological data. We compare our approach to previously existing algorithms. We develop a performance-optimised software package for the application of mutual information to large-scale datasets. Furthermore, we design and implement a web-based service for the analysis of integrated data measured with different technologies. Application to biological data reveals biologically relevant groupings, and the reconstructed signalling networks show agreement with physiological findings.
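A simple binned estimator conveys the basic idea of applying mutual information to continuous data. The snippet below is a generic histogram-based estimate, not the refined algorithm developed in this work, applied to a hypothetical pair of expression vectors.

    import numpy as np

    def mutual_information(x, y, bins=16):
        # Histogram-based MI estimate for two continuous variables (in nats)
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        mask = pxy > 0
        return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

    rng = np.random.default_rng(6)
    x = rng.normal(size=2000)
    y = 0.8 * x + rng.normal(scale=0.6, size=2000)   # correlated "expression" pair
    print(f"MI estimate: {mutual_information(x, y):.2f} nats")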
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots and of cortex cells of non-inoculated roots (cor) were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that even nac cells seem to be prepared for the upcoming fungal colonization. The mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares a high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. An AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) revealed a successful gene silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots. The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The development of fast and reliable biochemical tools for on-site screening in environmental analysis was the main target of the present work. Due to various hazardous effects, such as endocrine disruption and toxicity, phenolic compounds are key analytes in environmental analysis and were therefore chosen as model analytes. Three different methods were developed: For the enzymatic detection of phenols in environmental samples, an enzyme-based biosensor was developed. In contrast to reported work using tyrosinase or peroxidases, we developed a biosensor based on glucose dehydrogenase (GDH) as the biorecognition element. This biosensor was intended for application in a laboratory flow system as well as in a portable device for on-site measurements. This enzymatic detection is applicable only to a limited number of phenols due to the substrate specificity of the enzyme. For other relevant compounds based on a phenolic structure (i.e. nitrophenol, alkylphenols and alkylphenol ethoxylates), immunological methods had to be developed. The electrochemical GDH biosensor was used as the label detector in these immunoassays. Two heterogeneous immunoassays were developed in which β-galactosidase (βGal) was used as the label. An electrochemical method for the determination of the marker enzyme activity was developed. The separation step was realized with protein A/G columns (laboratory flow system) or by direct immobilization of the antibodies in small disposable capillaries (on-site analysis). All methods were targeted at the simultaneous analysis of small numbers of samples.
Analysis of supramolecular assemblies of NE81, the first lamin protein in a non-metazoan organism
(2019)
Nuclear lamins are nucleus-specific intermediate filaments forming a network located at the inner nuclear membrane of the nuclear envelope. Together with proteins of the inner nuclear membrane they form the nuclear lamina, regulating nuclear shape and gene expression, among others. The amoebozoan Dictyostelium NE81 protein is a suitable candidate for an evolutionarily conserved lamin protein in this non-metazoan organism. It shares the domain organization of metazoan lamins and fulfils major lamin functions in Dictyostelium. Moreover, field-emission scanning electron microscopy (feSEM) images of NE81 expressed on Xenopus oocyte nuclei revealed filamentous structures with an overall appearance highly reminiscent of that of metazoan Xenopus lamin B2. For the classification as a lamin-like or a bona fide lamin protein, a better understanding of the supramolecular NE81 structure was necessary. Yet NE81 carrying a large N-terminal GFP-tag turned out to be an unsuitable source for protein isolation and characterization; GFP-NE81 expressed in Dictyostelium NE81 knock-out cells exhibited an abnormal distribution, which is an indicator of inaccurate assembly of GFP-tagged NE81. Hence, a shorter 8×HisMyc construct was the tag of choice to investigate the formation and structure of NE81 assemblies. One strategy was the structural analysis of NE81 in situ at the outer nuclear membrane in Dictyostelium cells; NE81 without a functional nuclear localization signal (NLS) forms assemblies at the outer face of the nucleus. Ultrastructural feSEM pictures of NE81ΔNLS nuclei showed a few filaments of the expected size but no repetitive filamentous structures. The former strategy should also be established for metazoan lamins in order to facilitate their structural analysis. However, heterologously expressed Xenopus and C. elegans lamins showed no uniform localization at the outer nuclear envelope of Dictyostelium, and hence no further ultrastructural analysis was undertaken. For in vitro assembly experiments a Dictyostelium mutant was generated, expressing NE81 without the NLS and the membrane-anchoring isoprenylation site (HisMyc-NE81ΔNLSΔCLIM). The cytosolic NE81 clusters were soluble at high ionic strength and were purified from Dictyostelium extracts using Ni-NTA agarose. Widefield immunofluorescence microscopy, super-resolution light microscopy and electron microscopy images of purified NE81 showed its capability to form filamentous structures at low ionic strength, as described previously for metazoan lamins. Introduction of a phosphomimetic point mutation (S122E) into the CDK1 consensus sequence of NE81 led to disassembled NE81 protein in vivo, which could be reversibly stimulated to form supramolecular assemblies by blue light exposure.
The results of this work reveal that NE81 has to be considered a bona fide lamin, since it is able to form filamentous assemblies. Furthermore, they highlight Dictyostelium as a non-mammalian model organism with a well-characterized nuclear envelope containing all relevant protein components known in animal cells.
In this thesis, we treat the extreme Newman-Penrose components of both the Maxwell field (s=±1) and the linearized gravitational perturbations ("linearized gravity" for short) (s=±2) in the exterior of a slowly rotating Kerr black hole. Upon different rescalings, we can obtain spin s components which satisfy the separable Teukolsky master equation (TME). For each of these spin s components defined in the Kinnersley tetrad, the equations obtained by applying a first-order differential operator to it once and twice (twice only for s=±2), together with the TME itself, are in the form of an "inhomogeneous spin-weighted wave equation" (ISWWE) with different potentials and constitute a linear spin-weighted wave system. We then prove energy and integrated local energy decay (Morawetz) estimates for this type of ISWWE, and utilize them to obtain both a uniform bound on a positive definite energy and a Morawetz estimate for the regular extreme Newman-Penrose components defined in the regular Hawking-Hartle tetrad.
We also present a brief discussion of mode stability for the TME in the case of real frequencies. This says that in a fixed subextremal Kerr spacetime, there are no nontrivial separated mode solutions to the TME which are purely ingoing at the horizon and purely outgoing at infinity. This yields a representation formula for solutions to inhomogeneous Teukolsky equations and will play a crucial role in generalizing the above energy and Morawetz estimates to the full subextremal Kerr case.
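For reference, the separated mode solutions referred to here take the standard textbook form, in Boyer-Lindquist coordinates,

\[
\psi_s(t, r, \theta, \phi) = e^{-i\omega t + i m \phi}\, S_{s\ell m}(\theta)\, R_{s\ell m\omega}(r),
\]

where S is a spin-weighted spheroidal harmonic and R solves the radial Teukolsky equation; mode stability asserts that no such nontrivial solution with real ω can be purely ingoing at the horizon and purely outgoing at infinity.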
Sulphur, a macronutrient essential for plant growth, is among the most versatile elements in living organisms. Unfortunately, little is known about the regulation of sulphate uptake and assimilation in plants. Identification of sulphate signalling processes will make it possible to control sulphate acquisition and assimilation and may prove useful in the future for improving sulphur-use efficiency in agriculture. Many of the genes involved in sulphate metabolism are regulated at the transcriptional level by the products of other genes, called transcription factors (TFs). Several published experiments revealed TF genes that respond to sulphate deprivation, but none of these has so far been characterized functionally. Thus, we aimed at identifying and characterising transcription factors that control sulphate metabolism in the model plant Arabidopsis thaliana. To achieve this goal we postulated that factors regulating Arabidopsis responses to inorganic sulphate deficiency change their transcript levels under sulphur-limited conditions. By comparing TF transcript profiles from plants grown under different sulphate regimes, we identified TF genes that may specifically induce or repress changes in the expression of genes that allow plants to adapt to changes in sulphate availability. Candidate genes obtained from this screening were tested by reverse genetics approaches. Transgenic plants constitutively overproducing selected TF genes and mutant plants lacking functional copies of the selected TF genes (knock-outs) were used. By comparing metabolite and transcript profiles from transgenic and wild-type plants we aimed at confirming the role of selected AP2 TF candidate genes in plant adaptation to sulphur unavailability. After preliminary characterisation of the WRKY24 and MYB93 TF genes, we postulate that these factors are involved in a complex multifactorial regulatory network, in which WRKY24 and MYB93 act as superior factors regulating other transcription factors that are directly involved in the regulation of S-metabolism genes. Results obtained for plants overproducing the TOE1 and TOE2 TF genes suggest that these factors may be involved in a mechanism that promotes the synthesis of the essential amino acid methionine over the synthesis of another amino acid, cysteine. Thus, the TOE1 and TOE2 genes might be part of the transcriptional regulation of methionine synthesis. Approaches creating genetically manipulated plants may produce plant phenotypes of immediate biotechnological interest, such as plants with an increased content of sulphate or sulphur-containing amino acids, or plants better adapted to sulphate unavailability.
The advances in modern geodetic techniques such as the global navigation satellite system (GNSS) and synthetic aperture radar (SAR) provide surface deformation measurements with unprecedented accuracy and temporal and spatial resolution, even at the most remote volcanoes on Earth. Modelling of the high-quality geodetic data is crucial for understanding the underlying physics of volcano deformation processes. Among various approaches, mathematical models are the most effective for establishing a quantitative link between the surface displacements and the shape and strength of deformation sources. Advancing the geodetic data analyses, and hence the knowledge of the Earth's interior processes, demands sophisticated and efficient deformation modelling approaches. Yet the majority of these models rely on simplistic assumptions about deformation source geometries and ignore complexities such as the Earth's surface topography and interactions between multiple sources.
This thesis addresses this problem in the context of analytical and numerical volcano deformation modelling. In the first part, new analytical solutions for triangular dislocations (TDs) in uniform infinite and semi-infinite elastic media have been developed. Through a comprehensive investigation, the locations and causes of artefact singularities and numerical instabilities associated with TDs have been determined, and these long-standing drawbacks have been addressed thoroughly. This approach has then been extended to rectangular dislocations (RDs) with full rotational degrees of freedom. Using this solution in a configuration of three orthogonal RDs, a compound dislocation model (CDM) has been developed. The CDM can represent generalized volumetric and planar deformation sources efficiently. Thus, the CDM is relevant for rapid inversions in early warning systems and can also be used for detailed deformation analyses. In order to account for complex source geometries and realistic topography in the deformation models, the boundary element method (BEM) has been applied to the new solutions for TDs. In this scheme, complex surfaces are simulated as a continuous mesh of TDs that may possess any displacement or stress boundary conditions in the BEM calculations. In the second part of this thesis, the developed modelling techniques have been applied to five different real-world deformation scenarios. In the first and second case studies, the deformation sources associated with the 2015 Calbuco eruption and the 2013–2016 Copahue inflation period have been constrained using the CDM. The highly anisotropic source geometries in these two cases highlight the importance of using generalized deformation models such as the CDM for geodetic data inversions. The other three case studies in this thesis involve high-resolution dislocation models and BEM calculations. In the third case, the 2013 pre-explosive inflation of Volcán de Colima has been simulated using two ellipsoidal cavities, which locate zones of pressurization in the volcano's lava dome. In the fourth case study, which serves as an example of volcano-tectonic interactions, the 3-D kinematics of an active ring-fault at Tendürek volcano has been investigated through modelling displacement time series over the 2003–2010 period. In the fifth example, the deformation sources associated with North Korea's underground nuclear test in September 2017 have been constrained. These examples demonstrate the advancement, the increasing level of complexity, and the general applicability of the developed dislocation modelling techniques.
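At its algebraic core, a BEM scheme of this kind reduces to a dense linear system: an influence matrix maps slip on each mesh element to stress at every collocation point, and imposing the stress boundary conditions determines the slip. The sketch below shows only this skeleton with a made-up, regularized 1/distance kernel; the real influence functions are the TD solutions developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 200                                        # number of mesh elements
    centers = rng.uniform(-1, 1, size=(n, 3))      # element collocation points (placeholder)

    # Hypothetical influence matrix: stress at element i per unit slip on element j.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    G = 1.0 / (d + 0.1)                            # regularized toy kernel, NOT a TD solution

    stress_bc = np.full(n, -1.0)                   # prescribed stress drop on each element
    slip = np.linalg.solve(G, stress_bc)           # solve for slip satisfying the BCs
    print(f"mean slip: {slip.mean():.4f}")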
This thesis establishes a unified framework for rapid and high-resolution dislocation modelling, which, in addition to volcano deformation, can also be applied to tectonic and human-made deformation.
One of the major problems for the implementation of water resources planning and management in arid and semi-arid environments is the scarcity of hydrological data and, consequently, of research studies. In this thesis, the hydrology of dryland river systems was analyzed, and a semi-distributed hydrological model and a forecasting approach were developed for flow transmission processes in river systems, with a focus on semi-arid conditions. Three different sources of hydrological data (streamflow series, groundwater level series and multi-temporal satellite data) were combined in order to analyze the channel transmission losses of a large reach of the Jaguaribe River in NE Brazil. A perceptual model of this reach was derived, suggesting that the application of models developed for sub-humid and temperate regions may be more suitable for this reach than classical models developed for arid and semi-arid regions. In summary, it was shown that this river reach is hydraulically connected with the groundwater and shifts from being a losing river in the dry season and at the beginning of the rainy season to being a losing/gaining (mostly losing) river in the middle and at the end of the rainy season. A new semi-distributed channel transmission losses model was developed, based primarily on the capability of simulation in very different dryland environments and on flexible model structures for testing hypotheses about the dominant hydrological processes of rivers. This model was successfully tested on a large reach of the Jaguaribe River in NE Brazil and on a small stream in the Walnut Gulch Experimental Watershed in the SW USA. Hypotheses about the dominant processes of the channel transmission losses (different model structures) in the Jaguaribe River were evaluated, showing that both lateral stream-aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow and channel transmission losses, the former process being more relevant than the latter. This procedure not only reduced model structure uncertainties, but also exposed modelling failures, rejecting the model structure hypotheses of streamflow without river-aquifer interaction and of stream-aquifer flow without groundwater flow parallel to the river course. The application of the model to different dryland environments enabled learning about the model itself from differences in channel reach responses. For example, the parameters related to the unsaturated part of the model, which were active for the small reach in the USA, showed a much greater variation in the sensitivity coefficients than those driving the saturated part of the model, which were active for the large reach in Brazil. Moreover, a nonparametric approach, which deals with both deterministic evolution and inherent fluctuations in river discharge data, was developed based on a qualitative dynamical-systems criterion, involving a learning process about the structure of the time series instead of a fitting procedure only. This approach, based only on the discharge time series itself, was applied to a headwater catchment in Germany, in which runoff is induced either by convective rainfall during the summer or by snow melt in the spring.
The application showed the following important features:
• the differences between runoff measurements were more suitable than the actual runoff measurements when using regression models;
• the catchment runoff system shifted from being a possible dynamical system contaminated with noise to a linear random process when the interval time of the discharge time series increased;
• runoff underestimation can be expected for rising limbs and overestimation for falling limbs.
This nonparametric approach was compared with a distributed hydrological model designed for real-time flood forecasting, with both presenting similar results on average. Finally, a benchmark for hydrological research using semi-distributed modelling was proposed, based on the aforementioned analysis, modelling and forecasting of flow transmission processes. The aim of this benchmark was not to describe a blueprint for hydrological modelling design, but rather to propose a scientific method to improve hydrological knowledge using semi-distributed hydrological modelling. Following the application of the proposed benchmark to a case study, the actual state of its hydrological knowledge and its predictive uncertainty can be determined, primarily through rejected hypotheses on the dominant hydrological processes and differences in catchment/variable responses.
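The hypothesis testing on model structures can be pictured with a minimal channel water balance. The toy routing below is a schematic, not the thesis model: it propagates flow through reach segments and switches a stream-aquifer exchange term on or off, mimicking the comparison of alternative model structures; all parameters are invented.

    def route(q_in, h_aquifer, k_exchange, n_seg=10, h_stage=2.0):
        # Toy transmission-loss routing: each segment exchanges water with the
        # alluvial aquifer in proportion to the stage/head difference.
        q = q_in
        for _ in range(n_seg):
            seepage = k_exchange * (h_stage - h_aquifer)   # >0: losing stream
            q = max(q - seepage, 0.0)
        return q

    q_out_coupled = route(q_in=20.0, h_aquifer=1.0, k_exchange=0.5)
    q_out_no_exchange = route(q_in=20.0, h_aquifer=1.0, k_exchange=0.0)
    print(f"with exchange: {q_out_coupled:.1f} m³/s, "
          f"without: {q_out_no_exchange:.1f} m³/s")
    # Rejecting a structure here corresponds to the no-exchange variant failing
    # to reproduce observed downstream hydrographs.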
Modern biological analysis techniques supply scientists with various forms of data. One category of such data are the so-called "expression data". These data indicate the quantities of biochemical compounds present in tissue samples. Recently, expression data can be generated at high speed. This leads in turn to amounts of data no longer analysable by classical statistical techniques. Systems biology is the new field that focuses on the modelling of this information. At present, various methods are used for this purpose. One superordinate class of these methods is machine learning. Methods of this kind had, until recently, predominantly been used for classification and prediction tasks. This neglected a powerful secondary benefit: the ability to induce interpretable models. Obtaining such models from data has become a key issue within systems biology. Numerous approaches have been proposed and intensively discussed. This thesis focuses on the examination and exploitation of one basic technique: decision trees. The concept of comparing sets of decision trees is developed. This method offers the possibility of identifying significant thresholds in continuous or discrete valued attributes through their corresponding sets of decision trees. Finding significant thresholds in attributes is a means of identifying states in living organisms. Knowing about states is an invaluable clue to the understanding of dynamic processes in organisms. Applied to metabolite concentration data, the proposed method was able to identify states which were not found with conventional techniques for threshold extraction. A second approach exploits the structure of sets of decision trees for the discovery of combinatorial dependencies between attributes. Previous work on this issue has focused either on expensive computational methods or on the interpretation of single decision trees, a very limited exploitation of the data. This has led to incomplete or unstable results. That is why a new method is developed that uses sets of decision trees to overcome these limitations. Both of the introduced methods are available as software tools. They can be applied consecutively or separately. In this way they make up a package of analytical tools that usefully supplements existing methods. By means of these tools, the newly introduced methods were able to confirm existing knowledge and to suggest interesting new relationships between metabolites.
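Collecting candidate thresholds from a set of decision trees is straightforward with standard tooling. The sketch below is a generic illustration, not the thesis software: it trains several trees on bootstrap samples of hypothetical metabolite data and gathers the split thresholds used for each attribute.

    import numpy as np
    from collections import defaultdict
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(8)
    X = rng.normal(size=(300, 5))                 # hypothetical metabolite levels
    y = (X[:, 2] > 0.4).astype(int)               # hidden state defined by a threshold

    thresholds = defaultdict(list)
    for seed in range(25):                        # a set of trees on bootstrap samples
        idx = rng.integers(0, len(X), len(X))
        tree = DecisionTreeClassifier(max_depth=3, random_state=seed)
        tree.fit(X[idx], y[idx])
        t = tree.tree_
        for feat, thr in zip(t.feature, t.threshold):
            if feat >= 0:                         # internal nodes only (-2 marks leaves)
                thresholds[feat].append(thr)

    # Recurring split values on attribute 2 indicate a significant state boundary
    print(round(float(np.median(thresholds[2])), 2))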
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike. Two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. They are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places in a specific socio-ecological system are not only unique and complex – they also exhibit similarities. A global patchwork of local assessments of the vulnerability of rural drylands populations to socio-ecological and environmental problems has already been reduced to a limited number of problem structures, which typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems, and whether such typologies provide added value in research beyond global change. Finally, the methodology employed for drylands needs refining and standardizing to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus in my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, regarding rapid global change. Using a spatially explicit indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of key factors in case studies causing vulnerability to human populations in two specific socio-ecological systems. Then I use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
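The clustering step of this methodology can be sketched with standard tools. The snippet below uses random stand-ins for the indicator datasets (the real analysis uses spatially explicit indicators): it standardizes the indicators and partitions the cases into typical configurations, i.e. vulnerability profiles, with k-means; the numbers of indicators and clusters only loosely mirror the drylands analysis.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(9)
    indicators = rng.random((500, 7))              # placeholder: 7 indicators, 500 cells

    X = StandardScaler().fit_transform(indicators)
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

    # Each cluster centre is one typical problem structure (vulnerability profile)
    profiles = km.cluster_centers_
    print(profiles.shape)                          # (8, 7)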
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability with seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor of conflict distribution and incidence in drylands than the regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system – the rapidly urbanizing coastal fringe (RUCF) – with eleven indicator datasets. The RUCF also shows a robust typology, and its seven profiles show huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that, beyond local case studies, tropical cyclones and/or coastal flooding are neither stalling rapid population growth nor urban expansion in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies in coastal cities within the same vulnerability profile.
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
The central aim of this thesis is to demonstrate the benefits of innovative frequency-based methods to better explain the variability observed in lake ecosystems. Freshwater ecosystems may be the most threatened part of the hydrosphere. Lake ecosystems are particularly sensitive to changes in climate and land use because they integrate disturbances across their entire catchment. This makes understanding the dynamics of lake ecosystems an intriguing and important research priority. This thesis adds new findings to the baseline knowledge regarding variability in lake ecosystems. It provides a literature-based, data-driven and methodological framework for the investigation of variability and patterns in environmental parameters in the time-frequency domain.
Observational data often show considerable variability in the environmental parameters of lake ecosystems. This variability is mostly driven by a plethora of periodic and stochastic processes inside and outside the ecosystems. These run in parallel and may operate at vastly different time scales, ranging from seconds to decades. In measured data, all of these signals are superimposed, and dominant processes may obscure the signals of other processes, particularly when analyzing mean values over long time scales. Dominant signals are often caused by phenomena at long time scales like seasonal cycles, and most of these are well understood in the limnological literature. The variability injected by biological, chemical and physical processes operating at smaller time scales is less well understood. However, variability affects the state and health of lake ecosystems at all time scales. Besides measuring time series at sufficiently high temporal resolution, the investigation of the full spectrum of variability requires innovative methods of analysis.
Analyzing observational data in the time-frequency domain allows variability at different time scales to be identified and facilitates its attribution to specific processes. The merit of this approach is subsequently demonstrated in three case studies. The first study uses a conceptual analysis to demonstrate the importance of time scales for the detection of ecosystem responses to climate change. These responses often occur during critical time windows in the year, may exhibit a time lag, and can be driven by the exceedance of thresholds in their drivers. This can only be detected if the temporal resolution of the data is high enough. The second study applies Fast Fourier Transform spectral analysis to two decades of daily water temperature measurements to show how temporal and spatial scales of water temperature variability can serve as an indicator for mixing in a shallow, polymictic lake. The final study uses wavelet coherence as a diagnostic tool for limnology on a multivariate high-frequency data set recorded between the onset of ice cover and a cyanobacteria summer bloom in the year 2009 in a polymictic lake. Synchronicities among limnological and meteorological time series in narrow frequency bands were used to identify and disentangle the prevailing limnological processes.
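The spectral step is conceptually simple. The following snippet is a generic example on a synthetic daily water temperature series, not the measured data: it computes a periodogram and reports the dominant period.

    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(10)
    t = np.arange(7300)                                # 20 years of daily samples
    temp = (10 + 8 * np.sin(2 * np.pi * t / 365.25)    # seasonal cycle
            + rng.normal(0, 1, t.size))                # weather-scale variability

    f, pxx = periodogram(temp, fs=1.0)                 # fs = 1 sample per day
    dominant = 1.0 / f[1:][np.argmax(pxx[1:])]         # skip the zero frequency
    print(f"dominant period: {dominant:.0f} days")     # ~365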
Beyond the novel empirical findings reported in the three case studies, this thesis aims, more generally, to be of interest to researchers dealing with the now increasingly available time series data at high temporal resolution. A set of innovative methods to attribute patterns to processes, their drivers and constraints is provided to help make more efficient use of this kind of data.
The pore space of a carbonate rock is usually composed of a specific assemblage of diverse pore types that differ in origin and, in addition, can vary strongly in shape and size (e.g., Melim et al., 2001; Lee et al., 2009; He et al., 2014; Dernaika & Sinclair, 2017; Zhang et al., 2017). These multimodal pore systems, typical of carbonates, originate both from primary depositional processes and from repeated modification of the pore space after deposition of the sediment. This leads to an uneven distribution of pore-space properties at the smallest spatial scales and to the simultaneous occurrence of effective and ineffective pores. These inherent differences in the effectiveness of individual pore types are the main reason for the frequently very low correlation between porosity and permeability in carbonates (e.g., Mazzullo, 2004; Ehrenberg & Nadeau, 2005; Hollis et al., 2010; He et al., 2014; Rashid et al., 2015; Dernaika & Sinclair, 2017). By extracting interconnected, and thus effective, pore types, however, the understanding and prediction of permeability for a given porosity value can be improved substantially (e.g., Melim et al., 2001; Zhang et al., 2017). To this end, this thesis presents a method based on digital image analysis (DIA) with which the effectiveness of pores in the analysed Middle Miocene lacustrine carbonates of the Nördlinger Ries crater lake (southern Germany) can be computed step by step. Using the pore shape factor (sensu Anselmetti et al., 1998) as a parameter quantifying the interconnectivity between pores, the potential contribution of each pore type to total permeability is determined. In this way, the most effective pore types within the analysed carbonates can be identified. Furthermore, digital image analysis is used to extract cemented pore spaces in order to quantify the influence of cementation on the pore-space properties. An independent method (fluid-flow simulation), whose results are in turn evaluated with digital image analysis, confirms the previous findings: interpeloidal pores and solution pores are the two most effective pore types in the pore space of the Ries lake carbonates. The extraction of the interconnected (i.e., effective) pore network finally leads to a considerably improved correlation between porosity and permeability in the analysed carbonates. The method described in this thesis provides a quantitative petrographic tool for extracting the effective porosity of a pore space. This leads to a better understanding of how carbonate pore systems generate permeability. This dissertation also shows that the shape complexity of pores is one of the most important parameters controlling the interconnectivity between individual pores and thus the development of effective porosity. Moreover, digital image analysis proves to be an excellent tool for tying porosity and permeability directly to their common origin: the rock texture and the associated pore structure.
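A DIA-style pore-shape measurement can be sketched as follows. The shape factor used here, a perimeter-over-area circularity measure, is a common choice in image-based petrography; whether it matches the factor of Anselmetti et al. (1998) exactly is an assumption, and the synthetic mask merely stands in for a segmented thin-section image.

```python
# A minimal sketch of a DIA-style pore-shape analysis on a binary pore mask.
# gamma = P / (2 * sqrt(pi * A)) equals 1 for a circle and grows with shape
# complexity; its exact correspondence to Anselmetti et al. (1998) is assumed.
import numpy as np
from skimage import measure

def pore_shape_factors(binary_pore_mask: np.ndarray) -> list[float]:
    """Return one shape factor per connected pore in a 2D binary mask."""
    labels = measure.label(binary_pore_mask)          # separate individual pores
    factors = []
    for region in measure.regionprops(labels):
        area = region.area                            # pore area in pixels
        perimeter = region.perimeter                  # pore outline length
        if area > 0 and perimeter > 0:
            factors.append(perimeter / (2.0 * np.sqrt(np.pi * area)))
    return factors

# Example: a synthetic mask with one round and one slit-like pore.
mask = np.zeros((60, 60), dtype=bool)
yy, xx = np.ogrid[:60, :60]
mask[(yy - 15) ** 2 + (xx - 15) ** 2 < 64] = True     # circular pore
mask[40:44, 10:50] = True                             # elongated pore
print(pore_shape_factors(mask))                       # slit yields the larger factor
```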
Anerkennung und Macht [Recognition and Power]
(2021)
In the present study I pursued the aim of making an independent substantive contribution to a debate directed against Honneth's critical social theory. In this debate, Honneth is criticised on the grounds that his critical social theory, contrary to its own systematic objective, does not succeed in critically questioning all phenomena of social domination in modern liberal-democratic societies. For social recognition, which Honneth treats as the key concept for this critical questioning, linking social domination to social disrespect (understood as a lack of social recognition), can, according to the critics, itself in fact serve as a medium for establishing social subjugation. This happens in processes of identity formation in which social recognition grants individuals, as the recognised, certain identity possibilities and in doing so simultaneously excludes other identity possibilities, thereby constraining their identity and, to that extent, exercising domination over it. This is a form of social domination brought about by social recognition itself. According to the objection, Honneth does not consider that social recognition can have such a negative effect on the individuals being recognised. This raises the questions of whether social recognition in processes of identity formation always goes hand in hand with social domination and how this type of social domination can be criticised. Honneth most recently answered these questions in a personal conversation with Allen and Cooke (two participants in the debate against him). There he takes the view, shared by both interlocutors, that the operation of restricting identity possibilities is not in itself an operation that, as is otherwise claimed in the debate against his critical social theory, amounts to social domination. This view rests on the idea that social recognition proves to be domination-establishing in that practical context only under the condition that it violates immanent principles which define substantive critical standards.
My contribution to this debate against Honneth consists, on the one hand, in showing that both that view and that idea are argumentatively deficient and, on the other hand, in carrying out the project of remedying this argumentative deficiency myself. Against that view I argue that the three authors do not explain in their conversation in what sense social recognition does not exercise domination when it restricts the identity possibilities of individuals as the recognised; rather, this restriction does in fact dominate these individuals, and the debate against Honneth, in support of this position, is built chiefly on precisely this fact. Against that idea I have posed and answered five problematic questions, which bear not only on this idea itself but also on further related ideas that the three authors have raised.
The expansion of the renal tubulointerstitium due to an accumulation of cellular components and extracellular matrix is a characteristic feature of chronic kidney disease (CKD) and drives the progression of the disease towards end-stage renal failure. Fibroblast proliferation and the transformation of fibroblasts into the secretory myofibroblast phenotype are key events in this process. Signalling processes that lead to the induction of myofibroblasts are being actively investigated in order to identify anti-fibrotic therapeutic approaches. The anti-inflammatory protein annexin A1 and its receptor, formyl peptide receptor 2 (FPR2), have been linked to the regulation of fibroblast activity in various organ systems, but their expression and function in renal fibrotic disease had not previously been examined. The aim of the present study was therefore to investigate renal annexin A1 and FPR2 expression in an animal model of chronic renal failure and to characterise the functional role of annexin A1 in regulating the fibroblast phenotype and its synthetic activity. To this end, newborn Sprague-Dawley rats were treated during the first two weeks of life either with vehicle or with an angiotensin II type 1 receptor antagonist and were then kept without further intervention until the age of 11 months (CKD rats). The regulation and localisation of annexin A1 and FPR2 were assessed by real-time PCR and immunohistochemistry. Annexin A1- and FPR2-expressing cells were further characterised by double immunofluorescence staining using antibodies against endothelial cells (rat endothelial cell antigen), macrophages (CD68), fibroblasts (CD73) and myofibroblasts (alpha-smooth muscle actin, α-sma). Cell culture studies were performed on immortalised renal cortical fibroblasts from wild-type and annexin A1-deficient mice as well as on established human and murine renal fibroblasts. Overexpression of annexin A1 was achieved by stable transfection. The expression of annexin A1, α-sma and collagen 1α1 was assessed by real-time PCR, Western blot and immunohistochemistry. Secretion of the annexin A1 protein was examined by Western blot after TCA precipitation of the cell culture supernatant. As expected, the CKD rats showed a reduced number of nephrons with marked glomerular hypertrophy. The tubulointerstitial space was expanded by fibrillar collagen, activated fibroblasts and inflammatory cells. In parallel, the mRNA expression of annexin A1 and transforming growth factor beta (TGF-β) was significantly increased. Localisation of annexin A1 by double immunofluorescence identified a large number of CD73-positive cortical fibroblasts and a subpopulation of macrophages as annexin A1-positive. The amount of annexin A1 in myofibroblasts and renal endothelia was low. FPR2 was detected in the majority of renal fibroblasts, in myofibroblasts, in a subpopulation of macrophages and in renal epithelial cells. Treatment of murine fibroblasts with the pro-fibrotic cytokine TGF-β led to a parallel increase in α-sma, collagen 1α1 and annexin A1 biosynthesis and to increased secretion of annexin A1. Overexpression of annexin A1 in murine fibroblasts reduced the extent of TGF-β-induced α-sma and collagen 1α1 biosynthesis.
Fibroblasts from annexin A1-deficient mice displayed a strong myofibroblast phenotype with increased expression of α-sma and collagen 1α1. The use of a peptide antagonist of FPR2 (WRW4) resulted in a stimulation of α-sma biosynthesis, suggesting that annexin A1 exerts FPR2-mediated anti-fibrotic effects. Taken together, these results show that renal cortical fibroblasts are a major source of annexin A1 in the renal interstitium and a target for annexin A1 signalling in the kidney. The annexin A1/FPR2 system could therefore play an important role in controlling the fibroblast phenotype and fibroblast activity, and may represent a new target for anti-fibrotic pharmacological strategies in the treatment of CKD.
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater, where tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience the focus lies on the breakthrough curve (BTC), which is the concentration at a fixed location over time. We apply the MIM to biological experiments, with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins, which diffuse freely inside the axons of neurons. Their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD for initially mobile tracer particles at the relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, so that advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting and long immobilisation durations persists also in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD. This regime emerges both for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we explain physically through the variance of the mobile durations. Finally, we generalize the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ~ t^(-1-μ), 0 < μ < 1, and diverging mean immobilisation duration. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters for plotting the displacement distributions and the MSD. We find Gaussian normal diffusion at short times, and at long times a power-law decay of the mobile mass accompanied by anomalous diffusion. The long-time diffusion is subdiffusive in the advection-free setting, while it is either subdiffusive for 0 < μ < 1/2 or superdiffusive for 1/2 < μ < 1 when advection is present. In the long-time limit we show the equivalence of our model to a bi-fractional diffusion equation.
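To make the two-state dynamics concrete, the sketch below simulates tracers that alternate between mobile diffusion and immobilisation with exponential residence times and estimates the ensemble MSD; the rates and the diffusion coefficient are arbitrary illustrative values, not the parameters of the thesis.

```python
# A minimal sketch of the mobile-immobile model (MIM): each tracer alternates
# between a mobile (diffusing) and an immobile state following a two-state
# telegraph process with exponential residence times. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_tracers, n_steps, dt = 5000, 2000, 0.01
D = 1.0                       # diffusion coefficient in the mobile state
rate_off, rate_on = 5.0, 0.5  # immobilisation and remobilisation rates

x = np.zeros((n_tracers, n_steps))
mobile = np.ones(n_tracers, dtype=bool)   # all tracers start mobile

for k in range(1, n_steps):
    step = np.sqrt(2 * D * dt) * rng.normal(size=n_tracers)
    x[:, k] = x[:, k - 1] + np.where(mobile, step, 0.0)
    # Exponential residence times <=> constant switching probability per dt.
    switch = rng.random(n_tracers) < np.where(mobile, rate_off, rate_on) * dt
    mobile ^= switch

msd = np.mean(x**2, axis=0)   # ensemble MSD of initially mobile tracers
print(msd[::500])             # a plateau appears at intermediate times
```

With long immobilisation durations (small remobilisation rate) the printed MSD flattens at intermediate times, which is the plateau described above.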
Anschaulichkeiten herstellen [Producing Vividness]
(2023)
Manuscript D of the Driu liet von der maget was produced around 1220 and is the oldest complete surviving version of the text. Priester Wernher is regarded as the author of this religious poem, which he composed in 1172 on the basis of apocryphal and biblical sources. In medieval German studies, manuscript D is often counted among the works that mark the beginning of an increasing use of writing in the vernacular. Judging by its appearance, however, the manuscript and the moment of its production prove above all to be a turning point at which the influence of Latin manuscript culture on the production of vernacular literature becomes particularly clear. While the share of religious poetry declines steadily overall, literary scholars have also, with good reason, placed the manuscript within secular-lay court literature. In this context, however, manuscript D does not stand at the beginning of a golden age; rather, it points above all to the productive tension between Latin-influenced book production and religious poetry on the one hand and the demands of a lay community of recipients on the other. My media-anthropological study moves within this field. Its focus lies on reconstructing the production of the manuscript and on analysing the principles of representation in its programme of images.
Cardiac valves are essential for the continuous and unidirectional flow of blood throughout the body. During embryonic development, their formation is strictly connected to the mechanical forces exerted by blood flow. The endocardium that lines the interior of the heart is a specialized endothelial tissue and is highly sensitive to fluid shear stress. Endocardial cells harbor a signal transduction machinery required for the translation of these forces into biochemical signaling, which strongly impacts cardiac morphogenesis and physiology. To date, we lack a solid understanding of the mechanisms by which endocardial cells sense dynamic mechanical stimuli and how they trigger different cellular responses. In the zebrafish embryo, endocardial cells at the atrioventricular canal respond to blood flow by rearranging from a monolayer into a double layer, composed of a luminal cell population subjected to blood flow and an abluminal one that is not exposed to it. These early morphological changes lead to the formation of an immature valve leaflet. While previous studies mainly focused on genes that are positively regulated by shear stress, the mechanisms regulating cell behaviors and fates in cells that lack the stimulus of blood flow are largely unknown. One key discovery of my work is that the flow-sensitive Notch receptor and Krüppel-like factor (Klf) 2, one of the best characterized flow-regulated transcription factors, are both activated by shear stress but function in two parallel signal transduction pathways. Each of these two pathways is essential for the rearrangement of atrioventricular cells into an immature double-layered valve leaflet. A second key discovery of my study is the finding that both Notch and Klf2 signaling negatively regulate the expression of the angiogenesis receptor Vegfr3/Flt4, which becomes restricted to the abluminal endocardial cells of the valve leaflet. Within these cells, Flt4 downregulates the expression of the cell adhesion proteins Alcam and VE-cadherin. A loss of Flt4 causes abluminal endocardial cells to ectopically express Notch, which is normally restricted to luminal cells, and impairs valve morphology. My study suggests that abluminal endocardial cells that do not experience mechanical stimuli lose Notch expression, and this triggers the expression of Flt4. In turn, Flt4 negatively regulates Notch on the abluminal side of the valve leaflet. These antagonistic signaling activities and fine-tuned gene regulatory mechanisms ultimately shape the cardiac valve leaflets by inducing unique differences in the fates of endocardial cells.
This dissertation explores whether the processing of ellipsis is affected by changes in the complexity of the antecedent, either due to added linguistic material or to the presence of a temporary ambiguity. Murphy (1985) hypothesized that ellipsis is resolved via a string copying procedure when the antecedent is within the same sentence, and that copying longer strings takes more time. Such an account also implies that the antecedent is copied without its structure, which in turn implies that recomputing its syntax and semantics may be necessary at the ellipsis gap. Alternatively, several accounts predict null effects of antecedent complexity, as well as no reparsing. These either involve a structure copying mechanism that is cost-free and whose finishing time is thus independent of the form of the antecedent (Frazier & Clifton, 2001), treat ellipsis as a pointer into content-addressable memory with direct access (Martin & McElree, 2008, 2009), or assume that one structure is ‘shared’ between antecedent and gap (Frazier & Clifton, 2005).
In a self-paced reading study on German sluicing, temporarily ambiguous garden-path clauses were used as antecedents, but no evidence of reparsing in the form of a slowdown at the ellipsis site was found. Instead, the results suggest that antecedents which had been reanalyzed from an initially incorrect structure were easier to retrieve at the gap. This finding can be explained within the framework of cue-based retrieval parsing (Lewis & Vasishth, 2005), where additional syntactic operations on a structure yield memory reactivation effects.
Two further self-paced reading studies on German bare argument ellipsis and English verb phrase ellipsis investigated whether adding linguistic content to the antecedent would increase processing times for the ellipsis, and whether insufficiently demanding comprehension tasks may have been responsible for earlier null results (Frazier & Clifton, 2000; Martin & McElree, 2008). It has also been suggested that increased antecedent complexity should shorten rather than lengthen retrieval times by providing more unique memory features (Hofmeister, 2011). Both experiments failed to yield reliable evidence that antecedent complexity affects ellipsis processing times in either direction, irrespective of task demands.
Finally, two eye-tracking studies probed more deeply into the proposed reactivation-induced speedup found in the first experiment. The first study used three different kinds of French garden-path sentences as antecedents, with two of them failing to yield evidence for reactivation. Moreover, the third sentence type showed evidence suggesting that having failed to assign a structure to the antecedent leads to a slowdown at the ellipsis site, as well as regressions towards the ambiguous part of the sentence. The second eye-tracking study used the same materials as the initial self-paced reading study on German, with results showing a pattern similar to the one originally observed, with some notable differences.
Overall, the experimental results are compatible with the view that adding linguistic material to the antecedent has no or very little effect on the ease with which ellipsis is resolved, which is consistent with the predictions of cost-free copying, pointer-based approaches and structure sharing. Additionally, effects of the antecedent’s parsing history on ellipsis processing may be due to reactivation, the availability of multiple representations in memory, or complete failure to retrieve a matching target.
The study of microlensed astronomical objects makes it possible to obtain information about the size and structure of these objects. In the first part of this thesis, the spectra of three lensed quasars obtained with the Potsdam Multi Aperture Spectrophotometer (PMAS) are examined for signs of microlensing. Evidence for microlensing was found in the spectra of the quadruple quasar HE 0435-1223 and the double quasar HE 0047-1756, whereas the double quasar UM 673 (Q 0142-100) shows no signs of microlensing. Inverting the light curve of a microlensing caustic-crossing event makes it possible to reconstruct the one-dimensional brightness profile of the lensed source. This is investigated in the second part of this thesis. The mathematical formulation of this task leads to a Volterra integral equation of the first kind, whose solution is an ill-posed problem. To solve it, a local regularisation method is applied here that is better adapted to the causal structure of the Volterra equation than the previously used Tikhonov-Phillips regularisation. It turns out that this method allows a better reconstruction of smaller structures in the source. Furthermore, the applicability of the regularisation method to realistic light curves with irregular sampling or larger gaps in the data points is investigated.
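For orientation, a Volterra integral equation of the first kind has the generic form below (standard textbook notation; the specific kernel linking the caustic-crossing light curve to the source brightness profile is not reproduced here):

$$\int_{0}^{t} K(t,s)\, f(s)\, \mathrm{d}s = g(t), \qquad 0 \le t \le T,$$

where $g$ plays the role of the observed light curve, $f$ is the one-dimensional brightness profile to be reconstructed, and the kernel is causal, $K(t,s) = 0$ for $s > t$. Ill-posedness means that arbitrarily small perturbations of $g$ can produce large errors in $f$, which is why regularisation is required; a local scheme exploits precisely this causal structure.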
Within the Quito-Guayllabamba intermontane basin of Ecuador, five unusually large colluvial deposits of ancient landslides have been identified and analysed in this study. The large MM-5 Guayllabamba rotational landslide is the most extensive, with a volume of 1,183 million m³. The mega debris avalanches MM-1 Conocoto, MM-3 Oyacoto and MM-4 San Francisco were originally triggered by an initial rupture associated with a rotational slide; the corresponding deposits have volumes between 399 and 317 million m³. Finally, the smallest deposit, the MM-2 Batán rotational slide and debris fall, has a volume of 8.7 million m³. In this thesis, a detailed study of these large mass movements was carried out using neotectonic and litho-tephrostratigraphic methods in order to understand the geological and geomorphological boundary conditions that could be relevant for triggering these mass movements. The neotectonic part of the study was based on a qualitative and quantitative geomorphological analysis of these large mass-movement deposits, through the structural characterisation of the anticlines located east of the Quito sub-basin and of their collapsed flanks, which constitute the rupture areas. This part of the analysis was further supported by the application of different morphometric indices to reveal tectonically forced landscape-evolution processes that may have contributed to the generation of mass movements. The litho-tephrostratigraphic part of the study was based on the analysis of the petrographic, geochemical and geochronological characteristics of soil horizons and intercalated volcanic ashes, with the aim of constraining the chronology of the individual mass-movement events and their possible correlation. The results were integrated into chronostratigraphic schemes using rupture surfaces and the cross-cutting and superposition relationships of landslide deposits and later strata, in order to understand the mass movements in the tectonic and temporal context of the intermontane basin setting and to identify the triggering mechanisms of each event. The MM-5 Guayllabamba mass movement is the result of the collapse of the south-western slope of the Mojanda volcano and was triggered by the interaction of geological and morphological conditions approximately 0.81 Ma ago. The first debris-avalanche episode of the MM-3 Oyacoto and MM-4 San Francisco mass movements could be related to both geological and morphological conditions, given the highly fractured rocks and the uplift of the Bellavista-Catequilla anticline, which was subsequently incised at the foot of the slope by fluvial erosion. This first collapse episode probably occurred around 0.8 Ma. The MM-2 Batán mass movement was possibly also triggered by a combination of geological and morphological conditions, associated with a reduction of the lithostatic stresses affecting the Chiche and Machángara formations and an increase in shear stresses during lateral fluvial undercutting on the flanks of the source areas. This points to a coupled process of fluvial erosion and uplift associated with the evolution of the El Batán-La Bota anticline that could have occurred between 0.5 and 0.25 Ma.
The voluminous MM-1 Conocoto debris avalanche, as well as the second debris-avalanche episode that generated the MM-3 Oyacoto and MM-4 San Francisco mass movements, were caused by the gravitational collapse of the Mojanda and Cangahua formations, which are characterised by intercalated volcanic ashes. The failure of the eastern flank of the anticlines was probably associated with the increase in available moisture related to regional Holocene climatic variations. The palaeosol chronology, combined with regional chronostratigraphic and palaeoclimatic data, suggests that these debris avalanches were triggered between 5 and 4 ka.
Active tectonics has shaped the morphological features of the Quito-Guayllabamba intermontane basin. The triggering of mass movements in this environment is associated with failures in Pleistocene lithologies (lacustrine sediments, alluvial and volcanic deposits) subjected to deformation processes, seismic activity and superimposed episodes of climatic variability. The Metropolitan District of Quito is an integral part of this complex setting and of the geological, climatic and topographic conditions that continue to influence the urban geographic space within this intermontane basin. The city of Quito comprises the area of greatest urban consolidation, including the Quito and San Antonio sub-basins, with a population of 2.872 million inhabitants, which underlines the importance of studying the geological and climatic hazards inherent to this region.
This thesis was devoted to the study of the coupled system composed of El Niño/Southern Oscillation (ENSO) and the annual cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations within a tractable model in order to understand the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering atmospheric effects to be secondary to the ocean dynamics. The results may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing method of decomposing the data, as it replaces the Euclidean distances on the manifold with an approximation of the geodesic distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modelled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean. We observed that, although few data points were available, we could predict future behaviour of the coupled ENSO-annual cycle system for lead times of up to six months, although the constructed system had several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification to a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that of much more complex models. This suggests that the climatological system in the tropics is mainly governed by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modelling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then extract low-dimensional time series from the applied method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
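As a sketch of the nonlinear dimensionality reduction step, the snippet below applies scikit-learn's Isomap to a synthetic field of gridded, SST-like time series; the actual thesis data and preprocessing are not reproduced, and all loadings and noise levels are invented for illustration.

```python
# A minimal sketch of Isomap-based dimensionality reduction on synthetic,
# SST-like data: rows are time samples, columns are spatial grid points.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
n_months, n_grid = 480, 300                    # 40 years of monthly fields
t = np.arange(n_months)
annual = np.sin(2 * np.pi * t / 12)            # annual cycle
enso = np.sin(2 * np.pi * t / 48 + 0.7)        # slow, ENSO-like mode

# Each grid point mixes the two oscillations with its own loading plus noise.
loadings = rng.normal(size=(2, n_grid))
X = np.outer(annual, loadings[0]) + np.outer(enso, loadings[1])
X += 0.1 * rng.normal(size=X.shape)

# Isomap replaces Euclidean by approximate geodesic distances on the manifold.
embedding = Isomap(n_neighbors=12, n_components=3).fit_transform(X)
print(embedding.shape)                         # (480, 3) low-dimensional time series
```

The resulting low-dimensional time series are the kind of object that a backfitting-based dynamical model would then be fitted to.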
Even though quite different in occurrence and consequences, from a modelling perspective many natural hazards share similar properties and challenges. Their complex nature, as well as lacking knowledge about their driving forces and potential effects, makes their analysis demanding: uncertainty about the modelling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, carrying the risk of underestimating the hazard, with potentially disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic proceedings, it treats response variables as well as explanatory variables as random variables, making no difference between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis the Bayesian network approach is applied to different natural hazard domains (i.e. seismic hazard, flood damage and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes. The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP score) of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typical problems are the handling of continuous variables, incomplete observations, and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, thus allowing for completely data-driven and distribution-free learning. An extension of the MAP score, which considers the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. When incomplete observations have to be faced on top of this, the computational burden grows, and iterative procedures for missing value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable.
The discretization learned for this variable does not always have the resolution required for good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimates or mixtures of truncated exponential functions. All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments, but in a variety of domains struggling with uncertainties.
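For reference, the graph-encoded decomposition described above can be written compactly in standard notation:

$$P(X_1,\dots,X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{pa}(X_i)\right),$$

where $\mathrm{pa}(X_i)$ denotes the parents of node $X_i$ in the directed acyclic graph and the local conditional distributions on the right-hand side are the parameters of the network; structure learning then maximises the posterior of structure and parameters given the data, i.e. the MAP score mentioned above.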
Subject of this work is the study of applications of the Galactic microlensing effect, in which the light of a distant star (the source) is bent, according to Einstein's theory of gravity, by the gravitational field of intervening compact mass objects (the lenses), creating multiple (however unresolvable) images of the source. Relative motion of source, observer and lens leads to a variation of deflection/magnification and thus to a time-dependent observable brightness change (lightcurve), a so-called microlensing event, lasting weeks to months. The focus lies on the modelling of binary-lens events, which provide a unique tool to fully characterize the lens-source system and to detect extra-solar planets around the lens star. Making use of the ability of genetic algorithms to efficiently explore large and intricate parameter spaces in the quest for the globally best solution, a modelling software (Tango) for binary lenses is developed, presented and applied to data sets from the PLANET microlensing campaign. For the event OGLE-2002-BLG-069 the second-ever lens mass measurement was achieved, leading to a scenario in which a G5III Bulge giant at 9.4 kpc is lensed by an M-dwarf binary with a total mass of M = 0.51 solar masses at a distance of 2.9 kpc. Furthermore, a method is presented to use the absence of planetary lightcurve signatures to constrain the abundance of extra-solar planets.
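The snippet below sketches the genetic-algorithm idea used for exploring such parameter spaces; the toy quadratic fitness stands in for the actual chi-square of a binary-lens lightcurve model, and all operator choices (tournament selection, uniform crossover, Gaussian mutation) are illustrative, not necessarily those of Tango.

```python
# A minimal sketch of a genetic algorithm for lightcurve model fitting.
# The quadratic "chi2" below is a stand-in for a real binary-lens model misfit.
import numpy as np

rng = np.random.default_rng(7)
true_params = np.array([0.5, 1.2, -0.3])       # hypothetical best-fit parameters

def chi2(p):                                   # toy misfit; lower is better
    return np.sum((p - true_params) ** 2)

pop = rng.uniform(-2, 2, size=(100, 3))        # random initial population
for generation in range(200):
    fitness = np.array([chi2(p) for p in pop])
    # Tournament selection: keep the better of two randomly drawn individuals.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    # Uniform crossover between consecutive parents, then Gaussian mutation.
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    pop = children + rng.normal(0, 0.05, pop.shape)

best = pop[np.argmin([chi2(p) for p in pop])]
print("recovered parameters:", np.round(best, 2))
```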
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches like machine learning or data-mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is the satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for the acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. A focus of this work was therefore set on evaluating these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the particular class to be predicted, in this specific case data on flooded areas, but not from the negative class (dry areas). Its application to Hurricane Harvey in Houston shows the high potential of this method, which depends, however, on the quality of the initial flood mask.
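A positive-class-only classifier of the kind described can be sketched with scikit-learn's OneClassSVM; the per-pixel features (e.g. radar backscatter and elevation above drainage) and all numbers below are hypothetical stand-ins, and the real procedure in the thesis may differ.

```python
# A minimal sketch of one-class learning for flood-mask refinement: train only
# on pixels labelled "flooded" by an initial satellite mask, then score all
# remaining pixels. Features and data are synthetic stand-ins for real layers.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
# Hypothetical per-pixel features: (backscatter [dB], height above drainage [m]).
flooded = rng.normal(loc=[-15.0, 1.0], scale=[2.0, 0.8], size=(500, 2))
unknown = rng.normal(loc=[-9.0, 6.0], scale=[4.0, 3.0], size=(2000, 2))

scaler = StandardScaler().fit(flooded)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(scaler.transform(flooded))        # positive class only, no dry labels

# Positive decision values flag pixels resembling the training (flooded) class.
scores = clf.decision_function(scaler.transform(unknown))
print("pixels added to the flood mask:", int((scores > 0).sum()))
```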
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the implemented physical process details, thereby demonstrating what a risk study based on established models can deliver. Even for fluvial flooding, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling, one that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling for complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators derived from spectral data. Further, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while the rainfall sum proved more informative in low-lying river catchments and forested areas. In the presented study, rural areas of Peru exhibited higher vulnerability than urban areas. However, the general limitations of the methods and the dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation, a focus on novel algorithms seems sensible for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures are suggested. For damage modelling, the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, improving the data basis in the damage domain is currently regarded as more important than the selection of algorithms.
Since 1971, the Freudenthal Institute has developed an approach to mathematics education named Realistic Mathematics Education (RME). The philosophy of RME is based on Hans Freudenthal's concept of 'mathematics as a human activity'. Hans Freudenthal (1905-1990), a mathematician and educator, believed that 'ready-made mathematics' should not be taught in school. Instead, he urged that students be offered 'realistic situations' so that they can rediscover the path from informal to formal mathematics. Although mathematics education in Vietnam has some achievements, it still faces several challenges. Recently, the reform of teaching methods has become an urgent task in Vietnam, and Vietnamese mathematics education appears to lack the necessary theoretical frameworks. At first sight, the philosophy of RME is suitable for orienting the reform of teaching methods in Vietnam. However, the potential of RME for mathematics education, as well as the feasibility of applying RME to the teaching of mathematics, is still an open question in Vietnam. The primary aim of this dissertation is to investigate the possibilities for applying RME to the teaching and learning of mathematics in Vietnam and to answer the question 'how could RME enrich Vietnamese mathematics education?'. The research emphasizes the teaching of geometry in Vietnamese middle schools. More specifically, the dissertation carries out the following research tasks: • analyzing the characteristics of Vietnamese mathematics education in the 'reformed' period (from the early 1980s to the early 2000s) and at present; • conducting a survey of the views of 152 middle school teachers from several Vietnamese provinces and cities on Vietnamese mathematics education; • analyzing RME, including Freudenthal's viewpoints on RME and the characteristics of RME; • discussing how to design RME-based lessons and how to apply these lessons to teaching and learning in Vietnam; • testing RME-based lessons in a Vietnamese middle school; • analyzing the feedback from the students' worksheets and the teachers' reports, including the potential of RME-based lessons for Vietnamese middle schools and the difficulties the teachers and their students encountered with RME-based lessons; • discussing proposals for applying RME-based lessons to the teaching and learning of mathematics in Vietnam, including suggestions for teachers who will apply these lessons and the design of courses for in-service and pre-service teachers. This research reveals that, although teachers and students may encounter some obstacles while teaching and learning with RME-based lessons, RME could become a promising approach for mathematics education and could be applied effectively to the teaching and learning of mathematics in Vietnamese schools.
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or drought stress), show how the biological experiment is reflected at the molecular level. This information is helpful for understanding molecular behaviour and for identifying molecules, or combinations of molecules, that characterise a specific biological condition (e.g., a disease). This work shows the potential of component extraction algorithms to identify the major factors that influenced the observed data. These can be the expected experimental factors such as time or temperature, as well as unexpected factors such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components. Each component is a combination of all original variables. The classical approach for this purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises the variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data. The condition of independence between the components of ICA fits more naturally our assumption of individual (independent) factors that influence the data. This higher potential of ICA is demonstrated by a crossing experiment with the model plant Arabidopsis thaliana (thale cress). The experimental factors could be well identified and, in addition, ICA could even detect a technical artefact. However, in continuous observations such as time-course experiments, the data generally show a nonlinear distribution. To analyse such nonlinear data, a nonlinear extension of PCA is used. This nonlinear PCA (NLPCA) is based on a neural network algorithm. The algorithm is adapted to be applicable to incomplete molecular data sets, and thus also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated by a cold stress experiment with Arabidopsis thaliana. The results of component analysis can be used to build a molecular network model. Since it includes functional dependencies, it is termed a functional network. Applied to the cold stress data, it is shown that functional networks are appropriate for visualising biological processes and thereby reveal molecular dynamics.
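The contrast between variance-maximising PCA and independence-seeking ICA can be sketched as follows; the two mixed source signals are synthetic stand-ins for independent experimental factors influencing many observed "genes", not data from the thesis.

```python
# A minimal sketch contrasting PCA and ICA on synthetic "expression" data:
# two independent source factors are mixed into many observed variables.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
n_samples = 200
s1 = np.sign(np.sin(np.linspace(0, 8 * np.pi, n_samples)))  # square-wave factor
s2 = rng.laplace(size=n_samples)                            # spiky factor
S = np.c_[s1, s2]

A = rng.normal(size=(2, 50))               # mixing into 50 observed "genes"
X = S @ A + 0.05 * rng.normal(size=(n_samples, 50))

pca = PCA(n_components=2).fit_transform(X)        # maximises variance only
ica = FastICA(n_components=2, random_state=0).fit_transform(X)

# ICA components each correlate with one source; PCA components mix them.
for name, Z in [("PCA", pca), ("ICA", ica)]:
    corr = np.abs(np.corrcoef(Z.T, S.T)[:2, 2:])
    print(name, np.round(corr, 2))
```

The printed correlation matrices illustrate the point made above: ICA recovers one independent factor per component, while PCA blends them according to variance.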
The Tibetan Plateau is the largest elevated landmass in the world and profoundly influences atmospheric circulation patterns such as the Asian monsoon system. This area has therefore come increasingly into the focus of palaeoenvironmental studies. This thesis evaluates the applicability of organic biomarkers for palaeolimnological purposes on the Tibetan Plateau, with a focus on biomarkers derived from aquatic macrophytes. Submerged aquatic macrophytes must be considered to contribute significantly to the sedimentary organic matter owing to their high abundance in many Tibetan lakes. Because of their carbon metabolism they can show highly 13C-enriched biomass, so for the interpretation of δ13C values in sediment cores it is crucial to understand to what extent aquatic macrophytes contribute to the isotopic signal of the sediments in Tibetan lakes and how variations can be explained in a palaeolimnological context. Additionally, their high abundance makes macrophytes interesting as potential recorders of lake water δD. Hydrogen isotope analysis of biomarkers is a rapidly evolving approach to reconstructing past hydrological conditions and is therefore of special relevance on the Tibetan Plateau, owing to the direct linkage between variations of monsoon intensity and changes in regional precipitation/evaporation balances. A set of surface sediment and aquatic macrophyte samples from the central and eastern Tibetan Plateau was analysed for the composition as well as the carbon and hydrogen isotopes of n-alkanes. It was shown how variable the δ13C values of bulk organic matter and leaf lipids can be in submerged macrophytes, even within a single species, and how strongly these parameters in the corresponding sediments are affected by the macrophytes. The contribution of macrophytes, estimated by means of a binary isotopic model, was calculated to be up to 60% (mean: 40%) of total organic carbon and up to 100% (mean: 66%) of mid-chain n-alkanes. Hydrogen isotopes of n-alkanes turned out to record the δD of meteoric water of the summer precipitation. The apparent enrichment factor between water and n-alkanes was in the range of previously reported values (≈ -130‰) at the most humid sites, but smaller (average: -86‰) at sites with a negative moisture budget. This indicates an influence of evaporation and evapotranspiration on the δD of the source water of aquatic and terrestrial plants. The offset between the δD of mid- and long-chain n-alkanes was close to zero in most of the samples, suggesting that lake water as well as soil and leaf water are affected to a similar extent by these effects. To apply biomarkers in a palaeolimnological context, the aliphatic biomarker fraction of a sediment core from Lake Koucha (34.0° N; 97.2° E; eastern Tibetan Plateau) was analysed for compound concentrations, δ13C and δD values. Before ca. 8 cal ka BP, the lake was dominated by aquatic macrophyte-derived mid-chain n-alkanes, while after 6 cal ka BP high concentrations of a C20 highly branched isoprenoid indicate a predominance of phytoplankton. These two principally different states of the lake were linked by a transition period with high abundances of microbial biomarkers. δ13C values were relatively constant for long-chain n-alkanes, while mid-chain n-alkanes showed variations between -23.5 and -12.6‰. The highest values were observed for the assumed period of maximum macrophyte growth during the late glacial and for the phytoplankton maximum during the middle and late Holocene.
Therefore, the enriched values were interpreted as being caused by carbon limitation, which in turn was induced by high macrophyte and primary productivity, respectively. Hydrogen isotope signatures of mid-chain n-alkanes were shown to track a previously deduced episode of reduced moisture availability between ca. 10 and 7 cal ka BP, indicated by a 20‰ shift towards higher δD values. Indications of cooler episodes at 6.0, 3.1 and 1.8 cal ka BP were gained from drops in biomarker concentrations, especially microbially derived hopanoids, and from coincident shifts towards lower δ13C values. These episodes correspond well with cool events reported from other locations on the Tibetan Plateau as well as in the Northern Hemisphere. To conclude, the study of recent sediments and plants improved the understanding of the factors affecting the composition and isotopic signatures of aliphatic biomarkers in sediments. The concentrations and isotopic signatures of the biomarkers in Lake Koucha could be interpreted in a palaeolimnological context and contribute to the knowledge of the lake's history. Aquatic macrophyte-derived mid-chain n-alkanes were especially useful, owing to their high abundance in many Tibetan lakes and their ability to record major changes in lake productivity and palaeo-hydrological conditions. They therefore have the potential to contribute to a fuller understanding of past climate variability in this key region for atmospheric circulation systems.
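The binary isotopic model used above to estimate the macrophyte contribution has the standard two-end-member mixing form (generic notation; the end-member values used in the thesis are not reproduced here):

$$f_{\mathrm{macro}} \;=\; \frac{\delta^{13}\mathrm{C}_{\mathrm{sediment}} - \delta^{13}\mathrm{C}_{\mathrm{terrestrial}}}{\delta^{13}\mathrm{C}_{\mathrm{macrophyte}} - \delta^{13}\mathrm{C}_{\mathrm{terrestrial}}},$$

where $f_{\mathrm{macro}}$ is the fractional macrophyte contribution and the denominators are the isotopic signatures of the pure macrophyte and pure terrestrial end members; applied to measured sediment values, this yields contribution estimates such as the up to 60% (TOC) and up to 100% (mid-chain n-alkanes) reported above.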
This thesis consists of three essays. The first essay ("Labour market policy in South-East Europe: from transformation to EU integration") discusses the economic and political framework conditions in South-East Europe and the associated developments on the respective labour markets since 1991. The focus lies on the influence of unemployment (as a problem independent of the economic system) on the EU integration process in the Yugoslav successor states and Albania.
What influence do skill-related and regional mismatch have on unemployment in Croatia? To answer this question, the second chapter of this thesis ("Unemployment in the transformation process: skill-related and regional mismatch in Croatia") examines mismatch both statically, using mismatch indicators, and dynamically, within the framework of the matching function. Using panel data for nine occupational groups and 21 regions between January 2004 and June 2015, this influence is estimated with fixed-effects models.
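For orientation, empirical mismatch studies of this kind typically estimate a Cobb-Douglas matching function; a standard log-linear panel specification (a textbook form, not necessarily the exact one used in the chapter) reads:

$$\ln M_{it} \;=\; \ln A + \alpha \ln U_{it} + \beta \ln V_{it} + \mu_i + \varepsilon_{it},$$

where $M_{it}$ denotes new hires (matches) in occupational group or region $i$ at time $t$, $U_{it}$ unemployment, $V_{it}$ vacancies, $\mu_i$ the fixed effect captured by the fixed-effects estimator, and mismatch manifests itself as shifts in the matching efficiency $A$.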
Does the alignment of unemployment insurance legislation with EU standards improve labour market outcomes in the countries of South-East Europe? Using panel data for the period 1996-2014, the third essay ("Incomplete integration: a difference-in-differences analysis of South-East European labour markets") estimates this effect for five South-East European countries (Albania, Croatia, Macedonia, Montenegro and Serbia) within a difference-in-differences framework.
The overall programme "arborescent numbers" is to carry out, in analogy, the constructions leading from the natural numbers (N) to the positive fractional numbers (Q+) and on to the positive real numbers (R+), but beginning with (specific) binary trees instead of natural numbers. N can be regarded as the associative binary trees. The binary trees B and the left-commutative binary trees P allow the hassle-free definition of arbitrarily high arithmetic operations (hyper ... hyperpowers). To construct the division trees, the algebraic structure "coppice" is introduced, which is a group with an addition over which the multiplication is right-distributive. Q+ is the initial associative coppice. The present work accomplishes one step in the programme "arborescent numbers": the construction of the arborescent equivalent(s) of the positive fractional numbers. These equivalents are the "division binary trees" and the "fractional trees". A representation with decidable word problem is given for each of them. The set of functions f: R1 -> R1 generated from the identity by taking powers is isomorphic to P and can be embedded into a coppice by taking inverses.
Researchers have taken many approaches to studying the complexities of the mammalian taste system; however, the molecular mechanisms of taste processing in the early structures of the central taste pathway remain unclear. More recently, the Arc catFISH (cellular compartment analysis of temporal activity by fluorescent in situ hybridisation) method has been used in our lab to study neural activation following taste stimulation in the first central structure of the taste pathway, the nucleus of the solitary tract. This method uses the immediate early gene Arc as a neural activity marker to identify taste-responsive neurons. Arc plays a critical role in memory formation and is necessary for the formation of conditioned taste aversion memory. In the nucleus of the solitary tract, only bitter taste stimulation resulted in increased Arc expression; stimulation with tastants of other taste qualities did not. The primary target of gustatory NTS neurons is the parabrachial nucleus (PbN) and, like Arc, the PbN plays an important role in conditioned taste aversion learning.
The aim of this thesis is to investigate Arc expression in the PbN following taste stimulation to elucidate the molecular identity and function of Arc-expressing, taste-responsive neurons. Naïve and taste-conditioned mice were stimulated with tastants from each of the five basic taste qualities (sweet, salty, sour, umami, and bitter), with additional bitter compounds included for comparison. The expression patterns of Arc and marker genes were analysed using in situ hybridisation (ISH). The Arc catFISH method was used to observe taste-responsive neurons following each taste stimulation. A double fluorescent in situ hybridisation protocol was then established to investigate possible neuropeptide genes involved in neural responses to taste stimulation.
The results showed that bitter taste stimulation induces increased Arc expression in the PbN in naïve mice. This was not true for other taste qualities. In mice conditioned to find an umami tastant aversive, subsequent umami taste stimulation resulted in an increase in Arc expression similar to that seen in bitter-stimulated mice. Taste-responsive Arc expression was denser in the lateral PbN than the medial PbN. In mice that received two temporally separated taste stimulations, each stimulation time-point showed a distinct population of Arc-expressing neurons, with only a small population (10-18%) of neurons responding to both stimulations. This suggests that either each stimulation event activates a different population of neurons, or that Arc is marking something other than simple cellular activation, such as long-term cellular changes that do not occur twice within a 25-minute time frame. Investigation using the newly established double-FISH protocol revealed that, of the bitter-responsive Arc expressing neuron population: 16 % co-expressed calcitonin RNA; 17 % co-expressed glucagon-like peptide 1 receptor RNA; 17 % co-expressed hypocretin receptor 1 RNA; 9 % co-expressed gastrin-releasing peptide RNA; and 20 % co-expressed neurotensin RNA. This co-expression with multiple different neuropeptides suggests that bitter-activated Arc expression mediates multiple neural responses to the taste event, such as taste aversion learning, suppression of food intake, increased heart rate, and involves multiple brain structures such as the lateral hypothalamus, amygdala, bed nucleus of the stria terminalis, and the thalamus.
The increase in Arc expression suggests that bitter taste stimulation, and umami taste stimulation in umami-averse animals, may result in an enhanced state of Arc-dependent synaptic plasticity in the PbN, allowing animals to form taste-relevant memories of these aversive compounds more readily. The results on neuropeptide RNA co-expression suggest the amygdala, bed nucleus of the stria terminalis, and thalamus as possible targets of bitter-responsive Arc-expressing PbN neurons.
Systems of Systems (SoS) have received much attention recently. This thesis focuses on SoS that are built on top of the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. We understand an SoS as an ensemble of single autonomous systems that are integrated into a larger system, the SoS. What makes these systems interesting is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, together with the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. The system does have a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS, and halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems, so a failure in the software can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and relies essentially on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme able to cope with the complexity constraints implied by the SoS's structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e. the adding and removing of components and services, would be inadequate; our approach therefore directly supports the modelling and verification of evolution.
This thesis presents methods for the automated synthesis of flexible chip multiprocessor systems from parallel programs targeted at FPGAs, exploiting both task-level parallelism and architecture customization. Automated synthesis is necessitated by the complexity of the design space. A detailed description of the design space is provided in order to determine which parameters should be modeled to facilitate automated synthesis by optimizing a cost function; the emphasis is placed on inclusive modeling of parameters from the application, architectural and physical subspaces, as well as on their joint coverage, in order to avoid pre-constraining the design space. Given a parallel program and an IP library, the automated synthesis problem is to simultaneously (i) select processors, (ii) map and schedule tasks onto them, and (iii) select one or several networks for inter-task communication such that design constraints and optimization objectives are met. The research objective of this thesis is to find a suitable model for automated synthesis and to evaluate methods of using the model for architectural optimizations. Our contributions are a holistic approach for the design of such systems, corresponding models to facilitate automated synthesis, an evaluation of optimization methods using state-of-the-art integer linear and answer set programming, as well as the development of synthesis heuristics to address runtime challenges.
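As a rough illustration of the problem structure only (the thesis itself uses ILP/ASP models and heuristics to cope with realistic instances), a toy exhaustive search over the coupled decisions of processor selection and task mapping might look as follows; all names and numbers are hypothetical:

```python
from itertools import product

# Toy version of the synthesis problem: simultaneously pick processor types
# and a task mapping that minimise total area cost under a latency constraint.
# (Hypothetical illustration, not the thesis' models; network selection is
# omitted to keep the sketch short.)

TASKS = {"t0": 100, "t1": 80, "t2": 60}        # task -> workload
PROCS = {"small": (1.0, 2), "big": (2.0, 5)}   # type -> (speedup, area cost)
MAX_LATENCY = 150.0
N_SLOTS = 2                                    # number of processor slots

best = None
for types in product(PROCS, repeat=N_SLOTS):                    # (i) select processors
    for mapping in product(range(N_SLOTS), repeat=len(TASKS)):  # (ii) map tasks
        # Load of a slot: sum of its tasks' workloads divided by the speedup.
        loads = [0.0] * N_SLOTS
        for (task, work), slot in zip(TASKS.items(), mapping):
            loads[slot] += work / PROCS[types[slot]][0]
        if max(loads) > MAX_LATENCY:
            continue                                            # constraint violated
        area = sum(PROCS[t][1] for t in types)                  # objective
        if best is None or area < best[0]:
            best = (area, types, mapping)

print(best)  # e.g. (4, ('small', 'small'), (0, 1, 1))
```

An ILP or ASP encoding replaces this enumeration with declarative constraints, which is what makes larger design spaces tractable.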
After his Abitur, Bobrowski had expressed the intention of studying art history, but war and captivity thwarted his plan: as a member of the Wehrmacht he was released from military service only once, in the winter of 1941/1942, for a single semester of study at the University of Berlin. Bobrowski was particularly and lastingly impressed by the lecture course "Deutsche Kunst der Goethezeit" given by the chair holder Wilhelm Pinder. Despite this fundamental influence, however, Pinder's ideological background never became manifest in Bobrowski's poems. After his return from Soviet captivity at Christmas 1949, university study was no longer conceivable for the now thirty-two-year-old. His lifelong intermedial engagement with works of visual art in his œuvre can nevertheless be interpreted as an expression of his wide-ranging cultural-historical interests and inclinations. The phases of the poet's life correlate with a development of motifs in his picture poems: in particular, the inviolable aesthetics of great works of art helped him to overcome the horrors of the last years of the war and the privations of Soviet captivity. Didactic and moral aims initially shaped the poems written in the years after his return home, before Bobrowski managed to free himself from this type of poem in both content and form and increasingly began to write poems that took on cultural-historical dimensions and placed historical, mythological, biblical and religious-philosophical themes in contexts spanning several epochs. The poems about the artists Jawlensky and Calder also touch on aspects of the cultural landscape. In the last decade of his life, Bobrowski became increasingly interested in twentieth-century art, while modern architecture remained excluded from his work.
Architecture forms a leitmotif in Bobrowski's poetry. The figurative meaning of the sacred and secular buildings named in the poems, as well as of urban and village ensembles and of individual parts of buildings, changed several times over the years. Starting from traditional early poems in iambic metre with paired rhymes, in which architectural elements form part of a perception that excludes everything beyond the aesthetic, the meaning of sacred and secular buildings in Bobrowski's poetry changed for the first time during the war years in Russia, which he spent as a member of the Wehrmacht at Lake Ilmen. In the odes written at that time, the architectural relics bear witness to suffering, death and destruction. Still missing, however, is the later so central idea of guilt, which was addressed only in retrospect, in the poems written between his return from captivity and his early death.
Towards the end of the war and during the years of captivity, Bobrowski returned to themes of his homeland, and the architecture in his poems became an aesthetically exalted focal point of his longing for East Prussia and the Memel territory. It was in captivity that the aspect of the sublime first appeared in his poems, with reference both to painting and to architecture. After his return to Berlin, this idea was spun further in the poems about the architecture of Gothic cathedrals and the built heritage of classicism, yet in the poems written at that time the cultural heritage of Europe also stands for historical injustice and a heavy guilt reaching far back into the past.
In the following years, Bobrowski turned away from this criticism of the whole continent and concentrated on the guilt of the Germans towards the peoples of Eastern Europe. With this, architecture too acquired a new meaning in his poems. The relics of the castles of the Teutonic Order bear witness to the rule of the medieval conquerors while merging with nature: the emblematic quality of the architecture becomes part of the landscape. In the last decade of his life, he increasingly wrote poems relating to parks and urban green spaces.
The poet relied not only on personal experience but at times also on pictorial sources, without ever having seen the original. The poems about Chagall and Gauguin are difficult to access without the realisation that they refer to reproductions in slim popular-science books that Bobrowski acquired shortly before writing the respective poems. The situation is different with the Russian churches that found their way into his poetry. Bobrowski saw them all himself during the war; most of them appear to survive to this day and can be identified with some certainty, a task to which the poet's letters from that period also contribute.
Arthur Ewert (1890-1959)
(2015)
Arthur Ewert (1890-1959) was an important functionary of the Communist Party of Germany (KPD) and the Communist International in the 1920s and early 1930s.
He was born into the family of a poor farmer in East Prussia. After finishing school he went to Berlin to serve an apprenticeship as a saddler. Through the Berlin working-class youth movement he came into contact with the Social Democratic Party of Germany, which he joined in 1908.
In May 1914 he emigrated to North America together with his long-time companion and later wife Elise Saborowski (1886-1939), and there immediately joined the socialist movement. At the beginning of 1919 he was among the co-founders of the first Communist Party of Canada.
In the summer of 1919 he returned to Germany and became a member of the KPD, which had been founded a few months earlier.
At the KPD's Leipzig party congress in February 1923 he was elected to the twenty-member central committee of his party and thus rose into its inner leadership circle.
After the failed "German October Revolution" in the autumn of 1923, he fought together with Ernst Meyer, Hugo Eberlein, Wilhelm Pieck and others for the survival of the KPD, but his group did not succeed in preventing the victory of the left and ultra-left in the internal party power struggle. Ewert was politically sidelined and left the party leadership for more than a year.
During this time he carried out various tasks for the Communist International. As early as June 1923 he had been rapporteur on the situation in the Norwegian Labour Party, and from the end of 1924 he was an emissary to the Communist Party of Great Britain. In the summer and early autumn of 1927 he spent several months in the USA.
In the summer of 1925 he was brought back into the leadership of the KPD at the instigation of the Communist International. He contributed substantially to stabilising the party leadership under Ernst Thälmann and to orienting it, at least temporarily, towards a pragmatic political course.
With the KPD's renewed "left" turn from the beginning of 1928, he was stigmatised as a "conciliator" and increasingly became the target of attacks within the party. The attempt at a liberating counter-strike, exploiting the so-called Wittorf affair in the autumn of 1928, failed; by the summer of 1929 Arthur Ewert had been removed from all functions in the KPD at Stalin's urging and with Thälmann's express approval.
After the dissolution of the Reichstag and the associated loss of his Reichstag seat in July 1930, Ewert left German party work for good.
At the end of 1930 he was appointed head of the South American Bureau of the Communist International in Montevideo, the capital of Uruguay. He was thus responsible for the direct guidance of the communist parties in the so-called Southern Cone of South America. It was during this period that he made his first contacts with Luiz Carlos Prestes, the legendary "Knight of Hope", with whom he worked in Brazil from the beginning of 1935.
From 1932 to 1934 Arthur Ewert headed the bureau of the Communist International in Shanghai, where he played a decisive role in favour of Mao Tse-tung, whose political survival he secured in an internal power struggle of the Communist Party of China.
As a representative of the Communist International, Arthur Ewert was involved in the course of 1935 in the attempts to bring about a change of political power in Brazil, based on a broad coalition, the "National Liberation Alliance". After the failure of the uprising led by Prestes, he was arrested at the end of 1935. As a result of the barbaric torture he suffered in Brazilian custody, Arthur Ewert lost his mind.
He was released in May 1945 as the result of an amnesty. In 1947 his sister succeeded in bringing him back to the Soviet occupation zone. The doctors there could only conclude that a cure was impossible. Arthur Ewert spent the rest of his life in a nursing home in Eberswalde, where he died in 1959.
The East Asian monsoons characterize the modern-day Asian climate, yet their geological history and driving mechanisms remain controversial. The southeasterly summer monsoon provides moisture, whereas the northwesterly winter monsoon sweeps up dust from the arid Asian interior to form the Chinese Loess Plateau. The onset of this loess accumulation, and therefore of the monsoons, was long thought to date to 8 million years ago (Ma). However, in recent years these loess records have been extended further back in time to the Eocene (56-34 Ma), a period characterized by significant changes in both the regional geography and the global climate. Yet the extent to which these reconfigurations drove atmospheric circulation, and whether the loess-like deposits are monsoonal, remains debated. In this thesis, I study the terrestrial deposits of the Xining Basin previously identified as Eocene loess in order to derive the paleoenvironmental evolution of the region and identify the geological processes that have shaped the Asian climate.
I review dust deposits in the geological record and conclude that these are commonly represented by a mix of both windblown and water-laid sediments, in contrast to the pure windblown material known as loess. Yet by using a combination of quartz surface morphologies, provenance characteristics and distinguishing grain-size distributions, windblown dust can be identified and quantified in a variety of settings. This has important implications for tracking aridification and dust fluxes throughout the geological record.
Past reversals of Earth's magnetic field are recorded in the deposits of the Xining Basin, and I use these together with a dated volcanic ash layer to accurately constrain their age to the Eocene. A combination of pollen assemblages, low dust abundances and other geochemical data indicates that the early Eocene was relatively humid, suggesting an intensified summer monsoon due to the warmer greenhouse climate at this time. A subsequent shift from predominantly freshwater to salt lakes reflects a long-term aridification trend, possibly driven by global cooling and the continuous uplift of the Tibetan Plateau. Superimposed on this aridification are wetter intervals, reflected in more abundant lake deposits, which correlate with highstands of the inland proto-Paratethys Sea. This sea covered large parts of the Eurasian continent and thereby provided additional moisture to the winter-time westerlies during the middle to late Eocene.
The long-term aridification culminated in an abrupt shift at 40 Ma, reflected by the onset of windblown dust deposition, an increase in steppe-desert pollen, the occurrence of high-latitude orbital cycles and northwesterly winds recorded in deflated salt deposits. Together, these indicate the onset of a Siberian high-pressure system, triggered by a major sea retreat from the Asian interior, which drove the East Asian winter monsoon as well as dust storms. These results therefore show that the proto-Paratethys Sea, though less well recognized than the Tibetan Plateau and global climate, has been a major driver in setting up the modern-day climate of Asia.
This project describes the nominal, verbal and 'truncation' systems of Awing and explains the syntactic and semantic functions of the multifunctional lə́ (LE) morpheme in copular and wh-focused constructions. Awing is a Grassfields Bantu language spoken in the North West region of Cameroon. The work begins with morphological processes, viz. deverbals, compounding, reduplication and borrowing, and a thorough presentation of the pronominal system, and then takes up verbal categories, viz. tense, aspect, mood, verbal extensions, negation, adverbs and the triggers of a homorganic N(asal) prefix that attaches to the verb and other verbal categories. Awing grammar also exhibits a very unusual phenomenon whereby nouns and verbs take long and short forms; a chapter entitled 'Truncation' is dedicated to this phenomenon. It is observed that the truncation process does not apply to bare singular NPs, proper names or nouns derived via morphological processes. On the other hand, with the exception of the 1st person non-emphatic possessive determiner and the class 7 noun prefix, nouns generally take the truncated form with modifiers (i.e., articles, demonstratives and other possessives). It is concluded that nominal truncation reflects movement within the DP system (Abney 1987). Truncation of the verb occurs in three contexts: a mass/plurality conspiracy (or lattice structuring in terms of Link 1983) between the verb and its internal argument (i.e., the direct object); a means to align (exhaustive) focus (in terms of Féry 2013); and a means to form polar questions.
The second part of the work focuses on the role of the LE morpheme in copular and wh-focused clauses. First, the syntax of the Awing copular clause is presented, and it is shown that copular clauses in Awing have 'subject-focus' vs. 'topic-focus' partitions and that the LE morpheme indirectly relates such functions. Semantically, it is shown that LE expresses neither contrast nor exhaustivity in copular clauses. Turning to wh-constructions, the work adheres to Hamblin's (1973) idea that the meaning of a question is the set of its possible answers and, based on Rooth's (1985) underspecified semantic notion of focus alternatives, concludes that the LE morpheme is not a focus marker (FM) in Awing: LE does not generate or indicate the presence of alternatives (Krifka 2007). Rather, the LE morpheme can associate with wh-elements as a focus-sensitive operator with semantic import that operates on the focus alternatives by presupposing an exhaustive answer, among other notions. With focalized categories, the project further substantiates, via a number of diagnostics, the claim in Fominyam & Šimík (2017) that exhaustivity is part of the semantics of the LE morpheme and not derived via contextual implicature. Hence, unlike in copular clauses, the LE morpheme with wh-focused categories is analysed as a morphological exponent of a functional head Exh corresponding to Horvath's (2010) EI (Exhaustive Identification). The work ends with the syntax of verb focus and negation and modifies the idea in Fominyam & Šimík (2017) that the focalized verb associating with the exhaustive (LE) particle is a lower copy of the finite verb that has been moved to Agr. It is argued that the LE-focused verb 'cluster' is an instantiation of adjunction. The conclusion is that verb doubling under verb focus in Awing is neither a realization of two copies of one and the same verb (Fominyam and Šimík 2017), nor the result of a copy triggered by a focus marker (Aboh and Dyakonova 2009); rather, the focalized copy is merged directly as the complement of LE, forming a type of adjoined cluster.
Proteins are substantially involved in virtually all processes in living cells, and they are also used in many ways in biotechnology. A protein consists of a chain of amino acids. Frequently, several of these chains assemble into larger structures and functional units, so-called protein complexes. It was recently shown that protein complex formation can already take place during protein biosynthesis (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential both for cellular processes and for biotechnological applications. Experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, but so far not the dynamics of complex formation on different time scales. Fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, which builds on experimental findings, permits a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models reproduce as realistically as possible the experimental systems of the cooperation partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, University of Heidelberg) in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). The theoretical model is parameterised with the help of a simplified expression system in which the proteins can bind only to the chip surface but not to each other. In this simplified in vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and then decreases monotonically. In both the full in vitro system and the in vivo system, two competing assembly pathways coexist: in the in vitro system, complex formation takes place either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in the in vivo system, co- and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in vitro system is time-dependent and can be influenced, among other things, by the limitation and strength of the binding sites on the chip surface. In the in vivo system, the spatial distance between the synthesis sites of the two protein components influences complex formation only if the subunits degrade quickly. In this case, co-translational assembly clearly dominates even on short time scales, whereas with stable subunits there is a shift from the dominance of post-translational assembly to a slight dominance of co-translational assembly. Besides the dynamics, the in silico models can also represent, among other things, the localisation of complex formation and binding, which allows the theoretical predictions to be compared with experimental data and thus the models to be validated. The in silico approach presented here complements the experimental methods and thus makes it possible to interpret their results and to derive new insights from them.
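To illustrate how such competing assembly pathways can be modelled, the following is a minimal stochastic (Gillespie-type) sketch of dimer formation with a co-translational and a post-translational route; the reactions and rate constants are hypothetical placeholders, not the parameterisation developed in the thesis:

```python
import random

# Minimal Gillespie-type simulation of dimer assembly. Species: nascent A
# (still at the ribosome), free A, free B, and the dimer AB. Two competing
# routes: co-translational binding (B binds nascent A) and post-translational
# binding (free A binds free B). All rates are hypothetical.

def simulate(t_end=100.0, k_prod=1.0, k_release=0.5,
             k_co=0.05, k_post=0.01, k_deg=0.02):
    t = 0.0
    nascent_a, free_a, free_b, dimer = 0, 0, 0, 0
    co_events, post_events = 0, 0
    while t < t_end:
        rates = [
            k_prod,                     # synthesis of a nascent A
            k_prod,                     # synthesis of a free B
            k_release * nascent_a,      # nascent A leaves the ribosome
            k_co * nascent_a * free_b,  # co-translational binding
            k_post * free_a * free_b,   # post-translational binding
            k_deg * free_a,             # degradation of free A
            k_deg * free_b,             # degradation of free B
        ]
        total = sum(rates)
        t += random.expovariate(total)   # time to the next reaction
        r = random.uniform(0.0, total)   # pick a reaction proportional to its rate
        if r < rates[0]:
            nascent_a += 1
        elif r < sum(rates[:2]):
            free_b += 1
        elif r < sum(rates[:3]):
            nascent_a -= 1; free_a += 1
        elif r < sum(rates[:4]):
            nascent_a -= 1; free_b -= 1; dimer += 1; co_events += 1
        elif r < sum(rates[:5]):
            free_a -= 1; free_b -= 1; dimer += 1; post_events += 1
        elif r < sum(rates[:6]):
            free_a -= 1
        else:
            free_b -= 1
    return dimer, co_events, post_events

dimers, co, post = simulate()
print(f"dimers: {dimers}, co-translational: {co}, post-translational: {post}")
```

Varying the degradation rate in such a sketch lets one explore the qualitative effect described above, namely that fast subunit degradation favours the co-translational route.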
Large-scale patterns of global land-use change are very frequently accompanied by natural habitat loss. Assessing the consequences of habitat loss for the remaining natural and semi-natural biotopes requires the inclusion of cumulative effects at the landscape level. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though so far with few examples of its application to ecological assessments. A comprehensive biotope vulnerability analysis allows the identification of the areas most affected by landscape change that at the same time have the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as ecological and conservation characteristics of the respective resident species communities. The final vulnerability index combined seven largely independent indicators, which covered the exposure, sensitivity and adaptive capacity of biotopes to landscape changes; a sketch of such a combination step follows below. Results for biotope vulnerability were provided at the regional level, which seems to be an appropriate extent for spatial planning and for designing the distribution of nature reserves.
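The combination step can be pictured with a small sketch: each indicator is normalised to a common scale and averaged into a composite score (the indicator names, the equal weights, and the inversion of adaptive capacity below are illustrative assumptions, not the index definition used in the thesis):

```python
import numpy as np

# Sketch of a composite vulnerability index from seven indicators for five
# hypothetical biotopes. Higher values mean higher vulnerability.

rng = np.random.default_rng(0)
indicators = {name: rng.random(5) for name in [
    "exposure_1", "exposure_2", "sensitivity_1", "sensitivity_2",
    "sensitivity_3", "adaptive_capacity_1", "adaptive_capacity_2"]}

def min_max(x):
    # Rescale an indicator to [0, 1] so that different units become comparable.
    return (x - x.min()) / (x.max() - x.min())

scores = []
for name, values in indicators.items():
    v = min_max(values)
    if name.startswith("adaptive_capacity"):
        v = 1.0 - v   # high adaptive capacity lowers vulnerability
    scores.append(v)

vulnerability = np.mean(scores, axis=0)   # equal-weight composite
print(vulnerability)
```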
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of the biotope vulnerability analysis at the regional level indicated that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview as well as information on individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of the population, species and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, helping to identify options for slowing habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites by enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
The development of speaking competence is widely regarded as a central aspect of second language (L2) learning. It may be questioned, however, whether the currently predominant ways of conceptualising the term fully do justice to the complexity of the construct: although there is growing recognition that language primarily constitutes a tool for communication and participation in social life, conceptualisations of speaking competence as yet rarely incorporate the ability to interact and co-construct meaning with co-participants. Accordingly, the skills that allow for the successful accomplishment of interactional tasks (such as orderly speaker change, or resolving hearing and understanding trouble) also remain largely unrepresented in language teaching and assessment. As fostering the ability to successfully use the L2 within social interaction should arguably be a main objective of language teaching, it appears pertinent to broaden the construct of speaking competence by incorporating interactional competence (IC). Despite growing research interest in the conceptualisation and development of (L2) IC, much of the materials and instruments required for its teaching and assessment, and thus for fostering a broader understanding of speaking competence in the L2 classroom, still await development. This book introduces an approach to the identification of candidate criterial features for the assessment of EFL learners' L2 repair skills. Based on a corpus of video-recorded interaction between EFL learners, and following conversation-analytic and interactional-linguistic methodology as well as drawing on basic premises of research in the framework of Conversation Analysis for Second Language Acquisition, differences between (groups of) learners in terms of their L2 repair conduct are investigated through qualitative and inductive analyses. Candidate criterial features are derived from the analysis results. This book not only contributes to the operationalisation of L2 IC (and of L2 repair skills in particular), but also lays the groundwork for the construction of assessment scales and rubrics geared towards the evaluation of EFL learners' L2 interactional skills.
Natural hazards can have serious societal and economic impacts. Worldwide, around one third of economic losses due to natural hazards are attributable to floods. The majority of natural hazards are triggered by weather-related extremes such as heavy precipitation, rapid snowmelt, or extreme temperatures. Some of them, and in particular floods, are expected to increase further in frequency and/or intensity in the coming decades due to the impacts of climate change. In this context, the European Alps are consistently identified as being particularly sensitive.
In order to enhance the resilience of societies to natural hazards, risk assessments are essential, as they can deliver comprehensive risk information to be used as a basis for effective and sustainable decision-making in natural hazards management. So far, current assessment approaches mostly focus on single societal or economic sectors (e.g., flood damage models largely concentrate on private-sector housing), and other important sectors, such as transport infrastructure, are widely neglected. However, transport infrastructure contributes considerably to economic and societal welfare, e.g. by ensuring the mobility of people and goods. In Austria, for example, the national railway network is essential for the European transit of passengers and freight as well as for opening up the complex Alpine topography. Moreover, a number of recent events have shown that railway infrastructure and transportation are highly vulnerable to natural hazards. As a consequence, the Austrian Federal Railways have had to cope with economic losses on the scale of several million euros as a result of flooding and other Alpine hazards.
The motivation of this thesis is to contribute to filling the gap in knowledge about damage to railway infrastructure caused by natural hazards by providing new risk information for actors and stakeholders involved in the risk management of railway transportation. Hence, in order to support decision-making towards a more effective and sustainable risk management, the following two shortcomings in natural-risk research are addressed: (i) the lack of dedicated models to estimate flood damage to railway infrastructure, and (ii) the scarcity of insights into possible climate change impacts on the frequency of extreme weather events, with a focus on future implications for railway transportation in Austria.
With regard to flood impacts on railway infrastructure, the empirically derived damage model Railway Infrastructure Loss (RAIL) proved suitable for reliably estimating both structural flood damage at exposed track sections of the Northern Railway and the resulting repair costs. The results show that the RAIL model is capable of identifying flood-risk hotspots along the railway network and thus facilitates the targeted planning and implementation of (technical) risk reduction measures. However, the findings of this study also show that the development and validation of flood damage models for railway infrastructure is generally constrained by the continuing lack of detailed event and damage data.
In order to provide flood risk information on a larger scale to support strategic flood risk management, the RAIL model was applied to the Austrian Mur River catchment using three different hydraulic scenarios as input and additionally considering an increased risk aversion of the railway operator. The results indicate that the model is able to deliver comprehensive risk information at the catchment level as well. It is furthermore demonstrated that risk aversion can have a marked influence on flood damage estimates for the study area and should hence be considered in the development of risk management strategies.
Looking at the results of the investigation into future frequencies of extreme weather events jeopardizing railway infrastructure and transportation in Austria, an increase in intense rainfall events and heat waves has to be expected, whereas heavy snowfall and cold days are likely to decrease. Furthermore, the results indicate that the frequencies of extremes are rather sensitive to changes in the underlying thresholds. This emphasizes the importance of carefully defining, validating, and, if needed, adapting the thresholds used to detect and forecast meteorological extremes. For this, a continuous and standardized documentation of damaging events and near-misses is a prerequisite.
Overall, the findings of the research presented in this thesis agree on the necessity to improve event and damage documentation procedures in order to enable the acquisition of comprehensive and reliable risk information via risk assessments and, thus, support strategic natural hazards management of railway infrastructure and transportation.
The main goal of this thesis is to explore the feasibility of using cross-lingual annotation projection as a method of alleviating the task of manual coreference annotation.
To reach our goal, we build the first trilingual parallel coreference corpus that encompasses multiple genres. For the annotation of the corpus, we develop common coreference annotation guidelines that are applicable to three languages (English, German, Russian) and include a novel domain-independent typology of bridging relations as well as state-of-the-art near-identity categories.
Thereafter, we design and perform several annotation projection experiments. In the first experiment, we implement a direct projection method with only one source language. Our results indicate that, already in a knowledge-lean scenario, our projection approach is superior to the most closely related work of Postolache et al. (2006). Since the quality of the resulting annotations is to a high degree dependent on the word alignment, we demonstrate how using limited syntactic information helps to further improve mention extraction on the target side. As a next step, in our second experiment, we show how exploiting two source languages helps to improve the quality of target annotations for both language pairs by concatenating annotations projected from two source languages. Finally, we assess the projection quality in a fully automatic scenario (using automatically produced source annotations), and propose a pilot experiment on manual projection of bridging pairs.
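To make the direct projection step concrete, the following minimal sketch projects mention spans through word alignments (the alignment format and the contiguous-hull heuristic are simplifying assumptions for illustration, not the exact procedure of the thesis):

```python
# Direct projection of mention spans through word alignments (sketch).

def project_mentions(source_mentions, alignment):
    """Project source-side mention spans onto the target side.

    source_mentions: list of (start, end) token spans, end exclusive.
    alignment: set of (source_index, target_index) word-alignment pairs.
    """
    projected = []
    for start, end in source_mentions:
        targets = sorted(t for s, t in alignment if start <= s < end)
        if targets:
            # Heuristic: take the contiguous hull of the aligned target tokens.
            projected.append((targets[0], targets[-1] + 1))
    return projected

# English "the old house" (tokens 0-2) aligned to German "das alte Haus".
alignment = {(0, 0), (1, 1), (2, 2)}
print(project_mentions([(0, 3)], alignment))  # -> [(0, 3)]
```

Noisy alignment pairs directly distort the projected spans, which is why the error analysis below singles out word alignment quality as a main source of projection errors.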
For each of the experiments, we carry out an in-depth error analysis, and we conclude that noisy word alignments, translation divergences and morphological and syntactic differences between languages are responsible for projection errors. We systematically compare and evaluate our projection methods, and we investigate the errors both qualitatively and quantitatively in order to identify problematic cases. Finally, we discuss the applicability of our method to coreference annotations and propose several avenues of future research.
Ecosystem services (ESs) are defined as the contributions that ecosystems make to human wellbeing and are increasingly used as an approach to explore the importance of ecosystems for humans through their valuation. Although value plurality was recognised long before the mainstreaming of ESs research, socio-cultural valuation is still underrepresented in ESs assessments. The central goal of this PhD dissertation is to explore the potential of socio-cultural valuation methods for operationalising ESs research in land management. To address this, I formulated three research objectives that are briefly outlined below and relate to the three studies conducted during this dissertation.
The first objective relates to the assessment of the current role of socio-cultural valuation in ESs research. Human values are central to ESs research, yet non-monetary socio-cultural valuation methods have been found to be underrepresented in the field of ESs science. In view of the unbalanced consideration of value domains and the conceptual uncertainties, I perform a systematic literature review aiming to answer the research question: To what extent have socio-cultural values been addressed in ESs assessments?
The second objective aims to test socio-cultural valuation methods of ESs and their relevance for land use preferences by exploring their methodological opportunities and limitations. Socio-cultural valuation methods have only recently become a focus in ESs research and therefore bear various uncertainties in regard to their methodological implications. To overcome these uncertainties, I analysed responses to a visitor survey. The research questions related to the second objective were: What are the implications of different valuation methods for ESs values? To what extent are land use preferences explained by socio-cultural values of ESs?
The third objective addressed in this dissertation is the implementation of ESs research into land management through socio-cultural valuation. Though it is emphasised that the ESs approach can assist decision making, there is little empirical evidence of the effect of ESs knowledge on land management. I proposed a way to implement transdisciplinary, spatially explicit research on ESs by answering the following research questions: Which landscape features underpinning ESs supply are considered in land management? How can participatory approaches accounting for ESs be operationalised in land management?
The empirical research resulted in five main findings that provide answers to the research questions. First, this dissertation provides evidence that socio-cultural values are an integral part of ESs research. I found that they can be assessed for provisioning, regulating, and cultural services, though they are linked to cultural services to a greater degree. Socio-cultural values have been assessed by both monetary and non-monetary methods, and their assessment is effectively facilitated by stakeholder participation. Second, I found that different methods of socio-cultural valuation revealed different information: whereas rating revealed a general value of ESs, weighting was found to be more suitable for identifying priorities across ESs. Value intentions likewise differed in the distribution of values, generally implying a higher value for others than for the respondents themselves. Third, I showed that ESs values were distributed similarly across groups with differing land use preferences, thus providing empirical evidence that ESs values and landscape values should not be used interchangeably. Fourth, I showed which landscape features important for ESs supply in a Scottish regional park are not sufficiently accounted for in the current management strategy; this knowledge is useful for identifying priority sites for land management. Finally, I provide an approach to explore how ESs knowledge elicited by participatory mapping can be operationalised in land management, demonstrating how stakeholder knowledge and values can be used to identify ESs hotspots and how these hotspots can be compared with current management priorities.
This dissertation helps to bridge current gaps in ESs science by advancing the understanding of the current role of socio-cultural values in ESs research, testing different methods and their relevance for land use preferences, and implementing ESs knowledge in land management. Whether and to what extent ESs and their values are implemented in ecosystem management is mainly a choice of the management. An advanced understanding of socio-cultural valuation methods contributes to the normative basis of this management, while the proposal for the implementation of ESs in land management presents a practical approach for transferring this type of knowledge into practice. The proposed methods for socio-cultural valuation can help guide land management towards a balanced consideration of ESs and conservation goals.
New Economic Geography (NEG) explains agglomeration within a microeconomic general-equilibrium model. For simplification, various symmetry assumptions are made: the regions considered are assumed to be of equal size, the expenditure shares for different groups of goods are identical, and the transport costs are the same for all manufactured goods. One consequence of these assumptions is that the models can explain under which conditions agglomeration arises, but not where it arises. In this thesis, three standard NEG models are extended by various asymmetries, and the changes in the results relative to the respective baseline model are presented. In addition to the theory, the simulation methods are discussed, which can in principle be transferred to other models. Building on this, an asymmetric model variant is applied to the economic development of Germany. In this way, the absence of a broad-based economic upswing in the new federal states, the strong migration flows to the old federal states, and the persistent wage gap can be explained within a single general-equilibrium model.
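As a generic illustration of the simulation approach (a symmetric baseline in the spirit of Krugman's core-periphery model, with hypothetical parameter values; the asymmetric extensions developed in the thesis are not reproduced here), the wage equations of a two-region model can be iterated to a fixed point as follows:

```python
# Two-region core-periphery model: solve the wage equations for a given
# distribution lam of manufacturing workers, then compare real wages.
# mu: manufacturing expenditure share, sigma: elasticity of substitution,
# T: iceberg transport cost. All values are illustrative.

def real_wages(lam, mu=0.4, sigma=5.0, T=1.7, tol=1e-10):
    w = [1.0, 1.0]
    shares = [lam, 1.0 - lam]
    for _ in range(10_000):
        # Regional income: manufacturing wages plus an even split of farm income.
        Y = [mu * shares[i] * w[i] + (1.0 - mu) / 2.0 for i in range(2)]
        # CES price indices with iceberg costs on goods from the other region.
        P = [(shares[0] * (w[0] * (T if i == 1 else 1.0)) ** (1 - sigma)
              + shares[1] * (w[1] * (T if i == 0 else 1.0)) ** (1 - sigma))
             ** (1.0 / (1.0 - sigma)) for i in range(2)]
        # Wage a firm in region i can pay while breaking even in both markets.
        w_new = [(Y[0] * (T if i == 1 else 1.0) ** (1 - sigma) * P[0] ** (sigma - 1)
                  + Y[1] * (T if i == 0 else 1.0) ** (1 - sigma) * P[1] ** (sigma - 1))
                 ** (1.0 / sigma) for i in range(2)]
        if max(abs(w_new[i] - w[i]) for i in range(2)) < tol:
            break
        w = [(wn + wo) / 2.0 for wn, wo in zip(w_new, w)]  # damped update
    # Real wages decide where the mobile workers want to move.
    return [w[i] * P[i] ** (-mu) for i in range(2)]

for lam in (0.3, 0.5, 0.7):
    o1, o2 = real_wages(lam)
    print(f"lambda={lam:.1f}: real wages {o1:.4f} vs {o2:.4f}")
```

Introducing asymmetries, e.g. region-specific sizes, expenditure shares or transport costs, amounts to replacing the symmetric constants above with region-indexed parameters, which is the kind of extension the thesis analyses.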
We analyze the asymptotic behavior in the limit epsilon to zero for a wide class of difference operators H_epsilon = T_epsilon + V_epsilon with an underlying multi-well potential. They act on the square-summable functions on the lattice (epsilon Z)^d. We begin by showing the validity of a harmonic approximation and construct WKB solutions at the wells. We then construct a Finsler distance d induced by H and show that short integral curves are geodesics and that d gives the rate for the exponential decay of Dirichlet eigenfunctions. In terms of this distance, we give sharp estimates for the interaction between the wells and construct the interaction matrix.
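In standard semiclassical notation, the setting and the decay statement can be rendered schematically as follows (symbols chosen by me; the precise hypotheses are those of the thesis):

```latex
\[
  H_\varepsilon \;=\; T_\varepsilon + V_\varepsilon
  \quad\text{acting on}\quad \ell^2\bigl((\varepsilon\mathbb{Z})^d\bigr),
\]
\[
  |u(x)| \;\lesssim\; e^{-(1-\delta)\, d(x,\, x_j)/\varepsilon}
  \qquad (\delta > 0),
\]
```

where $T_\varepsilon$ is the hopping (translation) part, $V_\varepsilon$ the multi-well potential, $u$ a Dirichlet eigenfunction localised at the well $x_j$, and $d$ the induced Finsler distance mentioned above.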
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of spacelike infinity, yet quite general in the interior. The proof relies on some abstract, non-constructive arguments, which makes it difficult to calculate such data numerically by similar means. A quasilinear elliptic system of equations is presented which we expect can be used to construct vacuum initial data that are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at spacelike infinity. A perturbation argument is used to show the existence of solutions; it is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed, in a way that opens up the possibility of performing numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. Analogous results are obtained for this decomposition. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications. The question of whether the results obtained so far can be used again to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity at any given order thus remains open. The answer requires further analysis and perhaps new methods.
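As a schematic of the model problem, the Helmholtz decomposition splits a vector field into a gradient part and a divergence-free part (standard textbook form, not quoted from the thesis):

```latex
\[
  X \;=\; \nabla\phi + W, \qquad \operatorname{div} W = 0,
  \qquad\text{so that}\qquad
  \Delta\phi \;=\; \operatorname{div} X .
\]
```

On an asymptotically flat manifold, $\phi$ is obtained by solving this Poisson equation in weighted function spaces; whether the Laplacian maps onto the required target space depends on the prescribed decay rates, which is where the difficulties described above enter.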
Atmospheric circulation and the surface mass balance in a regional climate model of Antarctica
(2007)
Understanding the Earth's climate system, and particularly climate variability, presents one of the most difficult and urgent challenges in science. The Antarctic plays a crucial role in the global climate system, since it is the principal region of radiative energy deficit and atmospheric cooling. An assessment of the regional climate model HIRHAM is presented. The simulations are generated with the HIRHAM model, modified for Antarctic applications. With a horizontal resolution of 55 km, the model was run for the period 1958-1998, creating long-term simulations from initial and boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40 re-analysis. The model output is compared with observations from surface stations, upper-air data, global atmospheric analyses and satellite data. The evaluation shows that the simulations with the HIRHAM model capture both the large-scale and the regional-scale circulation features, with generally small biases in the modeled variables. On the annual time scale, the largest errors in the model simulations are the overestimation of total cloud cover and the colder near-surface temperatures over the interior of the Antarctic plateau. The low-level temperature inversion as well as the low-level wind jet are well captured by the model. Decadal-scale processes were studied based on trend calculations. The long-term run was divided into two 20-year parts, and the 2 m temperature, 500 hPa temperature, MSLP, precipitation and net mass balance trends were calculated for both periods and over 1958-1998 as a whole. During the last two decades, strong surface cooling was observed over East Antarctica; this result is in good agreement with that of Chapman and Walsh (2005), who calculated the temperature trend based on observational data. The MSLP trend reveals a large disparity between the first and second parts of the 40-year run. The overall trend shows a strengthening of the circumpolar vortex and of the continental anticyclone. The net mass balance as well as the precipitation show a positive trend over the Antarctic Peninsula region, along Wilkes Land and in Dronning Maud Land. The Antarctic ice sheet grows over the eastern part of Antarctica, with small exceptions in Dronning Maud Land and Wilkes Land, and shrinks over the Antarctic Peninsula; this result is in good agreement with the satellite-measured altitudes presented in Davis (2005). To better understand the horizontal structure of the MSLP, temperature and net mass balance trends, the influence of the Southern Annular Mode (SAM) on the Antarctic climate was investigated. The main meteorological parameters during positive and negative Antarctic Oscillation (AAO) phases were compared with each other. A positive/negative AAO index means a strengthening/weakening of the circumpolar vortex, poleward/equatorward storm tracks and prevailing/weakening westerly winds. For a detailed investigation of global teleconnections, two positive periods and one negative period of the AAO phase were chosen. The differences in MSLP and 2 m temperature between positive and negative AAO years during the winter months partly explain the surface cooling during the last decades.
Most of the microelectronic circuits fabricated today are synchronous, i.e. they are driven by one or several clock signals. Synchronous circuit design faces several fundamental challenges, such as high-speed clock distribution, the integration of multiple cores operating at different clock rates, the reduction of power consumption, and dealing with voltage, temperature, manufacturing and runtime variations. Asynchronous, or clockless, design plays a key role in alleviating these challenges; however, the design and test of asynchronous circuits is much more difficult than for their synchronous counterparts. A driving force for a widespread use of asynchronous technology is the availability of mature EDA (Electronic Design Automation) tools which provide an entirely automated design flow from an HDL (Hardware Description Language) specification to the final circuit layout. Even though much progress has been made in developing such EDA tools for asynchronous circuit design during the last two decades, their maturity level as well as their acceptance is still not comparable with tools for synchronous circuit design. In particular, logic synthesis (which implies the application of Boolean minimisation techniques) for a system's entire control path can significantly improve the efficiency of the resulting asynchronous implementation, e.g. in terms of chip area and performance. However, logic synthesis, in particular for asynchronous circuits, suffers from complexity problems. Signal Transition Graphs (STGs) are labelled Petri nets which are widely used to specify the interface behaviour of speed-independent (SI) circuits, a robust subclass of asynchronous circuits. STG decomposition is a promising approach to tackling complexity problems like state-space explosion in the logic synthesis of SI circuits. The (structural) decomposition of STGs is guided by a partition of the output signals and generates a usually much smaller component STG for each partition member, i.e. a component STG with a much smaller state space than the initial specification. However, decomposition can result in component STGs that in isolation have so-called irreducible CSC conflicts (i.e. these components are no longer SI-synthesisable) even if the specification has none. A new approach is presented that avoids such conflicts by introducing internal communication between the components. So far, STG decompositions have been guided by the finest output partitions, i.e. one output per component, but this does not necessarily yield optimal circuit implementations. Efficient heuristics are presented to determine coarser partitions leading to improved circuits in terms of chip area. Correctness proofs are given for the new algorithms, and their implementations are incorporated into the decomposition tool DESIJ. The presented techniques are successfully applied to a number of benchmarks, including 'real-life' specifications arising in the context of control resynthesis, and delivered promising results.
This research investigated the relationship between frequent engagement in industrial action (also known as ‘employee strikes’) and the internal attractiveness of government employment. It focused on a special group of public employees: public university lecturers and public-school teachers in Uganda who frequently engaged in industrial action. At the very basic level, the research explored whether public employees frequently engaged in industrial action because they considered public service employment to be unattractive or whether frequent engagement in industrial action was in fact part of the attractiveness of government employment. Beyond exploring these relationships, it also explained why (or why not) such relationships existed.
Methodologically, the research was conducted using an exploratory sequential design, a mixed-methods study design that starts with a qualitative phase followed by a quantitative one; the results of the initial qualitative phase determined the direction of the subsequent quantitative phase. The qualitative phase started with an exploration of the relationship between industrial action and internal public service attractiveness, resulting in two specific research questions:
1) Why do public employees engage in industrial action and what role does frequent engagement in industrial action play in their perception of public service attractiveness?
2) Why and how is organizational justice related to public employees’ perception of public service attractiveness?
The above questions were answered both qualitatively and quantitatively. The theoretical postulations of the Social Movements Theories, Social Exchange Theory, and the Signaling Theory were used to structure the research assumptions and hypotheses.
The results showed that public employees engaged in industrial action mostly because of relative, rather than absolute, deprivation. An established culture of workplace militancy was also found to be key in actualizing industrial action, as was the (perceived) absence of alternatives for achieving workplace justice. Importantly, there was a clear dichotomy between absolute working conditions and frequent engagement in industrial action. Frequent engagement in industrial action was itself found to have both positive and negative effects on internal public service attractiveness. It was also found that public service attractiveness from the perspective of current public employees may differ from attractiveness as perceived by prospective employees: current public employees do not have to imagine what it feels like to work for government, but mostly use their day-to-day lived experiences to judge the attractiveness of their employer. The existing literature is particularly deficient in analyzing public service attractiveness from an internal perspective, which is surprising given the public sector's high reliance on internal recruitment.
The research results underlined key implications for theory, practice, and research. At theory level, the results suggested that public employee ratings of internal public service attractiveness were heavily affected by halo effects and should therefore not be taken at face value. The complex workplace social exchanges which are deeply rooted in organizational justice and the ‘personification metaphor’ were also emphasized. From an empirical perspective, the results underlined the need to prioritize internal public service attractiveness as recent research has confirmed the value of family socialization and internal recommendations in making public sector employment attractive, even to external applicants. This research argues that the centrality of organizational justice in public sector employee relations requires public sector organizations to be intentional in their bid to create fair, just, and attractive workplaces. Beyond assessing the fairness of personnel policies, procedures, and interactional relationships, it is also important to prepare and equip public managers with the right skills to promote and practice justice in their day-to-day interactions with public employees, and to encourage, improve, and facilitate alternative public employee feedback mechanisms.
Based on technological advances made within the past decades, ground-penetrating radar (GPR) has become a well-established, non-destructive subsurface imaging technique. Catalyzed by recent demands for high-resolution near-surface imaging (e.g., the detection of unexploded ordnance and subsurface utilities, or hydrological investigations), the quality of today's GPR-based near-surface images has matured significantly. At the same time, the analysis of oil- and gas-related reflection seismic data sets has seen significant advances. Considering the sensitivity of attribute analysis with respect to data positioning in general, and of multi-trace attributes in particular, trace positioning accuracy is of major importance for the success of attribute-based analysis flows. Therefore, to study the feasibility of GPR-based attribute analyses, I first developed and evaluated a real-time GPR surveying setup based on a modern tracking total station (TTS). The combination of current GPR systems' capability to fuse global positioning system (GPS) and geophysical data in real time, the ability of modern TTS systems to generate a GPS-like positional output, and wireless data transmission using radio modems results in a flexible and robust surveying setup. To evaluate the feasibility of this setup, I studied the major limitations of such an approach: system cross-talk and data delays known as latencies. Experimental studies have shown that, when a minimum distance of ~5 m between the GPR and the TTS system is maintained, the signal-to-noise ratio of the GPR data acquired with radio communication equals that without radio communication. To address the limitations imposed by system latencies, which are inherent to all real-time data fusion approaches, I developed a novel correction (calibration) strategy to assess the gross system latency and to correct for it. This resulted in the centimetre-level trace accuracy required by high-frequency and/or three-dimensional (3D) GPR surveys. Having introduced this flexible high-precision surveying setup, I successfully demonstrated the application of attribute-based processing to GPR-specific problems, which may differ significantly from the geological problems typically addressed by the oil and gas industry using seismic data. In this thesis, I concentrated on archaeological and subsurface utility problems, as they represent typical near-surface geophysical targets. Enhancing 3D archaeological GPR data sets using a dip-steered filtering approach, followed by the calculation of coherency and similarity, allowed me to conduct subsurface interpretations far beyond those obtained by classical time-slice analyses. I could show that the incorporation of additional data sets (magnetic and topographic) and of attributes derived from these data sets can further improve the interpretation. In a case study, such an approach revealed the complementary nature of the individual data sets and, for example, allowed conclusions to be drawn about the source location of magnetic anomalies by concurrently analyzing GPR time/depth slices. In addition to archaeological targets, subsurface utility detection and characterization is a steadily growing field of application for GPR. I developed a novel attribute called depolarization. Incorporating geometrical and physical feature characteristics into the depolarization attribute allowed me to display the observed polarization phenomena efficiently.
Geometrical enhancement makes use of an improved symmetry extraction algorithm based on Laplacian high-boosting, followed by a phase-based symmetry calculation using a two-dimensional (2D) log-Gabor filterbank decomposition of the data volume. To extract the physical information from the dual-component data set, I employed a sliding-window principal component analysis. The combination of the geometrically derived feature angle and the physically derived polarization angle allowed me to enhance the polarization characteristics of subsurface features, and ground-truth information obtained by excavations confirmed this interpretation. In the future, the inclusion of cross-polarized antenna configurations into the processing scheme may further improve the quality of the depolarization attribute. In addition to polarization phenomena, the time-dependent frequency evolution of GPR signals might hold further information on the subsurface architecture and/or material properties. High-resolution, sparsity-promoting decomposition approaches have recently had a significant impact on the image and signal processing community. In this thesis, I introduced a modified tree-based matching pursuit approach. Using different synthetic examples, I showed that this approach clearly outperforms other commonly used time-frequency decomposition approaches with respect to both time and frequency resolution. Apart from the investigation of tuning effects in GPR data, I also demonstrated the potential of high-resolution sparse decompositions for advanced data processing: frequency modulation of the individual atoms themselves allows frequency attenuation effects to be corrected efficiently and resolution to be improved by shifting the average frequency level. GPR-based attribute analysis is still in its infancy. Considering the increasingly widespread realization of 3D GPR studies, demand for improved subsurface interpretations will certainly grow. Similar to the assessment of quantitative reservoir properties through the combination of 3D seismic attribute volumes with sparse well-log information, combined parameter estimation of this kind represents another step in realizing the potential of attribute-driven GPR data analyses.
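As an illustration of the sliding-window principal component analysis step mentioned above, the following minimal Python sketch estimates a polarization angle from a synthetic dual-component record. The synthetic pulse, noise level, window length, and hop size are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi = 1000, np.deg2rad(35.0)    # samples per trace, "true" polarization angle
t = np.arange(n)
pulse = np.exp(-((t - 500) / 60.0) ** 2) * np.sin(2.0 * np.pi * t / 25.0)
data = np.stack([np.cos(phi) * pulse, np.sin(phi) * pulse])   # dual-component record
data += 0.05 * rng.standard_normal(data.shape)                # additive noise

def polarization_angle(win):
    """Angle of the first principal axis of a (2, L) window, in [0, 180) degrees."""
    evals, evecs = np.linalg.eigh(np.cov(win))
    v = evecs[:, np.argmax(evals)]                 # dominant eigenvector
    return np.degrees(np.arctan2(v[1], v[0])) % 180.0

L_win, hop = 64, 32
angles = [polarization_angle(data[:, i:i + L_win]) for i in range(0, n - L_win, hop)]
print(f"estimated angle near the event: {angles[len(angles) // 2]:.1f} deg")
```

Windows dominated by noise yield unstable angle estimates, so in practice one would likely threshold on the eigenvalue ratio (a rectilinearity measure) before trusting a window's result.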
Tracing the Greek Myths in Anton Čechov's Works of the Early Creative Period
(2012)
The poetics of the everyday in the work of the Russian writer Anton Čechov has fascinated readers worldwide for more than a century. This fascination rests not least on the Greek myth, a cultural heritage that has profoundly shaped the thinking of our society. In Čechov's little-studied early work, ancient deities and heroes such as Apollo, Dionysus, Pythia, and Narcissus become people of everyday life. This projection is a parodic and travesty-like modification of the myths' elementary structures. In this fusion of the mythical with the everyday, Čechov becomes a successor above all of the ancient dramatist Epicharmus. Methodologically, my analysis rests on the conceptual pair of "re-use speech" ("Wiedergebrauchs-Rede") and "consumption speech" ("Verbrauchs-Rede") coined by the rhetorician Heinrich Lausberg: Čechov retells the prominent myths in such a way that, while they lose their sublimity, they retain their underlying power and thus enrich the self-image of the modern human being.
Moving up from the middle class: upward social mobility of households between 1984 and 2010
(2012)
This dissertation examines intragenerational processes of upward mobility of households from the middle class into affluence. Intragenerational mobility research has so far been viewed primarily as labor-market-related individual mobility; this dissertation extends the approach to the level of the household. The underlying idea is that an individual's social position is not determined by earned income alone. The household context is equally decisive: it determines how many persons can contribute to the household income and how many share in it. Furthermore, in couple households the household level serves as the place of negotiation, where decisions are made about family planning, the desire for children and, related to this, the partners' labor-force participation. The dissertation investigates these assumptions using data from the German Socio-Economic Panel (SOEP) for the years 1984 to 2010. The focus lies on the household's labor-force participation and educational level, its structure, and the occupation of the head of household, on the assumption that these are the main factors determining a household's financial possibilities. A further focus of the work is the historical context, since it can be assumed that the factors named above, and their influence on households' chances of upward mobility, have changed over time.
The following questions were at the center of the dissertation: How do primary school children negotiate when they are treated unfairly by their peers? What immediate effects does their approach have? How is the approach taken in the conflict situation related to their standing among peers? The theoretical basis was provided by the developmental models of negotiation by Yeates and Selman (1989) and by Hawley (1999). 213 third- and fifth-graders took part in the study, which combined qualitative and quantitative methods. In an individual interview, it was recorded which tactics (single units of action) and which strategies (sequences of tactics) the children would use in a hypothetical norm-violation situation. The children were also asked what immediate effects they would expect when using the proposed tactic. The children's standing among their classmates was assessed both in terms of influence (peer rating) and acceptance (sociometry). The tactics named by the children were assigned to four superordinate categories: negotiating, coercing, evading, and giving in. According to the children's expectations, both negotiation tactics and coercive tactics lead to prevailing in about half of the cases; coercive tactics, however, are frequently accompanied by unfriendly reactions. The children's influence and acceptance depended on which combination and sequence of tactics (strategy) they chose. For example, children who generated a series of negotiation tactics, or who first proposed negotiation tactics and then coercive ones, were influential and popular, whereas children who would use coercion immediately had little influence and were rejected. In addition, gender and age differences were found with respect to the approach taken in the hypothetical norm-violation situation and to the relationships between tactics and standing among peers.
Manifestations of spatial identity in the formerly Sudeten German areas of the Czech Republic
(2014)
The Czech border region is one of the areas in Europe most severely affected by upheavals in its pre-existing population structure in the wake of the Second World War. The forced resettlement of a large part of the resident population was followed by repopulation through a wide variety of immigrant groups and, in part, by long-lasting fluctuations in the population. The stabilization of the population then took place under the sign of the socialist social and economic order, which left a lasting mark on the way of life and the spatial perception of the new inhabitants. The opening of the border in 1989, the political transformation, and the integration of the Czech Republic into the European Union brought new demographic and socioeconomic developments. They also created the conditions for a new and open engagement with the specific history of the former Sudetenland and with the state of present-day society in this area.
Using two example regions, this thesis investigates which conceptions of and attachments to space exist among the population now living in the formerly Sudeten German areas, and how the differing spatial-structural conditions influence them. Particular attention is paid to the social component of the formation of spatial identity, that is, to the role of attributions of meaning to spatial elements in the course of social communication and interaction. This appears especially relevant in an area whose inhabitants are characterized by a certain heterogeneity in their ethnic, cultural, and biographical backgrounds. Finally, the thesis determines what impulses a pronounced spatial identity may give to the development of the area.
The continuous evolution of VR systems offers new possibilities for interacting with virtual objects in three-dimensional space, but also confronts developers of VR applications with new challenges. Selection and manipulation techniques must be chosen with the application scenario, the target group, and the available input and output devices in mind. This thesis contributes to supporting the choice of suitable interaction techniques. A representative set of selection and manipulation techniques was examined and, taking existing classification systems into account, a taxonomy was developed that enables the analysis of the techniques with respect to interaction-relevant properties. On the basis of this taxonomy, techniques were selected and compared in an exploratory study in order to draw conclusions about the dimensions of the taxonomy and to generate new evidence on the advantages and disadvantages of the techniques in specific application scenarios. The results of the work culminate in a web application that specifically supports developers of VR applications in selecting suitable selection and manipulation techniques for an application scenario, by allowing techniques to be filtered on the basis of the taxonomy and sorted using the results of the study.
High performance demands in competitive sport require a high physical load capacity from athletes. The possibilities for increasing training volumes and intensities are partly exhausted, so efforts continue to find new ways of tapping potential performance reserves. Electrotherapy procedures have proven themselves in everyday clinical practice, among other things for the treatment of traumata, and are frequently used for analgesia, for improving tissue perfusion, and for muscle stimulation. Their use as an adjuvant accompanying training has so far been described only sporadically. The present study investigated the effects of one form of electromagnetic application on selected psycho-physical parameters (control-group comparison with a placebo-controlled design), in order to derive statements about practice-relevant approaches to training support. The question was whether an intervention (15 sessions over 4 weeks) with frequency-modulated alternating currents in the predominantly low-frequency range (0-10000 Hz, 5 µA/cm², CellVAS®) would influence the parameters under investigation and thereby produce lasting performance-enhancing or performance-reducing effects. It was also to be examined to what extent the parameters assessed (PWC170, squat jump, lateral flexion of the spine, and SF36®) are sufficiently informative. The efficacy of the form of application was analyzed in a pre-post comparison before (T1), after (T2), and 4 weeks after completion of (T3, sustainability) the intervention. Participants in the control group received comparable applications in placebo mode. The sample consisted of healthy competitive athletes whose sports involve a high strength-endurance component (n=127). Allocation to the main group (HG) and control group (KG) was partially randomized, and the groups were additionally separated by sex. Over the course of the study, changes were observed in the performance parameters PWC170 and squat jump. To what extent these deviations can be attributed to the influence of the intervention with frequency-modulated alternating currents in the low-frequency range could not be clarified unambiguously in this study: the observed effects could not be statistically validated according to the underlying scientific standards, and scientific proof of a possible performance change could not be conclusively provided. In the therapeutic field, the investigated form of application has found its use on the basis of the existing body of studies and can be used without concern. For its use as a supportive procedure in sporting practice, however, there remains a need for valid, randomized studies that durably demonstrate the efficacy of the form of application on athletes' psycho-physical parameters before it should be applied there.
Automated location of seismic events is a very important task in microseismic monitoring operations as well as in local and regional seismic monitoring. Since microseismic records are generally characterised by a low signal-to-noise ratio, such methods are required to be noise robust and sufficiently accurate. Most standard automated location routines are based on the automated picking, identification, and association of the first arrivals of P and S waves and on the minimization of the residuals between theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S wave onset. In this thesis I developed a picking-free automated method based on the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. I used the STA/LTA of several characteristic functions in order to increase the sensitivity to the P and S waves: for the P phases, the STA/LTA traces of the vertical energy function; for the S phases, the STA/LTA traces of the horizontal energy trace, and additionally a more optimized characteristic function obtained using the principal component analysis technique. The orientation of the horizontal components can be retrieved by a robust, linear approach of waveform comparison between stations within a network, using seismic sources outside the network (chapter 2). To locate a seismic event, the space of possible hypocentral locations and origin times is scanned, and the STA/LTA traces are stacked along the theoretical arrival-time surfaces of both P and S phases. Iterating this procedure on a three-dimensional grid yields a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event. Location uncertainties are then estimated by perturbing the STA/LTA parameters (i.e., the lengths of both the long and short time windows) and relocating each event several times. To test the location method, I first applied it to a set of 200 synthetic events and then to two real datasets. The first is related to mining-induced microseismicity in a coal mine in northern Germany (chapter 3); in this case, 391 microseismic events with magnitudes between 0.5 and 2.0 Ml were successfully located, and, to further validate the method, the retrieved locations were compared with those obtained by a manual picking procedure. The second dataset consists of a pilot application performed in the Campania-Lucania region (southern Italy) using a 33-station seismic network (Irpinia Seismic Network) with an aperture of about 150 km (chapter 4). We located 196 crustal earthquakes (depth < 20 km) with magnitudes in the range 1.1 < Ml < 2.7. A subset of these locations was compared with accurate locations retrieved by a manual location procedure based on a double-difference technique. In both cases the results are in good agreement with the manual locations. Moreover, the waveform stacking location method proves to be noise robust and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
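To make the stacking idea concrete, here is a toy Python sketch (1-D geometry, P phase only; all velocities, geometry, and window lengths are made-up illustrations, not the thesis's implementation): STA/LTA characteristic functions are stacked along theoretical travel-time curves over a grid of candidate locations and origin times, and the stack maximum gives the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, v = 0.01, 2000, 3.0                     # sampling (s), samples, velocity (km/s)
stations = np.array([0.0, 4.0, 8.0, 12.0])     # station coordinates (km)
src_x, t0 = 5.0, 3.0                           # "unknown" source position and origin time

traces = rng.standard_normal((len(stations), n))          # noise ...
for s, xs in enumerate(stations):
    arr = int(round((t0 + abs(xs - src_x) / v) / dt))
    traces[s, arr:arr + 20] += 8.0                        # ... plus impulsive P arrivals

def sta_lta(tr, ns=20, nl=200):
    """Causal short-term/long-term average ratio of the squared trace."""
    e = np.concatenate(([0.0], np.cumsum(tr ** 2)))
    out = np.zeros(len(tr))
    for i in range(nl, len(tr) + 1):
        out[i - 1] = ((e[i] - e[i - ns]) / ns) / ((e[i] - e[i - nl]) / nl + 1e-9)
    return out

cf = np.array([sta_lta(tr) for tr in traces])             # characteristic functions
xs_grid = np.linspace(0.0, 12.0, 121)                     # candidate source positions
t0_idx = np.arange(0, n - 600)                            # candidate origin samples
stack = np.zeros((len(xs_grid), len(t0_idx)))
for i, xg in enumerate(xs_grid):
    sh = np.rint(np.abs(stations - xg) / v / dt).astype(int)   # travel times (samples)
    for s in range(len(stations)):
        stack[i] += cf[s, t0_idx + sh[s]]                 # stack along the moveout
ix, it = np.unravel_index(np.argmax(stack), stack.shape)
print(f"stack maximum: x = {xs_grid[ix]:.1f} km, t0 = {it * dt:.2f} s "
      f"(true: {src_x} km, {t0} s; the STA window delays the pick slightly)")
```

Because no onset is ever picked, noise on a single trace merely lowers the stack rather than producing a wrong phase association, which is the intuition behind the method's noise robustness.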
Even though the majority of individuals know that exercising is healthy, a high percentage struggle to achieve the recommended amount of exercise. The (social-cognitive) theories commonly applied to explain exercise motivation rest on the assumption that people base their decisions mainly on rational reasoning. However, behavior is not bound to reflection alone. In recent years, the role of automaticity and affect in exercise motivation has been increasingly discussed. In this dissertation, central assumptions of the affective-reflective theory of physical inactivity and exercise (ART; Brand & Ekkekakis, 2018), an exercise-specific dual-process theory that emphasizes the role of a momentary automatic affective reaction for exercise decisions, were examined. The central aim of this dissertation was to investigate exercisers' and non-exercisers' automatic affective reactions to exercise-related stimuli (i.e., the type-1 process). In particular, the two components of the ART's type-1 process were under study: automatic associations with exercise and the automatic affective valuation of exercise.
In the first publication (Schinkoeth & Antoniewicz, 2017), research on automatic (evaluative) associations with exercise was summarized and evaluated in a systematic review. The results indicated that automatic associations with exercise are relevant predictors of exercise behavior and other exercise-related variables, providing evidence for a central assumption of the ART's type-1 process; furthermore, indirect methods appear suitable for assessing automatic associations. The aim of the second publication (Schinkoeth, Weymar, & Brand, 2019) was to approach the somato-affective core of the automatic valuation of exercise by analyzing the reactivity of vagally mediated heart rate variability (HRV) while participants viewed exercise-related pictures. Results revealed that differences in exercise volume could be regressed on HRV reactivity. In light of the ART, these findings were interpreted as evidence of an inter-individual affective reaction elicited at the thought of exercise and triggered by exercise stimuli. The third publication (Schinkoeth & Brand, 2019, subm.) sought to disentangle, and relate to each other, the ART's type-1 process components: automatic associations and the affective valuation of exercise. Automatic associations with exercise were assessed with a recoding-free variant of an implicit association test (IAT). Analysis of HRV reactivity was applied to approach a somatic component of the affective valuation, and facial reactions in a facial expression (FE) task served as indicators of the valence of the automatic affective reaction. Exercise behavior was assessed via self-report. The measurement of the affective valuation's valence with the FE task did not work well in this study. HRV reactivity was predicted by the IAT score and also statistically predicted exercise behavior. These results thus confirm and expand upon the results of the second publication and provide empirical evidence for the type-1 process as defined in the ART. This dissertation advances the field of exercise psychology with respect to the influence of automaticity and affect on exercise motivation. Moreover, both methodological implications and theoretical extensions of the ART can be derived from the results.
Changing the perspective sometimes offers completely new insights into an already well-known phenomenon. Exercising behavior, defined as planned, structured and repeated bodily movements with the intention to maintain or increase physical fitness (Caspersen, Powell, & Christenson, 1985), is such a well-known phenomenon, one that has been in the scientific focus for many decades (Dishman & O'Connor, 2005). Within these decades, a perspective that assumes rational and controlled evaluations as the basis of decision making was predominantly used to understand why some people engage in physical activity and others do not (Ekkekakis & Zenko, 2015).
Dual-process theories (Ekkekakis & Zenko, 2015; Payne & Gawronski, 2010) provide another perspective, one not exclusively shaped by rational reasoning. These theories differentiate two processes that guide behavior "depending on whether they operate automatically or in a controlled fashion" (Gawronski & Creighton, 2012, p. 282). Following this line of thought, exercise behavior is influenced not solely by thoughtful deliberations (e.g., concluding that exercising is healthy) but also by spontaneous affective reactions (e.g., disliking being sweaty while exercising). The theoretical frameworks of dual-process models are not new in psychology (Chaiken & Trope, 1999) and have already been used to explain numerous behaviors (e.g., Hofmann, Friese, & Wiers, 2008; Huijding, de Jong, Wiers, & Verkooijen, 2005). However, they have only rarely been used to explain exercise behavior (e.g., Bluemke, Brand, Schweizer, & Kahlert, 2010; Conroy, Hyde, Doerksen, & Ribeiro, 2010; Hyde, Doerksen, Ribeiro, & Conroy, 2010). The assumption of two dissimilar behavior-influencing processes differs fundamentally from previous theories and thus from the research conducted over the last decades in exercise psychology, which mainly concentrated on predictors within the controlled processes and addressed the identified predictors in exercise interventions (Ekkekakis & Zenko, 2015; Hagger, Chatzisarantis, & Biddle, 2002).
Predictors arising from the described automatic processes, for example automatic evaluations of exercising (AEE), were neglected in exercise psychology for many years. Until now, only a few researchers have investigated the influence of these AEE on exercising behavior (Bluemke et al., 2010; Brand & Schweizer, 2015; Markland, Hall, Duncan, & Simatovic, 2015). Marginally more researchers have focused on the impact of AEE on physical activity behavior (Calitri, Lowe, Eves, & Bennett, 2009; Conroy et al., 2010; Hyde et al., 2010; Hyde, Elavsky, Doerksen, & Conroy, 2012). The extant studies mainly focused on the quality of AEE and the associated quantity of exercise (exercising much or little; Bluemke et al., 2010; Calitri et al., 2009; Conroy et al., 2010; Hyde et al., 2012). In sum, there is still a dramatic lack of empirical knowledge when it comes to applying dual-process theories to exercising behavior, even though these theories have proven successful in explaining behavior in many other health-relevant domains such as eating, drinking, or smoking (e.g., Hofmann et al., 2008).
The main goal of the present dissertation was to collect empirical evidence for the influence of AEE on exercise behavior and to complement the hitherto exclusively correlational studies with experimentally controlled ones. In doing so, it is meant to encourage the ongoing debate on a paradigm shift from controlled and deliberative accounts of exercise behavior towards approaches that consider automatic and affective influences (Ekkekakis & Zenko, 2015). All three publications are embedded in dual-process theorizing (Gawronski & Bodenhausen, 2006, 2014; Strack & Deutsch, 2004). These theories offer a framework that can integrate the established controlled variables of exercise behavior explanation while additionally considering automatic factors such as AEE.
Taken together, the empirical findings suggest that AEE play an important and diverse role in exercise behavior: they represent exercise-setting preferences, are a cause of short-term exercise decisions, and are decisive for long-term exercise adherence. Adding to the few studies already available in this field, the influence of (positive) AEE on exercise behavior was confirmed in all three publications. Even though the available set of studies needs to be extended in prospective studies, first steps towards a more complete picture have been taken. Closing with the beginning of the synopsis: I think the time is right for a change of perspectives! This means a careful extension of the present theories, in which controlled evaluations explain exercise behavior. Dual-process theories comprising controlled and automatic evaluations could provide such a basis for future research endeavors in exercise psychology.
The aim of this thesis is to develop approaches to automatically recognise the structure of argumentation in short monological texts. This amounts to identifying the central claim of the text, supporting premises, possible objections, and counter-objections to these objections, and connecting them into a structure that adequately describes the argumentation presented in the text.
The first step towards such an automatic analysis of the structure of argumentation is to know how to represent it. We systematically review the literature on theories of discourse, as well as on theories of the structure of argumentation, against a set of requirements and desiderata, and identify the theory of J. B. Freeman (1991, 2011) as a suitable candidate to represent argumentation structure. Based on this, a scheme is derived that is able to represent complex argumentative structures and can cope with various segmentation issues typically occurring in authentic text.
In order to empirically test our scheme for reliability of annotation, we conduct several annotation experiments, the most important of which assesses the agreement in reconstructing argumentation structure. The results show that expert annotators produce very reliable annotations, while the results of non-expert annotators highly depend on their training in and commitment to the task.
We then introduce the 'microtext' corpus, a collection of short argumentative texts. We report on its creation, translation, and annotation and provide a variety of statistics. It is the first parallel corpus (with a German and an English version) annotated with argumentation structure and, thanks to the work of our colleagues, also the first annotated according to multiple theories of (global) discourse structure.
The corpus is then used to develop and evaluate approaches to automatically predict argumentation structures in a series of six studies: The first two of them focus on learning local models for different aspects of argumentation structure. In the third study, we develop the main approach proposed in this thesis for predicting globally optimal argumentation structures: the 'evidence graph' model. This model is then systematically compared to other approaches in the fourth study, and achieves state-of-the-art results on the microtext corpus. The remaining two studies aim to demonstrate the versatility and elegance of the proposed approach by predicting argumentation structures of different granularity from text, and finally by using it to translate rhetorical structure representations into argumentation structures.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
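To make the placement idea from contribution (4) concrete, here is a minimal, hypothetical Python sketch (not the tiering algorithm implemented in the thesis): it greedily fills a DRAM budget with the segments that have the highest access density and spills the rest to NVM.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    size_mb: int
    accesses: int                     # e.g., counts from access tracking

def place(segments, dram_budget_mb):
    """Greedy placement: fill DRAM by descending access density, spill the rest to NVM."""
    dram, nvm, used = [], [], 0
    for seg in sorted(segments, key=lambda s: s.accesses / s.size_mb, reverse=True):
        if used + seg.size_mb <= dram_budget_mb:
            dram.append(seg.name)
            used += seg.size_mb
        else:
            nvm.append(seg.name)
    return dram, nvm

segments = [Segment("orders.id", 64, 9000), Segment("orders.comment", 512, 40),
            Segment("lineitem.price", 256, 7000), Segment("lineitem.tax", 256, 300)]
dram, nvm = place(segments, dram_budget_mb=384)
print("DRAM:", dram, "| NVM:", nvm)
```

The algorithm described above goes further than this pure frequency-per-byte heuristic by also exploiting access patterns, which is precisely what lets it beat skew-agnostic approaches in the reported benchmarks.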
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
Automated object identification is a modern tool in the geoinformation sciences (BLASCHKE et al., 2012). To obtain mutually comparable results in thematic mapping, means of object identification should, from the perspective of geoinformatics, be employed. Instead of fieldwork, the present study therefore uses multispectral remote sensing data as primary data. Concrete natural objects are identified and characterized from these primary data in a GIS-supported, automated manner over large areas and high object densities. Within this work, an automated processing chain for object identification is designed, and new approaches and concepts for the object-based identification of natural, isolated terrestrial landforms are developed and implemented. The processing chain is built on a generic approach to automated object identification; it can be adapted by means of characteristic quantitative parameters, which makes the concept of object identification modular and scalable. The module-based architecture allows the use of individual modules as well as their combination and possible extensions. The methodology of object identification and the subsequent characterization of the (geo)morphometric and morphological parameters are supported by statistical procedures, which make object parameters from different samples comparable. Regression and variance analysis are used to examine relationships between object parameters, and functional dependencies of the parameters are analyzed in order to describe the objects qualitatively. This makes it possible to capture automatically computed measures and indices of the objects as quantitative data and information and to apply them to different samples. In this work, thermokarst lakes form the basis for the developments and serve as the example and data basis for the construction of the algorithm and the analysis. Geovisualization of the multivariate natural objects is employed to develop a better understanding of their spatial relations; at its core is the linking of visualization methods with map-like representations.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. Because of its special environment, geographic isolation and limited anthropogenic influence, the Antarctic continent serves as a natural laboratory for studying microbial community development. Increasing temperatures due to global warming lead to enhanced deglaciation in cold-affected habitats, and newly exposed terrain becomes subject to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, in particular how soil parameters affect the microorganisms and how these are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of as yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies were studied by determining changes in the cell-membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and they represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early stage of soil formation near the ice tongue in comparison to closely located but older and more developed soil from the forefield. At the early stage, succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and predominantly depth-related distribution of the soil parameters driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Relating soil characteristics to bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early stage of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions; with progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10^8 colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a particular substrate; they are therefore well adapted to the cold and oligotrophic environment.
The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and on PLFA analyses. Activities of extracellular phosphatases (pH 11 and pH 6.5), β-glucosidase, invertase and urease were detected in the glacier forefield soils at low temperature (14°C); these enzymes catalyse the conversion of various compounds, provide necessary substrates, and may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell-membrane function under changing environmental conditions by altering its PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, not found in any other microorganism so far, increased significantly at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells are sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface; additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal with a dominant peak around 180 degrees: in approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed, by a factor of two on average. Based on the experimentally observed values of mean run time and rotational diffusion, we present a model that describes the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, with an extended version of the model, also the negative dip in the directional autocorrelation function observed in the experiments. The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with the motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that the majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, in which bacteria near a solid boundary were shown to swim on circular trajectories, an effect that can be attributed to a wall-induced torque.
The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on the cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow-field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments. Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the growth in cell numbers in the microchannel under nutrient-rich conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 h 50 min, we observed a sudden jump in the number of swimming cells, accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes during which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells into the swimming phase is most likely the result of an individual adaptation process, in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
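The run-and-reverse motility pattern with alternating speeds lends itself to a compact simulation. The following Python sketch is a toy ensemble version of such a random walker: the speed ratio of two and the roughly 60% reversal probability follow the values quoted above, while the mean run duration, observation time, and ensemble size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
V = (16.0, 32.0)                   # alternating run speeds in µm/s (factor-of-two ratio)

def sq_displacement(T=30.0, mean_run=1.0, p_rev=0.6):
    """Squared net displacement of one 2-D walker after time T."""
    pos, t = np.zeros(2), 0.0
    theta, k = rng.uniform(0.0, 2.0 * np.pi), 0
    while t < T:
        tau = min(rng.exponential(mean_run), T - t)
        pos += V[k] * tau * np.array([np.cos(theta), np.sin(theta)])
        t += tau
        if rng.uniform() < p_rev:
            theta += np.pi         # reversal: flip the direction ...
            k = 1 - k              # ... and switch to the other speed
        else:
            theta = rng.uniform(0.0, 2.0 * np.pi)   # ordinary re-orientation
    return float(pos @ pos)

msd = np.mean([sq_displacement() for _ in range(2000)])
print(f"ensemble mean square displacement after 30 s: {msd:.0f} µm^2")
```

Comparing this ensemble estimate against a walker moving at the constant intermediate speed of 24 µm/s reproduces, in toy form, the enhanced spreading that the analytical model attributes to speed alternation.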
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of both effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high patient variability. Model-informed precision dosing (MIPD) is a quantitative approach towards dose individualization based on mathematical modeling of dose-response relationships integrating therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for effects that are delayed. In addition, the underlying models are not improved while applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated as it allows reliable computation of the patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of TDM data available, the data processing of sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
By transferring concepts from DA and RL we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach towards MIPD. Additionally, we show that RL can be used to gain insights into important patient characteristics for dose selection. In a simulation study based on a recent clinical study (CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
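To illustrate the data assimilation building block, the following minimal Python sketch performs a sequential Bayesian (importance-weighting) update of a patient-specific parameter from TDM samples. A one-compartment PK model stands in here for the neutropenia model used in the thesis, and all numbers (prior, dose, error model, threshold) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_part = 5000
# prior from a hypothetical population analysis: log-normal clearance in L/h
cl = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_part)
w = np.ones(n_part) / n_part
V, dose = 40.0, 500.0                          # volume of distribution (L), IV dose (mg)

def conc(cl, t):
    """One-compartment IV bolus model: C(t) = dose/V * exp(-(cl/V) * t)."""
    return dose / V * np.exp(-cl / V * t)

# sequential assimilation of two hypothetical TDM samples (time in h, conc in mg/L)
for t_obs, y_obs in [(2.0, 10.2), (8.0, 6.1)]:
    lik = np.exp(-0.5 * ((y_obs - conc(cl, t_obs)) / 1.0) ** 2)   # sigma_err = 1 mg/L
    w *= lik
    w /= w.sum()

print(f"posterior mean CL: {np.sum(w * cl):.2f} L/h")
# an uncertainty-aware clinical quantity: probability of a subtherapeutic 24 h level
print(f"P(C(24 h) < 1 mg/L) = {np.sum(w[conc(cl, 24.0) < 1.0]):.2f}")
```

The last line shows the key difference from point prediction: the full weighted ensemble yields a probability for a clinically relevant event, which a dosing policy (e.g., an RL agent) can then act on.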
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as such models are typically based on data from comparatively small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning of the underlying model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for estimating the self-similarity exponent and for identifying long-range dependence (or long memory). In this thesis I present the Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach makes it possible to calculate the point estimate and confidence intervals at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where estimation of the Hurst exponent is possible. Since Gaussian self-similar processes form one of the substantial classes of great interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water-level data of the Nile River and fixational eye movements, are discussed.
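As a minimal illustration of the approach, the following Python sketch computes a grid-based posterior for the Hurst exponent of synthetic fractional Gaussian noise under a flat prior, returning a point estimate and a credible interval in one pass. The unit-variance assumption and grid resolution are simplifications; the thesis's treatment (e.g., of the scale parameter and uneven sampling) goes beyond this.

```python
import numpy as np

def fgn_cov(n, H):
    """Covariance of unit-variance fGn: gamma(k) = 0.5 (|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    k = np.arange(n, dtype=float)
    g = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    idx = np.arange(n)
    return g[np.abs(idx[:, None] - idx[None, :])]

rng = np.random.default_rng(0)
n, H_true = 400, 0.7
x = np.linalg.cholesky(fgn_cov(n, H_true)) @ rng.standard_normal(n)   # synthetic fGn

H_grid = np.linspace(0.05, 0.95, 181)
loglik = np.empty_like(H_grid)
for j, H in enumerate(H_grid):                 # Gaussian log-likelihood on a grid
    C = fgn_cov(n, H)
    _, logdet = np.linalg.slogdet(C)
    loglik[j] = -0.5 * (logdet + x @ np.linalg.solve(C, x))

post = np.exp(loglik - loglik.max())           # flat prior on (0, 1)
post /= post.sum()
H_mean = float(H_grid @ post)                  # point estimate ...
cdf = np.cumsum(post)
lo, hi = H_grid[np.searchsorted(cdf, [0.025, 0.975])]   # ... and 95% credible interval
print(f"posterior mean H = {H_mean:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Because the likelihood is evaluated directly from the covariance of the observed samples, the same construction carries over to short records, where moment-based estimators tend to break down.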
Point processes are a common methodology for modeling sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading -- these phenomena can be reduced to the occurrences of events concentrated in points. Often, these events happen one after the other, defining a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold -- we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes where the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas in most existing models of such processes, past events are assumed to have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, where past events could have excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others, to neuronal activity.
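As a small illustration of such a nonlinear Hawkes process, the following Python sketch evaluates an intensity with one excitatory and one inhibitory exponential kernel, passed through a softplus nonlinearity to keep the rate non-negative, and simulates events by Ogata-style thinning. All parameter values are made up for illustration; the inference method itself is not shown.

```python
import numpy as np

def intensity(t, events, mu=0.8, w=(1.2, -0.9), tau=(0.3, 1.5)):
    """lambda(t) = softplus(mu + sum_k w_k * sum_{t_i < t} exp(-(t - t_i)/tau_k))."""
    drive = mu
    for wk, tk in zip(w, tau):
        drive += wk * sum(np.exp(-(t - ti) / tk) for ti in events if ti < t)
    return float(np.log1p(np.exp(drive)))      # softplus keeps the rate >= 0

def simulate(T=20.0, lam_max=6.0, seed=1):
    """Ogata-style thinning; lam_max is assumed to dominate the intensity here."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)    # candidate event time
        if t >= T:
            return events
        if rng.uniform() < intensity(t, events) / lam_max:
            events.append(t)                   # accept with prob lambda/lam_max

print(f"{len(simulate())} events in [0, 20]")
```

With a purely excitatory kernel the nonlinearity would be unnecessary; it is the negative weight that makes the softplus (or a similar link function) essential, which is exactly what distinguishes the nonlinear model from the classical Hawkes process.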
The second model described in the thesis concerns a specific instance of point processes -- the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the well-known exploration-exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Within our research group Bayesian Risk Solutions we have coined the idea of Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov Chain Monte Carlo (MCMC) simulation algorithm, which provides a full illustration of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. In addition, we calculate derived risk measures (ex-ante/ex-post value-at-risk, capital charges, option prices) and compare them to their classical counterparts. Where statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept, a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools that support experts in expressing their uncertainty. Since Bayesian thinking is often blamed for its arbitrariness, we introduce the idea of a Bayesian due diligence, judging expert assessments according to their information content and their inter-subjectivity.
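As a toy illustration of the MCMC side of this program, the following Python sketch calibrates the simplest conceivable price process (geometric Brownian motion, i.e., i.i.d. Gaussian log-returns) to synthetic data with a Metropolis sampler and reports a posterior interval rather than a single point estimate. Priors, proposal widths, and the data are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(5)
r = rng.normal(0.0002, 0.02, size=1000)        # synthetic daily log-returns

def log_post(mu, sigma):
    """Log-posterior under a flat prior on mu and on sigma > 0."""
    if sigma <= 0:
        return -np.inf
    return -len(r) * np.log(sigma) - 0.5 * np.sum(((r - mu) / sigma) ** 2)

theta, lp = np.array([0.0, 0.05]), log_post(0.0, 0.05)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.0005, size=2)     # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                          # discard burn-in
lo, hi = np.percentile(chain[:, 0], [2.5, 97.5])
print(f"95% posterior interval for the drift mu: [{lo:.5f}, {hi:.5f}]")
```

The posterior sample makes the data-induced uncertainty explicit: every derived risk measure (a value-at-risk, an option price) can be recomputed per draw, yielding a distribution instead of a single classical point estimate.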
BCH codes with combined correction and detection
This thesis investigates, on the basis of BCH codes, how error correction can be combined with the detection of higher numbers of errors. With the procedure of 1-bit correction plus detection of higher error counts, an approach was developed that detects additional errors by solving, in parallel, simple equations of the form s_x = s_1^x. The number of these equations is linear in the number of higher error counts to be checked.
In addition, a further general approach was presented for up to 4-bit corrections with detection of higher numbers of errors. Here, speculative error corrections are carried out in parallel for all correctable numbers of errors. From the error positions determined in this way, speculative syndrome components are generated, by means of which the error positions can be confirmed and higher, detectable numbers of errors can be excluded. The presented approaches differ from the previously known approach, in which the number of error positions is determined by computing determinants in descending order until the first determinant equals zero; there, the computation of determinants requires a factorial number of calculations relative to the number of errors to be checked. Compared with the known sequential Berlekamp-Massey procedure, the calculations in the presented approach consist of simple equations and can be carried out in parallel. In the known procedure for the parallel correction of 4-bit errors, an equation of fourth degree over GF(2^m) has to be solved; this is done by solving an auxiliary equation of third degree and four equations of second degree in parallel. The present work showed that one of the second-degree equations can be saved, which simplifies the hardware in a parallel realization of the 4-bit correction. The results obtained were verified by extensive software simulations and hardware implementations.
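A minimal Python sketch of the first scheme may help. For a binary code of length 15 over GF(2^4) (primitive polynomial x^4 + x + 1, assumed here for illustration), a single-bit error satisfies s_3 = s_1^3, while two-bit errors violate this check and are flagged instead of being miscorrected.

```python
M, PRIM, N = 4, 0b10011, 15        # GF(2^4), primitive poly x^4 + x + 1, code length

def gf_mul(a, b):
    """Polynomial multiplication over GF(2), reduced modulo PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
        b >>= 1
    return r

EXP, LOG = [0] * N, {}
x = 1
for i in range(N):                 # discrete log tables for alpha = x (0b0010)
    EXP[i], LOG[x] = x, i
    x = gf_mul(x, 0b0010)

def syndromes(r):
    """s_k = r(alpha^k) for k = 1, 3; both are zero for every codeword."""
    s1 = s3 = 0
    for i, bit in enumerate(r):
        if bit:
            s1 ^= EXP[i % N]
            s3 ^= EXP[(3 * i) % N]
    return s1, s3

def decode(r):
    s1, s3 = syndromes(r)
    if s1 == 0 and s3 == 0:
        return "no error"
    if s1 != 0 and s3 == gf_mul(gf_mul(s1, s1), s1):   # the check s_3 == s_1^3
        return f"single error at bit {LOG[s1]} -> corrected"
    return "more than one error detected -> not corrected"

word = [0] * N                     # the all-zero word is a codeword of any linear code
word[6] ^= 1
print(decode(word))                # single error: check holds, position recovered
word[2] ^= 1
print(decode(word))                # double error: s_3 != s_1^3, flagged instead
```

Each additional detection condition of the form s_x = s_1^x contributes one more equation of this kind, which is why the detection effort grows only linearly with the number of higher error counts to be screened.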
Be Creative, Now!
(2018)
Purpose – This thesis set out to explore, describe, and evaluate the reality behind the rhetoric of freedom and control in the context of creativity. The overarching subject is the relationship between creativity, freedom, and control, considering that freedom is also used as an element of control to manage creativity.
Design/methodology/approach – In-depth qualitative data were gathered at two innovative start-ups in two ethnographic studies. The data are based on participant observation, interviews, and secondary sources; each study included a three-month field phase, and a total of 41 interviews were conducted across both organizations.
Findings – The thesis provides explanations for the practice of freedom and the control of creativity within organizations and expands the existing theory of neo-normative control. The findings indicate that organizations use complex control systems that allow a high degree of freedom, which paradoxically leads to more control. Freedom serves as a cover for control, which in turn fosters creativity. Covert control even results in a felt responsibility to be creative outside working hours.
Practical implications – Organizations that rely on creativity might use the results of this thesis. Positive workplace control of creativity provides both freedom and structure for creative work: freedom leads to organizational members being more motivated and committing themselves more strongly to their own and the organization's goals, while a degree of structure helps to provide the preconditions for creativity.
Originality/value – The thesis provides insight into an approach to workplace control that has mostly been neglected in creativity research and proposes a modified concept of neo-normative control. It serves to further the understanding of freedom for creativity and to challenge the liberal claims of new forms of control.
The subject of this work is the strain experienced by nursing staff in hospitals. It examines the patterns of work-related behavior and experience with which nurses approach their demands and how, through their personal way of dealing with those demands, they help shape their own strain. The theoretical starting point is provided by salutogenetically oriented resource models, in particular Becker's model of mental health (Becker, 1982, 1986), according to which a person's state of health depends on how well they succeed in coping with external and internal demands with the help of external and internal resources. This is where the diagnostic instrument at the center of this work, the AVEM (work-related behavior and experience patterns; Schaarschmidt & Fischer, 1996, 2001), comes in: it records internal demands and resources of the person and assigns them to four patterns of work-related behavior and experience with reference to health and motivation. The hypotheses assume that, in view of the problematic working conditions in nursing, a withdrawal of engagement predominates, that is, a protective stance against unwanted demands felt to be inappropriate and against conditions over which little influence can be exerted. Where working conditions are at least partly health-promoting and experienced as challenging, more favorable pattern constellations should occur. We suspected that the unfavorable tendencies already show themselves during vocational training and in the early years of professional life, and that pattern changes conducive to health and personality development should be achievable through targeted intervention. Finally, we assumed that the work activity and its associated demands and conditions are perceived in a pattern-specific way. To answer these questions, results from various cross-sectional and longitudinal studies are drawn upon, conducted in Viennese hospitals and nursing schools as well as in German hospitals; findings from other occupational groups are presented for comparison. In addition to the AVEM, further questionnaires were used on the following topics: work-related values, experience of resources in nursing work, experience of strain, and objective characteristics of the work activity. The results confirm the hypotheses in all essential points. Compared with other occupational groups, nurses show marked restrictions in work engagement; with regard to the health-related risk patterns, nursing staff occupy a middle position. The pattern differentiation within the nursing population reveals the strongest differences depending on position: the higher the position, the greater the share of the healthy pattern and the lower the tendency toward resignation. The most risk patterns are found among the nursing staff with the lowest qualification. Nursing students are characterized by a temporarily strong occurrence of resignative patterns of behavior and experience and by a continuous decline in engagement, a trend that continues after entry into professional life. Only targeted, intensive, person-oriented interventions proved suitable for achieving pattern changes conducive to health and personality development.
The work activity and the demands and working conditions associated with it are perceived in a pattern-specific way: persons with reduced engagement or a tendency towards resignation rate essential job characteristics that are credited with personality- and health-promoting effects as of little importance to themselves, and they attribute more deficits to their own behaviour towards patients.

The results indicate that in the nursing profession it is above all the restraint in engagement that calls for critical attention; the problem of "burnout" appears relativised in its significance. More favourable conditions for maintaining and promoting health exist where the concrete work setting offers an extended scope of activity and action as well as more responsibility. These findings are consistent with resource models in work psychology. The findings on nursing students point to partly unfavourable aptitude prerequisites among trainees and suggest questioning the appropriateness of the demands made in nursing schools. With regard to the possibilities of changing the patterns in a way that serves health and motivation, the results showed that behaviour-oriented measures have little prospect of success without simultaneous condition-oriented interventions. Finally, in view of the pattern-specific perception of the work activity and its demands and conditions, it must be stated as a matter of principle that concepts in work psychology which categorically ascribe personality- and health-promoting effects to high or complex demands and extensive degrees of freedom at work require qualification from a differential perspective. The interaction found between personality and working conditions implies that behaviour-oriented and condition-oriented prevention should be seen as inseparably linked.
The Na⁺-K⁺-2Cl⁻ cotransporter (NKCC2) is expressed in the distal nephron of the kidney. Its distribution comprises the epithelia of the medullary and cortical portions of the thick ascending limb (TAL) of the loop of Henle and the macula densa. Resorptive NaCl transport via NKCC2 serves the renal concentrating mechanism and also regulates volume status and blood pressure systemically. The activity of NKCC2 is linked to the phosphorylation of its N-terminal amino acid residues serine 126 and threonine 96/101. This phosphorylation is mediated by the homologous kinases SPAK (SPS-related proline/alanine-rich kinase) and OSR1 (oxidative stress responsive kinase 1), which must themselves be phosphorylated for this purpose. The regulatory context of these kinases is by now well characterised; much less was known, however, about the mechanisms and factors that deactivate NKCC2. The aim of this thesis was therefore to investigate which pathways lead to deactivation of the transporter. The intracellular sorting receptor SORLA (sorting-protein-related receptor with A-type repeats) had previously been characterised with respect to its significance for the nephron. A SORLA-deficient mouse model shows, among other things, strongly reduced NKCC2 phosphorylation, and under osmotic stress SORLA-deficient mice concentrate their urine less efficiently. My results show, using high-resolution techniques, that SORLA is localised apically in the TAL and partially colocalises with NKCC2. Under SORLA deficiency, the SPAK/OSR1 phosphorylation decisive for NKCC2 activity was unchanged compared with the wild type. However, the phosphatase calcineurin Aβ (CnAβ), which is also expressed in the TAL, was increased twofold by Western blot. In parallel, immunohistochemistry confirmed the colocalisation of the enhanced CnAβ signal with NKCC2. Together, both findings point to a link between the reduced NKCC2 phosphorylation and the increased presence of CnAβ under SORLA deficiency. Correspondingly, induced overexpression of SORLA in HEK cells halved the amount of CnAβ protein; SORLA thus controls both the abundance and the cellular distribution of the phosphatase. Furthermore, the interaction between CnAβ and the intracellular domain of SORLA was demonstrated by co-immunoprecipitation and GST pulldown assay, as was the interaction between CnAβ and NKCC2. Since, however, neither SORLA nor NKCC2 exhibits a specific binding motif for CnAβ, intermediary adapter proteins are presumably involved in their binding. Pharmacological inhibition of CnAβ with cyclosporine A (CsA; 1 h) normalised NKCC2 phosphorylation under SORLA deficiency; correspondingly, administration of CsA to TAL cells in vitro led to a sevenfold increase in NKCC2 phosphorylation. In summary, the results show that the phosphatase CnAβ, via its association with NKCC2, can deactivate the transporter in the adluminal cell compartment. This process is governed by the capacity of SORLA to reduce CnAβ apically and thereby support the adluminal phosphorylation and activity of NKCC2. Since calcineurin inhibitors currently form the basis of immunosuppressive therapy, these results are of clinical relevance. Given the co-expression of SORLA and CnAβ in several other organs, the results may also acquire significance beyond the kidney.
The homotrimeric tailspike adhesin of bacteriophage P22 is an established model system whose folding, assembly, and stability have been comprehensively characterised in vivo and in vitro. The central structural motif of the protein is a parallel beta-helix with 13 coils, flanked by an N-terminal capsid-binding domain and a C-terminal trimerisation domain. Each coil comprises three short beta-strands connected by turns and loops of varying length. Owing to this structurally repetitive, solenoid-like architecture, beta-strands of neighbouring coils form elongated beta-sheets. The lumen of the beta-helix contains mostly hydrophobic side chains, stacked linearly and very regularly along the long axis. A highly repetitive structure, extended beta-sheets, and the regular arrangement of similar or identical side chains along the beta-sheet axis are also typical hallmarks of amyloid fibrils, which form in protein folding diseases such as Alzheimer's disease, Creutzfeldt-Jakob disease, Huntington's disease, and type II diabetes. It is presumed that the high stability of the tailspike protein, and likewise that of amyloid fibrils, arises from side-chain stacking, an ordered network of hydrogen bonds, and the rigid oligomeric assembly. To investigate the influence of side-chain stacking on the stability, folding, and structure of the P22 tailspike protein, seven valines in a side-chain stack buried in the lumen of the beta-helix were substituted by the smaller and less hydrophobic alanine and by the bulkier leucine. The effect of the mutations was analysed in two tailspike variants: the trimeric, N-terminally truncated TSPdeltaN construct and the monomeric, isolated beta-helix domain. In general, the experiments showed that mutations to alanine cause stronger effects than mutations to leucine; the dense, hydrophobic packing in the core of the beta-helix thus forms the basis for the stability and folding of the protein. High-resolution crystal structures of two alanine and two leucine mutants demonstrated that the parallel beta-helix motif is highly malleable: mutation-induced changes in side-chain volume are compensated by small, local shifts of main and side chains, so that potential cavities are filled and steric strain is relieved. Many mutants displayed a temperature-sensitive folding phenotype (temperature sensitive for folding, tsf) in vivo and in vitro, i.e. upon raising the temperature, the yields of the N-terminally truncated trimer were markedly reduced compared with the wild type. Further experiments showed that the tsf phenotype was caused by effects on different stages of the maturation process or by a reduction in the kinetic stability of the native trimer. Studies on the full-length and the N-terminally truncated wild-type protein showed that the unfolding reaction of the tailspike trimer is complex: although the kinetics follow apparent two-state behaviour, the unfolding limbs in the chevron plot show non-linear dependencies of the rate constants on the denaturant concentration, curved in different directions.
This behaviour could be caused by a high-energy unfolding intermediate, a broad transition barrier, or parallel unfolding pathways. Using the monomeric, isolated beta-helix domain, in which the N-terminal capsid-binding domain and the C-terminal trimerisation domain are deleted and which acts as an independent folding unit, it was shown that under urea-induced equilibrium all mutants follow two-state behaviour with cooperativities comparable to the wild-type protein. The conformational stabilities of alanine and leucine mutants located centrally in the beta-helix are strongly reduced, whereas mutations in outer regions of the domain have no influence on the stability of the beta-helix. When the incubation times of the equilibrium experiments were extended, the slow formation of aggregates was detected in the transition region of the destabilised mutants. The findings obtained in this thesis suggest that the isolated beta-helix closely resembles a thermolabile monomeric folding intermediate that is decisive for the maturation of the tailspike protein. In this intermediate, a central core comprising coils 4 to 7 and the "dorsal fin" determines stability. This core could serve as a folding nucleus onto which further helix coils dock sequentially and compact in the course of "monomer maturation".
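For orientation, the curvature discussed above is judged against the standard two-state chevron relation, which (a textbook expression assumed here, not quoted from the thesis) reads

    \ln k_{\mathrm{obs}}(c) = \ln\!\left( k_f^{0}\, e^{-m_f c} + k_u^{0}\, e^{+m_u c} \right)

where c is the denaturant concentration, k_f^0 and k_u^0 are the folding and unfolding rate constants in water, and m_f and m_u are the kinetic m-values. Two-state folding predicts linear folding and unfolding limbs of \ln k_{\mathrm{obs}} versus c; curved unfolding limbs, as observed for the tailspike trimer, therefore point to one of the deviations listed above.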
This study addresses the current significance of ethnicity in the rural Dobruja region of Romania under the influence of the political and economic dimensions of globalisation. The analysis of ethnicity focuses on aspects of inclusion and exclusion as well as self-description and ascription by others, which are subject to social change. An overview of the history of the spatially, economically, and socially peripheral study region of Dobruja and of the selected minority groups traces the development of the ethnic composition of the region's population. Six minority groups are chosen as case studies: Aromanians, Roma, Russian Lipovans, Tatars, Turks, and Ukrainians. The central element of the study are qualitative interviews, conducted on the one hand with professional observers and actors who themselves influence the social significance of ethnicity, and on the other hand with residents of 15 selected study locations in Dobruja. The data collected are evaluated with regard to the social role of ethnicity along three groups of factors: globalisation of the economy, minority and cultural policy, and international relations. On this basis, the regional and local significance of ethnic categorisations is analysed in order to capture how ethnic ascriptions are perceived and evaluated in the local living environment.
Risks to cyber resources can arise from unintentional or deliberate threats. These include insider threats from dissatisfied or negligent employees and partners, escalating and emerging threats from around the world, the continuous evolution of attack technologies, and the emergence of new and destructive attacks. Information technology now plays a decisive role in all areas of life, including the military. Ineffective protection of cyber resources can facilitate security incidents and cyber attacks that disrupt critical operations, lead to inappropriate access, disclosure, modification, or destruction of sensitive information, and thus endanger national security, economic well-being, and public health and safety. Often, however, it is not clear which threats are actually present and which of the critical system resources are particularly at risk.
This dissertation proposes various analysis methods for threats to military information technology and tests them in real environments. This concerns infrastructures, IT systems, networks, and applications that process classified information/state secrets, as found in military or governmental organisations. A particular feature of these organisations is the concept of information domains, in which different data elements, such as paper documents and computer files, are classified according to their security sensitivity, e.g. "STRENG GEHEIM" (top secret), "GEHEIM" (secret), "VS-VERTRAULICH" (confidential), "VS-NUR-FÜR-DEN-DIENSTGEBRAUCH" (restricted), or "OFFEN" (unclassified).
A special feature of this work is the access to classified information from different information domains and the process of its release: every publication produced in the course of this work was discussed with, reviewed by, and cleared by members of the organisation, so that no classified information reached the public.
The dissertation first describes threat classification schemes and attacker strategies in order to derive from them a holistic, strategy-based threat model for organisations. It then defines the construction and analysis of a security data flow diagram, which is used to identify, within classified information domains, the operational network nodes that are particularly endangered by the threats. This novel representation makes it possible to understand permitted and prohibited information flows within and between these information domains.
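To illustrate what checking permitted against prohibited flows between ordered classification levels can look like, here is a minimal sketch; the level ordering and the rule that information must not flow downwards are standard multi-level-security assumptions, and the flows listed are invented, so this is not the thesis's exact diagram formalism:

    # Classification levels ordered from lowest to highest sensitivity.
    LEVELS = ["OFFEN", "VS-NUR-FÜR-DEN-DIENSTGEBRAUCH", "VS-VERTRAULICH",
              "GEHEIM", "STRENG GEHEIM"]
    RANK = {name: i for i, name in enumerate(LEVELS)}

    def flow_allowed(source_level: str, target_level: str) -> bool:
        """Assumed rule: a flow is permitted only if it does not move
        information to a less sensitive domain."""
        return RANK[source_level] <= RANK[target_level]

    # Hypothetical flows read off a data flow diagram: (source, target).
    flows = [("OFFEN", "GEHEIM"), ("GEHEIM", "OFFEN")]
    for src, dst in flows:
        status = "permitted" if flow_allowed(src, dst) else "PROHIBITED"
        print(f"{src} -> {dst}: {status}")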
Building on the threat analysis, the message flows of the operational network nodes are then analysed for violations of security policies, and the results are presented in anonymised form by means of the security data flow diagram. Anonymising the security data flow diagrams makes it possible to share them with external experts in order to discuss security issues.
The third part of the thesis shows how extensive log data of the message flows can be examined to determine whether the volume of data can be reduced. To this end, the theory of rough sets from uncertainty theory is used. This approach is tested in a case study, also taking possible anomalies into account, and determines which attributes in log data are most likely to be redundant.
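As a minimal sketch of the rough-set idea behind such attribute reduction (the toy log records below are invented and the thesis's actual algorithm may differ): an attribute is a candidate for removal if dropping it leaves the indiscernibility classes of the records consistent with respect to the decision attribute.

    records = [  # hypothetical log entries: condition attributes + decision
        {"src": "A", "dst": "B", "port": 80, "ok": True},
        {"src": "A", "dst": "C", "port": 80, "ok": False},
        {"src": "B", "dst": "C", "port": 22, "ok": False},
    ]
    condition_attrs = ["src", "dst", "port"]

    def partition(attrs):
        """Group the decisions of records that are indiscernible on attrs."""
        blocks = {}
        for r in records:
            key = tuple(r[a] for a in attrs)
            blocks.setdefault(key, []).append(r["ok"])
        return blocks

    def consistent(attrs):
        """True if every indiscernibility block has a unique decision."""
        return all(len(set(dec)) == 1 for dec in partition(attrs).values())

    for a in condition_attrs:
        reduced = [x for x in condition_attrs if x != a]
        if consistent(reduced):
            print(f"attribute '{a}' appears redundant")

On this toy table, 'src' and 'port' turn out to be dispensable while 'dst' is needed to keep the decision unambiguous.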
Microswimmers, i.e. swimmers of micron size that experience low Reynolds numbers, have received a great deal of attention in recent years, since many applications are envisioned in medicine and bioremediation. A promising field is that of magnetic swimmers, since magnetism is biocompatible and can be used to direct or actuate the swimmers. This thesis studies two examples of magnetic microswimmers from a physics point of view.
The first system studied are magnetic cells, which can be magnetic biohybrids (a swimming cell coupled with a magnetic synthetic component) or magnetotactic bacteria (naturally occurring bacteria that produce an intracellular chain of magnetic crystals). A magnetic cell can passively interact with external magnetic fields, which can be used to direct it. The aim of the thesis is to understand how magnetic cells couple this magnetic interaction to their swimming strategies, in particular how they combine it with chemotaxis (the ability to sense external gradients of chemical species and to bias their walk along these gradients). One open question concerns the advantage that these magnetic interactions confer on magnetotactic bacteria in a natural environment, such as porous sediments. In the thesis, a modified Active Brownian Particle model is used to perform simulations and to reproduce experimental data for different systems, such as bacteria swimming in the bulk, in a capillary, or in confined geometries. I will show that magnetic fields speed up chemotaxis under special conditions, depending on parameters such as the swimming strategy (run-and-tumble or run-and-reverse), the aerotactic strategy (axial or polar), and the magnetic field (intensity and orientation), but that they can also hinder bacterial chemotaxis depending on the system.
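A minimal sketch of the Active Brownian Particle picture invoked above: a 2D swimmer whose orientation diffuses rotationally and relaxes towards an external magnetic field. All parameter values are illustrative assumptions, not the fitted values of the thesis, and chemotaxis is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, steps = 0.01, 10_000
    v = 20.0        # swimming speed, assumed
    D_rot = 0.1     # rotational diffusion coefficient, assumed
    omega_B = 2.0   # alignment rate with the field, assumed
    phi_B = 0.0     # field direction (rad)

    pos = np.zeros(2)
    phi = rng.uniform(0, 2 * np.pi)
    for _ in range(steps):
        # magnetic alignment torque plus rotational noise
        phi += -omega_B * np.sin(phi - phi_B) * dt \
               + np.sqrt(2 * D_rot * dt) * rng.normal()
        # self-propulsion along the current orientation
        pos += v * dt * np.array([np.cos(phi), np.sin(phi)])
    print("final position:", pos)

Setting omega_B to zero recovers an ordinary active Brownian particle; increasing it biases the trajectory along the field direction.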
The second example of magnetic microswimmers are rigid magnetic propellers, such as helices or randomly shaped propellers. These propellers are actuated and directed by an external rotating magnetic field. One open question is how shape and magnetic properties influence the propeller behaviour; the goal of this research field is to design the best propeller for a given situation. The aim of the thesis is to propose a simulation method that reproduces the behaviour of experimentally realised propellers and determines their magnetic properties. The hydrodynamic simulations are based on the mobility matrix. As the main result, I propose a method to match the experimental data, showing that not only the shape but also the magnetic properties influence the propellers' swimming characteristics.
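To make the mobility-matrix formalism concrete: at low Reynolds number, the translational and angular velocities of a rigid body respond linearly to the applied force and torque. The 6x6 matrix below is a made-up example with a translation-rotation coupling entry (as a helix would have), not one computed for any real propeller.

    import numpy as np

    M = np.eye(6)              # placeholder mobility matrix
    M[0, 3] = M[3, 0] = 0.2    # coupling: torque about x also drives motion along x

    wrench = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # pure torque about x
    velocity = M @ wrench
    U, Omega = velocity[:3], velocity[3:]
    print("translation:", U, "rotation:", Omega)

The non-zero coupling entry is what turns rotation (imposed by the rotating field) into net propulsion, which is the mechanism exploited by such propellers.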
Business Process Management (BPM) emerged as a means to control, analyse, and optimise business operations. Conceptual models are of central importance for BPM. Most prominently, process models define the behaviour that is performed to achieve a business value. In essence, a process model is a mapping of properties of the original business process to the model, created for a purpose. Different modelling purposes, therefore, result in different models of a business process. Against this background, the misalignment of process models often observed in the field of BPM is no surprise. Even if the same business scenario is considered, models created for strategic decision making differ significantly in content from models created for process automation. Despite their differences, process models that refer to the same business process should be consistent, i.e., free of contradictions. Evidently, there is a trade-off between the strictness of a consistency notion and the appropriateness of process models serving different purposes. Existing work on consistency analysis builds upon behaviour equivalences and hierarchical refinements between process models. Hence, these approaches are computationally hard and do not offer the flexibility to gradually relax consistency requirements towards a certain setting. This thesis presents a framework for the analysis of behaviour consistency that takes a fundamentally different approach. As a first step, an alignment between corresponding elements of related process models is constructed. Then, behavioural analysis is conducted on a relational abstraction of the behaviour of a process model, its behavioural profile: pairs of activities are classified as being in strict order, exclusive to each other, or interleaving. Different variants of these profiles are proposed, along with efficient computation techniques for a broad class of process models. Using behavioural profiles, the consistency of an alignment between process models is judged by different notions and measures. The consistency measures are also adjusted to assess the conformance of process logs that capture the observed execution of a process. Further, this thesis proposes various complementary techniques to support consistency management. It elaborates on how to implement consistent change propagation between process models, addresses the exploration of behavioural commonalities and differences, and proposes a model synthesis for behavioural profiles.
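As a rough illustration of the profile relations just mentioned, the sketch below approximates them from example traces rather than from a process model; both the traces and the trace-based derivation are illustrative assumptions, not the thesis's definitions over models.

    from itertools import combinations

    traces = [["a", "b", "c"], ["a", "c", "b"]]
    activities = sorted({x for t in traces for x in t})

    # weak order: (x, y) if x occurs before y in at least one trace
    weak = set()
    for t in traces:
        for i, x in enumerate(t):
            for y in t[i + 1:]:
                weak.add((x, y))

    for x, y in combinations(activities, 2):
        if (x, y) in weak and (y, x) in weak:
            rel = "interleaving"
        elif (x, y) in weak:
            rel = f"strict order {x} -> {y}"
        elif (y, x) in weak:
            rel = f"strict order {y} -> {x}"
        else:
            rel = "exclusive"
        print(x, y, ":", rel)

Here 'a' strictly precedes both 'b' and 'c', while 'b' and 'c' are interleaving because they occur in both orders across the traces.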
Modern technologies enable the actors involved in a production process to carry out information gathering, decision making, and decision execution on their own. Hierarchical control relationships are dissolved and decision making is distributed across a large number of actors. Positive consequences include, among others, the use of local competencies and fast action on site without time-consuming cross-process planning runs by a central control instance. Assessing the decentrality of the process helps to compare different control strategies and thus contributes to mastering more complex production processes.
Although the communication structure of the actors involved in decision making is becoming increasingly important, no method exists that uses it as a basis for operationalising decentrality. This is where this thesis comes in. A three-stage assessment model is developed that determines the decentrality of a production process on the basis of the communication and decision structure of the autonomous actors involved in the process.
Building on a definition of the decentrality of production processes, requirements for a metric are collected and, on the basis of the communication structure, a measure from social network analysis is identified that determines the structural autonomy of the actors. The need to additionally take the decision structure into account is justified by the possibility of integrating decision making and decision execution.
The differentiation of these two factors forms the basis for classifying the actors; multiplying the two values yields the actual autonomy score describing an actor's autonomy, which is the result of the first stage of the model. Homogeneous actor values characterise a high decentrality of the process step, which is the object of the second stage. By comparing the existing decentrality of the process steps with the maximum possible decentrality, the third stage determines the autonomy index, which operationalises the decentrality of the process.
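A minimal sketch of the three-stage computation described above, with invented numbers: per actor, a structural-autonomy value (derived from the communication structure) is multiplied by a decision factor to give the actual autonomy; per process step, the homogeneity of these values enters the step score; the process-level index compares against the maximum possible value. The exact network measure and normalisation used in the thesis are not reproduced here.

    steps = {  # process step -> list of (structural_autonomy, decision_factor)
        "assembly": [(0.8, 1.0), (0.7, 1.0), (0.75, 1.0)],
        "dispatch": [(0.9, 1.0), (0.2, 0.5)],
    }

    def step_decentrality(actors):
        autonomy = [s * d for s, d in actors]    # stage 1: actual autonomy
        mean = sum(autonomy) / len(autonomy)
        spread = max(autonomy) - min(autonomy)   # stage 2: homogeneity
        return mean * (1.0 - spread)             # homogeneous and high -> high

    # stage 3: autonomy index of the process, relative to the maximum of 1.0
    values = [step_decentrality(a) for a in steps.values()]
    autonomy_index = sum(values) / len(values)
    print("autonomy index:", round(autonomy_index, 3))

In this toy example the "assembly" step scores high because its actors are uniformly autonomous, while the heterogeneous "dispatch" step pulls the index down.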
The assessment model is validated by means of a simulation study in the Zentrum Industrie 4.0. For this purpose, the model is applied to two simulation experiments, one with centralised and one with decentralised control, and the results are compared. In addition, the model is applied to an extensive production process from industrial practice.