The acquisition of phonological alternations involves many aspects, as discussions in the relevant literature show. There are contrary findings about the role of naturalness. Natural processes are grounded in phonetics; they are easy to learn, even in second language acquisition, when adults have to learn processes that do not occur in their native language. There is also evidence that unnatural, i.e. arbitrary, rules can be learned. Current work on the acquisition of morphophonemic alternations suggests that their probability of occurrence is a crucial factor in acquisition. I conducted an experiment with 80 adult native speakers of German to investigate the effects of naturalness as well as of probability of occurrence. It uses the Artificial Grammar paradigm: two artificial languages were constructed, each with a particular alternation. In one language the alternation is natural (vowel harmony); in the other it is arbitrary (a vowel alternation depends on the sonorancy of the first consonant of the stem). The participants were divided into two groups: one group listened to the natural alternation, the other to the unnatural one. Each group was divided into two subgroups. One subgroup was then presented with material in which the alternation occurred frequently, the other with material in which it occurred infrequently. After this exposure phase, every participant was asked to produce new words during the test phase. Knowledge of the language-specific alternation pattern was needed to produce the forms correctly, as the phonological contexts demanded certain alternants. The group performances were compared with respect to the effects of naturalness and probability of occurrence. The natural rule was learned more easily than the unnatural one. Frequently presented rules were not learned more easily than those presented less frequently.
Moreover, participants did not learn the unnatural rule at all; whether this rule was presented frequently or infrequently did not matter. There was a tendency for the natural rule to be learned more easily when presented frequently than when presented infrequently, but this tendency was not significant due to variability across participants.
Emotions are a complex concept, and they are present in our everyday life. Persons on the autism spectrum are said to have difficulties in social interactions, showing deficits in emotion recognition in comparison to neurotypically developed persons. However, social-emotional skills are believed to improve with training. A new adaptive social cognition training tool, “E.V.A.”, is introduced, which teaches emotion recognition from face, voice, and body language. One cross-sectional and one longitudinal study with adult neurotypical and autistic participants were conducted. The aim of the cross-sectional study was to characterize the two groups and to see whether differences in their social-emotional skills exist. The longitudinal study, on the other hand, aimed to detect possible training effects following training with the new tool. In addition, usability assessments were conducted in both studies to investigate the perceived usability of the new tool for neurotypical as well as autistic participants. Differences were found between autistic and neurotypical participants in their social-emotional and emotion recognition abilities. Training effects for neurotypical participants in an emotion recognition task were found after two weeks of home training. Similar perceived usability was found for neurotypical and autistic participants. The current findings suggest that persons with ASC do not have a general deficit in emotion recognition, but need more time to recognize emotions correctly. In addition, the findings suggest that training emotion recognition abilities is possible. Further studies are needed to verify whether the training effects found for neurotypical participants also manifest in a larger ASC sample.
In this cartography, I examine M.K. Gandhi’s practice of fasting for political purposes from a specifically aesthetic perspective; in other words, I foreground the fasts’ dramatic qualities: how, in their expressive repetition, patterning, and stylization, they produced and a/effected heightened forms of emotion. To carry out this task, I follow the features set out by the theater scholar Erika Fischer-Lichte in her book Ästhetik des Performativen (2004). The cartography is framed by a philosophical presentation of Gandhi’s discourse as well as of his historical sources. Moreover, as a second frame, I historicize the fasts by means of a typology and a teleology in context.
The historically and discursively framed cartography maps four main dimensions that define the aesthetics of the performative: mediality, materiality, semioticity, and aestheticity. The first part analyses the medial platforms in which the fasts as events have been historically recorded and in which they have left their traces and inscriptions; these historical sources are newspapers, images, newsreels, and a documentary film. Secondly, the material dimension depicts Gandhi’s corporeal condition as well as the spatiality and temporality of the fasts. Thirdly, I critically revise and reformulate Fischer-Lichte’s concepts of “presence” and “representation” with resonating concepts of G. C. Spivak and J. Rancière. This revision illustrates Gandhi’s fasts and shows the process by which an individual may become the embodiment or representation of a national body politic. The last chapter of the cartography explores the autopoietic feedback loop between Gandhi and the people and finishes with a comparison of the mise en scène of the hunger artists with the fasts of the Indian politician, social reformer, and theologian. The text concludes by interpreting Gandhi’s practice of fasting in the light of the concepts of “intellectual emancipation” and “de-subjectivation” of the philosopher J. Rancière.
The four main concerns of this cartography are as follows. Firstly, in the field of Gandhi’s reception, it explores the aesthetic dimension as both an alternative and a complement to the two hegemonic interpretative lenses, i.e. a hagiographic or a secular political understanding of the fasts. Secondly, from a theoretical perspective, the cartography aims to be a transdisciplinary experiment that deploys concepts traditionally developed, derived from, and used in the field of the arts (theater, film, literature, aesthetic performance, etc.) in the field of the political; in brief, inverting an expression of Rancière, to understand politics as aesthetics. Thirdly, from a thematic point of view, the cartography inquires into the historical forms of the staging and perception of hunger. Last but not least, it is an inquiry into the practice of fasting as nonviolence, which Gandhi, its most sophisticated modern theoretician and practitioner, considered its most radical expression.
Conventional wisdom holds that the large sums of money poured into election campaigns are a gateway to corruption. Allegations of the corrupting influence of money in politics and policy are widespread at the national level. Yet little empirical evidence has advanced the understanding of such a link at the local level, and corruption measures remain blurred. This master’s thesis tests the effect of campaign finance on public procurement corruption risks in Colombian municipalities, focusing on donations, small donations, and financial disclosure. To that end, I drew on publicly disclosed contribution-level data from the 2015 municipal elections and a novel index of institutionalized public procurement corruption risks based upon contract-level data from the near-complete population of local governments. The analysis shows that donations are negatively associated with overall corruption risk, yet they affect specific corruption risks differently. By contrast, small donations seem to correlate positively with direct awarding for a sub-sample of medium-sized municipalities, whereas in their large-sized counterparts the effect of the former on institutionalized corruption is adverse. Finally, financial misreporting is positively linked with market competition restrictions and direct awarding. In the conclusion, I discuss the implications of these findings for future research and outline a series of policy recommendations.
Civil society is considered either a motor of democratization or a stabilizer of authoritarian rule. This dichotomy is partly due to the dominance of domain-based definitions of the concept that reduce civil society to a small range of formally organized, independent, and democratically oriented NGOs. Additionally, research often treats civil society as a ‘black box’ without differentiating between potential variations in the impact of different types of civil society actors on existing regime structures. In this thesis, I present an alternative conceptualization of civil society based on the interactions of societal actors in order to arrive at a more inclusive understanding of the term which is better suited to analysis in non-democratic settings. The operationalization of the action-based approach I develop allows for an empirical assessment of a large range of societal activities, which can accordingly be categorized from little to very civil society-like depending on their specific modes of interaction within four dimensions. I employ this operationalization in a qualitative case study including different actors in the authoritarian monarchy of Jordan, which suggests that Jordanian societal actors mostly exhibit tolerant and democratically oriented modes of interaction and do not reproduce authoritarian patterns. However, even democratically oriented actors do not necessarily take on oppositional positions vis-à-vis the authoritarian regime. Thus, Jordanian civil society might not feature a high potential to challenge existing power structures in the country.
Phase space reconstruction is a method that makes it possible to reconstruct the phase space of a system using only a one-dimensional time series as input. It can be used for calculating Lyapunov exponents and detecting chaos, it helps to understand complex dynamics and their behavior, and it can reproduce data that were not measured. There are several different methods that produce correct reconstructions, such as time delay, Hilbert transformation, derivation, and integration. Time delay is the most widely used, but each method has special properties that are useful in different situations; hence every reconstruction method has situations in which it is the best choice. Looking at all these different methods, the questions are: Why can such different-looking methods be used for the same purpose? Is there any connection between them? The answer is found in the frequency domain: after a Fourier transformation, all these methods take a similar shape. Every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series: it contains the original data but applies a new focus, amplifying some parts and reducing others. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified that are mandatory for the reconstruction function. Under these restrictions one obtains a whole family of new reconstruction functions, which makes it possible to reduce noise within the reconstruction process itself, or to exploit the advantages of known reconstruction methods while suppressing their unwanted characteristics.
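The filter view of delay reconstruction can be illustrated with a small sketch (a hypothetical illustration, not code from the thesis): a time-delay coordinate is reproduced by multiplying the Fourier spectrum with the pure phase factor exp(2πifτ).

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Classical time-delay reconstruction: stack delayed copies of the
    scalar series x into a dim-dimensional phase-space trajectory."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def delay_as_filter(x, tau):
    """The same delay expressed in the frequency domain: multiplying the
    spectrum by exp(2*pi*i*f*tau) advances x by tau samples (circularly),
    i.e., the delay coordinate is just a phase-only filter of the data."""
    f = np.fft.fftfreq(len(x))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(2j * np.pi * f * tau)))

# Both routes yield the same second reconstruction coordinate:
t = np.linspace(0, 8 * np.pi, 1024, endpoint=False)
x = np.sin(t) + 0.5 * np.sin(3 * t)
emb = delay_embed(x, dim=2, tau=10)
filtered = delay_as_filter(x, tau=10)
print(np.allclose(emb[:, 1], filtered[: len(emb)]))  # → True
```

Other reconstruction functions (Hilbert transform, derivative, integral) differ only in the multiplier applied to the spectrum, which is what unifies them as filters.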
Complex network theory provides an elegant and powerful framework to statistically investigate the topology of local and long-range dynamical interrelationships, i.e., teleconnections, in the climate system. Employing a refined methodology relying on linear and nonlinear measures of time series analysis, the intricate correlation structure within a multivariate climatological data set is cast into network form. Within this graph-theoretical framework, vertices are identified with grid points taken from the data set, each representing a region on the Earth's surface, and edges correspond to strong statistical interrelationships between the dynamics on pairs of grid points. The resulting climate networks are neither perfectly regular nor completely random, but display the intriguing and nontrivial characteristics of complexity commonly found in real-world networks such as the internet, citation and acquaintance networks, food webs, and cortical networks in the mammalian brain. Among other interesting properties, climate networks exhibit the "small-world" effect and possess a broad degree distribution with dominating super-nodes as well as a pronounced community structure. We have performed an extensive and detailed graph-theoretical analysis of climate networks on the global topological scale, focusing on the flow and centrality measure betweenness, which is locally defined at each vertex but includes global topological information by relying on the distribution of shortest paths between all pairs of vertices in the network. The betweenness centrality field reveals a rich internal structure in complex climate networks constructed from reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature data.
Our novel approach uncovers an elaborately woven meta-network of highly localized channels of strong dynamical information flow, which we relate to global surface ocean currents and dub the backbone of the climate network, in analogy to the homonymous data highways of the internet. This finding points to a major role of the oceanic surface circulation in coupling and stabilizing the global temperature field in the long-term mean (140 years for the model run and 60 years for the reanalysis data). Carefully comparing the backbone structures detected in climate networks constructed using linear Pearson correlation and nonlinear mutual information, we argue that the high sensitivity of betweenness with respect to small changes in network structure may make it possible to detect the footprints of strongly nonlinear physical interactions in the climate system. The results presented in this thesis are thoroughly founded and substantiated using a hierarchy of statistical significance tests on the level of time series and networks, i.e., by tests based on time series surrogates as well as network surrogates. This is particularly relevant when working with real-world data. Specifically, we developed new types of network surrogates to include the additional constraints imposed by the spatial embedding of the vertices in a climate network. Our methodology is of potential interest for a broad audience within the physics community and various applied fields, because it is universal in the sense of being valid for any spatially extended dynamical system. It can help to understand the localized flow of dynamical information in any such system by combining multivariate time series analysis, a complex network approach, and the information flow measure betweenness centrality. Possible fields of application include fluid dynamics (turbulence), plasma physics, and biological physics (population models, neural networks, cell models).
Furthermore, the climate network approach is equally relevant for experimental data and model simulations, and hence introduces a novel perspective on model evaluation and data-driven model building. Our work is timely in the context of the current debate on climate change within the scientific community, since it allows the regional vulnerability and stability of the climate system to be assessed from a new perspective, relying on global and not only regional knowledge. The methodology developed in this thesis hence has the potential to contribute substantially to the understanding of the local effects of extreme events and tipping points in the Earth system within a holistic global framework.
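The betweenness measure central to this analysis can be sketched on a toy graph (a minimal pure-Python illustration of the definition, not the thesis pipeline, which operates on climate networks with thousands of vertices): a vertex's betweenness is the fraction of all-pairs shortest paths that pass through it, so bridge vertices between regions dominate, just as the "backbone" channels do in the climate network.

```python
from collections import deque
from itertools import permutations

def betweenness(adj):
    """Shortest-path betweenness by explicit path counting: for every
    ordered pair (s, t), enumerate all shortest s-t paths and credit each
    interior vertex with its fraction of those paths."""
    bc = {v: 0.0 for v in adj}
    for s, t in permutations(adj, 2):
        dist, preds, queue = {s: 0}, {v: [] for v in adj}, deque([s])
        while queue:                        # BFS building the shortest-path DAG
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist.get(w) == dist[u] + 1:
                    preds[w].append(u)
        paths = []
        def walk(v, tail):                  # unwind the DAG from t back to s
            if v == s:
                paths.append(tail)
            for p in preds[v]:
                walk(p, [p] + tail)
        if t in dist:
            walk(t, [t])
        for path in paths:
            for v in path[1:-1]:            # credit interior vertices only
                bc[v] += 1.0 / len(paths)
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve ordered pairs

# Two "regions" (triangles) joined by a single edge: the bridge vertices
# c and d carry all inter-region shortest paths and dominate the field.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
       "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"}}
print(betweenness(adj))  # c and d score 6.0, all other vertices 0.0
```

Production analyses would use an optimized algorithm (e.g., Brandes'), but the definition being computed is the same.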
DeepGeoMap
(2021)
In recent years, deep learning has improved the way remote sensing data are processed, and the classification of hyperspectral data is no exception. 2D or 3D convolutional neural networks have outperformed classical algorithms on hyperspectral image classification in many cases. However, geological hyperspectral image classification poses several challenges: geological scenes often contain spatially more complex objects than those found in other disciplines of hyperspectral imaging, where objects tend to be spatially more similar (e.g., in industrial applications or aerial urban and farmland cover types). In geological hyperspectral image classification, classical algorithms that focus on the spectral domain therefore still often show higher accuracy, more sensible results, or greater flexibility due to their independence from spatial information. In the framework of this thesis, inspired by classical machine learning algorithms that focus on the spectral domain, such as the binary feature fitting (BFF) and EnGeoMap algorithms, the author proposes, develops, tests, and discusses a novel, spectrally focused, spatial-information-independent, deep multi-layer convolutional neural network, named 'DeepGeoMap', for hyperspectral geological data classification. More specifically, the architecture of DeepGeoMap uses a sequential series of different 1D convolutional neural network layers and fully connected dense layers and utilizes rectified linear unit and softmax activation, 1D max and 1D global average pooling layers, additional dropout to prevent overfitting, and a categorical cross-entropy loss function with Adam gradient descent optimization. DeepGeoMap was realized using Python 3.7 and the machine and deep learning interface TensorFlow with graphical processing unit (GPU) acceleration.
This 1D, spectrally focused architecture allows DeepGeoMap models to be trained with hyperspectral laboratory image data of geochemically validated samples (e.g., ground truth samples for aerial or mine face images); the laboratory-trained model can then be used to classify other or larger scenes, similarly to classical algorithms that use a spectral library of validated samples for image classification. The classification capabilities of DeepGeoMap have been tested using two geological hyperspectral image data sets, both geochemically validated: one based on iron ore and the other on copper ore samples. The copper ore laboratory data set was used to train a DeepGeoMap model for the classification and analysis of a larger mine face scene within the Republic of Cyprus, where the samples originated from. Additionally, a benchmark satellite-based data set, the Indian Pines data set, was used for training and testing. The classification accuracy of DeepGeoMap was compared to classical algorithms and other convolutional neural networks. It was shown that DeepGeoMap could achieve higher accuracies and outperform these classical algorithms and other neural networks in the geological hyperspectral image classification test cases. The spectral focus of DeepGeoMap was found to be its most considerable advantage compared to spectral-spatial classifiers such as 2D or 3D neural networks. This enables DeepGeoMap models to be trained independently of different spatial entities, shapes, and/or resolutions.
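The spectrally focused layer sequence described above (Conv1D → ReLU → global average pooling → dense → softmax) can be sketched in miniature. This is a hypothetical NumPy illustration of the forward pass of such a pipeline, not the author's TensorFlow implementation; the band count, filter sizes, and class count are invented, and training (dropout, cross-entropy, Adam) is omitted. The point is that every operation acts along the spectral axis of a single pixel, with no spatial neighborhood involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1D convolution of a pixel spectrum x (n_bands,) with a bank
    of kernels (n_filters, k), followed by ReLU activation."""
    n_filters, k = kernels.shape
    n_out = len(x) - k + 1
    out = np.empty((n_filters, n_out))
    for i in range(n_filters):
        for j in range(n_out):
            out[i, j] = kernels[i] @ x[j : j + k]
    return np.maximum(out, 0.0)  # ReLU

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: a 200-band pixel spectrum, 8 filters, 4 mineral classes.
spectrum = rng.normal(size=200)
kernels = rng.normal(size=(8, 7))
dense_weights = rng.normal(size=(4, 8))

features = conv1d_relu(spectrum, kernels)   # (8, 194) spectral feature maps
pooled = features.mean(axis=1)              # 1D global average pooling
probs = softmax(dense_weights @ pooled)     # dense layer + softmax over classes
print(probs.sum().round(6))  # → 1.0, a probability distribution over classes
```

Because the input is a single spectrum, a model like this can be trained on laboratory spectra and applied pixel-wise to scenes of any spatial shape or resolution, which is the independence property the thesis emphasizes.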
Hunting Down Animal Verbs
(2022)
Language change is an essential feature of human language, and it is therefore one of the focal areas of the scientific study of language. Language change is always tacitly at work in all languages of the world and at all levels of a given language, be it phonology, morphology, syntax, semantics, etc. It has been suggested that it is precisely the capacity to constantly change and adjust that allows language to keep serving the communicative goals of its users, from ancient to modern times (Fauconnier & Turner, 2003, p. 179).
This thesis investigates an especially salient pattern of lexicogrammatical change, namely the word-formation of verbs from animal nouns by zero-derivation, in the process of which such nouns as, for example, dog, horse, or beaver change their usage and meaning to produce animal verbs: to dog ‘to follow someone persistently and with a malicious intent’, to horse about/around ‘to make fun of, to ‘rag’, to ridicule someone’ and to beaver away ‘to work with great enthusiasm’ respectively. In the previous literature this pattern of language change has been termed verbal zoosemy (e.g. Kiełtyka, 2016), i.e. the metaphorical construal of human actions by means of linguistic material from the domain of animals.
The approach taken in this study is not simply to report on the objective changes in the morphology, syntactic distribution, and meaning of such linguistic units before and after conversion, but to uncover the complexity of the cognitive mechanisms which allow the speakers of English to reclassify such well-established nominal units as animal nouns into verbs. It is assumed that the grammatical change in these lexical units is predicated on and triggered by preceding semantic change. Thus, the study is set in the framework of Cognitive Historical Semantics and employs Conceptual Metaphor and Metonymy Theory (CMMT) to untangle the intricacies of the semantic change that makes the grammatical change of animal nouns into verbs possible and acceptable in the minds of English speakers.
To this end, this study employed the Oxford English Dictionary Online (OED Online) to compile a glossary of 96 denominal animal verbal forms tied to 209 verbal senses (most verbs in the dataset displayed polysemy). The data collected from the OED Online included not only the senses of the verbs, but also the date of the earliest recorded use of the verbal form with the given sense (regarded in the study as the date of conversion), the earliest usage examples for individual senses and morphologically or semantically related linguistic units from the lexical field of the respective parent noun which were amenable to explaining the observed instances of semantic change. Each instance of zoosemisation, i.e. of the creation of a separate metaphorical verbal sense, was then carefully analysed on the basis of the data collected and classified with the help of the CMMT. In the final stage, a comprehensive and systematic classification of the senses of animal verbs in accordance with the cognitive mechanisms of their creation (metaphor, metonymy, or a combination thereof) was produced together with a timeline of the first appearance of individual metaphorical senses of animal verbs recorded in the OED.
The results show that animal verbs are produced through the interaction of conceptual metaphor and metonymy. Specifically, it was established that two major patterns of metaphor-metonymy interaction underpinning the process of verbal zoosemisation are metaphor from metonymy and metonymy from metaphor. In the former pattern, either an already existing metonymic animal verb is expanded to include the target domain PEOPLE, or the animal noun itself acts as a metonymic vehicle to a certain element of the idealised cognitive model of the given animal, which is metaphorically projected onto people. In the latter mechanism, a metaphorical projection of an animal term initially enters the lexicon in the form of a metaphorical animal noun referring to a human entity, and later in the course of language development it comes to metonymically stand for the action, which the given entity either performs or is involved in. Secondarily, it was observed that individual animal nouns can undergo multiple rounds of zoosemic conversion over time depending on the semantic frame in which the given linguistic unit undergoes denominal conversion, and that results in the polysemy of most animal verbs.
The Rio Conventions form the centerpiece of international cooperation within the governance areas of climate change, biodiversity, and desertification. Due to substantial environmental and political linkages, there are interrelations between the three regimes. This study seeks to examine the inter-institutional relationship between the United Nations Framework Convention on Climate Change, the Convention on Biological Diversity, and the United Nations Convention to Combat Desertification by analyzing and assessing their horizontal interplay activities from their genesis at the Earth Summit in 1992 until today. In this research, I address the connections between the three conventions and identify the conflicting, cooperative, and synergetic aspects of their inter-institutional relationship. While the overall empirical analysis suggests weak indications of a conflictive type, this research asserts that the interplay activities have thus far led to a cooperative relationship between the Rio Conventions. Moreover, increasing coordination and collaboration between the conventions’ treaty secretariats signals characteristics of a synergetic relationship, which could open up a potential window of opportunity for these actors to further engage and progress in institutional management in the future. In conclusion, this study explores the possibility of the formation of an overarching environmental institution as a result of joint institutional management within the complex of climate change, biodiversity, and desertification.
This paper deals with the teaching of grammar in the English as a foreign language (EFL) classroom. In this context, a course book (English G 21 A2) is examined with regard to whether it is compatible with current theories of second language acquisition (SLA).
At the beginning of this paper, views on grammar teaching from the past and the present are summarized, followed by an analysis of the current curriculum concerning its guidelines for grammar teaching in the foreign language classroom. This analysis concludes that the curriculum of Brandenburg hardly gives any recommendations regarding which grammatical phenomena should be taught. This explains, at least partly, the important position course books occupy in the foreign language classroom: teachers use them as a source of material as well as a guideline for which topics can be taught and in which order.
The following part gives an overview of cognitive models of SLA and foreign language teaching, among others Krashen’s Monitor Hypothesis, R. Ellis’ Weak Interface Model, and Pienemann’s Processability Theory. On the basis of these models, criteria for the ideal design of a course book that would support grammar teaching according to current findings are developed. Among those criteria are the provision of plenty of input in the target language, of practice activities and consciousness-raising activities, consideration of the sequence of acquisition, and a diagnostic tool which enables students to find out in which areas of the target language they need to improve. Furthermore, the inclusion of opportunities for (individual) revision is regarded as essential. All of these criteria are, of course, given under the reservation that the influence of course books on what happens in the classroom is limited, as the final decisions are made by the teacher in the teaching situation.
The analysis focuses on one communicative intention which is usually a topic in English lessons between the third and sixth year of learning: talking about the future. First, the possibilities for expressing futurity in the English language are analysed and reduced for use in teaching. The chosen course book is then described and analysed, and the way the book deals with the topic of talking about the future is compared to the criteria specified earlier in the paper. This comparison showed that the book is compatible with SLA theories in many ways (e.g. concerning the explanations of grammatical structures) but that there is still room for improvement (e.g. concerning the amount of input and the number of consciousness-raising activities).
Recently, several faint ringlets in the Saturnian ring system were found to maintain a peculiar orientation relative to the Sun. The Encke gap ringlets as well as the ringlet in the outer rift of the Cassini division were found to have distinct spatial displacements of several tens of kilometers away from Saturn towards the Sun, referred to as heliotropicity (Hedman et al., 2007). This is quite exceptional, since dynamically one would expect eccentric features in the Saturnian rings to precess around Saturn over periods of months. In our study we address this exceptional behavior by investigating the dynamics of circumplanetary dust particles with sizes in the range of 1-100 µm. These small particles are perturbed by non-gravitational forces, in particular solar radiation pressure, the Lorentz force, and planetary oblateness, on time scales of the order of days. The combined influences of these forces cause periodic evolutions of the grains’ orbital eccentricities as well as precession of their pericenters, which can be shown by secular perturbation theory. We show that this interaction results in a stationary eccentric ringlet, oriented with its apocenter towards the Sun, which is consistent with observational findings. By applying these heliotropic dynamics to the central Encke gap ringlet, we can give a limit for the expected smallest grain size in the ringlet of about 8.7 microns, and constrain the minimal lifetime to be of the order of months. Furthermore, our model matches the observed ringlet eccentricity in the Encke gap fairly well, which supports recent estimates of the size distribution of the ringlet material (Hedman et al., 2007). However, the ringlet width that results from our modeling based on heliotropic dynamics overestimates the observed confined ringlet width by a factor of 3 to 10, depending on the width measure being used.
This is indicative of mechanisms not included in the heliotropic model which potentially confine the ringlet to its observed width, including shepherding and scattering by embedded moonlets in the ringlet region. Based on these results, early investigations (Cuzzi et al., 1984; Spahn and Wiebicke, 1989; Spahn and Sponholz, 1989), and recent work on the F ring (Murray et al., 2008), with which the Encke gap ringlets are found to share similar morphological structures, we model the maintenance of the central ringlet by embedded moonlets. These moonlets, believed to be hundreds of meters across, release material into space as they are eroded by micrometeoroid bombardment (Divine, 1993). We further argue that Pan, one of Saturn’s moons, which shares its orbit with the central ringlet of the Encke gap, is a rather weak source of ringlet material but efficiently confines the ringlet sources (moonlets) to move on horseshoe-like orbits. Moreover, we suppose that most of the narrow heliotropic ringlets are fed by a moonlet population which is kept by its largest member on horseshoe-like orbits. Modeling the equilibrium between particle sources and sinks with a primitive balance equation based on photometric observations (Porco et al., 2005), we find a minimal effective source mass of the order of 3·10^-2 M_Pan, which is needed to keep the central ringlet from disappearing.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on the unifiability of predicates and is also able to use a linguistic approach for the selection. The goal of the technique is to reduce the set of formulae and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it; with minimal adaptations it can also be used for higher-order and modal logic. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results together with benchmarks on the problems of the CASC of the year 2012 (CASC-J6) shows that the concept of the system has a positive impact on the performance of automated theorem provers. Benchmarks with two theorem provers that use different calculi have also shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven competitive to some extent with the concept of SinE and even helped one of the theorem provers to solve problems that were not solved (or were solved more slowly) with SinE selection in the CASC.
Finally, the evaluation implies that the combination of the unification-based and the linguistic selection yields further improved results, even though no problem-specific optimisation was done.
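The tf-idf relevance weighting mentioned above can be illustrated with a minimal sketch (the tokenisation and scoring details are illustrative assumptions, not the actual lexeme extraction of the system), treating each formula as a bag of lexemes and ranking formulae by tf-idf cosine similarity to the conjecture:

```python
import math
from collections import Counter

def tfidf_rank(formulas, conjecture):
    """Rank formulas by tf-idf cosine similarity to the conjecture's lexemes."""
    docs = [Counter(f.split()) for f in formulas]
    n = len(docs)
    # document frequency of each lexeme across the formula set
    df = Counter(tok for d in docs for tok in d)
    idf = {tok: math.log(n / df[tok]) for tok in df}

    def vec(counts):
        return {t: c * idf.get(t, 0.0) for t, c in counts.items()}

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u if t in v)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(Counter(conjecture.split()))
    scores = [cosine(q, vec(d)) for d in docs]
    # the highest-scoring formulae would be passed on to the prover
    return sorted(range(n), key=lambda i: scores[i], reverse=True)
```

A preprocessor built this way would keep only the top-ranked formulae, shrinking the input handed to the prover.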
Over the last decades, Britain's ethnic minorities have successfully established themselves in a multicultural society. In particular, Indian Hindu communities have generally improved their social and economic situation. In this context, the third generation of British Indians is now growing up. In contrast to the previous generation of the Indian diaspora, these children grow up in an established ethnic community which has learned to retain its religion, traditions and culture in a foreign environment. At the same time, these children are part of the multicultural British society. Based on the academic discussion about the second generation of immigrant ethnic communities, whose youth often suffered from cultural differences, racism and discrimination and therefore rejected aspects of their culture of origin, this paper assumes that the loss of the culture of origin increases further in the third generation. This thesis follows the main theories about the connection between generation and integration. It is believed that the preference for Western culture influences the personal, ethnic and cultural identity of young people, leading to the rejection of traditional bonds. Before introducing this thesis, various theoretical concepts are discussed which are indispensable for the comprehension of the diasporic situation in which British Indian youngsters grow up. As part of the worldwide Asian Indian diaspora, Indian families in Britain maintain manifold links to Indian communities in various countries. The link to India plays a particularly decisive role; the subcontinent is referred to as an abstract homeland, especially by the first generation. While the grandparents strongly adhere to their Indian culture and Hindu religion, the second generation has already initiated cultural change. In this process, various cultural values of the Indian ethnic community have been questioned and modified.
Further, the second generation pushed integration into British society by giving up its dependence on the ethnic network. This paper is based on a hybrid and fluid definition of culture. This definition also applies to the underlying understanding of identity and ethnicity. Due to migration, cultural contact and the multilocality of the diaspora, diasporic and post-diasporic identities and cultures are characterized by hybridity, heterogeneity, fragmentation and flexibility. Particularly in the younger generation - though dependent on a number of social and structural factors - cultural change and mixture occur; in this process, new ethnicities and identities evolve. In the second and third parts of this paper, the thesis of the loss of the culture of origin is refuted on the basis of findings from empirical research. British Indian youngsters in London were questioned for the study. Half of the youngsters are affiliated with a sampradaya, a Hindu sect. This enables the author to compare youngsters who do not belong to a particular religious group with those who are integrated into a religious and/or ethnic community through a sampradaya. The analysis of the findings, which are based on qualitative and quantitative social research, shows that the young people have great interest in their culture of origin and that they aim to maintain this culture in the diaspora. They identify as Indian and are proud of their cultural difference. In this, they differ from the second generation. In contrast to the generation of their grandparents, the Indian identity of the third generation is not based on nostalgic memories. They confirm and emphasize their post-diasporic difference in a Western multicultural society. The findings from the survey hereby go beyond Hansen's thesis about the rediscovery of the culture of origin in the third generation.
The comparison of both groups shows that, in the context of the differentiation of postmodern and postcolonial communities, ethnic groups too become increasingly differentiated. Therefore, Indian heritage and culture do not play the same role for every young British Indian.
Midbrain dopamine neurons invigorate responses by signaling opportunity costs (tonic dopamine) and promote associative learning by encoding a reward prediction error signal (phasic dopamine). Recent studies on Bayesian sensorimotor control have implicated midbrain dopamine concentration in the integration of prior knowledge and current sensory information. The present behavioral study addressed the contributions of tonic and phasic dopamine in a Bayesian decision-making task by alternating reward magnitude and inferring reward prediction errors. Twenty-four participants were asked to indicate the position of a hidden target stimulus under varying prior and likelihood uncertainty. Trial-by-trial rewards were allocated based on performance and two different reward maxima. Overall, participants’ behavior agreed with Bayesian decision theory, but indicated excessive reliance on likelihood information. These results thus
oppose accounts of statistically optimal integration in sensorimotor control, and suggest that the sensorimotor system is subject to additional decision heuristics. Moreover, higher reward magnitude was not observed to induce enhanced response vigor, and was associated with less Bayes-like integration. In addition, the weighting of prior knowledge and current sensory information proceeded independently of reward prediction errors.
Taken together, these findings suggest that the process of combining prior and likelihood uncertainties in sensorimotor control is largely robust to variations in reward.
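The Bayes-optimal strategy in such a task is commonly modelled as precision-weighted averaging of prior and likelihood (a standard Gaussian sketch, not the thesis's exact model; the variable names are illustrative):

```python
def bayes_estimate(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior mean/variance for a Gaussian prior combined with a Gaussian
    likelihood: the estimate is a precision-weighted average, so the less
    uncertain cue receives the larger weight."""
    w_obs = prior_var / (prior_var + obs_var)      # weight on the observation
    post_mean = w_obs * obs_mean + (1 - w_obs) * prior_mean
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var
```

Excessive reliance on likelihood information, as found in the study, corresponds to participants behaving as if the weight on the observation were larger than this optimal value.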
Assuming that liquid iron alloy from the outer core interacts with the solid, silicate-rich lower mantle, the influence on the core-reflected phase PcP is studied. If the core-mantle boundary is not a sharp discontinuity, this becomes apparent in the waveform and amplitude of PcP. Iron-silicate mixing would lead to regions of partial melting with higher density, which in turn reduces the velocity of seismic waves. On the basis of the calculation and interpretation of short-period synthetic seismograms, using the reflectivity and Gaussian beam methods, a model space is evaluated for these ultra-low velocity zones (ULVZs). The aim of this thesis is to analyse the behaviour of PcP between 10° and 40° source distance for such models using different velocity and density configurations. Furthermore, the resolution limits of seismic data are discussed. The influence of the assumed layer thickness, dominant source frequency and ULVZ topography is analysed. The Gräfenberg and NORSAR arrays are then used to investigate PcP from deep earthquakes and nuclear explosions. The seismic resolution of a ULVZ is limited both for velocity and density contrasts and for layer thicknesses. Even a very thin global core-mantle transition zone (CMTZ), rather than a discrete boundary and also with strong impedance contrasts, seems possible: if no precursor is observable but the PcP_model/PcP_smooth amplitude reduction amounts to more than 10%, a very thin ULVZ of 5 km with a first-order discontinuity may exist. Otherwise, if amplitude reductions of less than 10% are obtained, this could indicate either a moderate, thin ULVZ or a gradient mantle-side CMTZ. Synthetic computations reveal notable amplitude variations as a function of distance and impedance contrast. Thereby a primary density effect in the very steep-angle range and a pronounced velocity dependency in the wide-angle region can be predicted.
In view of the modelled findings, there is evidence for a 10 to 13.5 km thick ULVZ 600 km south-east of Moscow with a NW-SE extension of about 450 km. Here a unique determination of the velocity and density anomaly is not possible. This is in agreement with the synthetic results, in which several models create similar amplitude-waveform characteristics. For example, a ULVZ model with contrasts of -5% VP, -15% VS and +5% density explains the measured PcP amplitudes. Moreover, below SW Finland and NNW of the Caspian Sea a CMB topography can be assumed. The amplitude measurements indicate a topography with a wavelength of 200 km and a height of 1 km, as previously shown in the study by Kampfmann and Müller (1989). Better constraints might be provided by a joint analysis of seismological data, mineralogical experiments and geodynamic modelling.
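The sensitivity of PcP amplitudes to ULVZ properties follows from the impedance dependence of the reflection coefficient; at normal incidence on an interface between media 1 and 2 it reduces to the textbook form (a schematic illustration only; the reflectivity computations of the thesis handle the full angle-dependent, layered case):

```latex
R \;=\; \frac{Z_2 - Z_1}{Z_2 + Z_1},
\qquad Z_i = \rho_i\, v_i ,
```

so a thin layer with reduced velocity but increased density can partially compensate in impedance, which is one reason several ULVZ models can explain the same measured amplitudes.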
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools for doing so. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and have been analyzed with special weight on distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (cavity) is reviewed, and previous work on the contribution to the Casimir energy due to the surface plasmons present in all metallic cavities has been generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained, both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. These effects were equally encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents. This leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction.
The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to access experimentally the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect. Here, differently from corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
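A common parametrization behind the comparison of BCS and two-fluid descriptions is the two-fluid permittivity on the imaginary frequency axis (a schematic form as used in Casimir studies of superconductors; the notation is illustrative, with η_s(T) the superfluid fraction, Ω the plasma frequency and γ the normal-state relaxation rate):

```latex
\varepsilon(i\xi) \;=\; 1
  \;+\; \eta_s(T)\,\frac{\Omega^2}{\xi^2}
  \;+\; \bigl[\,1-\eta_s(T)\,\bigr]\,\frac{\Omega^2}{\xi\,(\xi+\gamma)} .
```

For η_s → 1 (T → 0) this reduces to the dissipationless plasma model, while for η_s → 0 it reduces to the Drude model of a normal metal, which is the limiting behaviour underlying the suggestion to regard the plasma model as the low-temperature limit of a superconductor.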
This study investigates the reform of the public budgeting and accounting system (Doppik) in Brandenburg. On the one hand, this thesis aims to identify the key variables shaping employees' commitment to change and, on the other hand, to examine the extent to which employees' commitment to change influences the implementation process of the reform. The results of this study show that the commitment of civil servants towards the Doppik is primarily determined by the content, but also by the context, of the reform. Moreover, it is revealed for the case of Brandenburg that civil servants' affective commitment to change has a significant positive influence on the perceived success of the reform implementation. The results of the study are not only of high scientific importance but also of practical relevance. The recommendations developed in this study offer grounded guidelines on how to successfully implement the Doppik at the local level in Brandenburg.
Writing an alternative Australia : women and national discourse in nineteenth-century literature
(2007)
In this thesis, I want to outline the emergence of the Australian national identity in colonial Australia. National identity is not a politically determined construct but is culturally produced through discourse in literary works by female and male writers. The emergent, dominant bushman myth exhibited enormous strength and influence on subsequent generations and infused the notion of "Australianness" with exclusively male characteristics. It provided a unique geographical space, the bush, on and against which the colonial subject could model his identity. Its dominance rendered non-male and non-bush experiences of Australia "un-Australian." I will present a variety of contemporary voices – postcolonial, Aboriginal, feminist, cultural critics – which see Australian identity as a prominent topic, not only in academia but also in everyday culture and politics. Although positioned in different disciplines and influenced by varying histories, these voices share a similar view of Australian society: Australia is a plural society; it is home to millions of different people – women, men and children, Aboriginal Australians and immigrants, newly arrived and descendants of the first settlers – with millions of different identities which make up one nation. One version of national identity does not account for the multitude of experiences; one version, if applied strictly, renders some voices unheard and oppressed. After exemplifying how the literature of the 1890s and its subsequent criticism constructed the itinerant worker as "the" Australian, literary productions by women will be singled out to counteract the dominant version by presenting different opinions on the state of colonial Australia. The writers Louisa Lawson, Barbara Baynton, and Tasma are discussed with regard to their assessment of their mother country.
These women not only presented a different picture; they were also gifted writers and lived the ideal of the "New Woman": they obtained divorces, remarried, were politically active, worked for their living and led independent lives. They paved the way for many Australian women to come. In their literary works they allowed for a dual approach to the bush and the Australian nation. Louisa Lawson credited the bushwoman with heroic traits and described the bush as both cruel and full of opportunities not known to women in England. She understood women's position in Australian society as oppressed and tried to change politics and culture through the writings in her feminist magazine the Dawn and her courageous campaign for women's suffrage. Barbara Baynton painted a gloomy picture of the Australian bush and its inhabitants and offered one of the fiercest critiques of bush society. Although the woman is presented as the able and resourceful bushperson, she does not manage to survive in an environment which functions on male rules and only values the economic potential of the individual. Finally, Tasma does not present as outright a critique as Barbara Baynton; however, she too attests that the colonies had a fascination with wealth, which she renders questionable. Through the comparison of colonial society with the mother country England, she offers an informed judgement on colonial developments in the urban surrounds of the city of Melbourne and demonstrates how uncertainties and irritations emerged in the course of Australia's nation formation. These three women, as writers, commentators, and political activists, faced exclusion from the dominant literary discourses.
Their assessment of colonial society remained unheard for a long time. Now, after much academic excavation, these voices speak to us from the past and remind us that people are diverse, and thus the nation is diverse. Dominant power structures, the institutions and individuals who decide who can contribute to the discourse on nation, have to be questioned and reassessed, for they mute voices which contribute to a wider, to the "full", and maybe "real", picture of society.
The development of rural areas with respect to food security, sustainability and socio-economic stability is a key issue for the globalized community. Given the current state of climatic change, semi-arid regions influenced by the monsoon or El Niño are especially prone to extreme weather events. Droughts, flooding, erosion, degradation of soils and water quality, and desertification are some of the common impacts. The state of the art in hydrologic environmental modeling generally operates under a reductionist paradigm (Sivapalan 2005). Although an enormous quantity of process-oriented models exists, we fail in the due reproduction of complexly interacting processes at their effective scale in the space-time continuum, as they are described through deterministic small-scale process theories (e.g. Beven 2002). Yet large amounts of parameters - with partly doubtful physical expression - and input data are needed. In contradiction to that, most soft information about patterns and organizing principles cannot be employed (Seibert and McDonnell 2002). For an analysis of possible strategies towards integrated hydrologic modeling as decision support on the one hand, and for sustainable land use development on the other, the 512 km² catchment of the Mod river in Jhabua, Madhya Pradesh, India has been chosen. It is characterized by a setting of problems common to peripheral, rural, semi-arid human-eco-systems, with intensive agriculture, deforestation, droughts and general hardship for the people. Scarce data and missing gauges add to the requirements of data acquisition and process description. The study at hand presents a methodical framework combining field-scale data analysis and remote sensing for the setup of a database, favouring plausibility over strict data accuracy. The catena-based hydrologic model WASA (Güntner 2002) employs this database. It is expanded by a routine for crop development simulation following the de Wit approach (e.g. in Bouman et al. 1996).
For its application as a decision support system, an agent-based land use algorithm is developed which decides on the cropping on the basis of site specifications and certain constraints (like maximum profit or best local adaptation). The new model is employed to analyze land use strategies. Not anticipated, a priori defined scenarios but the interactions within the system account for the realization of the model. This study points out possible approaches to improve the situation in the catchment. It also approaches central questions of ways towards due integrated hydrological modeling on the catchment scale under ungauged conditions, and of how to overcome current paradigms.
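A minimal sketch of such an agent-based cropping decision (with illustrative crop names, parameters and constraints that are assumptions for this example, not the thesis's actual parameterization) could look like this:

```python
def choose_crop(site, crops, strategy="max_profit"):
    """Pick a crop for one field agent given site conditions and a strategy.

    site:  dict with available water [mm] and per-crop soil suitability
    crops: dict crop -> dict(water_need, price, yield_potential)
    """
    # keep only crops the site can support (hard water constraint)
    feasible = {c: p for c, p in crops.items()
                if p["water_need"] <= site["water"]}
    if not feasible:
        return None  # leave the field fallow
    if strategy == "max_profit":
        score = lambda c: feasible[c]["price"] * feasible[c]["yield_potential"]
    else:  # "best_adaptation": prefer the crop best suited to local conditions
        score = lambda c: site["suitability"].get(c, 0.0)
    return max(feasible, key=score)
```

In a full simulation, each field agent would make such a decision per season, and the resulting land use pattern would feed back into the hydrologic model.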
The involvement of the two German states in Korea during the 1950s in the context of the Cold War
(2020)
This master's thesis will analyze the background of the involvement of the Federal Republic of Germany (FRG) and the German Democratic Republic (GDR) in Korea during the 1950s in the context of the Cold War. In both Korean states, the Democratic People's Republic of Korea (DPRK) as well as the Republic of Korea (ROK), the so-called humanitarian aid that was provided to them in the form of medical and economic assistance to help surmount the hardship of the postwar period is remembered with great appreciation to this day. However, critical views on the German engagement in Korea are still relatively hard to find. In this paper, two exemplary cases will be studied: the GDR's city reconstruction project in the North Korean cities of Hamheung and Heungnam and the FRG's medical assistance to the ROK by means of the West German Red Cross Hospital in Busan. By looking at primary sources like governmental documents, this thesis will examine the geopolitical conditions and particular national interests that stood behind the German development and humanitarian aid for the Korean states at that time, thus shedding light on the political goals the two German states pursued, and the benefit they expected to derive from their engagement in Korea. Sources consulted include primary archival materials, secondary sources like monographs, journal articles, contemporary newspaper articles, and interviews with contemporary witnesses.
From Brock to Brett
(2021)
This master's thesis in US American cultural studies posits that the phenomenon of rape culture represents a socio-cultural system of social power structures and cultural myths. Based on so-called rape myths, this system also constitutes an ideology. The thesis aims to demonstrate how these rape myths are instrumentalized in order to protect (primarily white, cis-male) perpetrators and instead assign responsibility to those affected by sexualized violence. In doing so, the thesis shows that young men like Brock Turner, who benefit from patriarchal power structures, grow up to become men like Brett Kavanaugh, who not only benefit from the fact that rape culture excuses their abusive behavior, but also from the fact that this enables them to reach positions of power through which they, as decision-makers, can in turn maintain the structures underlying rape culture.
The thesis focuses on the rape myths of so-called victim blaming and shaming as well as the victimization of perpetrators. These myths are examined by analyzing 19th-century newspaper articles and then traced into the 21st century. Based on Mary Douglas' theory on ideas of purity, the thesis shows the extent to which not only social categories, namely gender, race, socio-economic status, and age, but also the sexual purity or impurity of those affected have an impact on the societal response to rape cases.
Furthermore, the thesis demonstrates how female bodies function as an ideological battleground for political and social change in the US, and how perceived threats to the patriarchal status quo are framed in public discourse as moral dangers posed by female bodies. The paper argues that rape culture is driven by (white cis) male entitlement not only to female bodies but also to positions of power in the patriarchal system. The thesis shows how this system instrumentalizes rape culture to maintain its underlying structures that favor (cis) men and, in contrast, disadvantage (cis) women and other marginalized and non-heteronormative groups. This is illustrated by analyzing the 2016 Stanford rape case and the 2018 Kavanaugh hearing.
Today about 24 million people worldwide suffer from dementia; Alzheimer's Disease accounts for approximately 50-60% of all dementia cases. As the prevalence of dementia grows with increasing age, Alzheimer's Disease becomes more and more of an issue for society as the proportion of elderly people increases from year to year. It is well established that the amino acid glutamate - quantitatively the most important neurotransmitter in the central nervous system (CNS) - may reach toxic concentrations if not cleared from the synaptic cleft into which it is released during the transmission of action potentials. In Alzheimer's Disease there is strong evidence for a generally impaired glutamate uptake system, which in turn is thought to result in toxic levels of the amino acid with the potential to kill off neurons. The excitatory amino acid transporter 1 (EAAT1) belongs to the family of Na+-dependent glutamate transporters and, together with EAAT2, accounts for most of the glutamate uptake in the CNS. In this project a new splice variant of EAAT1 skipping exon 3 was detected in human brain samples and subsequently called EAAT1Δ3; this is the second splice variant found, after the recent detection of EAAT1Δ9. A method was developed to quantify the transcripts of EAAT1 wt, EAAT1Δ3 and EAAT1Δ9 by means of real-time PCR. Samples were taken from different brain areas of a set of control and AD cases. The areas chosen for examination are affected differently in Alzheimer's Disease; this served as an internal control for the experiments done in this project, to determine whether any effect observed is specific for AD, i.e. AD-affected areas, or is generally seen in all areas examined. The results of this project show that EAAT1Δ3 is transcribed in very low copy numbers, making up a proportion of 0.15% of EAAT1 wt, whereas EAAT1Δ9 is transcribed in a considerably larger proportion of 26.6% of EAAT1 wt.
It was moreover found that all EAAT1 variants are transcribed at significantly lower rates (P<0.0001) in AD cases, supporting the theory that EAAT1 protein expression is reduced to a point where glutamate uptake normally mediated by this transporter is impaired. This in turn is thought to result in toxic levels of glutamate accounting for neuronal loss in the disease. No area-dependent effects were found, suggesting that the reduction of EAAT1 transcription is rather the result of an underlying general mechanism present in AD. Further research will have to be done to assess the degree of EAAT1 expression in AD and whether those future findings match the results of this project.
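Absolute transcript copy numbers in real-time PCR are typically read off a standard curve of threshold cycles (Ct) against log copy number (a generic sketch of this standard-curve approach with made-up calibration values, not the actual assay parameters of the project):

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares line Ct = slope * log10(copies) + intercept,
    fitted to a dilution series of known standards."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate copy number from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)
```

With near-perfect amplification efficiency the slope is close to -3.32 cycles per decade; variant-to-wild-type proportions such as the 0.15% and 26.6% above are then ratios of copy-number estimates obtained this way.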
The heterogeneity of species assemblages of epigeal spiders was studied in a natural forest and in a managed forest. Additionally, the effects of small-scale microhabitat heterogeneity in managed and unmanaged forests were determined by analysing the spider assemblages of three different microhabitat structures (i. vegetation, ii. dead wood, iii. litter cover). The spiders were collected in a block design by pitfall traps (n=72) at 4-week intervals. To reveal key environmental factors affecting the spider distribution, abiotic and biotic habitat parameters (e.g. vegetation parameters, climate parameters, soil moisture) were assessed around each pitfall trap. A TWINSPAN analysis separated pitfall traps of the natural forest from traps of the managed forest. A subsequent discriminant analysis revealed temperature, visible sky, plant diversity and mean diameter at breast height as key discriminating factors between the microhabitat groupings designated by the TWINSPAN analysis. Finally, a redundancy analysis (RDA) was performed, revealing similar environmental factors responsible for the spider species distribution, as well as a good separation of the different forest types and of the microhabitat groupings from the TWINSPAN. Overall, the study revealed that the spider communities differed between the forest types as well as between the microhabitat structures, and thus species distribution changed within a forest stand on a fine spatial scale. It was documented that the structure of managed forests significantly affects the composition of spider assemblages compared to natural forests, and even small-scale heterogeneity seems to influence the spider species composition.
Three quantum cryptographic protocols for multiuser quantum networks with embedded authentication, allowing quantum key distribution or quantum direct communication, are discussed in this work. The security of the protocols against different types of attacks is analysed, with a focus on various impersonation attacks and the man-in-the-middle attack. On the basis of the security analyses, several improvements are suggested and implemented in order to address the identified vulnerabilities. Furthermore, the impact of the eavesdropping test procedure on impersonation attacks is outlined. The framework of a general eavesdropping test is proposed to provide additional protection against security risks in impersonation attacks.
Dynamic earthquake rupture modeling provides information on the rupture physics, such as the rupture velocity, friction, or the tractions acting during the rupture process. Nevertheless, as it is often based on spatially gridded, preset geometries, dynamic modeling depends on many free parameters, leading to both a high non-uniqueness of the results and large computation times. This limits the feasibility of a full Bayesian error analysis.
To address these problems we developed the quasi-dynamic rupture model presented in this work. It combines the kinematic Eikonal rupture model with a boundary element method for quasi-static slip calculation.
The orientation of the modeled rupture plane is defined by a previously performed moment tensor inversion. The simultaneously inverted scalar seismic moment allows an estimation of the extension of the rupture. The modeled rupture plane is discretized by a set of rectangular boundary elements. For each boundary element an applied traction vector is defined as the boundary value.
For insight into the dynamic rupture behaviour, the rupture front propagation is calculated for incremental time steps based on the 2D Eikonal equation. The required location-dependent rupture velocity field is assumed to scale linearly with a layered shear wave velocity field.
At each time step, all boundary elements enclosed within the rupture front are used to calculate the quasi-static slip distribution. Neither friction nor stress propagation is considered; the algorithm is therefore termed "quasi-static". A series of the resulting quasi-static slip snapshots can be used as a quasi-dynamic model of the rupture process.
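The rupture-front stage of the algorithm can be illustrated with a minimal sketch. The following code is a hedged stand-in, not the thesis's implementation: it approximates the 2D Eikonal solution by Dijkstra shortest travel times on a grid (a proper fast-marching solver would be more accurate), and all function names are hypothetical.

```python
import heapq
import math

def rupture_onset_times(vel, src, h=1.0):
    """Approximate the 2D eikonal solution |grad T| = 1/v(x) on a grid of
    rupture velocities by Dijkstra shortest travel times over 8-connected
    neighbours (a simple stand-in for a fast-marching eikonal solver)."""
    ny, nx = len(vel), len(vel[0])
    T = [[math.inf] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > T[i][j]:
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    dist = h * math.hypot(di, dj)
                    # local slowness averaged between the two cells
                    slow = 0.5 * (1.0 / vel[i][j] + 1.0 / vel[ni][nj])
                    nt = t + dist * slow
                    if nt < T[ni][nj]:
                        T[ni][nj] = nt
                        heapq.heappush(pq, (nt, (ni, nj)))
    return T

def active_elements(T, t):
    """Boundary elements enclosed by the rupture front at time t."""
    return [(i, j) for i, row in enumerate(T)
            for j, ti in enumerate(row) if ti <= t]
```

At each time step the elements returned by `active_elements` would then enter the quasi-static slip calculation.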
As much a priori information is taken from the earth model (shear wave velocity and elastic parameters) and the moment tensor inversion (rupture extension and orientation), our model depends on only a few free parameters: the traction field, the linear factor between rupture and shear wave velocity, and the nucleation point and time. Hence stable and fast modeling results are obtained, as proven by comparison to different infinite and finite static crack solutions.
First dynamic applications show promising results. The location-dependent rise time is automatically derived by the model. Different simple kinematic models, such as the slip-pulse or the penny-shaped crack model, can be reproduced, as can their corresponding slip rate functions. A source time function (STF) approximation, calculated from the cumulative sum of moment rates of each boundary element, gives results similar to theoretically and empirically known STFs.
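The STF approximation described above (summing moment rates over boundary elements) can be sketched as follows; this is an illustrative reading of the abstract with hypothetical names, using finite differences between successive slip snapshots as slip rates.

```python
def source_time_function(slip_snapshots, dt, mu, areas):
    """Approximate the STF as the sum over boundary elements of
    mu * area * slip rate, with slip rates obtained by finite
    differences of successive quasi-static slip snapshots."""
    stf = []
    for prev, curr in zip(slip_snapshots, slip_snapshots[1:]):
        rate = sum(mu * a * (c - p) / dt
                   for a, p, c in zip(areas, prev, curr))
        stf.append(rate)
    return stf
```

Integrating this moment-rate series over time recovers the scalar seismic moment, which is how such an approximation can be checked against the moment tensor inversion.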
The model was also applied to the 2015 Illapel earthquake. Using a simple rectangular rupture geometry and a 2-layered traction regime yields good estimates of both the rupture front propagation and the slip patterns which are comparable to literature results. The STF approximation shows a good fit with previously published STFs.
The quasi-dynamic rupture model is hence able to quickly calculate reproducible slip results. This allows testing full Bayesian error analysis in the future. Further work on a full seismic source inversion, or even a traction field inversion, could also extend the scope of our model.
It was the goal of this work to explore two different synthesis pathways using green chemistry. The first part of this thesis focuses on the use of the urea-glass route towards single-phase manganese nitride and manganese nitride/oxide nano-composites embedded in carbon, while the second part focuses on the use of the "saccharide route" (namely cellulose, sucrose, glucose and lignin) towards metal (Ni0), metal alloy (Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5, Cu0.5Ni0.5 and W0.15Ni0.85) and ternary carbide (Mn0.75Fe2.25C) nanoparticles embedded in carbon. With battery applications in mind, MnN0.43 nanoparticles surrounded by a graphitic shell and embedded in carbon with a high surface area (79 m^2/g) were synthesized, following a previously established route. The comparison of the material characteristics before and after discharge showed no remarkable difference in composition and only slight morphological differences, meaning the particles are stable but agglomerate. The graphitic shell contributes to the resistance of the material and leads to good cyclic stability of 230 mAh/g over 140 cycles after the first charge/discharge, with coulombic efficiencies close to 100%. Due to the low voltage towards Li/Li+ and the low polarization, it might be an attractive anode material for lithium-ion batteries. However, the capacity is still noticeably lower than the theoretical value for MnN0.43. A mixture of MnN0.43 and MnO nanoparticles embedded in carbon (surface area 93 m^2/g) improved the cyclic stability to over 160 cycles with a capacity of 811 mAh/g, which is considerably higher than the capacity of the conventional material graphite (372 mAh/g). This nano-composite seems to agglomerate less during discharge.
Interestingly, although the capacity is much higher than that of the single-phase manganese nitride, the nano-composite seems to contain only MnN0.43 nanoparticles after discharge, with no oxide phase to be found. Concerning catalysis applications, different metal, metal alloy and metal carbide nanoparticles were synthesized using the saccharide route. At first, systems that had already been investigated before, namely Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5 and Mn0.75Fe2.25C with cellulose as the carbon source, were prepared and tested in an alkylation reaction of toluene with benzyl chloride. Unexpectedly, the metal alloys did not show any catalytic activity, but the ternary carbide Mn0.75Fe2.25C showed good catalytic activity, with 98% conversion after 9 hours of reaction time (110 °C). In a second step, the saccharide route was modified towards other carbon sources and carbon-to-metal ratios in order to improve the homogeneity of the samples and the accessibility of the particle surfaces. The carbon sources sucrose and glucose share the basic carbohydrate structure of cellulose but have shorter (polymeric) chain lengths. Indeed, the cellulose could be successfully replaced by sucrose and glucose. A lower carbon-to-metal ratio was found to influence the size, homogeneity and accessibility (as evidenced by TEM) of the samples. Since sucrose is a foodstuff, glucose is the better choice as a carbon source. Using glucose, the synthesis of Cu0.5Ni0.5 and W0.15Ni0.85 nano-composites was also possible, although the latter was never obtained as a pure phase. These alloy nano-composites were tested, along with Ni0 nanoparticles also prepared with glucose, for their catalytic activity towards the reduction of phenylacetylene. The results obtained suggest that any (poly)saccharide, including lignin, could be used as a carbon source.
The Ni0 nano-composites prepared with lignin as a carbon source were tested, along with those prepared with cellulose and sucrose, for their catalytic activity in the transfer hydrogenation of nitrobenzene (results compared with bare nickel nanoparticles and nickel supported on carbon), leading to very promising results. Based on the urea-glass route and the saccharide route, simple equipment and transition metals, it was possible to achieve a one-pot synthesis with scale-up possibilities towards new materials that can be applied in catalysis and battery systems.
The intention of this master's thesis is a critical assessment of the European Union's (EU) approach to external democracy promotion in Morocco. The study follows a comparative approach and compares the approach pursued by the EU within the framework of the European Neighbourhood Policy (ENP), incepted in 2004, with the approach it had developed up until then under the framework of the Euro-Mediterranean Partnership (EMP). The comparison is done with the intention to analyse to what degree it is justified to speak of a new impetus for democratisation through the ENP in partner countries. The analysis takes into consideration the range of possible instruments for external democracy promotion in the categories "diplomacy", "conditionality" and "positive instruments". For the comparison of democracy promotion under the EMP and the ENP it is suggested to compare the implemented measures with respect to three distinct dimensions. As a first dimension, instruments of democracy promotion are analysed with respect to the focus on indirect vs. direct instruments, i.e. those which aim at establishing socio-economic preconditions favourable to successful democratisation vs. those which immediately intervene in the processes of political reform. As a second dimension, it is asked whether there has been a shift in the democracy promotion approach on a continuum between consensual cooptation and coercive intervention. As a third dimension, finally, it is analysed whether the approach has undergone a general intensification of efforts, i.e. whether the approach to democracy promotion has become a more active one. The analysis in this master's thesis comes to the conclusion that since the inception of the ENP the EU has indeed been pursuing a slightly more direct and certainly a more active approach to democracy promotion in Morocco, while no significant change can be observed in comparison to the strictly partnership-oriented and consensual approach of the EMP.
It can be argued that, under the ENP, relations with Morocco have indeed become somewhat more "political", although at the same time they are still not pro-actively oriented towards a political liberalisation of the political regime. Reforms promoted by the EU in Morocco are modest and largely in line with the reform agenda of the Moroccan government itself, i.e. a still largely authoritarian monarchy. Concrete reform steps directed at an opening of the political space, which is largely reserved for the king and his administration, are neither demanded nor supported by democracy promotion instruments, even under the ENP.
In this thesis, the properties of nonlinear disordered one-dimensional lattices are investigated. Part I gives an introduction to the phenomenon of Anderson localization, the Discrete Nonlinear Schroedinger Equation and its properties, as well as the generalization of this model by introducing the nonlinear index α. In Part II, the spreading behavior of initially localized states in large, disordered chains due to nonlinearity is studied. To this end, different methods to measure localization are discussed, and the structural entropy as a measure for the peak structure of probability distributions is introduced. Finally, the spreading exponent for several nonlinear indices is determined numerically and compared with analytical approximations. Part III deals with thermalization in short disordered chains. First, the term thermalization and its application to the system in use is explained. Then, results of numerical simulations on this topic are presented, where the focus lies especially on the energy dependence of the thermalization properties. A connection with so-called breathers is drawn.
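A standard localization measure of the kind discussed in Part II is the participation number (the abstract does not name its exact set of measures beyond the structural entropy, so this is an illustrative example):

```python
def participation_number(density):
    """Participation number P = (sum_l n_l)^2 / sum_l n_l^2 for a density
    distribution n_l = |psi_l|^2: P = 1 for a state concentrated on a
    single site, P = N for a state spread uniformly over N sites."""
    s = sum(density)
    return s * s / sum(n * n for n in density)
```

Tracking how P grows with time for an initially localized state is a common way to extract a spreading exponent.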
This study on the Messianic Jewish movement and its relationship to the Torah explores the various aspects of the relationship to the Torah on the basis of 10 interviews with selected Yeshua-believing Jews in leadership positions. The selection of interviewees results in a range of different positions typical of the movement as a whole, which overlap in many respects but are often fundamentally different and sometimes contradictory. Particular attention is paid to the theologically based, divergent and contradictory positions in an attempt to make these understandable.
After a brief introduction to the Messianic Jewish movement, aspects of the Messianic Jewish dual identity are examined and their relevance for the relationship to the Torah is demonstrated. This is followed by an overview of the forums in which Yeshua-believing Jews discuss their relationship to the Torah. The extensive bibliography at the end of the work provides an insight into a lively discussion process within the movement that is still far from complete. A briefly annotated differentiation of terms serves as an overview of the most important meanings of Torah used in the Messianic Jewish movement. Following this preliminary work, the field study is presented. A description of the research field and methodological reflections precede the interviews. In the interviews, the associations with the term Torah are first recorded and the conceptual meaning and use clarified. This already reveals some serious differences. The theological positions and understandings of Torah are presented with the biographical context and main field of influence, and the most important formative influences are named. The points on which they all agree are noted first, as they serve as a common basis. All study the written Torah and consider it, as well as the rest of the Tanakh and the writings of the New Testament in their present form, to be divinely inspired and authoritative. All have found a positive approach to the Torah according to their own definition of the term. For all of them, the written Torah and the Tanakh point to Yeshua. All agree that Yeshua did not abrogate the Torah, but fulfilled it. And all feel a responsibility as a Jew to the Torah in some way. With regard to keeping commandments, all say that no one can earn their way to heaven by doing so. G-d's faithfulness to His promises to Israel is affirmed by all, but whether the new covenant in Yeshua superseded the old covenant of Mt. 
Sinai, or whether it is simply added to the already existing covenant of Sinai, whether ritual commandments are to continue to be kept after Yeshua's death and resurrection and the destruction of the Temple, whether the commandments aiming at separation from the nations should continue to be kept, whether and under what conditions rabbinic halacha should be followed and what individuals do and teach in their families and communities - all this is discussed interview by interview. It becomes clear how different ways of reading and weighting key scriptures produce different positions. Just as the diversity of positions in relation to the Torah already suggests, the interview partners are divided on the question of a Messianic Jewish Halacha. But here too, the term halacha is interpreted differently by the representatives. At the end of the field study, the attempts to produce Messianic Jewish Halacha and the problems and points of criticism expressed by other interviewees are explained. The work concludes with a theological framework able to contain all the different positions and relationships to the Torah and some starting points for a possible Messianic Jewish hermeneutic theology of the Torah.
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has been largely in the context of decentralization reforms in which central governments transfer (share) political, administrative, fiscal and economic powers and functions to sub-national units. Despite the great international support and advocacy for participatory governance, where citizens' voice plays a key role in decision making of decentralized service delivery, there is a notable dearth of empirical evidence as to the effect of such participation. This is the question this study sought to answer, based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya. Such participation is formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure citizens play a central role in planning and budgeting, implementation and monitoring of locally identified services towards improving livelihoods and reducing poverty. Influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. It finds that the participation of citizens is minimal and the resulting influence on the decentralized service delivery negligible. It concludes that despite the dismal performance of citizen participation, LASDAP has played a key role towards institutionalizing citizen participation that future structures will build on.
It recommends that an effective framework of citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and where citizens have a legal recourse opportunity; and one that obliges LA officials both to implement citizens' proposals which meet the set criteria and to account for their actions in the management of public resources.
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem, where a global minimum of a certain cost function has to be calculated. A variety of approaches is known in order to solve this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, which affect for example the applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm which is faster than the established methods and which is applicable to non-linear problems. Such non-linear problems occur for example when estimating Euler angles or when the more robust L_1 norm is applied. Therefore, we investigate the usability of stochastic optimization methods from the CMAES family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using some examples from the literature. On the other hand, the theoretical background of the stochastic methods is provided. A specific CMAES algorithm was successfully applied in order to invert data of the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
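Why derivative-free stochastic methods suit a non-differentiable L_1 cost can be shown with a toy sketch. This is deliberately not CMA-ES itself (which additionally adapts a full covariance matrix), but a minimal (1+1) evolution strategy with 1/5th-rule step-size control; the linear forward model and all names are hypothetical illustrations.

```python
import random

def l1_cost(params, G, data):
    """Robust L1 misfit between data and a linear forward model G @ params."""
    return sum(abs(d - sum(g * p for g, p in zip(row, params)))
               for row, d in zip(G, data))

def one_plus_one_es(cost, x0, sigma=0.5, iters=2000, seed=1):
    """(1+1) evolution strategy with 1/5th-success-rule step-size control:
    a drastically simplified relative of CMA-ES, usable on costs with no
    gradient, such as the L1 norm."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = cost(cand)
        if fc <= fx:
            x, fx = cand, fc
            sigma *= 1.22   # expand step size on success
        else:
            sigma *= 0.95   # contract on failure (~1/5 success equilibrium)
    return x, fx
```

A real core-field inversion would replace the toy `G` with the spherical harmonic design matrix relating Gauss coefficients to satellite and observatory data.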
This paper compares police reforms during democratization in Poland, Hungary, and Bosnia-Herzegovina. It analyses the changes to the structure of the democratic control of the police in each reform, paying special attention to the decentralization versus centralization aspect. The research question of this paper is: Why do some states decentralize the democratic control of the police while others centralize it, both with the aim of democratization? The theoretical background of this study consists of theories about policy diffusion and policy transfer. Therefore this study can be categorized as part of two different research areas. On the one hand, it is a paper from the discipline of International Relations. On the other hand, it is a paper from the discipline of Comparative Politics. The combined attention to international and national factors influencing police reform is reflected in the structure of this paper. Chapter 3 examines police structures and police reforms in established democracies as possible role models for new democracies. Chapter 4 looks at international and transnational actors that actively try to influence police reform. After having examined these external factors, three cases of police reform in new democracies are examined in chapter 5.
The Vogtland, located at the border region between the Czech Republic and Germany, is known for Holocene volcanism, gas and fluid emissions as well as for reoccurring earthquake swarms, pointing towards a high geodynamic activity. During the earthquake swarm in 2008/2009, a temporary array was installed close to Rohrbach (Germany), at an epicentral distance of about 10 km from the Nový Kostel focal zone (aperture ~0.75 km).
22 events of the recorded swarm were selected to set up a source array. Source arrays are spatially clustered earthquakes, which can be used in a similar manner as receiver array recordings of single events (Green’s functions reciprocity). The application of array seismology techniques like beam forming requires similar waveforms and precisely known origin times and locations. The resemblance of waveforms was assured by visual selection of events and quantified with the calculation of cross-correlation coefficients. We observed that the different events recorded at a single station generally show greater resemblances than the recordings of one event at all stations of the receiver array. This indicates a heterogeneous subsurface beneath the receiver array and a comparably homogeneous source array volume with respect to the frequency-dependent resolution of both arrays.
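Waveform resemblance of the kind quantified above is conventionally measured with a normalised cross-correlation coefficient maximised over lag; the following is a generic sketch (not the thesis's code), with a hypothetical function name:

```python
import math

def max_norm_xcorr(a, b):
    """Maximum of the normalised cross-correlation between two equal-length
    waveforms over all integer lags (zero-padded): 1.0 means identical up
    to a time shift and amplitude scaling."""
    n = len(a)
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    best = 0.0
    for lag in range(-n + 1, n):
        s = sum(a[i] * b[i - lag]
                for i in range(max(0, lag), min(n, n + lag)))
        best = max(best, s / (na * nb))
    return best
```

Comparing the coefficient for one event across stations with the coefficient for many events at one station is exactly the contrast the abstract draws between receiver-side and source-side heterogeneity.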
Beam forming was applied to the Z, N and E component recordings of the source array events at 11 stations, and the results were analysed with respect to converted or reflected crustal phases. While the theoretical back azimuths of the direct phases match the beam forming results in the case of the source array analysis, deviations of 15°-25° are observed in the case of receiver array beam forming.
PS phases closely following the direct P phase, and presumably SP phases arriving shortly before the direct S phase, can be observed at several stations. Based on the time differences to the direct P and S phases, we inferred a conversion depth of about 0.6-0.9 km. A second, deeper source array was set up in order to interpret a structural phase arriving 0.85 s after the direct P phase on records of deeper events only.
In addition to the source array beam forming method, an analytical method with a fixed medium velocity and a grid search method, both for determining conversion/reflection locations of phases traveling off the direct line between source and receiver array, were developed and applied to other observed phases.
In conclusion, we think that the distinct beam forming results along with the striking waveform resemblance reveal the opportunities of using source arrays consisting of small swarm events for the analysis of crustal structures.
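The beam forming used throughout this abstract is, in its simplest form, plane-wave delay-and-sum stacking. The sketch below is a generic illustration with hypothetical names, reduced to integer-sample shifts (real implementations interpolate and scan over a grid of slowness/back-azimuth values):

```python
import math

def delay_and_sum(traces, coords, dt, slowness, baz):
    """Plane-wave delay-and-sum beam: shift each trace by the travel-time
    delay predicted for a horizontal slowness (s per unit distance) and a
    back azimuth (degrees), then average.  Integer-sample shifts only."""
    rad = math.radians(baz)
    sx, sy = slowness * math.sin(rad), slowness * math.cos(rad)
    n = len(traces[0])
    beam = [0.0] * n
    for trace, (x, y) in zip(traces, coords):
        delay = -(sx * x + sy * y)      # plane-wave delay at this station
        shift = int(round(delay / dt))
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                beam[i] += trace[j] / len(traces)
    return beam
```

Scanning slowness and back azimuth for maximum beam power yields the apparent back azimuths whose deviations from theory the study uses as a heterogeneity indicator.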
The aesthetic phenomenon of the uncanny in literature and art is a spatial and gendered aesthetic concept, which is expressed in the spatial characteristics of a literary or photographic narrative. The intention of this thesis is to evaluate the entanglement of the uncanny, space, domesticity and femininity in the context of Gothic literature and photography. These four concepts can only be read in their interplay with each other, and in how they each function as structural principles in the framework of Gothic fiction and photography. The literary texts, Charlotte Perkins Gilman's "The Yellow Wall-Paper" (1892) and Shirley Jackson's "The Lovely House" (1950) and The Haunting of Hill House (1959), as well as Francesca Woodman's self-portraits that will be discussed further, share one particular quality: they use the haunted house motif to express the protagonist's psychological state by transferring mental hauntings onto the narrative's spatial layer. The establishment of a connection between the concepts at hand, the uncanny, domesticity, spatiality and femininity, is the basis for the first half of the thesis. What follows is an overview of how domestic politics and gendered perceptions of and behaviors in spaces are expressed in the Gothic mode in particular. The literary analysis examines two ways in which the Freudian uncanny constitutes itself in the haunted house narrative: first, the house as the site of repetition, and second, the house as a stand-in for the maternal body. Drawing from Gernot Böhme's and Martina Löw's theoretic work on space and atmosphere, the thesis focuses on the different aesthetic strategies that produce the uncanny atmosphere associated with the Gothic haunted house. The female subjects at the narratives' center are in the ambiguous process of disappearing or becoming; this (dis)appearing act is facilitated by their haunted surroundings.
In the case of the unnamed narrator in “Wall-Paper” her suppressed rage at her husband is mirrored in the strangled woman trapped inside the yellow wallpaper. Once she recognizes her doppelganger the union of her two selves takes place in the short story’s dramatic climax. In Shirley Jackson’s literary works the haunted houses, protagonists in themselves, entrap, transform, and ultimately devour their female daughter-victims. The haunted houses are symbols, means and places of the continuous tradition of female entrapment within the domestic sphere, be it as wives, mothers or daughters. In Francesca Woodman’s self-portraits the themes of creation/destruction and becoming/disappearing within the ruinous (post)domestic sphere are acted out by the fragmented and blurry female figure who intriguingly oscillates between self-empowerment and submission to destruction.
Both horizontal-to-vertical (H/V) spectral ratios and the spatial autocorrelation method (SPAC) have proven to be valuable tools to gain insight into local site effects by ambient noise measurements. Here, the two methods are employed to assess the subsurface velocity structure at the Piano delle Concazze area on Mt Etna. Volcanic tremor records from an array of 26 broadband seismometers are processed, and a strong variability of H/V ratios during periods of increased volcanic activity is found. From the spatial distribution of H/V peak frequencies, a geologic structure in the north-east of Piano delle Concazze is imaged which is interpreted as the Ellittico caldera rim. The method is extended to include both velocity data from the broadband stations and distributed acoustic sensing data from a co-located 1.5 km long fibre optic cable. High maximum amplitude values of the resulting ratios along the trajectory of the cable coincide with known faults. The outcome also indicates previously unmapped parts of a fault. The geologic interpretation is in good agreement with inversion results from magnetic survey data. Using the neighborhood algorithm, spatial autocorrelation curves obtained from the modified SPAC are inverted alone and jointly with the H/V peak frequencies for 1D shear wave velocity profiles. The obtained models are largely consistent with published models and were able to validate the results from the fibre optic cable.
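The core H/V computation can be sketched in a few lines. This is a hedged, minimal version with hypothetical names: real workflows window the records, smooth the spectra (e.g. Konno-Ohmachi) and average over many windows, none of which is done here.

```python
import cmath
import math

def dft_amp(x):
    """Amplitude spectrum via a plain DFT (fine for short demo windows)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2 + 1)]

def hv_ratio(z, north, east, eps=1e-12):
    """H/V spectral ratio, bin by bin: the quadratic mean of the two
    horizontal amplitude spectra divided by the vertical amplitude
    spectrum."""
    az, an, ae = dft_amp(z), dft_amp(north), dft_amp(east)
    return [math.sqrt(0.5 * (hn * hn + he * he)) / (v + eps)
            for v, hn, he in zip(az, an, ae)]
```

The frequency of the dominant peak of this ratio is the quantity whose spatial distribution the study maps across the array.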
Alluvial fans are important geomorphic markers and sedimentary archives of tectonic and climatic changes. Basins providing excellent study conditions can often be found in arid regions due to the low weathering impact and thus the good preservation of sedimentary features. Twelve samples for optically/infrared stimulated luminescence (OSL/IRSL) dating and one depth profile for cosmogenic radionuclide dating (10Be) were collected in the Santa Maria Valley in NW Argentina, where the exceptional preservation of several generations of alluvial fans allows exploring the external forcing conditions that led to repeated cycles of incision and aggradation. The OSL/IRSL dating yielded ages ranging between 0.4 ± 0.1 ka and 271.8 ± 24.5 ka. Previous studies next to the study area indicate a depositional age of 1.5-2 million years for the oldest generation of alluvial fans, which might still be supported by our ongoing 10Be dating. Based on field observations, sediment provenance, stratigraphic characteristics and the geomorphic pattern of erosion, seven (possibly eight) generations of alluvial fan deposits were recognized. Comparing my ages with global glaciation cycles, as well as linking them to temperature proxies retrieved from a lake on the Altiplano Plateau, a good fit between alluvial fan accumulation phases and global glacial periods (corresponding to cold/wet phases within the central Andes) is observed. This suggests that aggradation occurs during the early stages of glacial periods, while incision is expected at the end of glacial phases. This pattern might be linked to variations in vegetation cover (controlled by water availability), which decreases during hot, dry interglacial phases, favoring sediment production, and increases during cold, wet glacial phases, limiting it.
Even though the eastern Andean margin shows neotectonic activity and is assumed to be active up to recent times, deformation and seismicity most probably played only a minor role on the rather short timescale reflected by the data.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Popular solutions to problems of estimation and forecasting of a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, such as linearity of the system dynamics and Gaussianity of the noise terms.
We investigate fallacies in the mis-specification of the noise terms, that is, of the signal noise and the observation noise, with regard to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state due to realizations of heavy tails not considered by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber's ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection to combine frequentist reliability constraints and online inference, resulting in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-ended sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and discussed regarding their capabilities in reliable and well-tuned uncertainty quantification. Each section provides an introduction to required terminology and tools as well as simulation experiments on the popular target tracking task and the non-linear, chaotic Lorenz-63 system to showcase practical performance of theoretical considerations.
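For reference, the linear-Gaussian baseline whose noise assumptions the proposed variants relax is the standard Kalman filter. The following one-dimensional sketch is the textbook algorithm, not the thesis's modified filters; the function name is hypothetical.

```python
def kalman_1d(ys, a, c, q, r, m0, p0):
    """Standard 1D Kalman filter for the model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (signal noise)
        y_t = c * x_t     + v_t,  v_t ~ N(0, r)   (observation noise).
    Returns the sequence of filtered means."""
    m, p = m0, p0
    means = []
    for y in ys:
        # predict
        m_pred = a * m
        p_pred = a * a * p + q
        # update
        k = p_pred * c / (c * c * p_pred + r)     # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return means
```

A single heavy-tailed outlier in `ys` drags the filtered mean with weight `k`, which is exactly the sensitivity the diffusion score matching variant and the change-point machinery are designed to tame.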
In many regions of the world, snow accumulation and melt constitute important components of the hydrologic cycle. With the objective to improve the performance of the hydrological model WASA-SED (Water Availability in Semi-Arid environments - SEDiments) in catchments affected by snow and ice, a physically-based snow routine has been implemented into the model. The snow routine is based on the energy-balance method of the ECHSE (Eco-hydrological Simulation Environment) software. A first test application has been conducted in two sub-basins of the Isábena river catchment (Central Spanish Pre-Pyrenees). Results were validated using satellite-derived snow cover data. Furthermore, a rainfall gauge correction algorithm to restore the liquid precipitation signal of measurements affected by solid precipitation was applied. The snow module proved to be able to capture the dynamics of the snow cover forming during the cold months of the year. The temporary storage of water in the snow cover is able to improve simulations of river discharge. General patterns of the temporal evolution of observed and simulated snow cover fractions coincide. The work conducted only represents a first step in the process of implementation and evaluation of a physically-based snow routine into WASA-SED. Future work is necessary to further improve and test the snow routine and to resolve difficulties that occurred during model applications in the catchment.
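The principle of an energy-balance snow routine, as opposed to a simpler degree-day approach, can be illustrated with a minimal melt computation. This is a generic sketch of the method, not the ECHSE formulation, and the flux terms and names are hypothetical:

```python
LATENT_HEAT_FUSION = 334000.0  # J/kg, latent heat of fusion of ice

def melt_rate(net_shortwave, net_longwave, sensible, latent, ground, dt):
    """Potential snowmelt over one time step (kg/m^2, i.e. mm water
    equivalent) from the surface energy balance: sum the energy fluxes
    (W/m^2) and convert any surplus into melted mass.  Cold content and
    refreezing, handled by full routines, are ignored here."""
    net = net_shortwave + net_longwave + sensible + latent + ground
    return max(0.0, net) * dt / LATENT_HEAT_FUSION
```

The degree-day alternative would replace the bracketed flux sum with a single temperature-index term, which is exactly the simplification an energy-balance routine avoids.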
The subject of the present thesis is the one-dimensional Bose gas. Since long-range order is destroyed by infrared fluctuations in one dimension, only the formation of a quasi-condensate is possible, which exhibits suppressed density fluctuations but whose phase fluctuates strongly. It is shown that modified mean-field theories based on a symmetry-breaking approach can even characterise the phase coherence properties of such a quasi-condensate properly. A correct description of the transition from the degenerate ideal Bose gas to the quasi-condensate, which is a smooth cross-over rather than a phase transition, is not possible, though: basic conditions for the applicability of the theories are not fulfilled in this regime, so that a spurious critical point is predicted.
The theories are compared on the basis of their excitation spectrum, equation of state, density fluctuations and related correlation functions. High-temperature expansions of the corresponding integrals are derived analytically for the numerical evaluation of the self-consistent integral equations. Apart from that, the Stochastic Gross-Pitaevskii equation (SGPE), a non-linear Langevin equation, is analysed numerically by means of Monte-Carlo simulations and the results are compared to those of the mean-field theories. In this context, particular attention is paid to the appropriate choice of the parameters. The simulations prove that the SGPE is capable of describing the cross-over properly, but highlight the limitations of the widely used local density approximation as well.
The main results of this thesis are formulated in a class of surfaces (varifolds) generalizing closed and connected smooth submanifolds of Euclidean space which allows for singularities. The setting is an indecomposable varifold of dimension at least two in some Euclidean space such that the first variation is locally bounded, the total variation is absolutely continuous with respect to the weight measure, the density of the weight measure is at least one outside a set of weight measure zero, and the generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. The thesis presents an improved estimate of the set where the lower density is small in terms of the one-dimensional Hausdorff measure. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to Peter Topping's diameter control for closed connected manifolds smoothly immersed in some Euclidean space. Previously, it was not known whether the hypotheses of this thesis imply that two points in the support of the weight measure have finite geodesic distance.
This paper focuses on mysteries written by the Afro-American women authors Barbara Neely and Valerie Wilson Wesley. Both authors place a black woman in the role of the detective - an innovative feature not only in the realm of female detective literature of the past two decades but also with regard to the current discourse about race and class in US-American society. This discourse is important because detective novels are considered popular literature and thus a mass product designed to favor commercial rather than literary aims. Thus, the focus is placed on the development of the two protagonists, on their lives as detectives and as black women, in order to find out whether and how the genre influences the depiction of Afro-American experiences. It appears that both of these detective series represent Afro-American culture in different ways, which confirms a heterogeneous development of this ethnic group. However, the protagonists' search for identity and their relationships with white people can be identified as a major unifying theme of Afro-American literature. With differing intensity, the authors Neely and Wesley provide the white or mainstream reader with insight into their culture and confront the reader's ignorance of black culture. In light of this, it is a great achievement that Neely and Wesley have reached not only a black audience but also a growing number of white readers.
Udmurt as an OV language
(2016)
This is the first study to investigate Hubert Haider's (2000, 2010, 2013, 2014) proposed systematic differences between OV and VO languages in a family other than Germanic. Its aim is to gather evidence on whether basic word order is predictive of further properties of a language. The languages under investigation are the Finno-Ugric languages Udmurt (as an OV language) and Finnish (as a VO language). Counter to Kayne (1994), Haider proposes that the structure of a sentence with a head-final VP is fundamentally different from that of a sentence with a head-initial VP; e.g., OV languages do not exhibit a VP-shell structure, and they do not employ a TP layer with a structural subject position. Haider's proposed structural differences are said to result in the following empirically testable differences:
(a) VP: the availability of VP-internal adverbial intervention and scrambling only in OV-VPs;
(b) subjects: the lack of certain subject-object asymmetries in OV languages, i.e., lack of the subject condition and lack of superiority effects;
(c) V-complexes: the availability of partial predicate fronting only in OV languages; different orderings between selecting and selected verbs; the intervention of non-verbal material between verbs only in VO languages;
(d) V-particles: differences in the distribution of resultative phrases and verb particles.
Udmurt and Finnish behave in line with Haider's predictions with regard to the status of the subject, with regard to the order of selecting and selected verbs, and with regard to the availability of partial predicate fronting. Moreover, Udmurt allows for adverbial intervention and scrambling, as predicted, whereas the status of these properties in Finnish could not be reliably determined due to obligatory V-to-T. There is also counterevidence to Haider's predictions: Udmurt allows for non-verbal material between verbs, and the distribution of resultative phrases and verb particles is essentially as free as the distribution of adverbial phrases in both Finno-Ugric languages. As such, Haider's theory is not falsified by the data from Udmurt and Finnish (except for his theory on verb particles), but it is also not fully supported by the data.
This MA thesis examines novels by Native American authors of the 20th century in regard to their representation of conflicts between the indigenous population of North America and the dominant Christian religion of the mainstream society. Several major points can be followed throughout the century, which have been presented repeatedly and discussed from various perspectives. Historical conflicts of colonization and Christianization, as well as the perpetual question of Native American Christians -- 'How can you go to a church that killed so many Indians?' [Alexie, Reservation Blues] -- are debated in these novels and analyzed in this paper. Furthermore, I have tried to position and classify the works according to their representation of these problems within literary history. Following Charles Larson's chronological and thematic examination of American Indian Fiction, the categories rejection, (syncretic) adaptation, and postmodern-ironic revision are introduced to describe the various forms of representation. On the basis of five main examples, we can observe an evolution of contemporary Native American literature, which has liberated itself from the narrow definition of the 1960s and 1970s in favor of a broader and more varied approach. In so doing, and by means of intercultural and intertextual referencing, postmodern irony, and a new Indian self-confidence, it has also taken a new position towards the religion of the former colonizer.
Gender-inclusive language has evolved into a much-debated topic during the past years, discussed across disciplines from theoretical linguistics to psycholinguistics, sociology, and economics – and by anyone who uses language.
Studies on German that primarily relied on questionnaires (reviewed in Braun et al. 2005), cloze tests (Klein 1988), and categorisation tasks with picture matching (Irmen & Köhncke 1996) disqualify the generically used masculine forms as pseudo-generic – failing their grammatically prescribed function to include referents of any Gender. Gender-balanced expressions (pair and split forms like Lehrer und Lehrerinnen) make explicit reference to female presence and participation, and thus promote a more equitable interpretation.
Online methods to investigate the processing of Gender-sensitive language are surprisingly rare among research on the phenomenon, except for reaction time measures (Irmen & Köhncke 1996, Irmen & Kaczmarek 2000) and eye-tracking in reading (Irmen & Schumann 2011).
In addition, Gender-neutral language (GNL) has not been focused on in the majority of experiments, and when it was among the stimuli, results were inconclusive (De Backer & De Cuypere 2012) or found such alternatives to be ineffective (resembling masculine generics, Braun et al. 2005), despite the fact that guidelines on non-discriminatory language use commonly recommend these.
Gender-neutral (GN) expressions for personal reference in German include
• nominalised participles; nominalisations in general: Interessierte, Lehrende
• collective singulars: Publikum, Kollegium
• compounds (e.g., with a notion of “-person”): Ansprechpersonen, Lehrkräfte
• paraphrases that background a (gendered) subject: e.g., passives, relatives
In a visual world eye-tracking study, the comprehension of plural generics using masculine nouns and GN forms was tested for roles and occupations.
In complex stimulus scenarios, reference had to be established to referent images presented on a screen. At the end of each item, a question was asked in order to (re)identify the image that matched the referents of the respective setting best. Images depicted 1) a single person (protagonist), 2) an all-female group, 3) an all-male group, 4) a mixed Gender group of female and male members. The group referents were introduced with either a) masculine nouns (die Lehrer), b) female-specific feminine nouns (die Lehrerinnen), or c) one of the three nominal GN variants listed above (die Lehrkräfte).
Results confirm the frequent male bias in masculine forms that are used as generics, that is, their male-specific interpretation. Furthermore, the stereotypicality of nouns had an impact on responses. The GN alternatives, which are generally known to aim for indefinite reference ("marked" for Gender-fair language), were found to be best qualified to elicit mixed Gender group interpretations. When reference was established with GN terms, an inclusive response was consistently elicited. This was indicated by both eye movements and response proportions, though to different extents depending on the particular GN noun type. Concepts that abstract from Gender in their linguistic forms ("neutralising" it) appear to be more inclusive, and thus better candidates for generic reference than masculines.
This diploma thesis deals with the process of political and administrative decentralisation in the Kingdom of Lesotho. Although decentralisation in itself does not automatically lead to development, it has become an integral part of reform processes in many developing countries. Governments and international donors consider efficient decentralised political and administrative structures as essential elements of “good governance” and a prerequisite for structural poverty alleviation. This paper seeks to analyse how the given decentralisation strategy and its implementation are affecting different features of good governance in the case of Lesotho. The results of the analysis confirm that the decentralisation process significantly improved political participation of the local population. However, the second objective of enhancing efficiency through decentralisation was not achieved. On the contrary, efficiency considerations played no role in the institutional design of the newly created local authorities or in the civil service recruitment policy. Additionally, the mechanisms created for political participation generate considerable costs. Thus the contribution of decentralisation to the achievement of good governance cannot be judged unambiguously: different subtargets of good governance are affected in opposite ways. Consequently, the adequacy of good governance as a guiding concept for decentralisation policies can be questioned. The assessment of the success of decentralisation policies requires a normative framework that takes into account the relationship between participation and efficiency. Despite the partly reduced administrative efficiency, the author’s overall impression of the decentralisation process in Lesotho is positive. The establishment of democratically legitimised and participatory local governments justifies certain additional expenditure.
However, some mistakes in the design and implementation of the decentralisation strategy could have been avoided.
The present study investigated the attribution of responsibility to victims and perpetrators in rape compared to robbery cases in Turkey. Each participant read three short case scenarios (vignettes) and completed items pertaining to the female victim and male perpetrator. The vignettes were systematically varied with regard to the type of crime that was committed (rape or robbery), the perpetrator’s coercive strategy (physical force or exploiting the victim’s alcohol-induced defenselessness), and the victim-perpetrator relationship prior to the incident (stranger, acquaintance, or ex-partner). Furthermore, participant gender and acceptance of rape myths (beliefs that justify or trivialize sexual violence) were taken into account. One half of the participants completed the rape myth acceptance (RMA) scales first and then received the vignettes, while the other half were given the vignettes first and then completed the RMA scales.
As expected, more blame was attributed to victims of rape than to victims of robbery. Conversely, perpetrators of rape were blamed less than perpetrators of robbery. The more participants endorsed rape myths, the more blame was attributed to the victim and the less blame was attributed to the perpetrators. Increasing levels of RMA were associated with an increase in victim blame (VB) in both rape and robbery cases, but the increase in rape VB was significantly more pronounced than in robbery VB. Increasing RMA was associated with an attenuation of perpetrator blame (PB) that was more pronounced for rape than for robbery cases, but the difference was not significant. As expected, victims of rape were blamed more when the perpetrator exploited their defenselessness due to alcohol intoxication than when they were overpowered by physical force. Contrary to the hypothesis, this was also true for robbery victims. Rape victims who knew their attacker (ex-partner or acquaintance) were blamed more than victims who were assaulted by strangers. Contrary to the hypothesis, robbery victims who were assaulted by an ex-partner were blamed more than acquaintance or stranger robbery victims. As predicted, the closer the relationship between victim and perpetrator, the less blame was attributed to perpetrators of rape while this factor had no effect on PB in robbery cases.
Men compared to women attributed more blame to the victims and less blame to the perpetrators. As expected, these gender differences in blame attributions were partially mediated by gender differences in RMA: After RMA was taken into account, the gender differences disappeared nearly completely for VB and were significantly reduced in PB. The order of presentation of the vignettes and the RMA measures was systematically varied to test the causal influence of RMA on rape blame attributions. The hypothesis that RMA causes VB and PB in rape cases (as opposed to the other way around or both are caused by a third variable) was not supported. Possible reasons for this failed manipulation and its implications for the mediation model are discussed.
With regard to blame attribution in rape cases, the present results match what was expected from previous studies which were mainly conducted in “Western” countries like the United States, the United Kingdom, or Germany. The present results support the notion that the victim-perpetrator relationship and the victim’s alcohol consumption are cross-culturally stable factors for blame attribution in rape cases. It was expected that blame attribution in robbery cases would be unaffected by the perpetrator’s coercive strategy and the victim-perpetrator relationship, but the results were inconsistent.
One unexpected effect is particularly noteworthy: When the perpetrator used physical force, more blame was attributed to rape than to robbery victims, but intoxicated victims were blamed more and almost equally so for both types of crime. Perpetrators who exploited drunk victims were blamed less in both rape and robbery cases. These results contradict German results collected with the German version of the same instruments (Bieneck & Krahé, 2011). Turkey is a Muslim country and alcohol is surrounded by a certain taboo. Possibly, the results reflect a cultural difference in that intoxicated victims are generally blamed more for their victimization and this factor is not limited to rape cases.
The forcing from the anthropogenic heat flux (AHF), i.e. the dissipation of primary energy consumed by the human civilisation, produces a direct climate warming. Today, the globally averaged AHF is negligibly small compared to the indirect forcing from greenhouse gas emissions. Locally or regionally, though, it has a significant impact. Historical observations show a constant exponential growth of worldwide energy production. A continuation of this trend might be fueled or even amplified by the exploration of new carbon-free energy sources like fusion power. In such a scenario, the impacts of the AHF become a relevant factor for anthropogenic post-greenhouse gas climate change on the global scale, as well.
This master thesis aims at estimating the climate impacts of such a growing AHF forcing. In the first part of this work, the AHF is built into simple, conceptual zero- and one-dimensional Energy Balance Models (EBMs), providing quick order-of-magnitude estimates of the temperature impact. In the one-dimensional EBM, the ice-albedo feedback from enhanced ice melting due to the AHF increases the temperature impact significantly compared to the zero-dimensional EBM.
Additionally, the forcing is built into a climate model of intermediate complexity, CLIMBER-3α. This allows for the investigation of the effect of localised AHF and gives further insights into the impact of the AHF on processes such as ocean heat uptake, sea ice and snow pattern changes, and the ocean circulation.
The global mean temperature response from the AHF today is of the order of 0.010 − 0.016 K in all reasonable model configurations tested. A transient tenfold increase of this forcing heats up the Earth System additionally by roughly 0.1 − 0.2 K in the presented models. Further growth can also affect the tipping probability of certain climate elements.
Most renewable energy sources contribute only partially, or not at all, to the AHF forcing, as the energy from these sources would dissipate anyway. Hence, the transition to a (carbon-free) renewable energy mix, which, in particular, does not rely on nuclear power, eliminates the local and global climate impacts of the increasing AHF forcing, independently of the growth of energy production.
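The zero-dimensional EBM described above balances absorbed solar radiation plus the AHF against outgoing long-wave radiation. A minimal sketch with common textbook parameter values (not the exact thesis configuration):

```python
# Zero-dimensional energy balance model with an anthropogenic heat flux (AHF) term.
# Equilibrium: S0 * (1 - ALPHA) / 4 + F_ahf = EPS * SIGMA * T**4
# Parameter values are illustrative textbook numbers, not the thesis setup.
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S0    = 1361.0     # W m^-2, solar constant
ALPHA = 0.30       # planetary albedo
EPS   = 0.612      # effective emissivity (crude greenhouse proxy)

def equilibrium_temperature(f_ahf=0.0):
    """Global-mean equilibrium temperature [K] for a given AHF forcing [W/m^2]."""
    absorbed = S0 * (1.0 - ALPHA) / 4.0 + f_ahf
    return (absorbed / (EPS * SIGMA)) ** 0.25

t0 = equilibrium_temperature(0.0)               # roughly 288 K
# Present-day global-mean AHF is of order 0.03 W/m^2
dt_today = equilibrium_temperature(0.03) - t0
# A tenfold increase of this forcing
dt_tenfold = equilibrium_temperature(0.3) - t0
```

With these numbers the perturbation from today's AHF comes out near 0.01 K and the tenfold forcing near 0.1 K, consistent with the orders of magnitude reported above.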
Swearing in a public place
(2017)
The paper deals with the usage of swear words on the online forum "reddit". Three research questions are addressed:
How often are swear words used?
How are these swear words received by other users?
Does the topic of the conversation have an influence on the reception and amount of usage of swear words?
The corpus from which the results are taken comprises almost 900 million words, drawn from February 2017. Compared to other similar studies, the corpus is considerably larger and more contemporary.
In addition, the theoretical part discusses the linguistic basics of swear words. These include concepts such as the theory of politeness, taboos and their corresponding words, and censorship. This is done to explain the factors that influence the use of swear words and why swear words are so special in comparison to other word groups. Furthermore, research results from other corpora are presented and later compared with the present results. These include corpora that are also composed of online communication, as well as corpora that reproduce spoken language. All corpora presented contain English-language data.
The results of this study indicate that the swear words on "reddit" are used approximately as often as they are on other platforms. The perception of these swear words is mostly positive, which suggests that the use of swear words on "reddit" is not perceived as impolite. In addition, an influence of the discussion topic on the frequency and reception of swear words could be determined.
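The basic corpus measure in such studies, swear-word tokens per million words, can be sketched as follows (the word list and example comments are invented, not data from the reddit corpus):

```python
# Toy sketch of a normalised swear-word frequency: tokens per million words.
import re

SWEAR_WORDS = {"damn", "hell"}  # illustrative placeholder list, not the study's lexicon

def tokens(text):
    """Crude word tokenizer: lowercase runs of letters and apostrophes."""
    return re.findall(r"[a-z']+", text.lower())

def swears_per_million(comments):
    """Swear-word tokens per million word tokens across a list of comments."""
    words = [w for c in comments for w in tokens(c)]
    hits = sum(1 for w in words if w in SWEAR_WORDS)
    return 1_000_000 * hits / len(words)

rate = swears_per_million(["Damn, that game was close!", "It was a good match."])
```

Normalising per million words is what makes frequencies comparable across corpora of very different sizes, such as the 900-million-word corpus here and the smaller spoken-language corpora it is compared with.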
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews such lenses and their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that improve the perceptual and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system:
• Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation.
• Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models.
• Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception.
The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
Bolivia is one of the poorest countries in Latin America. This study analyzes whether rural poverty increases the incidence of food insecurity and whether food insecurity perpetuates the condition of poverty among the rural poor in Bolivia. In order to achieve this aim, the risks that households face and the capacity of households to implement coping strategies in order to mitigate vulnerability shocks are identified. We suggest that efforts by households to become food secure may be difficult in rural areas because of poverty and the vulnerability associated with a lack of physical assets, low levels of human capital, poor infrastructure, and poor health; as well as the precarious regional environment aggravating the severity of vulnerability to food insecurity.
This thesis aims at presenting in an organized fashion the basics required to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. To this end, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and the related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and then applied, by way of example, to the Curie-Weiss model as well as the Curie-Weiss Potts model. The associated considerations are supplemented with two computer simulations aiming to show the cutoff phenomenon and the temperature dependence of the convergence behavior.
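A single step of the Glauber dynamics for the Curie-Weiss Potts model resamples one uniformly chosen site from its conditional Gibbs distribution, which in this mean-field model depends only on the colour counts. A sketch, using one common normalisation of the Hamiltonian (conventions vary across the literature):

```python
# One Glauber update for the Curie-Weiss Potts model (illustrative sketch).
import math
import random

def glauber_step(sigma, beta, q, rng=random):
    """Resample one uniformly chosen site from its conditional Gibbs law.

    sigma -- list of spins in {0, ..., q-1}; beta -- inverse temperature.
    In the Curie-Weiss Potts model every site interacts with every other,
    so the conditional law at site i depends only on the colour counts.
    """
    n = len(sigma)
    i = rng.randrange(n)
    counts = [0] * q
    for j, s in enumerate(sigma):
        if j != i:
            counts[s] += 1
    # Conditional weight of colour k: exp(beta * counts[k] / n)
    weights = [math.exp(beta * c / n) for c in counts]
    u = rng.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if u <= acc:
            sigma[i] = k
            break
    return sigma
```

Iterating this step yields a Markov chain whose stationary distribution is the Gibbs distribution; the mixing time of this chain and the cutoff phenomenon are what the simulations mentioned above investigate.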
Pedagogy of integrity
(2019)
The master thesis “Pedagogy of Integrity: an Analysis of the Conceptualization and Implementation of the MA Program Anglophone Modernities in Literature and Culture” deals with colonial patterns in higher education practices. It provides a theoretical framework for the decolonization of academic teaching-learning practices on the micro- and meso-didactic levels and suggests concrete solutions for decolonized education practices, especially for degree programs whose content focuses on post-colonial issues. In addition, through the exemplary analysis of the conceptualization and implementation of the MA Program Anglophone Modernities in Literature and Culture, the work explores patterns of colonial heritage as well as the will to decolonise them. The main thesis claims that (higher) education should be liberated from colonial patterns, so that real participation in collective knowledge production becomes possible for all students.
In the theoretical elaborations, different concepts of critical and radical pedagogy, e.g. those of Paulo Freire and bell hooks, in combination with concepts about modalities of adult learning (e.g. transformative learning) and approaches that seek to combine learning and social justice (e.g. Social Justice Learning), are systematised and explored for their substance and potential to contribute to a criteria catalogue for decolonised educational practices. In addition, attention is paid to higher education research results which reveal that students who belong to underrepresented groups (non-traditional students) in their societies of origin face more difficulties and discrimination as international students at Western universities than ‘traditional’ international students do. Based on the theoretical elaborations, the work claims that:
(1) the homogeneity-preserving dynamics found in Western colleges are an inheritance of colonial times and mindsets, which continue to function in education and multiply social inequality in the context of internationalization, migration, and participation;
(2) all higher education programs, but especially those dealing explicitly with inequality phenomena, social and cultural diversity, power relations and issues of domination, as well as with postcolonial criticism, should establish premises of equity and provide de facto equal opportunities for participation through the embodiment of social justice in order to remain credible;
(3) decolonization of the educational space can be enabled through appropriate didactic action both on the meso- (institution) and micro-didactical (teaching-learning arrangements) agency levels, given sufficient will and willingness on the part of the responsible professionals.
By examining representative documents published by the MA Program Anglophone Modernities in Literature and Culture, using the 'close reading' methodology, as well as through the exemplary analysis of the concept of one of the program's teaching-learning events and a student survey, the work seeks to examine to what extent the Master's degree program represents a space of decolonised higher education. The results of the analysis indicate the need for a stronger normative value-positioning of the study program, although many practices that show commitment to participation, social justice, and diversity were identified.
In the last chapter, the results of the theoretical elaboration and the program analysis are synthesized into an integrity-based pedagogical concept, called Pedagogy of Integrity, and suggestions are formulated for teaching practice in the study program that are meant to help overcome the discrepancy between will and practice on the way towards a decolonised educational space.
The Role of Bargaining Power
(2019)
Neoclassical theory omits the role of bargaining power in the determination of wages. As a result, the importance of changes in the bargaining position for the development of income shares in the last decades is underestimated. This paper presents a theoretical argument why collective bargaining power is a main determinant of workers’ share of income and how its decline contributed to the severe changes in the distribution of income since the 1980s. In order to confirm this hypothesis, a panel data regression analysis is performed that suggests that unions significantly influence the distribution of income in developed countries.
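A within (fixed-effects) estimator of the kind typically used in such panel regressions can be sketched as follows; the data are invented for illustration:

```python
# Minimal within (fixed-effects) estimator for panel data, as used in
# regressions of labour's income share on union density; pure-Python sketch.

def within_estimator(y, x, groups):
    """OLS slope on entity-demeaned data: removes country fixed effects."""
    ids = sorted(set(groups))
    yd, xd = list(y), list(x)
    for g in ids:
        idx = [i for i, gi in enumerate(groups) if gi == g]
        ym = sum(y[i] for i in idx) / len(idx)
        xm = sum(x[i] for i in idx) / len(idx)
        for i in idx:
            yd[i] -= ym   # demean outcome within each country
            xd[i] -= xm   # demean regressor within each country
    num = sum(a * b for a, b in zip(xd, yd))
    den = sum(a * a for a in xd)
    return num / den

# Two "countries" with different intercepts but a common slope of 0.5
groups = [0, 0, 0, 1, 1, 1]
x = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
y = [0.5 * xi + (10.0 if g == 0 else 20.0) for xi, g in zip(x, groups)]
beta = within_estimator(y, x, groups)  # recovers the common slope
```

Demeaning within each country removes all time-invariant country effects, so the slope is identified purely from within-country variation in the regressor.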
To characterise the habitat preferences of the ring ouzel (Turdus torquatus) and the blackbird (T. merula) in Switzerland, we adopt species distribution modelling and predict the species’ spatial distribution. We model on two different scales to analyse to what extent downscaling leads to a different set of predictors that best describe the realised habitat. While the models on the macroscale (grid of one square kilometre) cover the entire country, we select a set of smaller plots for modelling on the territory scale. Whereas ring ouzels occur only at altitudes above 1,000 m a.s.l., blackbirds occur from the lowlands up to the timber line. The altitudinal range overlap of the two species is up to 400 m. Although both species coexist on the macroscale, a direct niche overlap on the territory scale is rare. Small-scale differences in vegetation cover and structure seem to play a dominant role in habitat selection. On the macroscale, however, we observe a high dependency on climatic variables, mainly representing the altitudinal range and the related forest structure preferred by the two species. Applying the models to climate change scenarios, we predict a decline of suitable habitat for the ring ouzel with a simultaneous median altitudinal shift of +440 m by 2070. In contrast, the blackbird is predicted to benefit from higher temperatures and expand its range to higher elevations.
The present work is a case study contributing to the major planning project “Suedlink”. It is structured as follows: first, in a theoretical part, the relevant theories of social acceptance (Wüstenhagen et al., 2007), steps of participation (Münnich, 2014), and governance (Benz and Dose, 2011) are elaborated. Secondly, the relevant methods are discussed. Thirdly, in a qualitative analytical part, the information gathered from the expert interviews is analyzed with the use of the aforementioned theories. Fourthly, an empirical quantitative analysis of data regarding public acceptance of Suedlink is presented.
In this case study, with the use of qualitative and quantitative methods, two questions are answered: first, which governance aspects were relevant for the priority use of underground cables in the construction of high-voltage direct-current transmission lines? For this question, intensive document analysis and several expert interviews were conducted. Secondly, the central question of the present work is whether local and/or individual factors affect public acceptance of Suedlink. Here, in particular, it is interesting to analyze whether the priority use of underground cables affected people’s acceptance of Suedlink. In order to answer both questions, an online survey was conducted among citizen initiatives, district administrators, and individuals in social media from March to July 2016. Thereafter, the data were analyzed with the use of descriptive quantitative methods. The data show that underground cables do not necessarily increase public acceptance (see also Menges and Beyer, 2013). On the contrary, individual and local criteria were relevant for the survey respondents. For example, criteria such as the quality of participation, the distance between home and transmission lines, and the additional financial burden (taxes, higher prices for electricity) were important for the evaluation. In addition, survey respondents who participated in citizen initiatives were more critical of the priority use of underground cables and of Suedlink in general. Likewise, residential homeowners rejected every form of transmission lines.
Different habitat models were created for the White Stork (Ciconia ciconia) in the region of the former German province of East Prussia (approximately the current Russian oblast of Kaliningrad and the Polish voivodship of Warmia-Masuria). Different historical data sets describing the occurrence of the White Stork in the 1930s, as well as selected variables for the description of landscape and habitat, were employed. The processing and modeling of the applied data sets were done with a geographical information system (ArcGIS) and a statistical modeling approach from the disciplines of machine learning and data mining (TreeNet by Salford Systems Ltd.). Applying historical habitat descriptors as well as data on the occurrence of the White Stork, models on two different scales were created: (i) a point-scale model applying a raster with a cell size of 1 km², and (ii) an administrative-district-scale model based on the organization of the former province of East Prussia. The evaluation of the created models shows that the occurrence of White Stork nesting grounds in former East Prussia is for the most part defined by the variables ‘forest’, ‘settlement area’, ‘pasture land’ and ‘proximity to coastline’. From this set of variables it can be assumed that pasture and meadows, as well as the proximity of human settlements, provide the White Stork with a good food supply and nesting opportunities. These can be seen as crucial factors in the nest-site choice of White Storks in East Prussia. Dense forest areas appear to be unsuited as nesting grounds of White Storks. The high influence of the variable ‘coastline’ is most likely explained by the specific landscape composition of East Prussia parallel to the coastline and is to be seen as a proximal factor for explaining the distribution of breeding White Storks. In a second step, predictions for the period of 1981 to 1993 were made applying both scales of the models created in this study.
On the point scale, a decline of potential nesting habitat was predicted. In contrast, the predicted White Stork occurrence increases when applying the model on the administrative district scale. The difference between the two predictions lies in the application of different scales (density versus suitability as breeding ground) and partly dissimilar explanatory variables. More studies are needed to investigate this phenomenon. The model predictions for the period 1981 to 1993 could be compared to the available inventories of that period. This comparison shows that the figures predicted here were higher than those established by the census, which means that the models created here rather describe the capacity of the habitat (potential niche). Other factors affecting the population size, e.g. breeding success or mortality, have to be investigated further. The methods presented here, applying historical data, demonstrate a feasible approach to generating habitat models and to assessing the effects of land-use change on the White Stork. The models presented are the first of their kind and could be improved by means of further data on habitat structure and more exact, spatially explicit information on the location of White Stork nesting sites. In a further step, a habitat model for the present day should be created. This would allow a more precise assessment of the effects of land-use change and relevant environmental conditions on the White Stork in the region of former East Prussia, e.g. in the light of coming landscape changes brought about by the European Union (EU).
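The point-scale idea, scoring each 1 km² raster cell from land-cover variables, can be illustrated with a minimal sketch. The variable set (forest, settlement, pasture, coastline proximity) follows the abstract, but the linear form and all weights below are invented for illustration; the thesis itself used a TreeNet (stochastic gradient boosting) model, not a hand-written rule like this.

```python
# Illustrative point-scale habitat scoring on a 1 km^2 raster.
# Weights and the linear form are invented; the actual model in the
# thesis was fitted with TreeNet (gradient boosting), not this rule.

def habitat_score(cell):
    """Return a suitability score in [0, 1] for one raster cell.

    cell: dict with fractional land cover ('forest', 'settlement',
    'pasture') and 'coast_km' (distance to the coastline in km).
    """
    score = 0.0
    score += 0.4 * cell["pasture"]      # food supply in open grassland
    score += 0.3 * cell["settlement"]   # nesting opportunities near humans
    score -= 0.5 * cell["forest"]       # dense forest is unsuitable
    # Coastline proximity as a proximate factor, fading out at 50 km.
    score += 0.3 * max(0.0, 1 - cell["coast_km"] / 50)
    return min(1.0, max(0.0, score))

cells = [
    {"forest": 0.1, "settlement": 0.2, "pasture": 0.6, "coast_km": 10},
    {"forest": 0.8, "settlement": 0.0, "pasture": 0.1, "coast_km": 40},
]
scores = [round(habitat_score(c), 2) for c in cells]
print(scores)
```

The open, pasture-dominated cell near the coast scores high, while the forest-dominated cell is clamped to zero, reproducing the qualitative pattern the evaluated models found.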
The present thesis examines cultural conceptualisations of DEATH in Irish English from a Cultural Linguistic perspective, with a special focus on the diachronic development of these conceptualisations. For the study, a corpus of 1,400 death notices from the Dublin-based national newspaper The Irish Times, drawn from 14 historical periods between 1859 and 2023, was compiled, resulting in a highly specialised 70,000-word corpus. First, the manual qualitative analysis of the death notices produced evidence for eight superordinate cultural conceptualisations surrounding DEATH, namely, in order of frequency: THE DEAD ARE TO BE REMEMBERED OR REGRETTED, DEATH IS SOMETHING POSITIVE, DEATH IS REST, DEATH IS A JOURNEY, DYING IS THE BEGINNING OF ANOTHER LIFE, DEATH IS (NOT) A TABOO, DEATH IS GOD’S WILL, and DEATH IS THE END. These conceptualisations were derived from linguistic expressions in the death notices that have them as their cognitive basis. Second, the quantitative comparison of the individual conceptualisations detected diachronic variation, which is interconnected with historical and social developments in Ireland. The thesis therefore illustrates the applicability of Cultural Linguistics as an adequate method for diachronic studies interested in culturally determined developments of conceptualisations.
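The quantitative comparison across periods amounts to normalised frequency counting of the tagged conceptualisations, which can be sketched in a few lines. The labels follow the thesis, but the periods and counts below are invented placeholders, not the corpus data.

```python
from collections import Counter

# Hypothetical tagged corpus: each period maps to the conceptualisation
# labels identified in its death notices. Labels follow the thesis;
# the periods and counts are invented for illustration.
tagged = {
    "1859-1899": ["DEATH IS GOD'S WILL", "DEATH IS REST",
                  "DEATH IS GOD'S WILL"],
    "1980-2023": ["THE DEAD ARE TO BE REMEMBERED OR REGRETTED",
                  "DEATH IS SOMETHING POSITIVE", "DEATH IS REST"],
}

def relative_frequencies(labels):
    """Normalise raw counts so periods of different size are comparable."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: round(n / total, 2) for label, n in counts.items()}

for period, labels in tagged.items():
    print(period, relative_frequencies(labels))
```

Comparing such normalised profiles period by period is what reveals diachronic variation, e.g. a hypothetical decline of the religious conceptualisation in later periods.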
“Jewish, Gay and Proud”
(2020)
This publication examines the foundation and institutional integration of the first gay-lesbian synagogue, Beth Chayim Chadashim, which was founded in Los Angeles in 1972. As early as June 1974, the synagogue was admitted to the Union of American Hebrew Congregations, the umbrella organization of the Reform congregations in the United States. Previously, the potential acceptance of a congregation by and for homosexual Jews had triggered an intense and broad debate within Reform Judaism. The work asks how it was possible to successfully establish a gay-lesbian synagogue at a time when homosexual acts were considered unnatural and contrary to tradition by almost the entire Jewish community. The starting point of the argumentation is, in addition to general changes in American synagogues after World War II, the assumption that Los Angeles was the most suitable place for this foundation. Los Angeles has an impressive queer history, and the Jewish community there was more open, tolerant and innovative than its counterpart on the East Coast. The Metropolitan Community Church was also founded in the city, and as the largest religious institution for homosexual Christians, it served as the birthplace of queer synagogues.
Reform Judaism was chosen as the place of institutional integration of the community because it was only here that a relative openness to such an endeavor was perceived. Responsa written in response to the potential admission of Beth Chayim Chadashim can be used to trace the arguments and positions of rabbis and psychologists regarding homosexuality and congregations for homosexual Jews in the early 1970s.
Ultimately, the commitment and dedication of the congregation and its heterosexual supporters convinced the decision-makers in Reform Judaism. The decisive impulse to question the situation of homosexual Jews in Judaism came from Los Angeles. With its analysis, the publication contributes to the understanding of Queer Jewish History in general and queer synagogues in particular.
This thesis covers the topic “Thinning and Turbulence in Aqueous Films”. Experimental studies in two-dimensional systems have gained an increasing amount of attention during the last decade. Thin liquid films serve as paradigms of atmospheric convection, thermal convection in the Earth’s mantle, or turbulence in magnetohydrodynamics. Recent research on colloids, interfaces and nanofluids has led to advances in the development of micro-mixers (lab-on-a-chip devices). In this project a detailed description of a thin-film experiment is presented, with a focus on the relevant surface forces. The impact of turbulence on the thinning of liquid films oriented parallel to the gravitational force is studied. An experimental setup was developed which permits capturing thin-film interference patterns under controlled surface and atmospheric conditions. The measurement setup also serves as a prototype of a mixer based on thermally induced turbulence in liquid thin films with thicknesses in the nanometer range. The convection is realized by placing a cooled copper rod in the center of the film. The temperature gradient between the rod and the atmosphere results in a density gradient in the liquid film, so that differences in buoyancy generate turbulence. In the work at hand the thermally driven convection is characterized by a newly developed algorithm, named Cluster Imaging Velocimetry (CIV). This routine determines the flow-relevant vector fields (velocity and deformation). On the basis of these insights, the flow in the experiment was investigated with respect to its mixing properties. The mixing characteristics were compared to theoretical models, and the mixing efficiency of the flow scheme was calculated. The gravitationally driven thinning of the liquid film was analyzed under the influence of turbulence. Strong shear forces lead to the generation of ultra-thin domains consisting of Newton black film.
Due to the exponential expansion of the thin areas and the efficient mixing, this two-phase flow rapidly turns into convection of only the ultra-thin film. This turbulence-driven transition was observed and quantified for the first time. Likewise, the existence of stable convection in liquid nanofilms was proven for the first time in the context of this work.
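The displacement estimation at the heart of such velocimetry can be illustrated with a brute-force cross-correlation between two frames. This is a generic sketch in the spirit of particle image velocimetry, not the thesis's actual CIV routine, whose cluster detection and deformation fields are not reproduced here; dividing the resulting pixel shift by the frame interval yields a velocity estimate.

```python
# Generic cross-correlation displacement estimate between two frames,
# in the spirit of PIV; NOT the thesis's Cluster Imaging Velocimetry.

def best_shift(frame_a, frame_b, max_shift=2):
    """Return the (dy, dx) shift of frame_b relative to frame_a that
    maximises the overlap correlation of the two intensity grids."""
    rows, cols = len(frame_a), len(frame_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < rows and 0 <= x2 < cols:
                        score += frame_a[y][x] * frame_b[y2][x2]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A bright blob moved one pixel to the right between the two frames.
a = [[0, 0, 0, 0],
     [0, 9, 0, 0],
     [0, 0, 0, 0]]
b = [[0, 0, 0, 0],
     [0, 0, 9, 0],
     [0, 0, 0, 0]]
print(best_shift(a, b))  # one-pixel displacement to the right
```

Applied on a grid of interrogation windows, such shifts assemble into the velocity field; spatial derivatives of that field would then give the deformation.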
Migration and development in Senegal: a system dynamics analysis of the feedback relationships
(2011)
This thesis investigates the reciprocal relationship between migration and development in Senegal. It thereby contributes to the debate as to whether migration in developing countries enhances or rather impedes the development process. Even though extensive and controversial discussions of the impact of migration on development can be found in the scientific literature, research has scarcely examined the feedback relationships between the two. There is, however, scientific consensus both that migration affects development and that the level of development in a country determines migration behaviour. Thus, neither variable is simply dependent or independent; both are endogenous variables that influence each other and produce behavioural patterns that cannot be investigated using a static and unidirectional approach. On account of this, the thesis studies the feedback mechanisms existing between migration and development and the behavioural patterns generated by their high interdependence, in order to draw conclusions concerning the impact of changes in migration behaviour on the development process. To explore these research questions, the study applies the computer simulation method ‘System Dynamics’ and extends the simulation model for national development planning called ‘Threshold 21’ (T21) - which represents development processes endogenously and integrates economic, social and environmental aspects - with a structure that portrays the causes and consequences of migration. The model has been customised to Senegal, an appropriate representative of the theoretically interesting universe of cases. The comparison of the model-generated scenarios - in which the intensity of emigration, the loss and gain of education, the remittances or the level of dependence change - facilitates the analysis. The present study produces two important results.
The first result is the development of an integrative framework that represents migration and development endogenously and incorporates aspects of several different theories. This model can serve as a starting point for further discussion and improvement, a relevant and useful result given that migration, despite its significant impact, is not integrated into most development planning tools. The second result consists of the insights gained concerning the feedback relations between migration and development and the impact of changes in migration on development. To give two examples: it was found that migration impacts development positively, as indicated by the HDI, but that the dominant behaviour of migration and development is a counteracting one. That is, an increase in emigration leads to an improvement in development, which in turn causes a decline in emigration, counterbalancing the initial increase. Another insight concerns the discovery that migration causes a decline in education in the short term but leads to an increase in the long term, after approximately 25 years - a typical worse-before-better behaviour. From these and further observations, important policy implications can be derived for the sending and receiving countries. Hence, by overcoming the unidirectional perspective, this study contributes to an improved understanding of the highly complex relationship between migration and development and their feedback relations.
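The counteracting feedback loop described above (emigration improves development, e.g. via remittances, while higher development lowers the propensity to emigrate) can be sketched as a minimal stock-and-flow simulation. All coefficients, units and initial values below are invented for illustration; this is in no way the Threshold 21 (T21) model, only the bare feedback structure.

```python
# Minimal stock-and-flow sketch of the counteracting feedback loop:
# emigration raises a development index, higher development lowers the
# emigration target. Rates and units are invented; this is NOT T21.

def simulate(years=50, dt=1.0):
    development = 0.4   # HDI-like index in [0, 1]
    emigration = 0.05   # emigration rate (fraction of population / year)
    history = []
    for _ in range(int(years / dt)):
        # Remittance-driven development gain grows with emigration and
        # saturates as development approaches 1.
        d_dev = 0.2 * emigration * (1 - development)
        # Emigration relaxes toward a target that falls with development.
        target = 0.08 * (1 - development)
        d_emi = 0.3 * (target - emigration)
        development += d_dev * dt
        emigration += d_emi * dt
        history.append((development, emigration))
    return history

history = simulate()
print("final development: %.3f" % history[-1][0])
print("final emigration:  %.3f" % history[-1][1])
```

Running the loop, development rises while emigration declines toward its shrinking target: the counterbalancing pattern the scenario comparison identified, reproduced by nothing more than two coupled first-order equations.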