The acquisition of phonological alternations involves many aspects, as discussions in the relevant literature show. Findings on the role of naturalness are contradictory. Natural processes are grounded in phonetics and are easy to learn, even in second language acquisition, when adults have to learn processes that do not occur in their native language. However, there is also evidence that unnatural, i.e. arbitrary, rules can be learned. Current work on the acquisition of morphophonemic alternations suggests that their probability of occurrence is a crucial factor in acquisition. I conducted an experiment with 80 adult native speakers of German to investigate the effects of naturalness as well as of probability of occurrence. It uses the artificial grammar paradigm: two artificial languages were constructed, each with a particular alternation. In one language the alternation is natural (vowel harmony); in the other it is arbitrary (a vowel alternation depends on the sonorancy of the first consonant of the stem). The participants were divided into two groups: one group listened to the natural alternation, the other to the unnatural one. Each group was further divided into two subgroups: one subgroup was presented with material in which the alternation occurred frequently, the other with material in which it occurred infrequently. After this exposure phase, every participant was asked to produce new words during a test phase. Producing the forms correctly required knowledge of the language-specific alternation pattern, as the phonological contexts demanded certain alternants. The group performances were compared with respect to the effects of naturalness and probability of occurrence. The natural rule was learned more easily than the unnatural one. Frequently presented rules were not learned more easily than those presented less frequently.
Moreover, participants did not learn the unnatural rule at all, regardless of whether it was presented frequently or infrequently. There was a tendency for the natural rule to be learned more easily when presented frequently than when presented infrequently, but this effect was not significant due to variability across participants.
Emotions are a complex concept, and they are present in our everyday life. Persons on the autism spectrum are said to have difficulties in social interaction, showing deficits in emotion recognition compared to neurotypically developed persons. However, social-emotional skills are believed to improve with training. A new adaptive social cognition training tool, "E.V.A.", is introduced, which teaches emotion recognition from face, voice and body language. One cross-sectional and one longitudinal study with adult neurotypical and autistic participants were conducted. The aim of the cross-sectional study was to characterize the two groups and determine whether differences in their social-emotional skills exist. The longitudinal study, in turn, aimed to detect possible training effects following training with the new tool. In addition, usability assessments were conducted in both studies to investigate the perceived usability of the new tool for neurotypical as well as autistic participants. Differences were found between autistic and neurotypical participants in their social-emotional and emotion recognition abilities. Training effects for neurotypical participants in an emotion recognition task were found after two weeks of home training. Perceived usability was similar for neurotypical and autistic participants. The current findings suggest that persons with ASC do not have a general deficit in emotion recognition but need more time to recognize emotions correctly. In addition, the findings suggest that training emotion recognition abilities is possible. Further studies are needed to verify whether the training effects found for neurotypical participants also manifest in a larger ASC sample.
In this cartography, I examine M.K. Gandhi’s practice of fasting for political purposes from a specifically aesthetic perspective. In other words, I foreground the fasts’ dramatic qualities: how they, in their expressive repetition, patterning and stylization, produced and effected heightened forms of emotion. To carry out this task, I follow the features that the theater scholar Erika Fischer-Lichte sets out in her book Ästhetik des Performativen (2004). The cartography is framed by a philosophical presentation of Gandhi’s discourse as well as of his historical sources. As a second frame, I historicize the fasts by means of a typology and teleology in context.
The historically and discursively framed cartography maps four main dimensions that define the aesthetics of the performative: mediality, materiality, semioticity and aestheticity. The first part analyses the medial platforms on which the fasts as events have been historically recorded and on which they have left their traces and inscriptions; these historical sources are newspapers, images, newsreels and a documentary film. Second, the material dimension depicts Gandhi’s corporeal condition as well as the spatiality and temporality of the fasts. Third, I critically revise and reformulate Fischer-Lichte’s concepts of “presence” and “representation” alongside resonating concepts from G. C. Spivak and J. Rancière. This revision illustrates Gandhi’s fasts and shows how an individual may become the embodiment or representation of a national body politic. The last chapter of the cartography explores the autopoietic feedback loop between Gandhi and the people and closes with a comparison between the mise en scène of the hunger artists and the fasts of the Indian politician, social reformer, and theologian. The text concludes by interpreting Gandhi’s practice of fasting in the light of the philosopher J. Rancière’s concepts of “intellectual emancipation” and “de-subjectivation”.
The four main concerns of this cartography are as follows. First, in the field of Gandhi’s reception, it explores the aesthetic dimension as both an alternative and a complement to the two hegemonic interpretative lenses, i.e. a hagiographic or a secular political understanding of the fasts. Second, from a theoretical perspective, the cartography aims to be a transdisciplinary experiment, deploying concepts traditionally developed, derived from and used in the field of the arts (theater, film, literature, aesthetic performance, etc.) in the field of the political; in brief, inverting an expression of Rancière’s, it seeks to understand politics as aesthetics. Third, from a thematic point of view, the cartography inquires into the historical forms of staging and perceiving hunger. Last but not least, it is an inquiry into the practice of fasting as nonviolence, which Gandhi, its most sophisticated modern theoretician and practitioner, considered its most radical expression.
Conventional wisdom holds that the large sums of money poured into election campaigns are a gateway to corruption. Allegations of the corrupting influence of money on politics and policy are widespread at the national level. Yet little empirical evidence has advanced our understanding of such a link at the local level, and corruption measures remain blurred. This master’s thesis tests the effect of campaign finance on public procurement corruption risks in Colombian municipalities, focusing on donations, small donations, and financial disclosure. To that end, I draw on publicly disclosed contribution-level data from the 2015 municipal elections and a novel index of institutionalized public procurement corruption risks based on contract-level data from nearly the full population of local governments. The analysis shows that donations are negatively associated with overall corruption risk, yet they affect specific corruption risks differently. By contrast, small donations seem to correlate positively with direct awarding in a sub-sample of medium-sized municipalities, whereas in large municipalities their effect on institutionalized corruption is the reverse. Finally, financial misreporting is positively linked with market competition restrictions and direct awarding. In the conclusion, I discuss the implications of these findings for future research and outline a series of policy recommendations.
Civil society is considered either a motor of democratization or a stabilizer of authoritarian rule. This dichotomy is partly due to the dominance of domain-based definitions of the concept that reduce civil society to a small range of formally organized, independent and democratically oriented NGOs. Additionally, research often treats civil society as a ‘black box’ without differentiating between the potentially varying impacts of different types of civil society actors on existing regime structures. In this thesis, I present an alternative conceptualization of civil society based on the interactions of societal actors, to arrive at a more inclusive understanding of the term that is better suited to analysis in non-democratic settings. The operationalization of the action-based approach I develop allows for an empirical assessment of a large range of societal activities, which can accordingly be categorized from little to very civil society-like depending on their specific modes of interaction within four dimensions. I employ this operationalization in a qualitative case study of different actors in the authoritarian monarchy of Jordan, which suggests that Jordanian societal actors mostly exhibit tolerant and democratically oriented modes of interaction and do not reproduce authoritarian patterns. However, even democratically oriented actors do not necessarily take on oppositional positions vis-à-vis the authoritarian regime. Thus, Jordanian civil society might not have a high potential to challenge existing power structures in the country.
Phase space reconstruction is a method that allows the phase space of a system to be reconstructed using only a one-dimensional time series as input. It can be used to calculate Lyapunov exponents and to detect chaos; it helps to understand complex dynamics and their behavior, and it can reproduce data sets that were not measured. There are several different methods that produce correct reconstructions, such as time delay, Hilbert transformation, differentiation and integration. Time delay is the most widely used, but each method has special properties that are useful in different situations, so every reconstruction method has situations in which it is the best choice. Looking at all these different methods, two questions arise: Why can all these different-looking methods be used for the same purpose, and is there any connection between them? The answer is found in the frequency domain: after a Fourier transformation, all these methods take on a similar shape. Every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series: it contains the original data but applies a new focus, amplifying some parts and attenuating others. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified that are mandatory for the reconstruction function. Under these restrictions, a whole family of new reconstruction functions becomes available, making it possible, for example, to reduce noise within the reconstruction process itself, or to retain the advantages of known reconstruction methods while suppressing their unwanted characteristics.
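The time-delay method described above can be sketched in a few lines. The embedding dimension and delay below are illustrative choices for the example; in practice they are estimated, e.g., via mutual information and false nearest neighbours:

```python
# Sketch of time-delay phase space reconstruction (delay embedding).
# The dimension m and delay tau used here are assumed for illustration.
import math

def delay_embed(series, m=3, tau=5):
    """Map a 1-D time series s to m-dimensional delay vectors
    x_i = (s_i, s_{i+tau}, ..., s_{i+(m-1)*tau})."""
    n = len(series) - (m - 1) * tau
    return [tuple(series[i + j * tau] for j in range(m)) for i in range(n)]

# Example: a noiseless sine wave unfolds into a closed loop in the
# reconstructed two-dimensional phase space.
signal = [math.sin(0.1 * t) for t in range(500)]
vectors = delay_embed(signal, m=2, tau=15)
print(len(vectors), vectors[0])
```

Each coordinate of the delay vector is the same measured series shifted in time, which is consistent with the filter view above: a pure delay is multiplication by a phase factor in the frequency domain.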
Complex network theory provides an elegant and powerful framework for statistically investigating the topology of local and long-range dynamical interrelationships, i.e., teleconnections, in the climate system. Employing a refined methodology relying on linear and nonlinear measures of time series analysis, the intricate correlation structure within a multivariate climatological data set is cast into network form. Within this graph-theoretical framework, vertices are identified with grid points taken from the data set, each representing a region on the Earth's surface, and edges correspond to strong statistical interrelationships between the dynamics on pairs of grid points. The resulting climate networks are neither perfectly regular nor completely random, but display the intriguing and nontrivial characteristics of complexity commonly found in real-world networks such as the internet, citation and acquaintance networks, food webs and cortical networks in the mammalian brain. Among other interesting properties, climate networks exhibit the "small-world" effect and possess a broad degree distribution with dominating super-nodes as well as a pronounced community structure. We have performed an extensive and detailed graph-theoretical analysis of climate networks on the global topological scale, focusing on betweenness, a flow and centrality measure that is defined locally at each vertex but includes global topological information by relying on the distribution of shortest paths between all pairs of vertices in the network. The betweenness centrality field reveals a rich internal structure in complex climate networks constructed from reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature data.
Our novel approach uncovers an elaborately woven meta-network of highly localized channels of strong dynamical information flow, which we relate to global surface ocean currents and dub the backbone of the climate network, in analogy to the homonymous data highways of the internet. This finding points to a major role of the oceanic surface circulation in coupling and stabilizing the global temperature field in the long-term mean (140 years for the model run and 60 years for the reanalysis data). Carefully comparing the backbone structures detected in climate networks constructed using linear Pearson correlation and nonlinear mutual information, we argue that the high sensitivity of betweenness with respect to small changes in network structure may make it possible to detect the footprints of strongly nonlinear physical interactions in the climate system. The results presented in this thesis are thoroughly founded and substantiated using a hierarchy of statistical significance tests on the level of time series and networks, i.e., tests based on time series surrogates as well as network surrogates. This is particularly relevant when working with real-world data. Specifically, we developed new types of network surrogates that include the additional constraints imposed by the spatial embedding of vertices in a climate network. Our methodology is of potential interest to a broad audience within the physics community and various applied fields, because it is universal in the sense of being valid for any spatially extended dynamical system. It can help to understand the localized flow of dynamical information in any such system by combining multivariate time series analysis, a complex network approach and the information flow measure betweenness centrality. Possible fields of application include fluid dynamics (turbulence), plasma physics and biological physics (population models, neural networks, cell models).
Furthermore, the climate network approach is equally relevant for experimental data and model simulations, and hence introduces a novel perspective on model evaluation and data-driven model building. Our work is timely in the context of the current debate on climate change within the scientific community, since it makes it possible to assess the regional vulnerability and stability of the climate system from a new perspective, relying on global and not only regional knowledge. The methodology developed in this thesis therefore has the potential to contribute substantially to understanding the local effects of extreme events and tipping points in the Earth system within a holistic global framework.
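The construction described above can be illustrated on toy data (not the thesis's actual data or parameters): grid-point time series are compared pairwise with Pearson correlation, correlations above an assumed threshold become edges, and betweenness is then computed with Brandes' algorithm for unweighted graphs. Node count, series length, the common forcing and the threshold are all assumptions of this sketch:

```python
# Toy climate-network sketch: correlation matrix -> thresholded graph -> betweenness.
import math
import random
from collections import deque

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def betweenness(adj):
    """Brandes' algorithm for betweenness centrality in an unweighted,
    undirected graph given as {vertex: [neighbours]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS counting shortest paths
            v = queue.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each pair is counted from both endpoints in an undirected graph
    return {v: b / 2 for v, b in bc.items()}

random.seed(1)
series = {v: [random.gauss(0, 1) for _ in range(200)] for v in range(20)}
for v in range(5):   # a common forcing makes "grid points" 0-4 teleconnected
    series[v] = [s + 2 * math.sin(0.3 * t) for t, s in enumerate(series[v])]
threshold = 0.5      # assumed edge threshold on |r|
adj = {v: [w for w in series if w != v
           and abs(pearson(series[v], series[w])) > threshold]
       for v in series}
print(sorted(betweenness(adj).items(), key=lambda kv: -kv[1])[:3])
```

In the toy network the forced nodes form a fully connected cluster; in a real climate network the betweenness field picks out the narrow channels (the "backbone") through which many shortest paths are routed.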
DeepGeoMap
(2021)
In recent years, deep learning has improved the way remote sensing data are processed, and the classification of hyperspectral data is no exception: 2D and 3D convolutional neural networks have outperformed classical algorithms on hyperspectral image classification in many cases. However, geological hyperspectral image classification poses several challenges; geological scenes often contain spatially more complex objects than other disciplines of hyperspectral imaging, which tend to feature spatially more homogeneous objects (e.g., in industrial applications or for aerial urban and farmland cover types). In geological hyperspectral image classification, classical algorithms that focus on the spectral domain still often show higher accuracy, more sensible results, or greater flexibility owing to their independence from spatial information. In the framework of this thesis, inspired by classical machine learning algorithms that focus on the spectral domain, such as binary feature fitting (BFF) and the EnGeoMap algorithm, the author proposes, develops, tests, and discusses a novel, spectrally focused, spatial-information-independent, deep multi-layer convolutional neural network, named 'DeepGeoMap', for hyperspectral geological data classification. More specifically, the architecture of DeepGeoMap uses a sequential series of different 1D convolutional layers and fully connected dense layers and utilizes rectified linear unit and softmax activations, 1D max and 1D global average pooling layers, additional dropout to prevent overfitting, and a categorical cross-entropy loss function with Adam gradient descent optimization. DeepGeoMap was realized using Python 3.7 and the machine and deep learning interface TensorFlow with graphical processing unit (GPU) acceleration.
This 1D, spectrally focused architecture allows DeepGeoMap models to be trained with hyperspectral laboratory image data of geochemically validated samples (e.g., ground truth samples for aerial or mine face images); the laboratory-trained model can then classify other or larger scenes, similarly to classical algorithms that use a spectral library of validated samples for image classification. The classification capabilities of DeepGeoMap were tested using two geological hyperspectral image data sets, both geochemically validated, one based on iron ore samples and the other on copper ore samples. The copper ore laboratory data set was used to train a DeepGeoMap model for the classification and analysis of a larger mine face scene within the Republic of Cyprus, where the samples originated. Additionally, a benchmark satellite-based data set, the Indian Pines data set, was used for training and testing. The classification accuracy of DeepGeoMap was compared to classical algorithms and other convolutional neural networks. It was shown that DeepGeoMap could achieve higher accuracies and outperform these classical algorithms and other neural networks in the geological hyperspectral image classification test cases. The spectral focus of DeepGeoMap was found to be its most considerable advantage compared to spectral-spatial classifiers such as 2D or 3D neural networks, as it enables DeepGeoMap models to be trained independently of differing spatial entities, shapes, and/or resolutions.
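The spectrally focused building blocks named above (1D convolution, ReLU activation, 1D global average pooling, softmax) can be sketched for a single spectrum. The kernel values and the toy 8-band reflectance spectrum are illustrative assumptions, not the trained DeepGeoMap model:

```python
# Minimal sketch of a spectrally focused 1-D pipeline over one pixel spectrum:
# 1-D convolution -> ReLU -> global average pooling -> softmax class scores.
# Kernel, spectrum and the two-class head are assumptions for illustration.
import math

def conv1d(spectrum, kernel):
    """Valid-mode 1-D convolution (cross-correlation) along the spectral axis."""
    k = len(kernel)
    return [sum(spectrum[i + j] * kernel[j] for j in range(k))
            for i in range(len(spectrum) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg_pool(xs):
    """Collapse the spectral feature map to a single scalar per channel."""
    return sum(xs) / len(xs)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

spectrum = [0.1, 0.3, 0.7, 0.9, 0.8, 0.4, 0.2, 0.1]     # toy reflectance values
feature = global_avg_pool(relu(conv1d(spectrum, [-1.0, 0.0, 1.0])))  # edge-like spectral filter
probs = softmax([feature, -feature])                     # stand-in two-class head
print(probs)
```

Because every step operates along the spectral axis of a single pixel, the pipeline is independent of the spatial layout of the scene, which is the property the abstract identifies as DeepGeoMap's main advantage.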
Hunting Down Animal Verbs
(2022)
Language change is an essential feature of human language, and it is therefore one of the focal areas of the scientific study of language. Language change is always tacitly at work in all languages of the world and at all levels of a given language, be it phonology, morphology, syntax, semantics, etc. It has been suggested that it is precisely the capacity to constantly change and adjust that allows language to keep serving the communicative goals of its users, from ancient to modern times (Fauconnier & Turner, 2003, p. 179).
This thesis investigates an especially salient pattern of lexicogrammatical change, namely the formation of verbs from animal nouns by zero-derivation, in the course of which such nouns as, for example, dog, horse, or beaver change their usage and meaning to produce animal verbs: to dog ‘to follow someone persistently and with a malicious intent’, to horse about/around ‘to make fun of, to ‘rag’, to ridicule someone’ and to beaver away ‘to work with great enthusiasm’ respectively. In the previous literature this pattern of language change has been termed verbal zoosemy (e.g. Kiełtyka, 2016), i.e. the metaphorical construal of human actions by means of linguistic material from the domain of animals.
The approach taken in this study is not simply to report on the objective changes in the morphology, syntactic distribution and meaning of such linguistic units before and after conversion, but to uncover the complexity of the cognitive mechanisms that allow speakers of English to reclassify such well-established nominal units as animal nouns into verbs. It is assumed that the grammatical change in these lexical units is predicated on, and triggered by, preceding semantic change. Thus, the study is set in the framework of Cognitive Historical Semantics and employs Conceptual Metaphor and Metonymy Theory (CMMT) to untangle the intricacies of the semantic change that makes the grammatical change of animal nouns into verbs possible and acceptable in the minds of English speakers.
To this end, this study employed the Oxford English Dictionary Online (OED Online) to compile a glossary of 96 denominal animal verbal forms tied to 209 verbal senses (most verbs in the dataset displayed polysemy). The data collected from the OED Online included not only the senses of the verbs, but also the date of the earliest recorded use of the verbal form with the given sense (regarded in the study as the date of conversion), the earliest usage examples for individual senses and morphologically or semantically related linguistic units from the lexical field of the respective parent noun which were amenable to explaining the observed instances of semantic change. Each instance of zoosemisation, i.e. of the creation of a separate metaphorical verbal sense, was then carefully analysed on the basis of the data collected and classified with the help of the CMMT. In the final stage, a comprehensive and systematic classification of the senses of animal verbs in accordance with the cognitive mechanisms of their creation (metaphor, metonymy, or a combination thereof) was produced together with a timeline of the first appearance of individual metaphorical senses of animal verbs recorded in the OED.
The results show that animal verbs are produced through the interaction of conceptual metaphor and metonymy. Specifically, it was established that two major patterns of metaphor-metonymy interaction underpinning the process of verbal zoosemisation are metaphor from metonymy and metonymy from metaphor. In the former pattern, either an already existing metonymic animal verb is expanded to include the target domain PEOPLE, or the animal noun itself acts as a metonymic vehicle to a certain element of the idealised cognitive model of the given animal, which is metaphorically projected onto people. In the latter mechanism, a metaphorical projection of an animal term initially enters the lexicon in the form of a metaphorical animal noun referring to a human entity, and later in the course of language development it comes to metonymically stand for the action, which the given entity either performs or is involved in. Secondarily, it was observed that individual animal nouns can undergo multiple rounds of zoosemic conversion over time depending on the semantic frame in which the given linguistic unit undergoes denominal conversion, and that results in the polysemy of most animal verbs.
The Rio Conventions form the centerpiece of international cooperation in the governance areas of climate change, biodiversity, and desertification. Due to substantial environmental and political linkages, there are interrelations between the three regimes. This study examines the inter-institutional relationship between the United Nations Framework Convention on Climate Change, the Convention on Biological Diversity and the United Nations Convention to Combat Desertification by analyzing and assessing their horizontal interplay activities from their genesis at the Earth Summit in 1992 until today. In this research, I address the connections between the three conventions and identify the conflicting, cooperative, and synergetic aspects of their inter-institutional relationship. While the overall empirical analysis suggests weak indications of a conflictive type, this research asserts that the interplay activities have thus far led to a cooperative relationship between the Rio Conventions. Moreover, increasing coordination and collaboration between the conventions’ treaty secretariats signal the characteristics of a synergetic relationship, which could open up a window of opportunity for these actors to further engage and progress in institutional management in the future. In conclusion, this study explores the possibility of the formation of an overarching environmental institution as a result of joint institutional management within the complex of climate change, biodiversity, and desertification.
This paper deals with the teaching of grammar in the English as a foreign language (EFL) classroom. In this context, a course book (English G 21 A2) is examined with regard to whether it is compatible with current theories of second language acquisition (SLA).
At the beginning of the paper, past and present views on grammar teaching are summarized, followed by an analysis of the current curriculum and its guidelines for grammar teaching in the foreign language classroom. This analysis concludes that the curriculum of Brandenburg hardly gives any recommendations as to which grammatical phenomena should be taught. This explains, at least partly, the important position course books occupy in the foreign language classroom: teachers use them as a source of material as well as a guideline for which topics can be taught and in what order.
The following part gives an overview of cognitive models of SLA and foreign language teaching, among them Krashen’s Monitor Hypothesis, R. Ellis’ Weak Interface Model and Pienemann’s Processability Theory. On the basis of these models, criteria are developed for the ideal design of a course book that would support grammar teaching in line with current findings. Among these criteria are offering plenty of input in the target language, providing practice activities and consciousness-raising activities, taking the sequence of acquisition into consideration, and providing a diagnostic tool that enables students to find out in which areas of the target language they need to improve. Furthermore, the inclusion of opportunities for (individual) revision is regarded as essential. All of these criteria come with the caveat that the influence of course books on what happens in the classroom is limited, as the final decisions are made by the teacher in the teaching situation.
The analysis focuses on one communicative intention that is usually covered in English lessons between the third and sixth year of learning: talking about the future. First, the possibilities for expressing futurity in English are analysed and narrowed down for teaching purposes. The chosen course book is then described and analysed, and the way the book deals with talking about the future is compared to the criteria specified earlier in the paper. This comparison shows that the book is compatible with SLA theories in many ways (e.g. concerning the explanations of grammatical structures) but that there is still room for improvement (e.g. concerning the amount of input and the number of consciousness-raising activities).
Recently, several faint ringlets in the Saturnian ring system were found to maintain a peculiar orientation relative to the Sun. The Encke gap ringlets as well as the ringlet in the outer rift of the Cassini division were found to have distinct spatial displacements of several tens of kilometers away from Saturn towards the Sun, referred to as heliotropicity (Hedman et al., 2007). This is quite exceptional, since dynamically one would expect eccentric features in the Saturnian rings to precess around Saturn over periods of months. In our study we address this exceptional behavior by investigating the dynamics of circumplanetary dust particles with sizes in the range of 1-100 µm. These small particles are perturbed by non-gravitational forces, in particular solar radiation pressure, the Lorentz force, and planetary oblateness, on time scales of the order of days. The combined influence of these forces causes periodic evolution of the grains’ orbital eccentricities as well as precession of their pericenters, which can be shown by secular perturbation theory. We show that this interaction results in a stationary eccentric ringlet, oriented with its apocenter towards the Sun, which is consistent with observational findings. By applying these heliotropic dynamics to the central Encke gap ringlet, we can give a limit for the expected smallest grain size in the ringlet of about 8.7 microns, and constrain the minimal lifetime to be on the order of months. Furthermore, our model matches the observed ringlet eccentricity in the Encke gap fairly well, which supports recent estimates of the size distribution of the ringlet material (Hedman et al., 2007). However, the ringlet width that results from our modeling based on heliotropic dynamics overestimates the observed confined ringlet width by a factor of 3 to 10, depending on the width measure used.
This is indicative of mechanisms not included in the heliotropic model that potentially confine the ringlet to its observed width, including shepherding and scattering by moonlets embedded in the ringlet region. Based on these results, early investigations (Cuzzi et al., 1984; Spahn and Wiebicke, 1989; Spahn and Sponholz, 1989), and recent work published on the F ring (Murray et al., 2008), with which the Encke gap ringlets are found to share similar morphological structures, we model the maintenance of the central ringlet by embedded moonlets. These moonlets, believed to be hundreds of meters across, release material into space as they are eroded by micrometeoroid bombardment (Divine, 1993). We further argue that Pan, one of Saturn’s moons, which shares its orbit with the central ringlet of the Encke gap, is a rather weak source of ringlet material but efficiently confines the ringlet sources (the moonlets) to horseshoe-like orbits. Moreover, we suppose that most of the narrow heliotropic ringlets are fed by a moonlet population that is held by its largest member on horseshoe-like orbits. Modeling the equilibrium between particle sources and sinks with a simple balance equation based on photometric observations (Porco et al., 2005), we find that a minimal effective source mass of the order of 3 · 10⁻² M_Pan is needed to keep the central ringlet from disappearing.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on the unifiability of predicates and can also use a linguistic approach for the selection. The aim of the technique is to reduce the set of formulae and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While the first concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes, and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept was built for first-order logic, it is not limited to it; with minimal adaptations it can also be used for higher-order and modal logic. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results and the benchmarks with the problems of the 2012 CASC (CASC-J6) show that the concept of the system has a positive impact on the performance of automated theorem provers. Benchmarks with two theorem provers that use different calculi have also shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven to be competitive to some extent with the concept of SinE and even helped one of the theorem provers to solve problems that were not solved (or were solved more slowly) with SinE selection in the CASC.
Finally, the evaluation indicates that combining the unification-based and the linguistic selection yields further improvements, even though no optimisation was done for the problems.
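The tf-idf weighting used by the linguistic selection can be sketched as follows: each formula is treated as a "document" of lexemes, so formulae that share rare lexemes with the conjecture score as more relevant. The tokenisation and the exact weighting variant below are assumptions for illustration, not the system's actual implementation.

```python
# Hedged sketch of tf-idf relevance scoring for formula selection.
# Each formula is a whitespace-separated bag of lexemes (an assumption);
# relevance is summed tf-idf weight over lexemes shared with the conjecture.
import math
from collections import Counter

def tfidf_scores(formulae, conjecture):
    docs = [f.split() for f in formulae]
    n = len(docs)
    # document frequency: in how many formulae each lexeme occurs
    df = Counter(lex for d in docs for lex in set(d))
    query = set(conjecture.split())
    scores = []
    for d in docs:
        tf = Counter(d)
        score = sum(
            (tf[lex] / len(d)) * math.log(n / df[lex])
            for lex in query if lex in tf
        )
        scores.append(score)
    return scores

formulae = ["subset empty set", "union subset subset", "prime number odd"]
scores = tfidf_scores(formulae, "subset empty")
# The first formula shares the rare lexeme "empty" with the conjecture
# and outranks the third, which shares nothing.
assert scores[0] > scores[2]
```

A selection preprocessor would then keep only the formulae above some score threshold (or the top k) and hand that subset to the prover.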
Over the last decades Britain's ethnic minorities have successfully established themselves in a multicultural society. In particular, Indian Hindu communities have generally improved their social and economic situation. In this context, the third generation of British Indians is now growing up. In contrast to the previous generation of the Indian diaspora, these children grow up in an established ethnic community which has learned to retain its religion, traditions and culture in a foreign environment. At the same time, these children are part of the multicultural British society. Based on the academic discussion about the second generation of immigrant ethnic communities, whose youth often suffered from cultural differences, racism and discrimination and therefore rejected aspects of their culture of origin, this paper assumes that the loss of the culture of origin increases further in the third generation. This thesis follows the main theories about the connection between generation and integration. It is believed that the preference for western culture influences the personal, ethnic and cultural identity of young people, which leads to the rejection of traditional bonds. Before introducing this thesis, various theoretical concepts are discussed which are indispensable for comprehending the diasporic situation in which British Indian youngsters grow up. As part of the worldwide Asian Indian diaspora, Indian families in Britain maintain manifold links to Indian communities in various countries. The link to India plays a particularly decisive role; the subcontinent is referred to as an abstract homeland, especially by the first generation. While the grandparents strongly adhere to their Indian culture and Hindu religion, the second generation has already generated cultural change. In this process various cultural values of the Indian ethnic community have been questioned and modified.
Further, the second generation pushed integration into British society by giving up its dependence on the ethnic network. This paper is based on a hybrid and fluid definition of culture, which also applies to the underlying understanding of identity and ethnicity. Due to migration, cultural contact and the multilocality of the diaspora, diasporic and post-diasporic identities and cultures are characterized by hybridity, heterogeneity, fragmentation and flexibility. Particularly in the younger generation - though dependent on a number of social and structural factors - cultural change and mixture occur; in this process new ethnicities and identities evolve. In the second and third parts of this paper the thesis of the loss of the culture of origin is refuted on the basis of findings from empirical research. British-Indian youngsters in London were questioned for the study. Half of the youngsters are affiliated with a sampradaya, a Hindu sect. This enables the author to compare youngsters who do not belong to a particular religious group with those who are included in a religious and/or ethnic community through a sampradaya. The analysis of the findings, which are based on qualitative and quantitative social research, shows that the young people have great interest in their culture of origin and aim to maintain this culture in the diaspora. They identify as Indian and are proud of their cultural differences. In this they differ from the second generation. In contrast to the generation of their grandparents, the Indian identity of the third generation is not based on nostalgic memories. They confirm and emphasize their post-diasporic difference in a western multicultural society. The findings from the survey thereby go beyond Hansen's thesis about the rediscovery of the culture of origin in the third generation.
The comparison of both groups shows that, in the context of the differentiation of postmodern and postcolonial communities, ethnic groups too become increasingly differentiated. Indian heritage and culture therefore do not play the same role for every young British Indian.
Midbrain dopamine neurons invigorate responses by signaling opportunity costs (tonic dopamine) and promote associative learning by encoding a reward prediction error signal (phasic dopamine). Recent studies on Bayesian sensorimotor control have implicated midbrain dopamine concentration in the integration of prior knowledge and current sensory information. The present behavioral study addressed the contributions of tonic and phasic dopamine in a Bayesian decision-making task by alternating reward magnitude and inferring reward prediction errors. Twenty-four participants were asked to indicate the position of a hidden target stimulus under varying prior and likelihood uncertainty. Trial-by-trial rewards were allocated based on performance and two different reward maxima. Overall, participants’ behavior agreed with Bayesian decision theory, but indicated excessive reliance on likelihood information. These results thus
oppose accounts of statistically optimal integration in sensorimotor control, and suggest that the sensorimotor system is subject to additional decision heuristics. Moreover, higher reward magnitude was not observed to induce enhanced response vigor, and was associated with less Bayes-like integration. In addition, the weighting of prior knowledge and current sensory information proceeded independently of reward prediction errors.
Taken together, these findings suggest that the process of combining prior and likelihood uncertainties in sensorimotor control is largely robust to variations in reward.
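The reliability-weighted integration that Bayesian decision theory prescribes for such a task can be sketched for a Gaussian prior and a Gaussian likelihood; the "excessive reliance on likelihood information" reported above then corresponds to a weight on the sensory cue larger than this optimum. The numbers are illustrative, not the study's data.

```python
# Hedged sketch of optimal Bayesian cue integration with Gaussians.
# Posterior mean is a precision-weighted average of prior mean and cue.

def bayes_estimate(mu_prior, var_prior, mu_like, var_like):
    """Posterior mean and variance for Gaussian prior x Gaussian likelihood."""
    w_like = var_prior / (var_prior + var_like)  # optimal weight on the cue
    mu_post = (1.0 - w_like) * mu_prior + w_like * mu_like
    var_post = var_prior * var_like / (var_prior + var_like)
    return mu_post, var_post

# An uncertain cue (large likelihood variance) pulls the estimate towards
# the prior; a precise cue dominates it.
mu_uncertain, _ = bayes_estimate(0.0, 1.0, 10.0, 9.0)
mu_precise, _ = bayes_estimate(0.0, 1.0, 10.0, 0.25)
assert mu_uncertain < mu_precise
```

Fitting the empirical weight on the cue per uncertainty condition and comparing it with `w_like` is one standard way such over-reliance on the likelihood is quantified.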
Assuming that liquid iron alloy from the outer core interacts with the solid, silicate-rich lower mantle, the influence on the core-mantle reflected phase PcP is studied. If the core-mantle boundary is not a sharp discontinuity, this becomes apparent in the waveform and amplitude of PcP. Iron-silicate mixing would lead to regions of partial melting with higher density, which in turn reduces the velocity of seismic waves. On the basis of the calculation and interpretation of short-period synthetic seismograms, using the reflectivity and Gaussian beam methods, a model space is evaluated for these ultra-low velocity zones (ULVZs). The aim of this thesis is to analyse the behaviour of PcP between 10° and 40° source distance for such models, using different velocity and density configurations. Furthermore, the resolution limits of seismic data are discussed, and the influence of the assumed layer thickness, the dominant source frequency and the ULVZ topography is analysed. The Gräfenberg and NORSAR arrays are then used to investigate PcP from deep earthquakes and nuclear explosions. The seismic resolution of a ULVZ is limited both for velocity and density contrasts and for layer thicknesses. Even a very thin global core-mantle transition zone (CMTZ) with strong impedance contrasts, rather than a discrete boundary, seems possible: if no precursor is observable but the PcP_model/PcP_smooth amplitude reduction amounts to more than 10%, a very thin ULVZ of 5 km with a first-order discontinuity may exist. If, on the other hand, amplitude reductions of less than 10% are obtained, this could indicate either a moderate, thin ULVZ or a gradient mantle-side CMTZ. Synthetic computations reveal notable amplitude variations as a function of distance and impedance contrast, predicting a primary density effect in the very steep-angle range and a pronounced velocity dependence in the wide-angle region.
In view of the modelled findings, there is evidence for a 10 to 13.5 km thick ULVZ 600 km south-east of Moscow with a NW-SE extension of about 450 km. A single specific velocity and density anomaly cannot be determined here, in agreement with the synthetic results, in which several models create similar amplitude-waveform characteristics. For example, a ULVZ model with contrasts of -5% VP, -15% VS and +5% density explains the measured PcP amplitudes. Moreover, below SW Finland and NNW of the Caspian Sea a CMB topography can be inferred; the amplitude measurements indicate a topography with a wavelength of 200 km and a height of 1 km, as previously shown in the study by Kampfmann and Müller (1989). Better constraints might be provided by a joint analysis of seismological data, mineralogical experiments and geodynamic modelling.
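Why a model with -5% VP and +5% density yields no visible precursor can be illustrated with the textbook normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1), where Z = ρ·v is the acoustic impedance: the velocity drop and the density increase nearly cancel in the impedance at the ULVZ top. This is a hedged first-order sketch with illustrative relative contrasts, not the full synthetic-seismogram modelling of the thesis.

```python
# Normal-incidence P-wave reflection coefficient at a single interface.
# Only relative contrasts matter, so absolute rho and v are set to 1.

def reflection_coefficient(rho1, v1, rho2, v2):
    z1, z2 = rho1 * v1, rho2 * v2  # acoustic impedances Z = rho * v
    return (z2 - z1) / (z2 + z1)

# -5% velocity combined with +5% density: impedances nearly match,
# so the reflection from the ULVZ top is very weak (no precursor).
r_ulvz = reflection_coefficient(1.0, 1.0, 1.05, 0.95)
assert abs(r_ulvz) < 0.005

# A pure -15% velocity drop without density compensation would
# reflect much more strongly.
r_vel_only = reflection_coefficient(1.0, 1.0, 1.0, 0.85)
assert abs(r_vel_only) > abs(r_ulvz)
```

At oblique incidence the full Zoeppritz equations introduce the separate steep-angle density and wide-angle velocity sensitivities noted above; the impedance cancellation, however, already shows up in this one-line formula.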
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and analyzed with particular attention to distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (a cavity) is reviewed, and previous work on the contribution of the surface plasmons, present in all metallic cavities, to the Casimir energy has been generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained, both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. The same effects were encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents, which leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction.
The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to access experimentally the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect; here, unlike in corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
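The distinction between the plasma and the dissipative Drude description can be sketched via the permittivities evaluated at imaginary frequencies ξ, the form in which they enter Lifshitz theory: the Drude model adds a dissipation rate γ and reduces to the plasma model as γ → 0, the limit the thesis associates with the London description of a superconductor. Units and parameter values below are arbitrary illustrations.

```python
# Standard model permittivities along the imaginary frequency axis:
#   plasma: eps(i*xi) = 1 + wp**2 / xi**2
#   Drude:  eps(i*xi) = 1 + wp**2 / (xi * (xi + gamma))
# wp is the plasma frequency, gamma the dissipation rate.

def eps_plasma(xi, wp):
    return 1.0 + wp**2 / xi**2

def eps_drude(xi, wp, gamma):
    return 1.0 + wp**2 / (xi * (xi + gamma))

xi, wp = 0.5, 9.0
# Dissipation lowers eps at finite xi, and the difference vanishes
# as gamma -> 0 (the plasma / London limit).
assert eps_drude(xi, wp, gamma=0.1) < eps_plasma(xi, wp)
assert abs(eps_drude(xi, wp, gamma=1e-12) - eps_plasma(xi, wp)) < 1e-6
```

The thermal Casimir controversy alluded to above hinges precisely on which of these low-frequency behaviors is inserted into the Lifshitz formula.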
This study investigates the reform of the public budgeting and accounting system (Doppik) in Brandenburg. On the one hand, this thesis aims to identify the key variables shaping employees' commitment to change; on the other hand, it examines the extent to which employees' commitment to change influences the implementation process of the reform. The results of this study show that the commitment of civil servants towards the Doppik is primarily determined by the content, but also by the context, of the reform. Moreover, for the case of Brandenburg, civil servants' affective commitment to change is shown to have a significant positive influence on the perceived success of the reform implementation. The results of the study are not only of high scientific importance but also of practical relevance. The recommendations developed in this study offer grounded guidelines on how to successfully implement the Doppik at the local level in Brandenburg.
Writing an alternative Australia : women and national discourse in nineteenth-century literature
(2007)
In this thesis, I want to outline the emergence of the Australian national identity in colonial Australia. National identity is not a politically determined construct but is culturally produced through discourse on literary works by female and male writers. The emergent, dominant bushman myth exhibited enormous strength and influence on subsequent generations and infused the notion of "Australianness" with exclusively male characteristics. It provided a unique geographical space, the bush, on and against which the colonial subject could model his identity. Its dominance rendered non-male and non-bush experiences of Australia "un-Australian." I will present a variety of contemporary voices - postcolonial, Aboriginal, feminist, and cultural critics - which see the Australian identity as a prominent topic, not only in academia but also in everyday culture and politics. Although positioned in different disciplines and influenced by varying histories, these voices share a similar view of Australian society: Australia is a plural society, home to millions of different people - women, men, and children, Aboriginal Australians and immigrants, the newly arrived and descendants of the first settlers - with millions of different identities which make up one nation. One version of national identity does not account for this multitude of experiences; one version, if applied strictly, renders some voices unheard and oppressed. After exemplifying how the literature of the 1890s and its subsequent criticism constructed the itinerant worker as "the" Australian, literary productions by women are singled out to counteract the dominant version by presenting different opinions on the state of colonial Australia. The writers Louisa Lawson, Barbara Baynton, and Tasma are discussed with regard to their assessment of their mother country.
These women not only presented a different picture; they were also gifted writers who lived the ideal of the "New Woman": they obtained divorces, remarried, were politically active, worked for their living and led independent lives. They paved the way for many Australian women to come. In their literary works they allowed for a dual approach to the bush and the Australian nation. Louisa Lawson credited the bushwoman with heroic traits and described the bush as both cruel and full of opportunities unknown to women in England. She understood women's position in Australian society as oppressed and tried to change politics and culture through the writings in her feminist magazine, the Dawn, and her courageous campaign for women's suffrage. Barbara Baynton painted a gloomy picture of the Australian bush and its inhabitants and offered one of the fiercest critiques of bush society. Although the woman is presented as the able and resourceful bushperson, she does not manage to survive in an environment which functions on male rules and values only the economic potential of the individual. Finally, Tasma does not present as outright a critique as Barbara Baynton; however, she too attests that the colonies had a fascination with wealth, which she renders questionable. Through the comparison of colonial society with the mother country England, she offers an informed judgement on colonial developments in the urban surrounds of the city of Melbourne and demonstrates how uncertainties and irritations emerged in the course of Australia's nation formation. These three women, as writers, commentators, and political activists, faced exclusion from the dominant literary discourses.
Their assessment of colonial society remained unheard for a long time. Now, after much academic excavation, these voices speak to us from the past and remind us that the people are diverse, and thus the nation is diverse. Dominant power structures - the institutions and individuals who decide who can contribute to the discourse on nation - have to be questioned and reassessed, for they mute voices which contribute to a wider, to the "full", and maybe "real" picture of society.
The development of rural areas with regard to food security, sustainability and socio-economic stability is a key issue for the globalized community. Given the current state of climatic change, semi-arid regions influenced by the monsoon or El Niño are especially prone to extreme weather events. Droughts, flooding, erosion, degradation of soils and water quality, and desertification are some of the common impacts. The state of the art in hydrologic environmental modeling generally operates under a reductionist paradigm (Sivapalan 2005). Although an enormous number of process-oriented models exists, we fail to adequately reproduce complexly interacting processes at their effective scale in the space-time continuum, as they are described through deterministic small-scale process theories (e.g. Beven 2002). Yet large numbers of parameters - some with doubtful physical meaning - and large amounts of input data are needed, while most soft information about patterns and organizing principles cannot be employed (Seibert and McDonnell 2002). For an analysis of possible strategies towards integrated hydrologic modeling as decision support on the one hand, and towards sustainable land use development on the other, the 512 km² catchment of the Mod river in Jhabua, Madhya Pradesh, India has been chosen. It is characterized by a setting of problems common to peripheral rural semi-arid human-eco-systems, with intensive agriculture, deforestation, droughts and general hardship for the people. Scarce data and missing gauges add to the requirements of data acquisition and process description. The study at hand presents a methodical framework combining field-scale data analysis and remote sensing to set up a database that prioritizes plausibility over strict data accuracy. The catena-based hydrologic model WASA (Güntner 2002) employs this database. It is extended by a routine for crop development simulation following the de Wit approach (e.g. in Bouman et al. 1996).
For its application as a decision support system, an agent-based land use algorithm is developed which decides on the cropping on the basis of site specifications and certain constraints (such as maximum profit or best local adaptation). The new model is employed to analyze a number of land use strategies. The realization of the model is driven not by anticipated, a priori defined scenarios but by the interactions within the system. This study points out possible approaches to improve the situation in the catchment. It also addresses central questions about adequate integrated hydrological modeling at the catchment scale under ungauged conditions, and about overcoming current paradigms.
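The constrained agent decision described above can be sketched as follows: each field agent selects, among the crops feasible at its site, the one maximising expected profit. The crop names, water demands and profit figures are hypothetical placeholders for illustration, not the thesis's data or the actual WASA coupling.

```python
# Hedged sketch of an agent-based land-use decision: pick the feasible
# crop with the highest expected profit; leave the field fallow if no
# crop fits the site's constraints. All crop data are hypothetical.

def choose_crop(site_water, crops):
    """Return the name of the feasible crop with maximum profit.

    crops maps crop name -> (water_demand, expected_profit); a crop is
    feasible if its water demand does not exceed the site's supply.
    """
    feasible = {name: profit for name, (demand, profit) in crops.items()
                if demand <= site_water}
    if not feasible:
        return "fallow"
    return max(feasible, key=feasible.get)

CROPS = {
    "cotton": (600, 9.0),   # high water demand, high profit
    "maize":  (450, 6.0),
    "millet": (300, 3.5),   # drought-tolerant, low profit
}

assert choose_crop(700, CROPS) == "cotton"  # water-rich site: max profit
assert choose_crop(350, CROPS) == "millet"  # dry site: only millet fits
assert choose_crop(100, CROPS) == "fallow"  # too dry for any crop
```

Swapping the objective (e.g. best local adaptation instead of maximum profit) only changes the `key` by which feasible crops are ranked, which is what lets scenario behavior emerge from agent interactions rather than from a priori defined scripts.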