Recent years have witnessed a rapid rise of stalagmites as palaeoclimate archives. The multitude of geochemical and physical proxies and the promise of a precise and accurate age model greatly appeal to palaeoclimatologists. Although substantial progress has been made in speleothem-based palaeoclimate research, and despite high-resolution records from low-latitude regions proving that palaeo-environmental changes can be archived on sub-annual to millennial time scales, our comprehension of climate dynamics is still fragmentary. This is particularly true for the summer monsoon system on the Indian subcontinent. The Indian summer monsoon (ISM) is an integral part of the intertropical convergence zone (ITCZ). As this rainfall belt migrates northward during boreal summer, it brings monsoonal rainfall. ISM strength depends, however, on a variety of factors, including snow cover in Central Asia and oceanic conditions in the Indian and Pacific Oceans. Presently, many of the factors influencing the ISM are known, though their exact forcing mechanisms and mutual relations remain ambiguous. Attempts to make an accurate prediction of rainfall intensity and frequency and of drought recurrence, which is extremely important for South Asian countries, resemble a puzzle: all interactions need to fall into the right place to obtain a complete picture. My thesis aims to create a faithful picture of climate change in India covering the last 11,000 years. NE India represents a key region for the Bay of Bengal (BoB) branch of the ISM, as it is here that the monsoon splits into a northwestward- and a northeastward-directed arm. The Meghalaya Plateau is the first barrier for northward-moving air masses and receives excessive summer rainfall, while the winter season is very dry. The proximity of Meghalaya to the Tibetan Plateau on the one hand and to the BoB on the other makes the study area a key location for investigating the interaction between the different forcings that govern the ISM.
A basis for the interpretation of palaeoclimate records, and a first important outcome of my thesis, is a conceptual model which explains the observed pattern of seasonal changes in stable isotopes (δ18O and δ2H) in rainfall. I show that, although in tropical and subtropical regions the amount effect is commonly invoked to explain strongly depleted isotope values during enhanced rainfall, it alone cannot account for the observed rainwater isotope variability in Meghalaya. Monitoring of rainwater isotopes does not show the expected negative correlation between precipitation amount and the δ18O of rainfall. Instead, I find evidence that runoff from high elevations carries an inherited isotopic signature into the BoB, where during the ISM season the freshwater builds a strongly depleted plume on top of the marine water. The vapour originating from this plume is likely to 'memorize' and further transmit very negative δ18O values. The lack of data does not allow a quantification of this 'plume effect' on isotopes in rainfall over Meghalaya, but I suggest that it varies on seasonal to millennial timescales, depending on the runoff amount and source characteristics. The focal point of my thesis is the extraction of climatic signals archived in stalagmites from NE India. High uranium concentrations in the stalagmites ensured the excellent age control required for successful high-resolution climate reconstructions. Stable isotope (δ18O and δ13C) and grey-scale data allow unprecedented insights into millennial to seasonal dynamics of the summer and winter monsoon in NE India. ISM strength (i.e. rainfall amount) is recorded in changes in stalagmite δ18O. The δ13C signal, reflecting drip rate changes, provides a powerful proxy for dry season conditions and shows similarities to temperature-related changes on the Tibetan Plateau. A sub-annual grey-scale profile supports the concept of a lower drip rate and slower stalagmite growth during dry conditions.
During the Holocene, the ISM followed a millennial-scale decrease of insolation, with decadal to centennial failures resulting from atmospheric changes. The period of maximum rainfall and enhanced seasonality corresponds to the Holocene Thermal Optimum observed in Europe. After a phase of rather stable conditions, the strengthening ENSO system came to dominate the ISM 4.5 kyr ago. Strong El Niño events weakened the ISM, especially when acting in concert with positive Indian Ocean Dipole events. The strongest droughts of the last 11 kyr are recorded during the past 2 kyr. Taking advantage of the well-dated stalagmite record at hand, I tested the application of laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to detect sub-annual to sub-decadal changes in element concentrations in stalagmites. The development of a large ablation cell allows for ablating sample slabs of up to 22 cm total length. Each analysed element is a potential proxy for a different climatic parameter. Combining my previous results with the LA-ICP-MS-generated data shows that element concentrations depend not only on rainfall amount and the associated leaching from the soil. Additional factors, like biological activity and hydrogeochemical conditions in the soil and vadose zone, can also affect the element content in drip water and in stalagmites. I present a theoretical conceptual model for my study site to explain how climatic signals can be transmitted and archived in stalagmite carbonate. Further, I establish a first 1500-year-long element record reconstructing rainfall variability. Additionally, I hypothesize that volcanic eruptions, which produce large amounts of sulfuric acid, can influence soil acidity and hence element mobilization.
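The monitoring argument above — that precipitation amount and rainfall δ18O lack the negative correlation the amount effect would predict — reduces to a simple correlation test on monthly monitoring data. A minimal sketch of that test; all numbers here are invented for illustration and are not data from the thesis:

```python
import math

# Hypothetical monthly monitoring values (illustrative only):
# precipitation amount (mm) and rainfall d18O (permil, VSMOW).
precip = [40, 120, 350, 820, 990, 760, 430, 95]
d18o = [-4.5, -6.8, -3.2, -5.1, -4.0, -6.3, -5.6, -3.9]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(precip, d18o)
# A pronounced amount effect would make r strongly negative;
# r near zero would instead point to other controls, such as the
# plume effect proposed in the thesis.
print(f"r = {r:.2f}")
```

In this invented data set r comes out near zero, i.e. the amount effect alone would not explain the isotope variability; with real monitoring data one would of course also test significance.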
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton. They are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which transport cargo such as organelles along cytoskeletal filaments. Filaments of the cytoskeleton are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy, such that they can be viewed as elastic rods on the nanometer scale which exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors; this happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2, we briefly review the buckling instability problem of rods on the macroscopic scale and introduce an analytical model for the buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability in the presence of thermal fluctuations based on a renormalization-like procedure in terms of the non-linear sigma model, in which we integrate out short-wavelength fluctuations in order to obtain an effective theory for the longest-wavelength mode governing the buckling instability. We calculate the resulting shift of the critical force by fluctuation effects and find that, in two spatial dimensions, thermal fluctuations increase this force.
Furthermore, in the buckled state, thermal fluctuations lead to an increase in the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which becomes rounded by thermal fluctuations. Our main result is the observation that a buckled filament is stretched by thermal fluctuations, i.e., its mean projected length in the direction of the applied force is increased by thermal fluctuations. Our analytical results are confirmed by Monte Carlo simulations of the buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the increase in projected length by thermal fluctuations is less pronounced than in two dimensions and depends strongly on the choice of boundary conditions. In the second part of this work, we present a model for the buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as cargo. The cargo filament is pushed against a wall and eventually buckles. The force-generating motors can stochastically unbind from and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first passage time for the unbinding of all linking motors, which in a mean-field model corresponds to the transition back to the unbuckled state of the cargo filament. Our results show that for sufficiently short microtubules the movement of kinesin-1 motors is affected by the load force generated by the cargo filament. Our predictions could be tested in future experiments.
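The zero-temperature reference point for these results is Euler's classical critical force for a hinged-hinged elastic rod, f_c = π²κ/L², with the bending rigidity κ = L_p·k_B·T set by the persistence length L_p; the thesis result is that thermal fluctuations shift this threshold upward in two dimensions. A back-of-the-envelope sketch of the classical baseline (the microtubule parameters are illustrative textbook values, not parameters from the thesis):

```python
import math

KB_T = 4.1e-21  # thermal energy at room temperature, in joules

def euler_critical_force(persistence_length, contour_length, kbt=KB_T):
    """Classical (zero-temperature) Euler buckling force for a
    hinged-hinged elastic rod: f_c = pi^2 * kappa / L^2, with
    bending rigidity kappa = L_p * k_B T. SI units throughout."""
    kappa = persistence_length * kbt
    return math.pi ** 2 * kappa / contour_length ** 2

# Illustrative example: a 10-micrometre microtubule with a persistence
# length of a few millimetres buckles at a few piconewtons, i.e. at
# forces that a small group of kinesin motors (stall force ~6 pN each)
# can readily generate.
fc = euler_critical_force(persistence_length=5e-3, contour_length=10e-6)
print(f"critical force ~ {fc * 1e12:.1f} pN")
```

The 1/L² scaling is why the motor-driven buckling scenario in the second part of the thesis is sensitive to microtubule length: halving the cargo filament length quadruples the force needed to buckle it.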
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2's information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, there may be inappropriate information transfer within the discourse, and misunderstandings can arise. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across languages, which can also differ in the way they express focus.
In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use its information-structural benefit. Knowledge of the information-structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners' native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the learners' appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners' ability to communicate effectively in the L2. The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions.
Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural, unmanipulated sentences and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners' memory. Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3).
Therefore, it was expected that prosodically marked words would be better recognised than unmarked words, and that listeners would exploit accent structure for accurate word recognition better in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance between focus conditions was expected in L1 listening (narrow focus > broad focus). Results of Experiment 1 show that words were better recognized in native than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of the experiments with spliced materials suggest that it was the surrounding prosodic sentence contour, and not the local prosodic realization of the word, that made listeners remember a target word. Prosody does indeed seem to direct listeners' attention to the focus of the sentence (see Cutler, 1976). Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage for items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed, in an English L2 production study, a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length had previously shown an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003); it was therefore included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition.
Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first and what is novel comes later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention. Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners' attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times; on the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to their reliance on 'safe' subject-prominent SVO structures.
The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when the focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference is interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help the German learners in native or in nonnative word recall. This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964) and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993). Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which prompts interpretation against the context. Therefore, lexical focus marking was not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles was expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual usage was expected, resulting in better recall of focused words. The results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1.
The presence of focus particles is likely to instantiate a complex discourse model which makes the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 or English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of focus-identifying means to achieve a better representation of novel words in memory. In Experiments 2 and 3, the evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues for accent placement being extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient speech processing.
There is evidence that additional context facilitates the processing of complex syntactic structures, but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede better word recall, particularly in the L2. Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for listeners. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. The findings suggest that presentation order could introduce a learning bias, with performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits of, and the need for, a dynamic exchange of information-structural organization between the L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise the forms and meanings of the L2.
Information structure thus emerges as a valuable resource to advance proficiency in a second language.
Although the basic structure of biological membranes is provided by the lipid bilayer, most of their specific functions are carried out by membrane proteins (MPs) such as channels, ion pumps and receptors. Additionally, it is known that mutations in MPs are directly or indirectly involved in many diseases. Thus, structure determination of MPs is of major interest not only in structural biology but also in pharmacology, especially for drug development. Advances in the structural biology of MPs have been strongly supported by the success of three leading techniques: X-ray crystallography, electron microscopy and solution NMR spectroscopy. However, X-ray crystallography and electron microscopy require highly diffracting 3D or 2D crystals, respectively. Today, structure determination of non-crystalline solid protein preparations has been made possible through the rapid progress of solid-state MAS NMR methodology for biological systems. Castellani et al. solved and refined the first structure of a microcrystalline protein using only solid-state MAS NMR spectroscopy. This successful application opens up perspectives for accessing systems that are difficult to crystallise or that form large heterogeneous complexes and insoluble aggregates, for example ligands bound to an MP receptor, protein fibrils and heterogeneous protein aggregates. Solid-state MAS NMR spectroscopy is in principle well suited to studying MPs at atomic resolution. In this thesis, different types of MP preparations were tested for their suitability for study by solid-state MAS NMR. Proteoliposomes, poorly diffracting 2D crystals and a PEG precipitate of the outer membrane protein G (OmpG) were prepared as a model system for large MPs. Results from this work, combined with data found in the literature, show that highly diffracting crystalline material is not a prerequisite for the structural analysis of MPs by solid-state MAS NMR.
Instead, it is possible to use non-diffracting 3D crystals, MP precipitates, poorly diffracting 2D crystals and proteoliposomes. For the latter two types of preparation, the MP is reconstituted into a lipid bilayer, which allows structural investigation in a quasi-native environment. In addition, to prepare an MP sample for solid-state MAS NMR it is possible to use screening methods that are well established for the 3D and 2D crystallisation of MPs. Hopefully, these findings will open a fourth route for the structural investigation of MPs. The prerequisite for structural studies by NMR in general, and the most time-consuming step, is always the assignment of resonances to specific nuclei within the protein. In recent years, an ever-increasing number of assignments from solid-state MAS NMR of uniformly carbon- and nitrogen-labelled samples has been reported, mostly for small proteins of up to around 150 amino acids in length. However, the complexity of the spectra increases with the molecular weight of the protein, so the conventional assignment strategies developed for small proteins do not yield a sufficiently high degree of assignment for the large MP OmpG (281 amino acids). Therefore, a new strategy to find assignment starting points for large MPs was devised. The assignment procedure is based on a sample with [2,3-13C, 15N]-labelled Tyr and Phe and uniformly labelled alanine and glycine. This labelling pattern reduces the spectral overlap as well as the number of assignment possibilities. In order to extend the assignment, four other specifically labelled OmpG samples were used. The assignment procedure starts with the identification of the spin systems of each labelled amino acid, using 2D 13C-13C and 3D NCACX correlation experiments. In a second step, 2D and 3D NCOCX-type experiments are used for the sequential assignment of the observed resonances to specific nuclei in the OmpG amino acid sequence.
Additionally, it was shown in this work that biosynthetically site-directed labelled samples, which are normally used to observe long-range correlations, are helpful for confirming the assignment. Another approach to finding assignment starting points in large protein systems is the use of spectroscopic filtering techniques. A filtering block that selects methyl resonances was used to find further assignment starting points for OmpG. Combining all these techniques, it was possible to assign nearly 50 % of the observed signals to the OmpG sequence. Using this information, a prediction of the secondary structure elements of OmpG was possible. Most of the calculated motifs were in good agreement with the crystal structures of OmpG. The approaches presented here should be applicable to a wide variety of MPs and MP complexes and should thus open a new avenue for the structural biology of MPs.
In this contribution, we gather major academic and design approaches for explaining how space in games is constructed and how it constructs games, thereby defining the conceptual dimensions of gamespace. Each concept's major line of inquiry is named and briefly discussed, and iterated where applicable. We conclude with an overview of the locative, the representational, the programmatic, the dramaturgical, the typological, the perspectivistic, the form-functional, and the form-emotive dimensions.
Governments all over the world have responded to the supply of violent and sexually themed video games by establishing regulatory bodies. Still, video games with content that is deemed unsuitable for children are played even by young children. With a focus on the situation in Germany, the aim of this paper is twofold. On the one hand, the current state of the literature on the importance of age ratings for the regulation of video games is scrutinized. Here, the focus is on the German rating system of the Entertainment Software Self Control. This scheme is compared in particular to the American Entertainment Software Rating Board scheme, and parallels with the Pan-European Game Information system are drawn. On the other hand, results from an exploratory survey study on the preferences for video games among German 8- to 12-year-olds are presented (N=1703), arguing that the preference for video games that are not suitable for them is a widespread phenomenon, in particular among boys.
Normality is one of the defining categories of our society. Statistics of all kinds play a crucial part in establishing the normal. Computers, on the other hand, have a very close connection to statistics, as the digital world is in itself a statistical one. In a multitude of games, statistics are used as an element of gameplay. From this perspective, games can be regarded as a training in self-normalization. However, it remains questionable whether this leads to a genuine production of normality.
Fun and frustration
(2009)
This paper draws on Bernard Stiegler’s critique of “hyperindustrialism” to suggest that digital gaming is a privileged site for critiques of affective labor; games themselves routinely nod towards such critiques. Stiegler’s work adds, however, the important dimension of historical differentiation to recent critiques of affective labor, emphasizing “style” and “idiom” as key concerns in critical analyses of globalizing technocultures. These insights are applied to situate digital play in terms of affective labor, and the paper concludes with a summary analysis of the gestural-technical stylistics of the Wii. The result is that interaction stylistics become comparable across an array of home networking devices, providing a gloss, in terms of affect, of the “simple enjoyment” Nintendo designers claim characterizes use of the Wii console and its complex controllers.
The claim is made that in order to analyze computer games sufficiently, they first of all have to be described according to their mediality, understood as the very form in which possible contents are presented to be interacted with. This calls for a categorical approach that defines the conditions of the possible actions that are determined by the program, but that can only be perceived as aesthetic features.
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics from the formation process, and whether they show a behavior similar to the Sun's or something very different, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides us with the possibility to indirectly observe surface topologies on distant stars by means of the Doppler shift and polarization of light, though not without its challenges. Based on these effects, the so-called Zeeman-Doppler Imaging technique is a powerful method for retrieving magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions could be reconstructed by Zeeman-Doppler Imaging (ZDI). However, the implementation of this method often relies on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex, small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to make use of all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of a newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative transfer based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis and Artificial Neural Networks. The latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as the one we are dealing with are potentially ill-posed and require a regularization method. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of the reconstruction of localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on Principal Component Analysis. In contrast to other multi-line techniques that extract a sort of mean profile from a large number of spectral lines, this method makes it possible to denoise individual spectral lines and thus allows an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach as well as a gain in computation time by almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so, we validated the capability of our inversion code and also assessed the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This yields, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
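The idea behind PCA-based multi-line denoising can be sketched in a few lines: stack many observed line profiles, keep only the leading principal components of the stack, and reconstruct each individual line from that low-rank subspace, so the uncorrelated noise is largely discarded. The following is a minimal illustration on synthetic data, not the iMap implementation; the profile shape, noise level and number of retained components are assumptions made for the toy example.

```python
import numpy as np

def pca_denoise(spectra, n_components):
    """Denoise a set of spectral line profiles by projecting each
    observation onto the subspace spanned by the leading principal
    components and reconstructing it from that subspace.

    spectra: (n_lines, n_wavelengths) array, one row per spectral line.
    n_components: number of principal components to retain.
    """
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # SVD of the centered data; the rows of vt are the principal components.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # leading "eigenprofiles"
    coeffs = centered @ basis.T      # projection coefficients per line
    return coeffs @ basis + mean     # low-rank reconstruction

# Toy demonstration: many noisy realizations of a common absorption profile.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 200)
clean = 1.0 - 0.5 * np.exp(-x**2)    # shared (hypothetical) line shape
noisy = clean + 0.05 * rng.standard_normal((50, 200))
denoised = pca_denoise(noisy, n_components=1)

err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
```

Because the signal is common to all lines while the noise is uncorrelated, the mean reconstruction error of the denoised profiles is substantially smaller than that of the raw ones, while each line keeps its own projection coefficients.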
This research is about local actors' responses to problems of uneven development and unemployment. Policies to combat these problems are usually connected to socio-economic regeneration in England and economic and employment promotion (Wirtschafts- und Beschäftigungsförderung) in Germany. The main result of this project is a description of those factors which support the emergence of local socio-economic initiatives aimed at job creation. Eight social and formal economy initiatives have been examined, and the ways in which their emergence has been influenced by institutional factors have been analysed. The role of local actors and forms of governance as well as wider regional and national policy frameworks has been taken into account. Socio-economic initiatives have been defined as non-routine local projects or schemes with the objective of direct job creation. Such initiatives often focus on specific local assets for the formal or the social economy. Socio-economic initiatives are grounded in ideas of local economic development and the creation of local jobs for local people. The adopted understanding of governance focuses on the processes of decision-making. Thus, this understanding of governance is broadly construed to include the ways in which actors in addition to traditional government manage urban development. It places a focus on 'strategic' forms of decision-making about both long-term objectives and short-term action linked to socio-economic regeneration. Four old industrial towns in northern England and East Germany have been selected for case studies due to their particular socio-economic background. These towns, with between 10,000 and 70,000 inhabitants, are located outside of the main agglomerations and perform central functions for their hinterland. The approach has been comparative, with a focus on examining common themes rather than gaining in-depth knowledge of a single case.
Until now, most urban governance studies have analysed the impacts of particular forms of governance such as regeneration partnerships. This project looks at particular initiatives and poses the question to what extent their emergence can be understood as a result of particular forms of governance, local institutional factors or regional and national contexts.
This study explores learning-disabled students' academic and psychosocial adjustment as compared to their non-disabled classmates within the mainstream public education system in Greece. A brief description of the special education services in Greece is also presented. The sample of the study consisted of fifth- and sixth-grade elementary school students in northern Greece. The learning-disabled students were identified based on teachers' evaluation (N=30). The control group consisted of all classmates of these students (N=307). Teacher, peer and self-ratings were used, and achievement data were obtained. The learning-disabled students were found to exhibit various academic and psychosocial difficulties based on the perceptions of all raters. Implications of the findings are discussed.
The acquisition of phonological alternations consists of many aspects, as discussions in the relevant literature show. There are contrary findings about the role of naturalness. Natural processes are grounded in phonetics; they are easy to learn, even in second language acquisition, when adults have to learn processes that do not occur in their native language. There is also evidence that unnatural, arbitrary rules can be learned. Current work on the acquisition of morphophonemic alternations suggests that their probability of occurrence is a crucial factor in acquisition. I conducted an experiment to investigate the effects of naturalness as well as of probability of occurrence with 80 adult native speakers of German. It uses the Artificial Grammar paradigm: two artificial languages were constructed, each with a particular alternation. In one language the alternation is natural (vowel harmony); in the other language the alternation is arbitrary (a vowel alternation depends on the sonorancy of the first consonant of the stem). The participants were divided into two groups: one group listened to the natural alternation and the other group listened to the unnatural alternation. Each group was divided into two subgroups. One subgroup was then presented with material in which the alternation occurred frequently, and the other subgroup was presented with material in which the alternation occurred infrequently. After this exposure phase, every participant was asked to produce new words during the test phase. Knowledge of the language-specific alternation pattern was needed to produce the forms correctly, as the phonological contexts demanded certain alternants. The group performances were compared with respect to the effects of naturalness and probability of occurrence. The natural rule was learned more easily than the unnatural one. Frequently presented rules were not learned more easily than those presented less frequently.
Moreover, participants did not learn the unnatural rule at all; whether this rule was presented frequently or infrequently did not matter. There was a tendency for the natural rule to be learned more easily when presented frequently than when presented infrequently, but the effect was not significant due to variability across participants.
Business process management is experiencing broad uptake in industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff become involved in actual modeling practice, it is crucial to assure model quality and homogeneity and to provide suitable aids for creating models. In this paper we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns: chunks of actions that often appear together in business processes. In particular, we specify action patterns and demonstrate how they can be identified from existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. Our approach is evaluated by applying it to the collection of process models from the SAP Reference Model.
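The mining step described above can be illustrated with a toy sketch (not the paper's actual algorithm): treat each process model in the repository as a set of action labels, count pairwise co-occurrences, and keep association rules that pass support and confidence thresholds; rules whose antecedent is already present in the model being edited then yield recommendations. All action names and threshold values below are invented for the example.

```python
from itertools import combinations
from collections import Counter

def mine_action_patterns(models, min_support=0.5, min_confidence=0.8):
    """Mine rules of the form (antecedent action -> suggested action)
    from a repository of process models, each given as a set of action
    labels. Returns (antecedent, consequent, support, confidence) tuples."""
    n = len(models)
    item_counts = Counter()   # how many models contain each action
    pair_counts = Counter()   # how many models contain each action pair
    for actions in models:
        for a in actions:
            item_counts[a] += 1
        for pair in combinations(sorted(actions), 2):
            pair_counts[pair] += 1
    rules = []
    for (a, b), count in pair_counts.items():
        if count / n < min_support:
            continue  # pair is too rare to be a pattern
        for antecedent, consequent in ((a, b), (b, a)):
            confidence = count / item_counts[antecedent]
            if confidence >= min_confidence:
                rules.append((antecedent, consequent, count / n, confidence))
    return rules

def suggest_actions(rules, current_actions):
    """Suggest actions whose rule antecedent is already in the model."""
    return {c for a, c, _, _ in rules
            if a in current_actions and c not in current_actions}

# Hypothetical repository: each set is the action labels of one process model.
repo = [
    {"receive order", "check stock", "ship goods"},
    {"receive order", "check stock", "ship goods", "send invoice"},
    {"receive order", "check stock"},
    {"check stock", "ship goods"},
]
rules = mine_action_patterns(repo, min_support=0.5, min_confidence=0.75)
suggestions = suggest_actions(rules, {"receive order"})  # -> {"check stock"}
```

A full implementation would mine larger itemsets (e.g. with Apriori) and account for ordering between actions, but the support/confidence filtering shown here is the core of the recommendation idea.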
What did Cain say to Abel?
(2009)
Regulating public space
(2009)
The adaptive evolutionary potential of a species or population to cope with omnipresent environmental challenges is based on its genetic variation. Variability at immune genes, such as the major histocompatibility complex (MHC) genes, is assumed to be a very powerful and effective tool to keep pace with diverse and rapidly evolving pathogens. In my thesis, I studied natural levels of variation at the MHC genes, which have a key role in immune defence, and parasite burden in different small mammal species. I assessed the importance of MHC variation for parasite burden in small mammal populations in their natural environment. To understand the processes shaping different patterns of MHC variation, I focused on evidence of selection through pathogens upon the host. Further, I addressed the issue of low MHC diversity in populations or species, which could potentially arise as a result of habitat fragmentation and isolation. Despite its key role in mammalian evolution, the marsupial MHC has rarely been investigated. Studies on primarily captive or laboratory-bred individuals indicated very little or even no polymorphism at the marsupial MHC class II genes. However, natural levels of marsupial MHC diversity and selection are unknown to date, as studies on wild populations are virtually absent. I investigated MHC II variation in two Neotropical marsupial species endemic to the threatened Brazilian Atlantic Forest (Gracilinanus microtarsus, Marmosops incanus) to test whether the predicted low marsupial MHC class II polymorphism proves to be true under natural conditions. For the first time in marsupials, I confirmed characteristics of MHC selection that were so far only known from eutherian mammals, birds, and fish: positive selection on specific codon sites, recombination, and trans-species polymorphism. Beyond that, the two marsupial species revealed considerable differences in their MHC class II diversity. Diversity was rather low in M.
incanus but tenfold higher in G. microtarsus, disproving the predicted generally low marsupial MHC class II variation. As pathogens are believed to be very powerful drivers of MHC diversity, I studied parasite burden in both host species to understand the reasons for the remarkable differences in MHC diversity. In both marsupial species, specific MHC class II variants were associated with either high or low parasite load, highlighting the importance of the marsupial MHC class II in pathogen defence. I developed two alternative scenarios with regard to MHC variation, parasite load, and parasite diversity. In the ‘evolutionary equilibrium’ scenario I assumed the species with low MHC diversity, M. incanus, to be under relaxed pathogenic selection and expected low parasite diversity. Alternatively, low MHC diversity could be the result of a recent loss of genetic variation through a genetic bottleneck event. Under this ‘unbalanced situation’ scenario, I assumed a high parasite burden in M. incanus due to a lack of resistance alleles. Parasitological results clearly reject the first scenario and point to the second: M. incanus is distinctly more heavily parasitised, while its parasite diversity is roughly equal to that of G. microtarsus. Hence, I suggest that the parasite load in M. incanus is rather the consequence than the cause of its low MHC diversity. MHC variation and its associations with parasite burden have typically been studied within single populations, but MHC variation between populations was rarely taken into account. To gain scientific insight into this issue, I chose a common European rodent species. In the yellow-necked mouse (Apodemus flavicollis), I investigated the effects of genetic diversity on parasite load not on the individual but on the population level. I included populations which possess different levels of variation at the MHC as well as at neutrally evolving genetic markers (microsatellites).
I was able to show that mouse populations with a high MHC allele diversity are better armed against high parasite burdens highlighting the significance of adaptive genetic diversity in the field of conservation genetics. An individual itself will not directly benefit from its population’s large MHC allele pool in terms of parasite resistance. But confronted with the multitude of pathogens present in the wild a population with a large MHC allele reservoir is more likely to possess individuals with resistance alleles. These results deepen our understanding of the complex causes and processes of evolutionary adaptations between hosts and pathogens.
We construct elliptic elements in the algebra of (classical pseudo-differential) operators on a manifold M with conical singularities. The ellipticity of any such operator A refers to a pair of principal symbols (σ0, σ1) where σ0 is the standard (degenerate) homogeneous principal symbol, and σ1 is the so-called conormal symbol, depending on the complex Mellin covariable z. The conormal symbol, responsible for the conical singularity, is operator-valued and acts in Sobolev spaces on the base X of the cone. The σ1-ellipticity is a bijectivity condition for all z of real part (n + 1)/2 − γ, n = dimX, for some weight γ. In general, we have to rule out a discrete set of exceptional weights that depends on A. We show that for every operator A which is elliptic with respect to σ0, and for any real weight γ there is a smoothing Mellin operator F in the cone algebra such that A + F is elliptic including σ1. Moreover, we apply the results to ellipticity and index of (operator-valued) edge symbols from the calculus on manifolds with edges.
The Cauchy problem of the vacuum Einstein equations aims to find a semi-metric g_{αβ} of a spacetime with vanishing Ricci curvature R_{αβ} and prescribed initial data. Under the harmonic gauge condition, the equations R_{αβ} = 0 are transformed into a system of quasi-linear wave equations called the reduced Einstein equations. The initial data for Einstein's equations are a proper Riemannian metric h_{αβ} and a second fundamental form K_{αβ}. A necessary condition for a solution of the reduced Einstein equations to satisfy the vacuum equations is that the initial data satisfy the Einstein constraint equations. Hence arbitrary data (h_{αβ}, K_{αβ}) cannot serve as initial data for the reduced Einstein equations. Previous results in the case of asymptotically flat spacetimes provide a solution to the constraint equations in one type of Sobolev spaces, while initial data for the evolution equations belong to a different type of Sobolev spaces. The goal of our work is to resolve this incompatibility and to show that under the harmonic gauge the vacuum Einstein equations are well-posed in one type of Sobolev spaces.
The comprehension of figurative language : electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERP), brain activity during the comprehension of ironic utterances in relation to equivalent non-ironic utterances was measured and analyzed. Moreover, the impact of various language-accompanying cues, e.g., prosody or the use of punctuation marks, as well as non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e., the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.
School performance and adjustment of Greek remigrant students in the schools of their home country
(1992)
This study explores the adjustment of Greek remigrant students in Greek public schools after their families' return to Greece from the Federal Republic of Germany. Teacher and self-rating instruments were used, and achievement and language competence data were obtained. The sample consisted of 13- to 15-year-old junior high school students in northern Greece. The remigrant students were divided into two groups ("early return" and "late return"), based on the year of return to Greece. The control group consisted of all the local classmates of these students. Remigrant students (mainly late return) were found to experience difficulties mainly in the language/learning domain and less in the interpersonal and intrapersonal behavior domains.
Classical semiconductor physics has been continuously improving electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric ones, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Due to their rather simple device structure, mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize than small-molecule organic devices. Usually, functions such as charge injection and transport are handled by the same material, which thus needs to be highly optimized. The present work contributes to expanding the knowledge of the physical mechanisms determining device performance by analyzing the role of charge injection and transport in the efficiency of blue- and white-emitting devices, based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the knowledge of how charges drift through the polymer layer to finally find neutral emissive trap states and thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts upon driving voltage and temperature variation as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately bound species as well as to determine trapped charge densities.
This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction by linearly increasing voltage (CELIV) is investigated in some detail, correcting for errors in the published approach and highlighting the role of non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of charge mobility more accurately. Finally, it is shown that the neglect of charge recombination has led to a misunderstanding of experimental results in terms of a time-dependent mobility relaxation.
Learning from the past
(2009)
The author's recently published monograph on Alexander von Humboldt[1] describes the multiple images of this great cultural icon. The book is a metabiographical study that shows how from the middle of the nineteenth century to the present day Humboldt has served as a nucleus of crystallisation for a variety of successive socio-political ideologies, each producing its own distinctive representation of him. The historiographical implications of this biographical diversity are profound and support current attempts to understand historical scholarship in terms of memory cultures.
The Humboldt Digital Library
(2005)
Alexander von Humboldt’s maps, graphs and illustrations contain a great deal of detail, but in the available rare editions they are hardly visible to the naked eye. In many editions they have been reduced. In a digital library, they will become accessible in their entirety, and Internet technology will reproduce them in a form that overcomes the limitations of the original printing. The user will be able to enlarge the images and see details that might have been overlooked in the past. The Humboldt Digital Library will adhere to the standards for digital libraries established by the Open Archives Initiative (OAI) and will use the tools EPRINTS and DSPACE to provide the Web services and to determine the most effective way to establish dynamic linking and knowledge-based searching of information within the archive.
Alexander von Humboldt’s descriptions of volcanic mountains in his travel journals (Reise auf dem Río Magdalena, durch die Anden und Mexico) show both his reliance on and impatience with literary conventions and travel narratives. Using Goethe’s Italienische Reise and Bürger’s Münchhausen as points of comparison for literary treatments of the volcano ascent, Humboldt’s process of writing is examined. Humboldt shows the failure of the existing discourse and begins to experiment with narratives which fragment and recombine personal and historical modes of writing with, in this case, images from new technical inventions which visualize landscape according to fundamental scientific principles. While the inclusion of scientific prose is relevant, Humboldt’s link to modernity is based on experimental narrative techniques which draw upon changing sets of discourse practices to describe complex realities.
Stephen Jay Gould wrote recently that “when Church began to paint his great canvases, Alexander von Humboldt may well have been the world’s most famous and influential intellectual.” Humboldt’s influence in the case of the landscape artist Church is especially interesting. If we examine the precise relationship between the German explorer and his American admirer, we gain an insight into how Humboldt transformed Church’s life and signaled a new phase in the career of the artist. Church retraced Humboldt’s travels in Ecuador and in Mexico. By examining the texts available to Church and comparing Church’s paintings with the texts and images of Humboldt’s works, we can arrive at new perspectives on Humboldt’s extraordinary influence on American landscape painting in the nineteenth century.
The article provides historical background for Alexander von Humboldt’s expedition into Russia in 1829. It includes information on Humboldt’s works and publications in Russia over the course of his lifetime, as well as an explanation of the Russian scientific community’s response to those works. Humboldt’s ideas on the existence of an active volcano in Central Asia attracted the attention of two prominent Russian geographers, P. Semenov and P. Kropotkin, whose views on the nature of volcanism were quite different. P. Semenov personally met Humboldt in Berlin. P. Kropotkin made one of the most important geological discoveries of the 19th century: he found fresh volcanic cones near Lake Baikal.
Soon after Humboldt’s Russian expedition, and partly as a result of it, an important mineral was found in the Ilmen mountains – samarskite, which later gave its name to the chemical element samarium, discovered in 1879. At the beginning of the 20th century, the Russian scientist V. Vernadskiy pointed out that samarskite was the first uranium-rich mineral found in Russia.
My essay attends to a number of passages in Alexander von Humboldt’s Personal Narrative in which the Prussian explorer expresses anxiety about the apparent dangers posed by the overwhelmingly productive tropical landscapes he observes. In these passages, the excesses of an “exotic nature” threaten European identity and modes of civilization—and they trouble the accuracy of Humboldt’s own observational project. I also explore Humboldt’s related worry that South American vegetable (and visual) overload will exert a destabilizing effect on his aesthetic sensibility, disrupting his ability to represent the “New Continent” accurately in writing. Finally, I sketch the influence of Humboldt’s representations of tropical excess on nineteenth-century British cultural thought and literary practice. Studying the instabilities experienced by Personal Narrative’s expatriates and colonists promises to draw out important tensions latent in Humboldt’s treatment of tropical landscape and to illuminate broader epistemological and aesthetic shifts being worked out during the period.
Pectic polysaccharides, a class of plant cell wall polymers, form one of the most complex networks known in nature. Despite their complex structure and their importance in plant biology, little is known about the molecular mechanism of their biosynthesis, modification, and turnover, particularly their structure-function relationship. One way to gain insight into pectin metabolism is the identification of mutants with an altered pectin structure. Such mutants were obtained by a recently developed pectinase-based genetic screen. Arabidopsis thaliana seedlings grown in liquid medium containing pectinase solutions exhibited particular phenotypes: they were dwarfed and slightly chlorotic. However, when genetically different A. thaliana seed populations (random T-DNA insertion populations as well as EMS-mutagenized populations and natural variants) were subjected to this treatment, individuals were identified that exhibit a different visible phenotype compared to wild type or other ecotypes and may thus contain a different pectin structure (pec mutants). After confirming that the altered phenotype occurs only when the pectinase is present, the EMS mutants were subjected to a detailed cell wall analysis with particular emphasis on pectins. The suite of mutants identified in this study is a valuable resource for further analysis of how the pectin network is regulated, synthesized and modified. Flanking sequences of some of the T-DNA lines have pointed toward several interesting genes, one of which is PEC100. This gene encodes a putative sugar transporter, which, based on our data, is implicated in rhamnogalacturonan-I synthesis. The subcellular localization of PEC100 was studied by GFP fusion, and the protein was found to be localized to the Golgi apparatus, the organelle where pectin biosynthesis occurs.
Arabidopsis ecotype C24 was identified as a susceptible one when grown with pectinases in liquid culture and had a different oligogalacturonide mass profile compared to ecotype Col-0. Pectic oligosaccharides have been postulated to be signal molecules involved in plant pathogen defense mechanisms. Indeed, C24 showed elevated accumulation of reactive oxygen species upon pectinase elicitation and an altered response to the pathogen Alternaria brassicicola in comparison to Col-0. Using a recombinant inbred line population, three major QTLs were identified as responsible for the susceptibility of C24 to pectinases. In a reverse genetic approach, members of the qua2 (putative pectin methyltransferase) family were tested as potential target genes that affect pectin methyl-esterification. The list of these genes was determined by an in silico study of the pattern of expression and co-expression of all 34 members of this family, resulting in 6 candidate genes. For only one of the 6 analyzed genes was a difference in the oligogalacturonide mass profile observed in the corresponding knock-out lines, supporting the hypothesis that the methyl-esterification pattern of pectin is fine-tuned by members of this gene family. This study of pectic polysaccharides through forward and reverse genetic screens gave new insight into how pectin structure is regulated and modified, and how these modifications could influence pectin-mediated signalling and pathogenicity.
Acclimatization
(2003)
Together with their wives, Otto and Richard Schomburgk arrived in Port Adelaide (South Australia) on August 16th, 1849. The essay looks at how these two brothers, who had received their scientific training and promotion in the circle surrounding Alexander von Humboldt, reacted to the unfamiliar conditions in the young British colony. Some indication is given of the differences between the Schomburgk brothers' treatment of the natural resources of the new colony and that of the English colonists of the time.
If Humboldt had a laptop
(2001)
The difficult publication history and expensive editions of Alexander von Humboldt’s volumes on the expedition to the Americas have resulted in incomplete library holdings, which have limited scholarly access and sometimes caused unbalanced scholarship. A plan for a Humboldt Digital Library examines the structures and features of this representational system in print and proposes models for converting these materials to electronic form. Several issues posed by Humboldt’s works include: establishing authoritative standard editions in several languages, creating high-resolution access to the many visual innovations in the works, and using software to restore the grand concept that all of the separate disciplines of study can be seen as interrelated parts of the whole. Using techniques of geographic visualization, a prototype is planned which will connect this historical body of knowledge with modern scientific databases.
In the middle of the 19th century, the question whether expanding civilization and industrialization had an effect on climate was discussed intensely worldwide. It was feared that increasing deforestation would lead to a continuous decrease in rainfall. This first scientific discussion about climate change as the result of human intervention was strongly influenced by the research Alexander von Humboldt and Jean-Baptiste Boussingault had undertaken when they investigated the falling water levels of Lake Valencia in Venezuela. This essay aims to clarify the question whether Alexander von Humboldt can be counted among the leading figures of modern environmentalism on account of this research, as is claimed by Richard H. Grove in his influential book Green Imperialism: Colonial Expansion, Tropical Island Edens and the Origins of Environmentalism, 1600-1860 (1995).
The scientist as Weltbürger
(2001)
Humboldt's works on Mexico
(2000)
Humboldt wrote about Mexico from the perspective of a scientific explorer and naturalist. His works include his diaries, the Essai politique sur le royaume de la Nouvelle-Espagne, the Tablas geográficas, the Vues des Cordillères and a geographic atlas. Concerning the scientific aspect, the lack of a section on Mexico in the Relation historique is not a real deficit, since this can be found in the Essai. But only the diaries and letters from the journey, both published by the Alexander-von-Humboldt Research Centre, Berlin, can be considered an adequate substitute. The following will show the origin of Humboldt's writings on Mexico, offer historical and bibliographical facts and present the publication series "Beiträge zur Alexander von Humboldt-Forschung", as well as Humboldt’s handwritten estate as far as it is available to us.
Content: Synopsis - The Attitudes toward Rape Victims Scale: Psychometric Data from 14 Countries - Scale Construction and Validation - Study One: Preliminary Analyses - Study Two: Test-Retest Reliability - Study Three: Construct Validity - Cross-cultural Extensions - United States - United Kingdom - Germany - New Zealand - Canada - West Indies - Israel - Turkey - India - Hong Kong - Malaysia - Zimbabwe - Mexico - Metric Equivalence - Discussion
A study is reported which investigates the fakeability of personality profiles as measured by a standard personality inventory, the Freiburger Persönlichkeitsinventar (FPI). Unlike previous studies investigating laypersons' ability to fake a global good or bad impression, the present study examined individuals' ability to fake a specific personality profile. Four groups of subjects were instructed to fake their FPI scores so as to present themselves as high vs low scorers on the "social orientation" dimension or high vs low scorers on the "achievement orientation" dimension. The results clearly demonstrate that subjects are successful in manipulating their scores on the critical dimensions according to instruction. Moreover, they also fake related scales in a way that corroborates the intended image of a person with a high (or low) achievement (or social) orientation. The overall pattern of results reveals that subjects were able to distort their responses in a way that reflects their intuitive understanding of the dimensional structure of the FPI. The implications of the present findings for the use of personality inventories as valid diagnostic instruments are discussed.
Similar perceptions, similar reactions : an idiographic approach to cross-situational coherence
(1986)
The study provides a test of the interactionist concept of behavioral coherence across situations. Following an approach suggested by D. Magnusson and B. Ekehammer (1978, Journal of Research in Personality, 12, 41-48), individual correlations between self-reported behavior patterns and perceived similarity ratings across anxiety-provoking situations are obtained as measures of coherence. Unlike the Magnusson and Ekehammar study, the present measures of situation cognition and behavior are based on an idiographic sampling of anxiety-provoking situations. As a step toward concept-based measurement of situation cognition, further measures of perceived situational similarity are derived from the script, prototype, and social episodes models in social psychology and correlated with cross-situational similarity of behavioral profiles. It is demonstrated, in comparison with the findings of Magnusson and Ekehammar, that correlations between similarity ratings and behavior patterns increase substantially as a result of an idiographic sampling of situations. Moreover, it is shown that "script," "prototype," and "social episode" measures can be utilized to investigate the covariation between situation cognition and behavior, thus contributing to the clarification of the principles of cognitive representation of situational experience.
To investigate the relationship between implicit psychological hypotheses and explicit empirical findings, summaries of twenty published studies on attitude-behaviour consistency were presented to a sample of forty-eight psychology undergraduates. Subjects were asked to estimate the percentage of agreement between attitudes and behaviour obtained by each study. Correlations between subjects' covariation judgements and empirically obtained attitude-behaviour consistencies were minimal and nonsignificant. Results are discussed in the light of more recent research on the attitude-behaviour relationship.
Active continental margins are affected by complex feedbacks between tectonic, climatic and surface processes, the intricate relations of which are still a matter of discussion. The Chilean convergent margin, forming the outstanding Andean subduction orogen, constitutes an ideal natural laboratory for the investigation of climate, tectonics and their interactions. In order to study both processes, I examined marine and lacustrine sediments from different depositional environments on- and offshore the south-central Chilean coast (38-40°S). I combined sedimentological, geochemical and isotopic analyses to identify climatic and tectonic signals within the sedimentary records. The investigation of marine trench sediments (ODP Site 1232, SONNE core 50SL) focused on frequency changes of turbiditic event layers since the late Pleistocene. In the active margin setting of south-central Chile, these layers were considered to reflect periodically occurring earthquakes and to constitute an archive of the regional paleoseismicity. The new results indicate glacial-interglacial changes in turbidite frequencies during the last 140 kyr, with short recurrence times (~200 years) during glacial and long recurrence times (~1000 years) during interglacial periods. Hence, the generation of turbidites appears to be strongly influenced by climate and sea level changes, which control the amount of sediment delivered to the shelf edge and thereby the stability of the continental slope: more stable slope conditions during interglacial periods entail lower turbidite frequencies than in glacial periods. Since glacial turbidite recurrence times are congruent with earthquake recurrence times derived from the historical record and other paleoseismic archives of the region, I concluded that only during cold stages did sediment availability and slope instability enable the complete series of large earthquakes to be recorded.
The sediment transport to the shelf region is not only driven by climate conditions but also influenced by local forearc tectonics. Accelerating uplift rates along major tectonic structures caused drainage anomalies and river flow inversions, which seriously altered the sediment supply to the Pacific Ocean. Two examples of the tectonic hindrance of fluvial systems are the coastal lakes Lago Lanalhue and Lago Lleu Lleu. Both lakes developed within former river valleys, which once discharged towards the Pacific and were dammed by tectonically uplifted sills at ~8000 yr BP. Analyses of sediment cores from the lakes showed similar successions of marine/brackish deposits at the bottom, covered by lacustrine sediments on top. Dating of the transitions between these different units and comparison with global sea level curves allowed me to calculate local Holocene uplift rates, which are distinctly higher for the upraised sills (Lanalhue: 8.83 ± 2.7 mm/yr, Lleu Lleu: 11.36 ± 1.77 mm/yr) than for the lake basins (Lanalhue: 0.42 ± 0.71 mm/yr, Lleu Lleu: 0.49 ± 0.44 mm/yr). I hence considered the sills to be the surface expression of a blind thrust associated with a prominent reverse fault that controls regional uplift and folding. After the final separation of Lago Lanalhue and Lago Lleu Lleu from the Pacific, constant deposition of lacustrine sediments preserved continuous records of local environmental changes. Sequences from both lakes indicate a long-term climate trend with a significant shift from more arid conditions during the Mid-Holocene (8000 – 4200 cal yr BP) to more humid conditions during the Late Holocene (4200 cal yr BP – present). This trend is consistent with other regional paleoclimatic data and is interpreted to reflect changes in the strength and position of the Southern Westerly Winds. For the last ~5000 years, the sediments of Lago Lleu Lleu have been marked by numerous intercalated detrital layers that recur with a mean frequency of ~210 years.
Deposition of these layers may be triggered by local tectonics (i.e. earthquakes), but may also originate from changes in the local climate (e.g. onset of modern ENSO conditions). During the last 2000 years, pronounced variations in the terrigenous sediment supply to both lakes suggest important hydrological changes on the centennial time-scale as well. A lower input of terrigenous matter points to less humid phases between 200 cal yr B.C. - 150 cal yr A.D., 900 - 1350 cal yr A.D. and 1850 cal yr A.D. to present (broadly corresponding to the Roman, Medieval, and Modern Warm Periods). More humid periods persisted from 150 - 900 cal yr A.D. and 1350 - 1850 cal yr A.D. (broadly corresponding to the Dark Ages and the Little Ice Age). In conclusion, the combined investigation of marine and lacustrine sediments is a feasible method for the reconstruction of climatic and tectonic processes on different time scales. My approach allows exploring both climate and tectonics in one and the same archive, and is largely transferable to other active margins worldwide.
Personality and language
(1992)
Content: Social stereotypes and responsibility attributions to victims of rape - Attributing responsibility to rape victims: a German study - Rape myth acceptance and responsibility judgments: a British study - Police officers' definitions of rape - A study on cognitive prototypes of rape - Conclusion - References
The study investigates police officers' definitions of different rape situations. On the basis of the concept of 'cognitive prototypes', a methodology is developed which elicits consensual feature lists describing six rape situations: the typical, i.e. most common, rape; the credible, dubious, and false rape complaints; as well as the rape experiences that are particularly hard vs. relatively easy for the victim to cope with. Qualitative analysis of the data allows the identification of the characteristic features defining the prototype of each rape situation, as well as comparisons between the situations in terms of their common and distinctive features. It is shown that police officers, while sharing some of the widely held stereotypes about rape, generally perceive rape as a serious crime with long-term negative consequences for the victim. The quantitative analysis of prototype similarity between the six situations corroborates this conclusion by demonstrating a high similarity between the prototypes of the typical and the credible rape situation. In addition, subjects' general attitude towards rape victims is measured to compare the prototypes provided by respondents holding a positive vs. negative attitude towards rape victims. Findings for the two groups, however, reveal more similarities than differences in their descriptions of rape prototypes. The paper concludes with a discussion of the feasibility of the prototype approach presented in this study as a strategy for investigating implicit or common-sense theories of rape.
The chapter presents a social psychological approach to the study of rape and sexual assault. Two issues are at the core of this approach: identifying the critical variables that affect attributions of responsibility to victims of rape, and exploring people's subjective definitions of rape, which may differ markedly from legal definitions. Following a review of the American evidence, a series of studies conducted in two European countries is presented to address these issues.
The main points raised by Borkenau against our challenge of the 'intuitive psychometrics' view of personality judgements are discussed, in particular his example of the link between school grades and intelligence. It is argued that the semantic similarity interpretation advanced in our paper is more adequate and more parsimonious than explanations in terms of psychometric reasoning.
Explaining perceived cross-situational consistency : intuitive psychometrics or semantic mediation?
(1988)
Recent studies at the interface of social cognition and personality theory have stressed lay persons' ability to 'function as intuitive psychometricians' (Epstein and Teraspulsky, 1986). This research argues that lay persons not only show a substantial degree of accuracy in estimating the cross-situational generality of behaviour, but also take into account principles of aggregation over time. In contrast, it is argued here that lay persons' perceptions of the degree of relatedness of different behaviours are mediated largely by the decontextualized semantic relationships between behavioural descriptions. This argument finds support in two experimental studies which demonstrate that the main source of subjects' judgments of 'cross-situational consistency' lies in an abstracted knowledge base that is represented and mediated through language. The implications of the findings are drawn out for personality research, in particular with reference to domain and item selection in research questionnaires.
Two field studies were conducted to investigate the influence of observer and victim characteristics on attributions of victim and assailant responsibility in a rape case. In the first study, male and female subjects completed a measure of rape myth acceptance and were presented with a rape account, after which they were asked to attribute responsibility to victim and assailant. In the second study, a new sample was asked to attribute responsibility to victim and assailant on the basis of one of two rape accounts in which the victim's pre-rape behavior was manipulated. Results showed that both rape myth acceptance and the victim's pre-rape behavior influenced the degree of responsibility attributed to victims and assailants. No significant effects of subject gender were found. A more complex conceptualization of the link between observer and victim characteristics in social reactions to and evaluations of rape victims is suggested.
Two studies are reported which examine the availability of scientific propositions of personality in lay conceptions of personality. It is argued from a social constructivist perspective that models of personality must derive from and refer to lay conceptions of persons. Eysenck's trait-type model of introversion-extraversion, containing specific propositions about phenotypic and genotypic differences between extraverts and introverts, was utilized as the scientific model of personality and its availability in lay conceptions of personality was examined in two studies. In the first study, subjects were presented with a genotypic characterization of either an introvert or an extravert target person and asked to infer corresponding phenotypic differences. In the second study, the inference process was reversed, with subjects being asked to infer genotypic characteristics of introverts versus extraverts on the basis of phenotypic target person descriptions of the two types. Results from both studies show a high degree of accuracy in subjects' inferences, suggesting that laypersons have well-formed conceptions about personality containing 'higher-order' psychogenetic propositions corresponding to Eysenck's trait-type model. The implications of the findings for theory construction are discussed.
This contribution presents a quantitative evaluation procedure for Information Retrieval models and the results of this procedure applied to the enhanced Topic-based Vector Space Model (eTVSM). Since the eTVSM is an ontology-based model, its effectiveness heavily depends on the quality of the underlying ontology. Therefore the model has been tested with different ontologies to evaluate their impact on the effectiveness of the eTVSM. At the highest level of abstraction, the following results were observed during our evaluation: First, the theoretically deduced statement that the eTVSM has an effectiveness similar to the classic Vector Space Model if a trivial ontology is used (every term is a concept, independent of all other concepts) was confirmed. Second, we were able to show that the effectiveness of the eTVSM rises if an ontology is used which is only able to resolve synonyms; we were able to derive such an ontology automatically from the WordNet ontology. Third, we observed that more powerful ontologies automatically derived from WordNet dramatically dropped the effectiveness of the eTVSM, even clearly below the effectiveness level of the Vector Space Model. Fourth, we were able to show that a manually created and optimized ontology raises the effectiveness of the eTVSM to a level clearly above the best effectiveness levels we have found in the literature for the Latent Semantic Indexing model with comparable document sets.
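The difference between the trivial ontology (equivalent to the classic Vector Space Model) and a synonym-resolving ontology can be illustrated with a minimal sketch. The mini-corpus, the synonym map, and the helper names below are invented for illustration; this is not the eTVSM itself, whose ontology formalism is considerably richer:

```python
# Sketch: why a synonym-resolving ontology can lift retrieval effectiveness
# above the plain Vector Space Model (VSM). Hypothetical mini-corpus and
# synonym map, not the eTVSM itself.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vectorize(text, synonyms=None):
    """Bag-of-words vector; optionally fold each term into a concept id.
    With synonyms=None every term is its own concept, i.e. the trivial
    ontology that reduces to the classic VSM."""
    synonyms = synonyms or {}
    return Counter(synonyms.get(t, t) for t in text.lower().split())

doc = "the automobile dealer sells cars"
query = "car"

# Classic VSM: "car" never matches "automobile" or "cars".
plain = cosine(vectorize(query), vectorize(doc))

# Ontology folding {car, cars, automobile} into one concept.
syn = {"car": "CAR", "cars": "CAR", "automobile": "CAR"}
with_ontology = cosine(vectorize(query, syn), vectorize(doc, syn))

print(plain, with_ontology)  # similarity rises once synonyms are resolved
```

The same mechanism explains the third observation: an over-aggressive ontology that folds non-synonymous terms into one concept creates spurious matches and can push effectiveness below the plain VSM.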
Since 2002, terms like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background is needed; this paper provides such a grounding together with an overview of service-oriented architectures.
It is predicted that Service-oriented Architectures (SOA) will have a high impact on future electronic business and markets. Services provide a self-contained and standardised interface towards business functionality and are considered the future platform for business-to-business and business-to-consumer trade. Given the complexity of real-world business scenarios, there is a strong need for easy, flexible and automated creation and enactment of service compositions. This survey explores the relationship of service composition with workflow management, a technology already in use in many business environments. The similarities between the two and the key differences between them are elaborated. Furthermore, methods for the composition of services ranging from manual through semi-automated to fully automated composition are sketched. This survey concludes that current tools for service composition are in an immature state and that there is still much research to do before service composition can be used easily and conveniently in real-world scenarios. However, since automated service composition is a key enabler for the full potential of Service-oriented Architectures, further research in this field is imperative. The survey closes with a formal sample scenario, presented in Appendix A, to give the reader an impression of how fully automated service composition works.
For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To understand the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques to assist comprehension. We present a novel real-time rendering technique for visualizing the design and spatial assembly of CSG models. As enabling technology, we combine an image-space CSG rendering algorithm with blueprint rendering. Blueprint rendering applies depth peeling to extract layers of ordered depth from polygonal models and then composes them in sorted order, facilitating clear insight into the models. We develop a solution for implementing depth peeling for CSG models that takes their depth complexity into account. Capturing the surface colors of each layer and later combining the results allows us to generate order-independent transparency as one major rendering technique for CSG models. We further define visually important edges for CSG models and integrate an image-space edge-enhancement technique for detecting them in each layer. In this way, we extract visually important edges that are directly and indirectly visible to outline a model's layout. Combining edges with transparency rendering finally generates edge-enhanced depictions of image-based CSG models and allows us to convey their complex, spatial assembly.
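The transparency step described above rests on standard front-to-back ("under") compositing of the depth-peeled layers. The following sketch shows only the compositing arithmetic in Python; the actual technique performs this per pixel on the GPU, and the layer colors and opacities below are illustrative values, not output of the described renderer:

```python
# Front-to-back ("under") compositing of depth-peeled layers, the basis of
# order-independent transparency. Each layer is an (rgb, alpha) pair as
# extracted by one depth-peeling pass, nearest layer first.
def composite_front_to_back(layers):
    """layers: list of ((r, g, b), alpha), sorted nearest-first."""
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for (r, g, b), a in layers:
        t = (1.0 - out_a) * a      # light transmitted down to this layer
        out_rgb[0] += t * r
        out_rgb[1] += t * g
        out_rgb[2] += t * b
        out_a += t
    return out_rgb, out_a

near = ((1.0, 0.0, 0.0), 0.5)      # half-transparent red, nearest layer
far = ((0.0, 0.0, 1.0), 1.0)       # opaque blue behind it
rgb, alpha = composite_front_to_back([near, far])
```

Because the layers arrive already depth-sorted from the peeling passes, this accumulation is order-independent with respect to scene geometry, which is exactly what makes the peeled-layer approach attractive for transparency.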
This work analyzes the saving and consumption behavior of agents faced with the possibility of unemployment in a dynamic and stochastic life cycle model. The intertemporal optimization is based on Dynamic Programming with a backward recursion algorithm. The implemented uncertainty is not based on income shocks, as in traditional life cycle models, but on Markov probabilities in which the probability of the agent's next employment status depends on the current status. The utility function used is a CRRA (constant relative risk aversion) function combined with a CES (constant elasticity of substitution) function and includes several consumption goods, a subsistence level, money and a bequest function.
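The backward-recursion scheme with a Markov employment process can be sketched in a stripped-down form. Everything below is an invented, minimal version for illustration (single consumption good, arbitrary grid sizes, incomes, and transition matrix); the model summarized above additionally includes CES aggregation over several goods, money, a subsistence level, and a bequest function:

```python
# Minimal sketch: finite-horizon consumption-savings problem solved by
# backward recursion, with CRRA utility and a two-state Markov employment
# process (employed / unemployed). All parameter values are illustrative.
import numpy as np

T, beta, gamma, R = 5, 0.96, 2.0, 1.02         # horizon, discount, CRRA, gross return
assets = np.linspace(0.1, 10.0, 50)            # asset grid (also next-period choices)
income = np.array([1.0, 0.3])                  # income when employed / unemployed
P = np.array([[0.9, 0.1],                      # P[s, s'] = Pr(next status s' | s)
              [0.4, 0.6]])

def u(c):
    return c ** (1 - gamma) / (1 - gamma)      # CRRA utility

V = np.zeros((T + 1, 2, len(assets)))          # terminal value function = 0
policy = np.zeros((T, 2, len(assets)), dtype=int)

for t in range(T - 1, -1, -1):                 # backward recursion over time
    for s in range(2):                         # current employment status
        EV = P[s] @ V[t + 1]                   # expected continuation value per a'
        for i, a in enumerate(assets):
            cash = R * a + income[s]           # resources this period
            c = cash - assets                  # consumption implied by each a'
            vals = np.where(c > 0,
                            u(np.maximum(c, 1e-12)) + beta * EV,
                            -np.inf)           # infeasible choices excluded
            policy[t, s, i] = vals.argmax()
            V[t, s, i] = vals.max()
```

The key feature carried over from the model description is that uncertainty enters only through `P`: the expected continuation value is a status-conditional average over next-period employment states rather than an integral over income shocks.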
Background: To improve the understanding of the consequences of climate change for annual plant communities, I used a detailed, grid-based model that simulates the effect of daily rainfall variability on individual plants in five climatic regions on a gradient from 100 to 800 mm mean annual precipitation (MAP). The model explicitly considers moisture storage in the soil. I manipulated daily rainfall variability by changing the daily mean rain (DMR, rain volume on rainy days averaged across years for each day of the year) by ± 20%, while adjusting the intervals between rainy days to keep the mean annual volume constant. In factorial combination with changing DMR, I also changed MAP by ± 20%. Results: Increasing MAP generally increased water availability, establishment, and peak shoot biomass. Increasing DMR increased the time that water was continuously available to plants in the upper 15 to 30 cm of the soil (longest wet period, LWP). The effect of DMR diminished with increasing humidity of the climate. An interaction between water availability and density-dependent germination increased the establishment of seedlings in the arid region, but in the more humid regions the establishment of seedlings decreased with increasing DMR. As plants matured, competition among individuals and their productivity increased, but the size of these effects decreased with the humidity of the regions. Therefore, peak shoot biomass generally increased with increasing DMR, but the effect size diminished from the semiarid to the mesic Mediterranean region. Via the LWP, increasing DMR reduced the annual variability of biomass in the semiarid and dry Mediterranean regions. Conclusion: More rainstorms (greater DMR) increased the recharge of soil water reservoirs at more arid sites, with consequences for germination, establishment, productivity, and population persistence.
The effect sizes of the DMR and MAP manipulations partially overlapped, so that their combined effect is important for projections of climate change effects on annual vegetation.
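The constraint in the experimental design above, changing DMR while holding mean annual rainfall constant, can be sketched numerically: scaling each rain event and stretching the dry intervals by the same factor leaves the long-run rain rate unchanged. The event-based generator and all parameter values below are invented for illustration and are not the model's actual rainfall scheme:

```python
# Sketch of the ±20% DMR manipulation: multiply per-event rain volume by a
# factor and stretch the dry intervals between events by the same factor,
# so the long-run precipitation rate (volume per day) stays constant.
# Synthetic event series for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_series(n_events, mean_event, mean_gap):
    """Random rain-event volumes (mm) and dry-interval lengths (days)."""
    volumes = rng.exponential(mean_event, n_events)
    gaps = rng.exponential(mean_gap, n_events)
    return volumes, gaps

def manipulate_dmr(volumes, gaps, factor):
    """Scale event volumes by `factor` (larger DMR -> bigger, rarer storms)
    and stretch the gaps by `factor` to conserve total rain per unit time."""
    return volumes * factor, gaps * factor

vol, gap = make_series(100, mean_event=8.0, mean_gap=4.0)
vol_up, gap_up = manipulate_dmr(vol, gap, 1.2)   # DMR +20%

rate = vol.sum() / gap.sum()          # mm of rain per day, original series
rate_up = vol_up.sum() / gap_up.sum() # unchanged after the manipulation
```

This makes explicit why the DMR treatment is a change in rainfall *variability* rather than amount: individual wetting pulses grow by 20% while the precipitation delivered per day stays fixed.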
This study examined relationships among interest, achievement motivation, mathematical ability, the quality of experience when doing mathematics, and mathematics achievement. One hundred eight freshmen and sophomores (41 males, 67 females) completed interest ratings, an achievement motivation questionnaire, and the Preliminary Scholastic Aptitude Test. These assessments were followed by 1 week of experience sampling. Mathematics grades were available from the year before the study started, from the same year, and from the following 3 years. In addition, a measure of the students' course level in mathematics was included. The results showed that quality of experience when doing mathematics was mainly related to interest. Grades and course level were most strongly predicted by level of ability. Interest was found to contribute significantly to the prediction of grades for the second year and to the prediction of course level. Quality of experience was significantly correlated with grades but not course level.
This study investigated the relation between interest in four different subject areas (mathematics, biology, English, history) and the quality of experience in class. The strength of interest as a predictor of experience was contrasted with that of achievement motivation and scholastic ability. A total of 208 highly able freshmen and sophomores completed interest ratings, an achievement motivation questionnaire, and the Preliminary Scholastic Aptitude Test (PSAT). These assessments were followed by one week of experience sampling. In addition, grades were available for the subject areas involved. The results showed that interest was a significant predictor of the experience of potency, intrinsic motivation, self-esteem, and perception of skill. Controlling for ability and achievement motivation did not decrease the strength of these relations. Achievement motivation and ability proved to be considerably weaker predictors of the quality of experience than was interest. In addition, interest contributed significantly to the prediction of grades in mathematics, biology, and history, but not English. The main results and some limitations of the study are discussed, and suggestions for future research are made.
This article presents results from a meta-analysis of studies on the relation between subject-matter-related interest and school achievement. For the time period between 1965 and 1990 a total of 21 studies reporting 127 independent correlations (i.e., correlations based on independent samples) were identified. For the overall relation between interest and achievement a mean correlation of .30 was found. Male students exhibited significantly higher interest-achievement correlations than female students. In addition, significant differences among school subjects were observed. Grade level, however, did not produce a significant moderator effect. Finally, the results are discussed on the basis of theories of interest and methodological considerations.
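The abstract does not state how the 127 independent correlations were pooled into the mean of .30. One common meta-analytic choice is the sample-size-weighted Fisher z approach, sketched here with invented example values rather than the study's actual data:

```python
# Pooling independent correlations via the Fisher z transformation:
# transform each r to z, average with weights n - 3, transform back.
# The input correlations and sample sizes are hypothetical.
import math

def pooled_correlation(rs, ns):
    """Sample-size-weighted mean correlation (Fisher z method)."""
    zs = [math.atanh(r) for r in rs]          # Fisher z-transform
    weights = [n - 3 for n in ns]             # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)                   # back-transform to r scale

# Three hypothetical interest-achievement correlations:
r_mean = pooled_correlation([0.25, 0.30, 0.35], [100, 200, 150])
```

The z transformation matters because correlations are bounded and their sampling distribution is skewed; averaging raw r values would slightly understate the pooled effect.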
This volume reexamines the long-standing controversy about consistency in personality from a social psychological perspective. Barbara Krahé reconsiders the concept of consistency in terms of the systematic coherence of situation cognition and behaviour across situations. In the first part of the volume she undertakes an examination of recent social psychological models of situation cognition for their ability to clarify the principles underlying the perception of situational similarities. She then advances an individual-centred methodology in which nomothetic hypotheses about cross-situational coherence are tested on the basis of idiographic measurement of situation cognition and behaviour. In the second part of the volume, a series of empirical studies is reported which apply the individual-centred framework to the analysis of cross-situational coherence in the domain of anxiety-provoking situations. These studies are distinctive in that they extend over several months and use free-response data; they are based on idiographic sampling; and they employ explicit theoretical models to capture the central features of situation perception. The results demonstrate the benefits of integrating idiographic and nomothetic research strategies and exploiting the advantages of both perspectives.
Content: Introduction: Do the Arts Really Matter? - Aesthetic Cognition and Human Development - The Significance of Arts in Everyday Life: Evidence from Case Studies - Arts and Quality of Experience: A Systematic Analysis - The Conditions of Optimal Experience - The Representation of Experience in Personality - Consequences for Teaching the Arts
Recent research related to the concept of interest is reviewed. It is argued that current constructs of motivation fail to include crucial aspects of the meaning of interest emphasized by classical American and German educational theorists. In contrast with many contemporary concepts (e.g., intrinsic learning orientation), interest is defined as a content-specific motivational characteristic composed of intrinsic feeling-related and value-related valences. Results from a number of studies are presented that indicate the importance of interest for the depth of text comprehension, the use of learning strategies, and the quality of the emotional experience while learning. The implications of these results and possible directions for future research are discussed.
The influence of topic interest, prior knowledge, and cognitive capabilities on text comprehension
(1990)
The present study investigated the influence of topic interest on the comprehension of texts. The primary goals of the study were as follows: (1) to formulate a new definition of the concept "topic interest", (2) to control for cognitive capabilities (intelligence, short-term memory) and prior knowledge, and (3) to assess different levels of comprehension. A total of 53 male students, majoring in computer science, took part in the study. Subjects were presented with a text on "Psychology of Emotion". Prior to reading the text, they were asked to indicate their level of interest in the topic. After reading the text, subjects were given a test of comprehension involving open-ended questions. The questions were designed to represent different levels of comprehension. The results show that the effect of topic interest on text comprehension is especially pronounced when a deeper level of understanding is required. Surprisingly, prior knowledge had no effect on the level of comprehension. Verbal intelligence, on the other hand, showed a clear effect on comprehension, especially in answering questions of simple knowledge. The effects of interest and verbal intelligence could be shown to be independent of one another.
Motivational conditions have thus far been largely neglected by contemporary theoretical approaches in knowledge psychology. The present article attempts to demonstrate the necessity of a greater integration of the two fields. Suggestions are made regarding the choice and conceptualization of relevant motivational factors. Two groups of factors can be distinguished: (1) motivational factors of personality, and (2) motivational effects of action. Available theoretical approaches (e.g., the "levels of processing" approach) and examples are used to clarify the potential effects of these factors on the acquisition and representation of knowledge. Finally, empirical studies are reviewed that permit statements about the posited relationships between motivational factors and knowledge-related processes. This review reveals substantial research deficits on this topic.
Content: 1 The Development of the Estonian Gender Policy Machinery 1.1 Initiation of Institutionalisation as a Result of International Commitments 1.2 Institutional Measures Facilitating EU Membership 1.3 Assessment of the Gender Equality Machinery 2 Conditions for Gender Mainstreaming in Estonia 2.1 Social Conditions 2.2 Administrative Conditions 3 Gender Mainstreaming Activities in the Estonian Public Administration 3.1 The Legal Foundations 3.2 Inter-ministerial Cooperation 3.3 Gender Mainstreaming Training 3.4 Knowledge Basis 3.5 Lack of Standards for Data and Statistics 3.6 Non-administrative Liaisons 4 Conclusion
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data is governed by the exact same distribution which the model will later be exposed to. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population. For example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications training data from the test distribution are scarce because they are difficult to obtain or very expensive. Data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions-including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating training and test distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied where only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models almost any supervised learning algorithm can be adopted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications the performance of the new models is compared to state-of-the-art reference methods.
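The rescaling-weight idea in this abstract can be illustrated with a common discriminative trick: train a classifier to distinguish training from test inputs and use its odds as density-ratio weights. The following numpy sketch uses synthetic 1-D data and a plain logistic discriminator as illustrative assumptions; it is not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D covariate shift: training inputs are biased toward small x,
# test inputs toward large x; the labelling function itself is unchanged.
x_train = rng.normal(-1.0, 1.0, (200, 1))
x_test = rng.normal(1.0, 1.0, (200, 1))
y_train = 2.0 * x_train[:, 0] + rng.normal(0.0, 0.1, 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic discriminator: label 0 = training sample, 1 = test sample.
X = np.vstack([x_train, x_test])
Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
t = np.concatenate([np.zeros(200), np.ones(200)])
w = np.zeros(2)
for _ in range(2000):                            # plain gradient descent
    w -= 0.1 * Xb.T @ (sigmoid(Xb @ w) - t) / len(t)

# Rescaling weights: p_test(x)/p_train(x) is proportional to the
# discriminator odds P(test|x) / P(train|x).
Ab = np.hstack([x_train, np.ones((200, 1))])
p = sigmoid(Ab @ w)
weights = p / (1.0 - p)

# Weighted least squares on the reweighted training sample:
# the training points resembling test data count more.
W = np.diag(weights)
theta = np.linalg.solve(Ab.T @ W @ Ab, Ab.T @ W @ y_train)
```

This corresponds to the two-step approximation mentioned in the abstract: once the weights are computed, any supervised learner that accepts per-sample weights can be applied to the biased training data.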
This document presents the results of the seminar "Conceptual Architecture Patterns" held at the Hasso-Plattner-Institute in the winter term 2002. It is a compilation of the students' elaborations on conceptual architecture patterns found in the literature. One important focus lay on the runtime structures and the presentation of the patterns. 1. Introduction 1.1. The Seminar 1.2. Literature 2 Pipes and Filters (André Langhorst and Martin Steinle) 3 Broker (Konrad Hübner and Einar Lück) 4 Microkernel (Eiko Büttner and Stefan Richter) 5 Component Configurator (Stefan Röck and Alexander Gierak) 6 Interceptor (Marc Förster and Peter Aschenbrenner) 7 Reactor (Nikolai Cieslak and Dennis Eder) 8 Half-Sync/Half-Async (Robert Mitschke and Harald Schubert) 9 Leader/Followers (Dennis Klemann and Steffen Schmidt)
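As a flavour of the first pattern in the compilation, here is a minimal Pipes-and-Filters sketch; the generator-based filters and sample data are illustrative assumptions, not material from the seminar.

```python
# Each filter is a generator that consumes and yields a stream of items;
# the pipe is plain function composition.
def source(lines):
    yield from lines

def strip_filter(stream):
    for line in stream:
        yield line.strip()

def upper_filter(stream):
    for line in stream:
        yield line.upper()

result = list(upper_filter(strip_filter(source(["  hello ", " world  "]))))
print(result)  # ['HELLO', 'WORLD']
```

Because each filter only touches the stream lazily, filters can be reordered or replaced independently, which is the runtime property the pattern is valued for.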
This document is an analysis of the 'Java Language Conversion Assistant'. It also covers a language analysis of the Java programming language as well as a survey of related work concerning Java and C# interoperability on the one hand and language conversion in general on the other. Part I deals with the language analysis. Part II covers the JLCA tool and the tests used to analyse it. Additionally, it gives an overview of the above-mentioned related work. Part III presents a complete project that has been translated using the JLCA.
The Apache Modeling Project
(2004)
This document presents an introduction to the Apache HTTP Server, covering both an overview and implementation details. It presents results of the Apache Modelling Project carried out by research assistants and students of the Hasso-Plattner-Institute in 2001, 2002 and 2003. The Apache HTTP Server was used to introduce students to the application of the modeling technique FMC, a method that supports transporting knowledge about complex systems in the domain of information processing (software as well as hardware). After an introduction to HTTP servers in general, we will focus on protocols and web technology. Then we will discuss Apache, its operational environment and its extension capabilities - the module API. Finally, we will guide the reader through parts of the Apache source code and explain the most important pieces.
1 Introduction 1.1 Project formulation 1.2 Our contribution 2 Pedagogical Aspect 2.1 Modern teaching 2.2 Our Contribution 2.2.1 Autonomous and exploratory learning 2.2.2 Human machine interaction 2.2.3 Short multimedia clips 3 Ontology Aspect 3.1 Ontology driven expert systems 3.2 Our contribution 3.2.1 Ontology language 3.2.2 Concept Taxonomy 3.2.3 Knowledge base annotation 3.2.4 Description Logics 4 Natural language approach 4.1 Natural language processing in computer science 4.2 Our contribution 4.2.1 Explored strategies 4.2.2 Word equivalence 4.2.3 Semantic interpretation 4.2.4 Various problems 5 Information Retrieval Aspect 5.1 Modern information retrieval 5.2 Our contribution 5.2.1 Semantic query generation 5.2.2 Semantic relatedness 6 Implementation 6.1 Prototypes 6.2 Semantic layer architecture 6.3 Development 7 Experiments 7.1 Description of the experiments 7.2 General characteristics of the three sessions, instructions and procedure 7.3 First Session 7.4 Second Session 7.5 Third Session 7.6 Discussion and conclusion 8 Conclusion and future work 8.1 Conclusion 8.2 Open questions A Description Logics B Probabilistic context-free grammars
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases the accessibility of laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionalities, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions to prevent misuse of laboratory resources by security isolation at the system and network levels.
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional laboratory teaching but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
1. Design and Composition of 3D Geoinformation Services Benjamin Hagedorn 2. Operating System Abstractions for Service-Based Systems Michael Schöbel 3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications Matthias Uflacker 4. A Framework for Adaptive Transport in Service- Oriented Systems based on Performance Prediction Flavius Copaciu 5. Asynchronicity and Loose Coupling in Service-Oriented Architectures Nikola Milanovic
Dynamics in urban environments encompasses complex processes and phenomena such as those related to movement (e.g., traffic, people) and development (e.g., construction, settlement). This paper presents novel methods for creating human-centric illustrative maps for visualizing movement dynamics in virtual 3D environments. The methods allow a viewer to gain rapid insight into traffic density and flow. The illustrative maps represent vehicle behavior as light threads. Light threads are a familiar visual metaphor caused by moving light sources producing streaks in a long-exposure photograph. A vehicle's front and rear lights produce light threads that convey its direction of motion as well as its velocity and acceleration. The accumulation of light threads allows a viewer to quickly perceive traffic flow and density. The light-thread technique is a key element to effective visualization systems for analytic reasoning, exploration, and monitoring of geospatial processes.
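The additive accumulation behind the light-thread metaphor can be mimicked in a few lines: summing simulated trajectories into a grid yields an intensity map, in the spirit of a long-exposure photograph. The random-walk "vehicles" below are purely illustrative assumptions, not the paper's rendering method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Accumulate simulated vehicle trajectories into a 2-D exposure grid:
# every visited cell gains brightness, like a streak on long-exposure film.
grid = np.zeros((50, 50))
for _ in range(100):                     # 100 hypothetical vehicles
    x, y = rng.integers(0, 50, size=2)
    for _ in range(30):                  # 30-step random-walk trajectory
        grid[y, x] += 1.0                # additive "exposure"
        x = min(49, max(0, x + rng.integers(-1, 2)))
        y = min(49, max(0, y + rng.integers(-1, 2)))

intensity = grid / grid.max()            # normalised light-thread map
```

Bright cells in `intensity` mark where many trajectories overlap, which is exactly the density cue the accumulated light threads are meant to provide.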
Submarine landslides can generate local tsunamis posing a hazard to human lives and coastal facilities. Two major related problems are: (i) quantitative estimation of tsunami hazard and (ii) early detection of the most dangerous landslides. This thesis focuses on both issues by providing numerical modeling of landslide-induced tsunamis and by suggesting and justifying a new method for fast detection of tsunamigenic landslides by means of tiltmeters. Due to the proximity of the Sunda subduction zone, Indonesian coasts are prone not only to earthquake tsunamis but also to landslide tsunamis. The aim of the GITEWS project (German-Indonesian Tsunami Early Warning System) is not only to provide fast and reliable tsunami warnings, but also to deepen the knowledge about tsunami hazards. New bathymetric data at the Sunda Arc provide the opportunity to evaluate the hazard potential of landslide tsunamis for the adjacent Indonesian islands. I present nine large mass movements in proximity to Sumatra, Java, Sumbawa and Sumba, of which the largest event displaced 20 km³ of sediments. Using numerical modeling, I compute the generated tsunami of each event, its propagation and its runup at the coast. Moreover, I investigate the age of the largest slope failures by relating them to the Great 1977 Sumba earthquake. Continental slopes off northwest Europe are well known for their history of huge underwater landslides. The current geological situation west of Spitsbergen is comparable to the continental margin off Norway after the last glaciation, when the large tsunamigenic Storegga slide took place. The influence of Arctic warming on the stability of the Svalbard glacial margin is discussed. Based on new geophysical data, I present four possible landslide scenarios and compute the generated tsunamis. Waves of 6 m height would be capable of reaching northwest Europe, threatening coastal areas.
I present a novel technique to detect large submarine landslides using an array of tiltmeters, as a possible tool in future tsunami early warning systems. The dislocation of a large amount of sediment during a landslide produces a permanent elastic response of the earth. I analyze this response with a mathematical model and calculate the theoretical tilt signal. Applications to the hypothetical Spitsbergen event and the historical Storegga slide show tilt signals exceeding 1000 nrad. The amplitude of landslide tsunamis is controlled by the product of slide volume and maximal velocity (slide tsunamigenic potential). I introduce an inversion routine that provides slide location and tsunamigenic potential, based on tiltmeter measurements. The accuracy of the inversion and of the estimated tsunami height near the coast depends on the noise level of tiltmeter measurements, the distance of tiltmeters from the slide, and the slide tsunamigenic potential. Finally, I estimate the applicability scope of this method by employing it to known landslide events worldwide.
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools for doing so. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and have been analyzed with special weight on distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (cavity) is reviewed, and previous work on the contribution to the Casimir energy due to the surface plasmons, present in all metallic cavities, has been generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. These effects were equally encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents. This leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction.
The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to access experimentally the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect. Here, differently from corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
Erster Deutscher IPv6 Gipfel (First German IPv6 Summit)
(2008)
Contents: COMMUNIQUÉ, WELCOME ADDRESS, PROGRAMME, BACKGROUND AND FACTS, SPEAKERS: BIOGRAPHY & TALK SUMMARY 1.) THE FIRST GERMAN IPV6 SUMMIT AT THE HASSO PLATTNER INSTITUTE IN POTSDAM - PROF. DR. CHRISTOPH MEINEL - VIVIANE REDING 2.) IPV6, ITS TIME HAS COME - VINTON CERF 3.) THE SIGNIFICANCE OF IPV6 FOR PUBLIC ADMINISTRATION IN GERMANY - MARTIN SCHALLBRUCH 4.) TOWARDS THE FUTURE OF THE INTERNET - PROF. DR. LUTZ HEUSER 5.) IPV6 STRATEGY & DEPLOYMENT STATUS IN JAPAN - HIROSHI MIYATA 6.) IPV6 STRATEGY & DEPLOYMENT STATUS IN CHINA - PROF. WU HEQUAN 7.) IPV6 STRATEGY AND DEPLOYMENT STATUS IN KOREA - DR. EUNSOOK KIM 8.) IPV6 DEPLOYMENT EXPERIENCES IN GREEK SCHOOL NETWORK - ATHANASSIOS LIAKOPOULOS 9.) IPV6 NETWORK MOBILITY AND ITS USAGE - JEAN-MARIE BONNIN 10.) IPV6 TOOLS FOR OPERATORS & ISPS: IPV6 DEPLOYMENT AND STRATEGIES OF DEUTSCHE TELEKOM - HENNING GROTE 11.) VIEW FROM THE IPV6 DEPLOYMENT FRONTLINE - YVES POPPE 12.) DEPLOYING IPV6 IN MOBILE ENVIRONMENTS - WOLFGANG FRITSCHE 13.) PRODUCTION READY IPV6 FROM CUSTOMER LAN TO THE INTERNET - LUTZ DONNERHACKE 14.) IPV6 - THE BASIS FOR NETWORK-CENTRIC OPERATIONS (NETOPFÜ) IN THE GERMAN FEDERAL ARMED FORCES: CHALLENGES - USE-CASE CONSIDERATIONS - ACTIVITIES - CARSTEN HATZIG 15.) WINDOWS VISTA & IPV6 - BERND OURGHANLIAN 16.) IPV6 & HOME NETWORKING: TECHNICAL AND BUSINESS CHALLENGES - DR. TAYEB BEN MERIEM 17.) DNS AND DHCP FOR DUAL STACK NETWORKS - LAWRENCE HUGHES 18.) CAR INDUSTRY: GERMAN EXPERIENCE WITH IPV6 - AMARDEO SARMA 19.) IPV6 & AUTONOMIC NETWORKING - RANGANAI CHAPARADZA 20.) P2P & GRID USING IPV6 AND MOBILE IPV6 - DR. LATIF LADID
Contents 1. Styling for Service-Based 3D Geovisualization Benjamin Hagedorn 2. The Windows Monitoring Kernel Michael Schöbel 3. A Resource-Oriented Information Network Platform for Global Design Processes Matthias Uflacker 4. Federation in SOA – Secure Service Invocation across Trust Domains Michael Menzel 5. KStruct: A Language for Kernel Runtime Inspection Alexander Schmidt 6. Deconstructing Resources Hagen Overdick 7. FMC-QE – Case Studies Stephan Kluth 8. A Matter of Trust Rehab Al-Nemr 9. From Semi-automated Service Composition to Semantic Conformance Harald Meyer
Motivations and research objectives: During the passage of rain water through a forest canopy, two main processes take place. First, water is redistributed; second, its chemical properties change substantially. The rain water redistribution and the brief contact with plant surfaces result in a large variability of both throughfall and its chemical composition. Since throughfall and its chemistry influence a range of physical, chemical and biological processes at or below the forest floor, understanding throughfall variability and predicting throughfall patterns potentially improves the understanding of near-surface processes in forest ecosystems. This thesis comprises three main research objectives. The first is to determine the variability of throughfall and its chemistry, and to investigate some of the controlling factors. The second is to explore throughfall spatial patterns. The third is to assess the temporal persistence of throughfall and its chemical composition. Research sites and methods: The thesis is based on investigations in a tropical montane rain forest in Ecuador and in lowland rain forest ecosystems in Brazil and Panama. The first two studies investigate both throughfall and throughfall chemistry following a deterministic approach. The third study investigates throughfall patterns with geostatistical methods and hence relies on a stochastic approach. Results and conclusions: Throughfall is highly variable. The variability of throughfall in tropical forests seems to exceed that of many temperate forests. These differences, however, do not solely reflect ecosystem-inherent characteristics; more likely they also mirror management practices. Apart from biotic factors that influence throughfall variability, rainfall magnitude is an important control. Throughfall solute concentrations and solute deposition are even more variable than throughfall.
In contrast to throughfall volumes, the variability of solute deposition shows no clear differences between tropical and temperate forests; hence, biodiversity is not a strong predictor of solute deposition heterogeneity. Many other factors control solute deposition patterns, for instance, solute concentration in rainfall and the antecedent dry period. The temporal variability of the latter factors partly accounts for the low temporal persistence of solute deposition. In contrast, measurements of throughfall volume are quite stable over time. Results from the Panamanian research site indicate that wet and dry areas outlast consecutive wet seasons. At this research site, throughfall exhibited only weak or pure nugget autocorrelation structures over the studied lag distances. A close look at the geostatistical tools at hand provided evidence that throughfall datasets, in particular those of large events, require robust variogram estimation if one wants to avoid outlier removal. This finding is important because all geostatistical throughfall studies published so far analyzed their data using the classical, non-robust variogram estimator.
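The estimator contrast at the end of this abstract can be made concrete: for one lag class, Matheron's classical estimator squares pair differences and is inflated by a single outlier, whereas the Cressie-Hawkins robust estimator barely moves. The synthetic pair differences below are an illustrative assumption, not the thesis data.

```python
import numpy as np

def matheron(diffs):
    # classical estimator for one lag class: half the mean squared difference
    return np.mean(diffs ** 2) / 2.0

def cressie_hawkins(diffs):
    # robust estimator: fourth power of the mean root-absolute difference,
    # with the standard bias-correction denominator
    n = len(diffs)
    return np.mean(np.abs(diffs) ** 0.5) ** 4 / (2.0 * (0.457 + 0.494 / n))

rng = np.random.default_rng(0)
diffs = rng.normal(0.0, 1.0, 500)        # pair differences in one lag class
clean = (matheron(diffs), cressie_hawkins(diffs))

diffs_out = np.append(diffs, 50.0)       # one extreme throughfall outlier
dirty = (matheron(diffs_out), cressie_hawkins(diffs_out))
# The classical estimate is inflated several-fold; the robust one hardly changes.
```

This is why, as the abstract argues, robust estimation lets one keep extreme throughfall values in the dataset instead of removing them as outliers.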
Duplicate detection is the task of identifying multiple representations of the same real-world object in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality/effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computation can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
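The candidate-reduction step that makes pairwise comparison tractable can be sketched with simple blocking: only records that share a block key are compared. The toy records and key function below are illustrative assumptions, not the paper's actual DDG algorithms.

```python
from collections import defaultdict
from itertools import combinations

# Toy records: (id, name, city)
records = [
    (1, "Anne Smith", "Berlin"),
    (2, "Ann Smith", "Berlin"),
    (3, "John Doe", "Potsdam"),
    (4, "J. Doe", "Potsdam"),
    (5, "Mary Major", "Hamburg"),
]

def blocking_key(rec):
    # crude key: first letter of the surname plus the city
    _, name, city = rec
    return name.split()[-1][0].lower() + ":" + city.lower()

blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

# Only records sharing a block key are compared pairwise:
# 2 candidate pairs here instead of all 10.
candidate_pairs = [pair for block in blocks.values()
                   for pair in combinations(block, 2)]
```

Shrinking the candidate set from quadratic to near-linear is what allows the expensive graph-based similarity computation to be reserved for plausible matches.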
Contents: Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske: The Triconnected Abstraction of Process Models 1 Introduction 2 Business Process Model Abstraction 3 Preliminaries 4 Triconnected Decomposition 4.1 Basic Approach for Process Component Discovery 4.2 SPQR-Tree Decomposition 4.3 SPQR-Tree Fragments in the Context of Process Models 5 Triconnected Abstraction 5.1 Abstraction Rules 5.2 Abstraction Algorithm 6 Related Work and Conclusions
The complex system of strike-slip and thrust faults in the Alborz Mountains, Northern Iran, is not yet well understood. So far, mainly structural and geomorphic data are available. As a more extensive basis for seismotectonic studies and seismic hazard analysis, we plan a comprehensive seismic moment tensor study that also includes smaller magnitudes (M < 4.5), for which we are developing a new algorithm. Here, we present first preliminary results.
It has always been enigmatic which processes control the accretion of the North American terranes towards the Pacific plate and the landward migration of the San Andreas plate boundary. One theory suggests that the Pacific plate first cools and captures the uprising mantle in the slab window, and then causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires 3D thermomechanical numerical modeling. However, the software tool required for such modeling is not available at present in the geodynamic modeling community. The major aim of the presented work comprises two interconnected tasks. The first task is the development and testing of a research Finite Element code with sufficiently advanced facilities to perform three-dimensional, geological-time-scale simulations of lithospheric deformation. The second task consists in the application of the developed tool to the Neogene deformations of the crust and the mantle along the San Andreas Fault System in central and northern California. Geological-time-scale modeling of lithospheric deformation poses numerous conceptual and implementation challenges for the software tools. Among them are the necessity to handle the brittle-ductile transition within a single computational domain, to adequately represent the rock rheology in a broad range of temperatures and stresses, and to resolve the extreme deformations of the free surface and internal boundaries. In the framework of this thesis the new Finite Element code (SLIM3D) has been successfully developed and tested.
This code includes a coupled thermo-mechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. The code incorporates an Arbitrary Lagrangian Eulerian formulation with free surface and Winkler boundary conditions. The modeling technique developed is used to study the aspects influencing the Neogene lithospheric deformation in central and northern California. The model setup is focused on the interaction between three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join together near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenosphere upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, a simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones and the transpression related to fault stepping, taken together, render cooling in the slab window alone incapable of explaining the eastward migration of the plate boundary. From the viewpoint of thermomechanical modeling, the results confirm the geological concept which assumes that a series of microplate capture events has been the primary reason for the inland migration of the San Andreas plate boundary over the last 20 Ma. The remnants of the Farallon slab, stalled in the fossil subduction zone, create much stronger heterogeneity in the mantle than the cooling of the uprising asthenosphere, providing a more efficient and direct way of transferring the North American terranes to the Pacific plate. The models demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization in the brittle crust.
The friction coefficient inferred from the modeling is about 0.075, far less than the typical values of 0.6-0.8 obtained from a variety of borehole stress measurements and laboratory data. Therefore, the model results presented in this thesis provide an additional independent constraint that supports the "weak-fault" hypothesis in the long-standing debate over the strength of major faults in the SAFS.
This thesis presents investigations on sediments from two African lakes which have been recording changes in their surrounding environmental and climate conditions for more than 200,000 years. The focus of this work is the time of the last Glacial and the Holocene (the last ~100,000 years before present [in the following, 100 kyr BP]). One important precondition for this kind of research is a good understanding of the present ecosystems in and around the lakes and of sediment formation under modern climate conditions. Both studies therefore include investigations of the modern environment (including organisms, soils, rocks, lake water and sediments). A 90 m long sediment sequence from Lake Tswaing (north-eastern South Africa) was investigated using geochemical analyses. These investigations document alternating periods of high detrital input and low (especially autochthonous) organic matter content, and periods of low detrital input, carbonatic or evaporitic sedimentation and high autochthonous organic matter content. These alternations are interpreted as changes between relatively humid and arid conditions, respectively. Before c. 75 kyr BP, they seem to follow changes in local insolation, whereas afterwards they appear to be acyclic and are probably caused by changes in ocean circulation and/or in the mean position of the Inter-Tropical Convergence Zone (ITCZ). Today, these factors have the main influence on precipitation in this area, where rainfall occurs almost exclusively during austral summer. All modern organisms were analysed for their biomarker composition and their bulk organic and compound-specific stable carbon isotope composition. The same investigations on sediments from the modern lake floor document the mixed input of the investigated individual organisms and reveal additional influences by methanotrophic bacteria.
A comparison of modern sediment characteristics with those of sediments covering the time 14 to 2 kyr BP shows changes in the productivity of the lake and the surrounding vegetation which are best explained by changes in hydrology. More humid conditions are indicated for times older than 10 kyr BP and younger than 7.5 kyr BP, whereas arid conditions prevailed in between. These observations agree with the results from sediment composition and indications from other climate archives nearby. The second lake study deals with Lake Challa, a small, deep crater lake at the foot of Mount Kilimanjaro. In this lake, mm-scale laminated sediments form, which were analysed by micro-XRF scanning for changes in element composition. By comparing these results with investigations on thin sections, results from ongoing sediment trap studies, meteorological data, and investigations on the surrounding rocks and soils, I develop a model for seasonal variability in the limnology and sedimentation of Lake Challa. The lake appears to be stratified during the warm rain seasons (October - December and March - May), during which detrital material is delivered to the lake and carbonates precipitate. A dark lamina forms on the lake floor, with high contents of Fe and Ti and high Ca/Al and low Mn/Fe ratios. Diatoms bloom during the cool and windy season (June - September), when mixing down to c. 60 m depth provides easily bio-available nutrients. Contemporaneously, Fe and Mn oxides precipitate, which causes high Mn/Fe ratios in the light diatom-rich laminae of the sediments. Trends in the Mn/Fe ratio of the sediments are interpreted to reflect changes in the intensity or duration of seasonal mixing in Lake Challa. This interpretation is supported by parallel changes in the organic matter and biogenic silica content observed in the 22 m long profile recovered from Lake Challa, which covers the last 25 kyr BP.
It documents a transition around 16 kyr BP from relatively well-mixed conditions with high detrital input during glacial times to more strongly stratified conditions, which are probably related to rising lake levels in Lake Challa and generally more humid conditions in East Africa. Intensified mixing is recorded for the Younger Dryas and for the period between 11.4 and 10.7 kyr BP. For these periods, a reduced intensity of the SW monsoon and an intensified NE monsoon are reported from archives of the Indian-Asian monsoon region, arguing for the latter as a probable source of wind mixing in Lake Challa. This connection is probably also responsible for contemporaneous events in the Mn/Fe ratios of the Lake Challa sediments and in other records of Northern Hemisphere monsoon intensity during the Holocene, and it underlines the close interconnection of global low-latitude atmospheric circulation.
The bibliographic project 'Renaissance Linguistics Archive' (R.L.A.) aimed at establishing a comprehensive database of secondary sources covering the linguistic ideas developed by Renaissance scholars in Europe. The database project, founded in 1986 by Mirko Tavoni (Pisa) and transferred in 1994 to Gerda Haßler (Potsdam), has so far resulted in three printed instalments of 1,000 records each. The aim of this website is to publish the results of the collective efforts undertaken thus far (R.L.A. 1.0, 1986-1999).
Content:
1 The Typology
1.1 Object Placement
2 Treatment of StG in terms of LF Movement – with and without Head Movement
3 An OT-solution in terms of linearisation ('LF-to-PF-Mapping')
3.1 The trigger for additional orders: Focus
3.2 Competitions
3.3 Summary
4 RP
4.1 LF Movement – with and without Head Movement
4.2 The OT-account for RP
4.3 Competitions
5 Summary
Content:
0 Introduction
1 Elements that block verb raising – a discussion
1.1 Haider's observation
1.2 The other constructions
1.3 A possible explanation
1.4 Riemsdijk's grafting approach as a possible alternative?
1.5 Intermediate Summary
2 Parsing problems with speech act adverbials in the pre-field
Content:
1 Introduction
2 A restrictive theory of head movement
2.1 Preliminary Remarks
2.2 Theoretical Problems of Head Movement
2.3 Remnant Phrasal Movement
2.4 Münchhausen Style Head Movement
3 Verb Second Movement
3.1 Introductory Remarks
3.2 Problems of V/2 constructions: Does V really move to Comp?
3.3 The preverbal position
3.4 The Second Position
4 References
Counting Markedness
(2003)
This paper reports the results of a corpus investigation on case conflicts in German argument free relative constructions. We investigate how corpus frequencies reflect the relative markedness of free relative and correlative constructions, the relative markedness of different case conflict configurations, and the relative markedness of different conflict resolution strategies. Section 1 introduces the conception of markedness as used in Optimality Theory. Section 2 introduces the facts about German free relative clauses, and section 3 presents the results of the corpus study. By and large, markedness and frequency go hand in hand. However, configurations at the highest end of the markedness scale rarely show up in corpus data, and for the configuration at the lowest end we found an unexpected outcome: the more marked structure is preferred.
The present paper addresses a current view in the psycholinguistic literature that case exhibits processing properties distinct from those of other morphological features such as number (cf. Fodor & Inoue, 2000; Meng & Bader, 2000a/b). In a speeded-acceptability judgement experiment, we show that the low performance previously found for case in contrast to number violations is limited to nominative case, whereas violations involving accusative and dative are judged more accurately. The data thus do not support the proposal that case per se is associated with special properties (in contrast to other features such as number) in reanalysis processes. Rather, there are significant judgement differences between the object cases accusative and dative on the one hand and the subject nominative case on the other. This may be explained by the fact that nominative has a specific status in German (and many other languages) as a default case.