This thesis considers the construction of point processes via conditional intensities, motivated by the partial integration of the Campbell measure of a point process. Under certain assumptions on the intensity, the existence of such a point process is shown. A fundamental example turns out to be the Pólya sum process, whose conditional intensity is a generalisation of the Pólya urn dynamics; a Cox process representation of this point process is shown. A further process considered is a Poisson process of Gaussian loops, which represents a non-interacting particle system derived from the discussion of indistinguishable particles. Both processes are used to define particle systems locally, for which thermodynamic limits are determined.
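For orientation, the partial integration referred to above is the standard Georgii-Nguyen-Zessin relation between the Campbell measure and a conditional (Papangelou) intensity. A sketch in textbook notation, not quoted from the thesis, with the Pólya sum kernel as it is usually stated in the literature:

```latex
% P has Papangelou (conditional) intensity \pi if, for all measurable h >= 0,
\[
  \mathcal{C}_P(h)
  = \mathbb{E}\!\left[\int h(x,\mu)\,\mu(\mathrm{d}x)\right]
  = \mathbb{E}\!\left[\int h(x,\mu+\delta_x)\,\pi(\mu,\mathrm{d}x)\right].
\]
% For the P\'olya sum process with z \in (0,1) and reference measure \rho,
% the kernel generalises the P\'olya urn: new points preferentially
% accumulate where points already sit,
\[
  \pi(\mu,\mathrm{d}x) = z\,\bigl(\rho(\mathrm{d}x) + \mu(\mathrm{d}x)\bigr).
\]
```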
This study presents noble gas compositions (He, Ne, Ar, Kr, and Xe) of lavas from several Hawaiian volcanoes. Lavas from the Hawaii Scientific Drilling Project (HSDP) core, surface samples from Mauna Kea, Mauna Loa, Kilauea, Hualalai, Kohala and Haleakala, as well as lavas from a deep well on the summit of Kilauea, were investigated. Noble gases, especially helium, are used as tracers for mantle reservoirs, based on the assumption that high ³He/⁴He ratios (>8 R_A) represent material from the deep and supposedly less degassed mantle, whereas lower ratios (~8 R_A) are thought to represent the upper mantle. Shield-stage Mauna Kea, Kohala and Kilauea lavas yielded MORB-like to moderately high ³He/⁴He ratios, while ³He/⁴He ratios in post-shield-stage Haleakala lavas are MORB-like. Few samples show ²⁰Ne/²²Ne and ²¹Ne/²²Ne ratios different from the atmospheric values; however, Mauna Kea and Kilauea lavas with an excess in mantle Ne agree well with the Loihi-Kilauea line in a neon three-isotope plot, whereas one Kohala sample plots on the MORB correlation line. The values in the ⁴He/⁴⁰Ar* (⁴⁰Ar* denotes radiogenic Ar) versus ⁴He diagram imply open-system fractionation of He from Ar, with a deficiency in ⁴He. Calculated ⁴He/⁴⁰Ar*, ³He/²²Ne_S (²²Ne_S denotes solar Ne) and ⁴He/²¹Ne ratios for the sample suite are lower than the respective production and primordial ratios, supporting the observation of a fractionation of He from the heavier noble gases, with a depletion of He with respect to Ne and Ar. The depletion of He is interpreted to be partly due to solubility-controlled gas loss during magma ascent. However, the preferential He loss suggests that He is more incompatible than Ne and Ar during magmatic processes. In a binary mixing model, the isotopic He and Ne patterns are best explained by a mixture of a MORB-like end-member with a plume-like or primordial end-member with a fractionation in ³He/²²Ne, represented by a curve parameter r of 15 (r = (³He/²²Ne)_MORB / (³He/²²Ne)_PLUME or PRIMORDIAL). Whether the high ³He/⁴He ratios in Hawaiian lavas are indicative of a primitive component within the Hawaiian plume or are rather a product of crystal-melt partitioning behavior during partial melting remains to be resolved.
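The curvature of such a two-end-member mixing trajectory is fixed by r. A minimal sketch of the calculation; the end-member ratios used here are illustrative placeholders, not values from the study:

```python
import numpy as np

def he_ne_mixing(r, n=101,
                 he_morb=8.0, he_plume=30.0,     # 3He/4He in units of R_A (placeholders)
                 ne_morb=0.06, ne_plume=0.035):  # 21Ne/22Ne (placeholders)
    """Return (21Ne/22Ne, 3He/4He) along a binary mixing curve."""
    a = np.linspace(1e-6, 1.0 - 1e-6, n)  # fraction of 3He coming from MORB
    # 4He/3He mixes linearly in the 3He fraction a, so invert for 3He/4He:
    he = 1.0 / (a / he_morb + (1.0 - a) / he_plume)
    # r = (3He/22Ne)_MORB / (3He/22Ne)_plume fixes the 22Ne fraction from MORB:
    b = a / (a + r * (1.0 - a))
    ne = b * ne_morb + (1.0 - b) * ne_plume
    return ne, he

# r = 1 gives a straight mixing line; r = 15, as estimated above, bends the
# curve strongly between the MORB and plume end-members.
ne, he = he_ne_mixing(r=15.0)
```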
The recent discovery of an intricate and nontrivial interaction topology among the elements of a wide range of natural systems has altered the way we understand complexity. For example, the axonal fibres transmitting electrical information between cortical regions form a network which is neither regular nor completely random. Their structure seems to follow functional principles that balance segregation (functional specialisation) and integration. Cortical regions are clustered into modules specialised in processing different kinds of information, e.g. visual or auditory. However, in order to generate a global perception of the real world, the brain needs to integrate the distinct types of information. Where this integration happens, nobody knows. We have performed an extensive and detailed graph-theoretical analysis of the cortico-cortical organisation in the brain of cats, trying to relate the individual and collective topological properties of the cortical areas to their function. We conclude that the cortex possesses a very rich communication structure, composed of a mixture of parallel and serial processing paths capable of accommodating dynamical processes with a wide variety of time scales. The communication paths between the sensory systems are not random, but largely mediated by a small set of areas. Far from acting as mere transmitters of information, these central areas are densely connected to each other, strongly indicating their functional role as integrators of multisensory information. In the quest to uncover the structure-function relationship of cortical networks, the peculiarities of this network have led us to continuously reconsider the established graph measures. For example, a normalised formalism to identify the “functional roles” of vertices in networks with community structure is proposed. The tools developed for this purpose open the door to novel community detection techniques which may also characterise the overlap between modules. The concept of integration has been revisited and adapted to the necessities of the network under study. Additionally, analytical and numerical methods have been introduced to facilitate understanding of the complicated statistical interrelations between the distinct network measures. These methods are helpful for constructing new significance tests which may help to discriminate the relevant properties of real networks from side-effects of the evolutionary-growth processes.
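The classical, unnormalised version of such vertex “functional roles” is the participation coefficient of Guimerà & Amaral (2005). A minimal sketch of that standard measure for reference; the thesis's normalised variant is not reproduced here, and the toy graph and partition are placeholders:

```python
import networkx as nx
from collections import defaultdict

def participation_coefficient(G, partition):
    """partition maps node -> module label; P = 1 - sum((k_m / k)^2)."""
    P = {}
    for v in G:
        k = G.degree(v)
        if k == 0:
            P[v] = 0.0
            continue
        per_module = defaultdict(int)
        for u in G.neighbors(v):
            per_module[partition[u]] += 1  # links of v into each module
        P[v] = 1.0 - sum((km / k) ** 2 for km in per_module.values())
    return P

# Toy example: the karate-club graph ships with a two-community partition.
G = nx.karate_club_graph()
part = {v: G.nodes[v]["club"] for v in G}
print(sorted(participation_coefficient(G, part).items())[:5])
```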
The seismicity of the Dead Sea fault zone (DSFZ) during the last two millennia is characterized by a number of damaging and partly devastating earthquakes. These events pose a considerable seismic hazard and seismic risk to Syria, Lebanon, Palestine, Jordan, and Israel. The occurrence rates for large earthquakes along the DSFZ show indications of temporal changes in the long-term view. The aim of this thesis is to find out whether, and how, the occurrence rates of large earthquakes (Mw ≥ 6) in different parts of the DSFZ are time-dependent. The results are applied to probabilistic seismic hazard assessments (PSHA) in the DSFZ and neighboring areas. To this end, four time-dependent statistical models (distributions), namely the Weibull, Gamma, Lognormal and Brownian Passage Time (BPT) distributions, are applied besides the exponential distribution (Poisson process) as the classical time-independent model. To determine whether the earthquake occurrence rate follows a unimodal or a multimodal form, a nonparametric bootstrap test of multimodality was performed. A modified method of weighted Maximum Likelihood Estimation (MLE) is applied to estimate the parameters of the models. For the multimodal cases, an Expectation Maximization (EM) method is used in addition to the MLE method. The best model is selected by two methods: the Bayesian Information Criterion (BIC) and a modified Kolmogorov-Smirnov goodness-of-fit test. Finally, the confidence intervals of the estimated parameters of the candidate models are calculated using bootstrap confidence sets. In this thesis, earthquakes with Mw ≥ 6 along the DSFZ, within a zone about 20 km wide between 29.5° and 37° latitude, are considered as the dataset. The completeness of this dataset is established back to 300 A.D. The DSFZ has been divided into three subzones: the southern, the central and the northern subzone. The central and the northern subzones have been investigated, but not the southern subzone, because of the lack of sufficient data. The results for the central part of the DSFZ show that the earthquake occurrence rate does not significantly follow a multimodal form. There is also no considerable difference between the time-dependent and time-independent models. Since the time-independent model is easier to interpret, the earthquake occurrence rate in this subzone has been estimated under the exponential distribution assumption (Poisson process) and is considered time-independent, at a rate of 9.72 × 10⁻³ events/year. The northern part of the DSFZ is a special case, where the last earthquake occurred in 1872 (about 137 years ago). However, the mean recurrence time of Mw ≥ 6 events in this area is about 51 years. Moreover, about 96 percent of the observed earthquake inter-event times (the time between two successive earthquakes) in the dataset for this subzone are smaller than 137 years. It is therefore a zone with an overdue earthquake. The results for this subzone verify that the earthquake occurrence rate is strongly time-dependent, especially shortly after an earthquake occurrence. A bimodal Weibull-Weibull model has been selected as the best fit for this subzone. The earthquake occurrence rate corresponding to the selected model is a smooth function of time and reveals two clusters within the time after an earthquake occurrence. The first cluster begins right after an earthquake occurrence, lasts about 80 years, and is explicitly time-dependent. The occurrence rate in this cluster is considerably lower right after an earthquake occurrence, increases strongly during the following ten years, reaches its maximum of about 0.024 events/year, and then decreases over the next 70 years to its minimum of about 0.0145 events/year. The second cluster begins 80 years after an earthquake occurrence and lasts until the next earthquake occurs. The earthquake occurrence rate in this cluster increases so slowly that it can be considered an almost constant rate of about 0.015 events/year. The results are applied to calculate the time-dependent PSHA in the northern part of the DSFZ and neighbouring areas.
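As a minimal illustration of the model-selection step described above, one can fit an exponential and a Weibull renewal model to inter-event times by maximum likelihood and compare BIC values. The inter-event times below are synthetic placeholders, not the DSFZ catalogue:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dt = rng.weibull(1.5, size=40) * 100.0     # synthetic inter-event times [yr]

def bic(loglik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * loglik

# Exponential (Poisson process): one free parameter, location fixed at 0
loc_e, scale_e = stats.expon.fit(dt, floc=0.0)
ll_exp = np.sum(stats.expon.logpdf(dt, loc_e, scale_e))

# Weibull renewal model: shape and scale, location fixed at 0
c, loc_w, scale_w = stats.weibull_min.fit(dt, floc=0.0)
ll_wei = np.sum(stats.weibull_min.logpdf(dt, c, loc_w, scale_w))

print("BIC exponential:", bic(ll_exp, 1, dt.size))
print("BIC Weibull:    ", bic(ll_wei, 2, dt.size))
# A Weibull shape c != 1 indicates a time-dependent hazard;
# c = 1 recovers the memoryless (exponential) case.
```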
This thesis presents investigations on sediments from two African lakes which have been recording changes in their surrounding environmental and climate conditions for more than 200,000 years. The focus of this work is the time of the last Glacial and the Holocene (the last ~100,000 years before present [in the following: 100 kyr BP]). One important precondition for this kind of research is a good understanding of the present ecosystems in and around the lakes and of sediment formation under modern climate conditions. Both studies therefore include investigations of the modern environment (including organisms, soils, rocks, lake water and sediments). A 90 m long sediment sequence from Lake Tswaing (north-eastern South Africa) was investigated using geochemical analyses. These investigations document alternating periods of high detrital input and low (especially autochthonous) organic matter content and periods of low detrital input, carbonatic or evaporitic sedimentation and high autochthonous organic matter content. These alternations are interpreted as changes between relatively humid and arid conditions, respectively. Before c. 75 kyr BP, they seem to follow changes in local insolation, whereas afterwards they appear to be acyclic and are probably caused by changes in ocean circulation and/or in the mean position of the Inter-Tropical Convergence Zone (ITCZ). Today, these factors have the main influence on precipitation in this area, where rainfall occurs almost exclusively during austral summer. All modern organisms were analysed for their biomarker composition and for their bulk organic and compound-specific stable carbon isotope composition. The same investigations on sediments from the modern lake floor document the mixed input of the investigated individual organisms and reveal additional influences by methanotrophic bacteria. A comparison of modern sediment characteristics with those of sediments covering the time from 14 to 2 kyr BP shows changes in the productivity of the lake and the surrounding vegetation which are best explained by changes in hydrology. More humid conditions are indicated for times older than 10 kyr BP and younger than 7.5 kyr BP, whereas arid conditions prevailed in between. These observations agree with the results from sediment composition and with indications from other climate archives nearby. The second lake study deals with Lake Challa, a small, deep crater lake at the foot of Mount Kilimanjaro. In this lake, mm-scale laminated sediments form, which were analysed by micro-XRF scanning for changes in element composition. By comparing these results with investigations of thin sections, results from ongoing sediment trap studies, meteorological data, and investigations of the surrounding rocks and soils, I develop a model for seasonal variability in the limnology and sedimentation of Lake Challa. The lake appears to be stratified during the warm rain seasons (October – December and March – May), during which detrital material is delivered to the lake and carbonates precipitate. A dark lamina with high contents of Fe and Ti, high Ca/Al ratios and low Mn/Fe ratios forms on the lake floor. Diatoms bloom during the cool and windy season (June – September), when mixing down to c. 60 m depth provides easily bio-available nutrients. Contemporaneously, Fe and Mn oxides precipitate, which causes the high Mn/Fe ratios in the light diatom-rich laminae of the sediments. Trends in the Mn/Fe ratio of the sediments are interpreted to reflect changes in the intensity or duration of seasonal mixing in Lake Challa. This interpretation is supported by parallel changes in the organic matter and biogenic silica content observed in the 22 m long profile recovered from Lake Challa, which covers the last 25 kyr BP. It documents a transition around 16 kyr BP from relatively well-mixed conditions with high detrital input during glacial times to more strongly stratified conditions, probably related to rising lake levels in Challa and generally more humid conditions in East Africa. Intensified mixing is recorded for the time of the Younger Dryas and the period between 11.4 and 10.7 kyr BP. For these periods, a reduced intensity of the SW monsoon and an intensified NE monsoon are reported from archives of the Indian-Asian monsoon region, arguing for the latter as a probable source of wind mixing in Lake Challa. This connection is probably also responsible for contemporaneous events in the Mn/Fe ratios of the Lake Challa sediments and in other records of northern-hemisphere monsoon intensity during the Holocene, and underlines the close coupling of global low-latitude atmospheric circulation.
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, broad lexical familiarity, a large vocabulary, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2’s information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which in turn facilitates comprehension, resulting in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, there may be inappropriate information transfer within the discourse, and misunderstandings. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across different languages, and languages can also differ in the way they express focus. In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use the information-structural benefit in the L2. Knowledge of the information-structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners’ native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the learners’ appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners’ ability to communicate effectively in the L2.
The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions. Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences, and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners’ memory. Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3). Therefore, it was expected that prosodically marked words are recognised better than unmarked words, and that listeners can exploit accent structure for accurate word recognition better in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance between focus conditions was expected in L1 listening (narrow focus > broad focus). Results of Experiment 1 show that words were recognized better in native than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of the experiments with spliced materials suggest that the surrounding prosodic sentence contour, rather than the local prosodic realization of the word itself, made listeners remember a target word. Prosody does indeed seem to direct listeners’ attention to the focus of the sentence (see Cutler, 1976).
Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage for items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed, in an English L2 production study, a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length had previously shown an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003); therefore, word length was included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition. Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first, and what is novel later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention. Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners’ attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times; on the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to reliance on 'safe' subject-prominent SVO structures. The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when the focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference was interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help the German learners in native or in nonnative word recall.
This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964) and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993). Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which triggers interpretations of context. Therefore, lexical focus marking was not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles was expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitualness was expected, resulting in better recall of focused words. Results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1. The presence of focus particles is likely to instantiate a complex discourse model which makes the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 and English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of means that identify focus for a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language; listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues for accent placement being extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech. There is evidence that additional context facilitates the processing of complex syntactic structures, but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede better word recall, particularly in the L2.
Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for the listener. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could introduce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits, and also the need, of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise forms and meanings of the L2. Information structure thus emerges as a valuable resource for advancing proficiency in a second language.
The present thesis aims to introduce a process-based model for species range dynamics that can be fitted to abundance data. For this purpose, the well-studied Proteaceae species of the South African Cape Floristic Region (CFR) offer an excellent data set. These species are subject to wildflower harvesting and to environmental threats like habitat loss and climate change. The general introduction of this thesis briefly presents the available models for species distribution modelling, then discusses the feasibility of process-based modelling, and finally introduces the study system as well as the objectives and layout. In Chapter 1, I present the process-based model for range dynamics and a statistical framework to fit it to abundance distribution data. The model has a spatially explicit demographic submodel (describing dispersal, reproduction, mortality and local extinction) and an observation submodel (describing imperfect detection of individuals). The demographic submodel links species-specific habitat models describing the suitable habitat with process-based demographic models that consider local dynamics and anemochoric seed dispersal between populations. After testing the fitting framework with simulated data, I applied it to eight Proteaceae species with different demographic properties. Moreover, I assess the role of two other demographic mechanisms: positive (Allee effects) and negative density dependence. Results indicate that Allee effects and overcompensatory local dynamics (including chaotic behaviour) seem to be important for several species. Most parameter estimates agreed quantitatively with independent data. Hence, the presented approach proved suitable for investigating non-equilibrium scenarios involving wildflower harvesting (Chapter 2) and environmental change (Chapter 3). Chapter 2 addresses the impacts of wildflower harvesting. The chapter includes a sensitivity analysis over multiple spatial scales and demographic properties (dispersal ability, strength of Allee effects, maximum reproductive rate, adult mortality, local extinction probability and carrying capacity). Subsequently, harvesting effects are investigated for the real case-study species. The plant response to harvesting showed abrupt threshold behaviour. Species with short-distance seed dispersal, strong Allee effects, low maximum reproductive rate, high mortality and high local extinction probability are most affected by harvesting. Larger spatial scales benefit the species' response, but the thresholds become sharper. The three case-study species supported very low to moderate harvesting rates. In summary, demographic knowledge about the study system and careful identification of the spatial scale of interest should guide harvesting assessments and the conservation of exploited species. The results of the sensitivity analysis can be used to qualitatively assess harvesting impacts for poorly studied species. In Chapter 3, I investigated the consequences of past habitat loss, future climate change and their interaction on the plant response, using the species-specific estimates of the best model describing local dynamics obtained in Chapter 1. Both habitat loss and climate change had strong negative impacts on species dynamics. Climate change affected mainly range size and range filling, due to habitat reductions and shifts combined with low colonization. Habitat loss affected mostly local abundances. The scenario with both habitat loss and climate change was the worst for most species. However, this impact was less severe than expected from simply summing the separate effects of habitat loss and climate change, which is explained by ranges shifting to areas less affected by humans. The range size response was well predicted by the strength of environmental change, whereas range filling and local abundance responses were better explained by demographic properties. Hence, risk assessments under global change should consider demographic properties. Most surviving populations were restricted to refugia, which serve as a key conservation focus. The findings obtained for the study system as well as the advantages, limitations and potentials of the model presented here are further discussed in the General Discussion. In summary, the results indicate that 1) process-based demographic models for range dynamics can be fitted to data; 2) demographic processes improve species distribution models; 3) different species are subject to different processes and respond differently to environmental change and exploitation; 4) the type of density regulation and Allee effects should be considered when investigating species range dynamics; 5) the consequences of wildflower harvesting, habitat loss and climate change could be disastrous for some species, but impacts vary depending on demographic properties; 6) wildflower harvesting impacts vary over spatial scales; and 7) the effects of habitat loss and climate change are not always additive.
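As a minimal illustration of the kind of local dynamics referred to above (overcompensation plus an Allee effect), a Ricker-type update can be sketched as follows; the parameter names and values are illustrative placeholders, not the fitted Proteaceae estimates:

```python
import numpy as np

def ricker_allee_step(n, r_max=20.0, k=1000.0, allee=50.0):
    """One generation of local population dynamics.

    r_max : maximum reproductive rate (large values give overcompensatory,
            even chaotic, dynamics)
    k     : carrying-capacity-like scale of negative density dependence
    allee : population size at which reproduction is halved (Allee effect)
    """
    growth = r_max * np.exp(-n / k)   # negative density dependence (Ricker-type)
    mate_finding = n / (n + allee)    # positive density dependence (Allee)
    return n * growth * mate_finding

n = 10.0
for t in range(5):
    n = ricker_allee_step(n)
    print(f"generation {t + 1}: {n:.1f}")
```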
Due to its unique environmental conditions and various feedback mechanisms, the Arctic region is especially sensitive to climate change. The influence of clouds on the radiation budget is substantial, but difficult to quantify and parameterize in models. In the framework of this PhD project, elastic backscatter and depolarization lidar observations of Arctic clouds were performed during the international Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) from Svalbard in March and April 2007. Clouds were probed above the inaccessible Arctic Ocean with a combination of airborne instruments: the Airborne Mobile Aerosol Lidar (AMALi) of the Alfred Wegener Institute for Polar and Marine Research provided information on the vertical and horizontal extent of clouds along the flight track, optical properties (backscatter coefficient), and cloud thermodynamic phase. From the data obtained by the spectral albedometer (University of Mainz), the cloud phase and cloud optical thickness were deduced. Furthermore, in situ observations with the Polar Nephelometer, Cloud Particle Imager and Forward Scattering Spectrometer Probe (Laboratoire de Météorologie Physique, France) provided information on the microphysical properties: cloud particle size and shape, concentration, extinction, and liquid and ice water content. In the thesis, a data set of four flights is analyzed and interpreted. The lidar observations served to detect atmospheric structures of interest, which were then probed by the in situ techniques. With this method, an optically subvisible ice cloud was characterized by the ensemble of instruments (10 April 2007). Radiative transfer simulations based on the lidar, radiation and in situ measurements allowed the calculation of the cloud forcing, amounting to -0.4 W m⁻². This slight surface cooling is negligible on a local scale. However, thin Arctic clouds have been reported more frequently in winter time, when the clouds' effect on longwave radiation (a surface warming of 2.8 W m⁻²) is not balanced by the reduced shortwave radiation (surface cooling). Boundary-layer mixed-phase clouds were analyzed for two days (8 and 9 April 2007). The typical structure, consisting of a predominantly liquid water layer at cloud top and ice crystals below, was confirmed by all instruments. The lidar observations were compared to European Centre for Medium-Range Weather Forecasts (ECMWF) meteorological analyses. A change of air masses along the flight track was evidenced in the airborne data by a small, completely glaciated cloud part within the mixed-phase cloud system. This indicates that the updraft necessary for the formation of new cloud droplets at cloud top is disturbed by the mixing processes. The measurements served to quantify the shortcomings of the ECMWF model in describing mixed-phase clouds. As the partitioning of cloud condensate into liquid and ice water is done by a diagnostic equation based on temperature, the cloud structures consisting of a liquid cloud-top layer and ice below could not be reproduced correctly. A small amount of liquid water was calculated for the lowest (and warmest) part of the cloud only. Furthermore, the liquid water content was underestimated by an order of magnitude compared to the in situ observations. The airborne lidar observations of 9 April 2007 were compared to spaceborne lidar data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite. Both systems agreed on the increase of cloud-top height along the same flight track. However, during the time delay of 1 h between the lidar measurements, advection and cloud processing took place, and a detailed comparison of small-scale cloud structures was not possible. A double-layer cloud at an altitude of 4 km was observed with lidar at the west coast, in the direct vicinity of Svalbard (14 April 2007). The cloud system consisted of two geometrically thin liquid cloud layers (each 150 m thick) with ice below each layer. While the upper one was possibly formed by orographic lifting under the influence of westerly winds, or by the vertical wind shear shown by the ECMWF analyses, the lower one might be the result of evaporating precipitation out of the upper layer. The existence of ice precipitation between the two layers supports the hypothesis that humidity released from evaporating precipitation was cooled, and consequently condensed, as it experienced the radiative cooling from the upper layer. In summary, a unique data set characterizing tropospheric Arctic clouds was collected with lidar, in situ and radiation instruments. The joint evaluation with meteorological analyses allowed detailed insight into cloud properties, cloud evolution processes and radiative effects.
Submarine landslides can generate local tsunamis posing a hazard to human lives and coastal facilities. Two major related problems are: (i) the quantitative estimation of tsunami hazard and (ii) the early detection of the most dangerous landslides. This thesis focuses on both issues by providing numerical modeling of landslide-induced tsunamis and by suggesting and justifying a new method for the fast detection of tsunamigenic landslides by means of tiltmeters. Due to the proximity of the Sunda subduction zone, Indonesian coasts are prone to earthquake tsunamis, but also to landslide tsunamis. The aim of the GITEWS project (German-Indonesian Tsunami Early Warning System) is to provide fast and reliable tsunami warnings, but also to deepen the knowledge about tsunami hazards. New bathymetric data at the Sunda Arc provide the opportunity to evaluate the hazard potential of landslide tsunamis for the adjacent Indonesian islands. I present nine large mass movements in the proximity of Sumatra, Java, Sumbawa and Sumba, of which the largest event displaced 20 km³ of sediments. Using numerical modeling, I compute the generated tsunami of each event, its propagation, and its runup at the coast. Moreover, I investigate the age of the largest slope failures by relating them to the Great 1977 Sumba earthquake. Continental slopes off northwest Europe are well known for their history of huge underwater landslides. The current geological situation west of Spitsbergen is comparable to that of the continental margin off Norway after the last glaciation, when the large tsunamigenic Storegga slide took place. The influence of Arctic warming on the stability of the Svalbard glacial margin is discussed. Based on new geophysical data, I present four possible landslide scenarios and compute the generated tsunamis. Waves of 6 m height would be capable of reaching northwest Europe, threatening coastal areas. I present a novel technique to detect large submarine landslides using an array of tiltmeters, as a possible tool in future tsunami early warning systems. The dislocation of a large amount of sediment during a landslide produces a permanent elastic response of the earth. I analyze this response with a mathematical model and calculate the theoretical tilt signal. Applications to the hypothetical Spitsbergen event and the historical Storegga slide show tilt signals exceeding 1000 nrad. The amplitude of landslide tsunamis is controlled by the product of slide volume and maximal velocity (the slide tsunamigenic potential). I introduce an inversion routine that provides the slide location and tsunamigenic potential based on tiltmeter measurements. The accuracy of the inversion and of the estimated tsunami height near the coast depends on the noise level of the tiltmeter measurements, the distance of the tiltmeters from the slide, and the slide's tsunamigenic potential. Finally, I estimate the scope of applicability of this method by applying it to known landslide events worldwide.
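A minimal sketch of what such a location/potential inversion can look like: a grid search over candidate slide locations, solving a one-parameter least-squares problem at each. The power-law amplitude decay used here is a generic point-source assumption for illustration only, not the elastic dislocation model of the thesis:

```python
import numpy as np

def invert_tilt(stations, tilts, candidates, p=3.0):
    """stations: (n, 2) tiltmeter positions; tilts: (n,) amplitudes [nrad];
    candidates: (m, 2) trial slide locations. Returns (location, potential).
    Assumes tilt ~ potential / distance**p (illustrative decay law)."""
    best_loc, best_pot, best_misfit = None, None, np.inf
    for c in candidates:
        d = np.linalg.norm(stations - c, axis=1)
        g = 1.0 / d**p                 # predicted tilt per unit potential
        pot = (g @ tilts) / (g @ g)    # one-parameter least squares
        misfit = np.sum((tilts - pot * g) ** 2)
        if misfit < best_misfit:
            best_loc, best_pot, best_misfit = c, pot, misfit
    return best_loc, best_pot
```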
The Tibetan Plateau is the largest elevated landmass in the world and profoundly influences atmospheric circulation patterns such as the Asian monsoon system. Therefore, this area has increasingly been the focus of palaeoenvironmental studies. This thesis evaluates the applicability of organic biomarkers for palaeolimnological purposes on the Tibetan Plateau, with a focus on aquatic macrophyte-derived biomarkers. Submerged aquatic macrophytes must be considered a significant influence on the sediment organic matter due to their high abundance in many Tibetan lakes. They can show highly ¹³C-enriched biomass because of their carbon metabolism; it is therefore crucial for the interpretation of δ¹³C values in sediment cores to understand to what extent aquatic macrophytes contribute to the isotopic signal of the sediments in Tibetan lakes and in which way variations can be explained in a palaeolimnological context. Additionally, the high abundance of macrophytes makes them interesting as potential recorders of lake water δD. Hydrogen isotope analysis of biomarkers is a rapidly evolving field for reconstructing past hydrological conditions and is therefore of special relevance on the Tibetan Plateau, due to the direct linkage between variations in monsoon intensity and changes in regional precipitation/evaporation balances. A set of surface sediment and aquatic macrophyte samples from the central and eastern Tibetan Plateau was analysed for the composition as well as the carbon and hydrogen isotopes of n-alkanes. It was shown how variable the δ¹³C values of bulk organic matter and leaf lipids can be in submerged macrophytes, even within a single species, and how strongly they affect these parameters in the corresponding sediments. The contribution of the macrophytes, estimated by means of a binary isotopic model, was calculated to be up to 60% (mean: 40%) of total organic carbon and up to 100% (mean: 66%) of mid-chain n-alkanes. Hydrogen isotopes of n-alkanes turned out to record the δD of meteoric water of the summer precipitation. The apparent enrichment factor between water and n-alkanes was in the range of previously reported values (≈ -130‰) at the most humid sites, but smaller (average: -86‰) at sites with a negative moisture budget. This indicates an influence of evaporation and evapotranspiration on the δD of the source water for aquatic and terrestrial plants. The offset between the δD of mid- and long-chain n-alkanes was close to zero in most of the samples, suggesting that lake water as well as soil and leaf water are affected to a similar extent by those effects. To apply biomarkers in a palaeolimnological context, the aliphatic biomarker fraction of a sediment core from Lake Koucha (34.0° N, 97.2° E; eastern Tibetan Plateau) was analysed for compound concentrations, δ¹³C and δD values. Before ca. 8 cal ka BP, the lake was dominated by aquatic macrophyte-derived mid-chain n-alkanes, while after 6 cal ka BP high concentrations of a C20 highly branched isoprenoid compound indicate a predominance of phytoplankton. These two principally different states of the lake were linked by a transition period with high abundances of microbial biomarkers. δ¹³C values were relatively constant for long-chain n-alkanes, while mid-chain n-alkanes showed variations between -23.5 and -12.6‰. The highest values were observed for the assumed period of maximum macrophyte growth during the late glacial and for the phytoplankton maximum during the middle and late Holocene. The enriched values were therefore interpreted to be caused by carbon limitation, which in turn was induced by high macrophyte and primary productivity, respectively. Hydrogen isotope signatures of mid-chain n-alkanes were shown to track a previously deduced episode of reduced moisture availability between ca. 10 and 7 cal ka BP, indicated by a 20‰ shift towards higher δD values. Indications of cooler episodes at 6.0, 3.1 and 1.8 cal ka BP were gained from drops in biomarker concentrations, especially microbially derived hopanoids, and from coincident shifts towards lower δ¹³C values. Those episodes correspond well with cool events reported from other locations on the Tibetan Plateau as well as in the Northern Hemisphere. To conclude, the study of recent sediments and plants improved the understanding of the factors affecting the composition and isotopic signatures of aliphatic biomarkers in sediments. The concentrations and isotopic signatures of the biomarkers in Lake Koucha could be interpreted in a palaeolimnological context and contribute to the knowledge about the history of the lake. Aquatic macrophyte-derived mid-chain n-alkanes were especially useful, due to their high abundance in many Tibetan lakes and their ability to record major changes in lake productivity and palaeo-hydrological conditions. Therefore, they have the potential to contribute to a fuller understanding of past climate variability in this key region for atmospheric circulation systems.
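The binary (two-end-member) mass balance behind such contribution estimates is a simple linear mixing equation. A minimal sketch; the end-member values below are illustrative placeholders, not the thesis data:

```python
def mixing_fraction(delta_sample, delta_macrophyte, delta_other):
    """Fraction f contributed by macrophytes, from the mass balance
    delta_sample = f * delta_macrophyte + (1 - f) * delta_other."""
    return (delta_sample - delta_other) / (delta_macrophyte - delta_other)

# e.g. a sediment delta13C of -18 permil between end-members of -12
# (macrophyte) and -28 (other organic matter) implies f = 0.625
print(mixing_fraction(-18.0, -12.0, -28.0))
```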
Pectic polysaccharides, a class of plant cell wall polymers, form one of the most complex networks known in nature. Despite their complex structure and their importance in plant biology, little is known about the molecular mechanisms of their biosynthesis, modification, and turnover, particularly their structure-function relationship. One way to gain insight into pectin metabolism is the identification of mutants with an altered pectin structure. These were obtained by a recently developed pectinase-based genetic screen. Arabidopsis thaliana seedlings grown in liquid medium containing pectinase solutions exhibited particular phenotypes: they were dwarfed and slightly chlorotic. However, when genetically different A. thaliana seed populations (random T-DNA insertional populations as well as EMS-mutagenized populations and natural variants) were subjected to this treatment, individuals were identified that exhibit a different visible phenotype compared to wild type or other ecotypes and may thus have a different pectin structure (pec mutants). After confirming that the altered phenotype occurs only when the pectinase is present, the EMS mutants were subjected to a detailed cell wall analysis with particular emphasis on pectins. The suite of mutants identified in this study is a valuable resource for further analysis of how the pectin network is regulated, synthesized and modified. Flanking sequences of some of the T-DNA lines have pointed toward several interesting genes, one of which is PEC100. This gene encodes a putative sugar transporter which, based on our data, is implicated in rhamnogalacturonan-I synthesis. The subcellular localization of PEC100 was studied by GFP fusion, and the protein was found to be localized to the Golgi apparatus, the organelle where pectin biosynthesis occurs. The Arabidopsis ecotype C24 was identified as susceptible when grown with pectinases in liquid culture and had a different oligogalacturonide mass profile compared to ecotype Col-0. Pectic oligosaccharides have been postulated to be signal molecules involved in plant pathogen defense mechanisms. Indeed, C24 showed elevated accumulation of reactive oxygen species upon pectinase elicitation and an altered response to the pathogen Alternaria brassicicola in comparison to Col-0. Using a recombinant inbred line population, three major QTLs were identified as responsible for the susceptibility of C24 to pectinases. In a reverse genetic approach, members of the qua2 (putative pectin methyltransferase) family were tested as potential target genes that affect pectin methyl-esterification. The list of these genes was determined by an in silico study of the expression and co-expression patterns of all 34 members of this family, resulting in 6 candidate genes. For only one of the 6 analyzed genes was a difference in the oligogalacturonide mass profile observed in the corresponding knock-out lines, confirming the hypothesis that the methyl-esterification pattern of pectin is fine-tuned by members of this gene family. This study of pectic polysaccharides through forward and reverse genetic screens gave new insight into how pectin structure is regulated and modified, and how these modifications could influence pectin-mediated signalling and pathogenicity.
Trying to do two things at once often decreases performance on one or both tasks compared to performing each task by itself. The present thesis deals with the question of why and in which cases these dual-task costs emerge and, moreover, whether there are cases in which people are able to process two cognitive tasks at the same time without costs. In four experiments, the influence of stimulus-response (S-R) compatibility, S-R modality pairings, interindividual differences, and practice on the ability to process two tasks in parallel is examined. Results show that parallel processing is possible. Nevertheless, dual-task costs emerge when the personal processing strategy is serial, when the two tasks have not been practiced together, when the S-R compatibility of both tasks is low (e.g. when a left-side target has to be responded to with a right key press and, in the other task, an auditorily presented “A” has to be responded to by saying “B”), and when the modality pairings of both tasks are non-standard (i.e., visual-spatial stimuli are responded to vocally whereas auditory-verbal stimuli are responded to manually). The results are explained with respect to executive-based crosstalk (S-R compatibility) and content-based crosstalk (S-R modality pairings) between tasks. Finally, an alternative information processing account with respect to the central stage of response selection (i.e., the translation of the stimulus into the response) is presented.
Analysis of phosphorylation dynamics under nitrogen limitation and nitrate or ammonium resupply
(2009)
Mixed 1,2-Diimine-1,2-Dithiolate Ligand Complexes : Structure, Properties and EPR Spectroscopy
(2009)
From Supported Palladium to Metal-free Catalysts : different approaches in heterogeneous catalysis
(2009)
Radical additions to glycals : synthesis and transformations of 2-functionalized carbohydrates
(2009)
Transcription factor networks in the initial phase of drought stress in rice (Oryza sativa L.)
(2009)
Chronic kidney disease and type 2 diabetes mellitus as factors influencing retinol-binding protein 4
(2009)
This chapter provides a description of generative syntax as a discipline within Slavic linguistic research from a theoretical, methodological and scientific-historical viewpoint, including those descriptive models and theoretical approaches which are also preferred in Slavic generative linguistics working within the Principles and Parameters framework (Chomsky 1995 passim). A general comprehensive description of generative syntax, syntactic levels and methods of description is followed by a short overview of the current state of the art, the goals and targets of syntactic theory, and the description of some syntactically relevant categories (such as negation, word order and clitics). In chapter 2, I will introduce some basic notions of the Minimalist framework. I will concentrate on the questions of how syntactic levels are to be represented in the Minimalist program (2.1), how the structure of sentential negation can be motivated by the raising of the finite verb (2.2), how negation interacts syntactically with pronominal and verbal clitics (2.3) and related phenomena such as Prosodic Inversion (PrI) (2.4), and finally, what the driving force for V-raising and negation in imperatives, gerunds and infinitives is (2.5).
Motivations and research objectives: During the passage of rain water through a forest canopy, two main processes take place: first, water is redistributed; second, its chemical properties change substantially. The rain water redistribution and the brief contact with plant surfaces result in a large variability of both throughfall and its chemical composition. Since throughfall and its chemistry influence a range of physical, chemical and biological processes at or below the forest floor, understanding throughfall variability and predicting throughfall patterns potentially improves the understanding of near-surface processes in forest ecosystems. This thesis comprises three main research objectives. The first objective is to determine the variability of throughfall and its chemistry, and to investigate some of the controlling factors. Second, I explored throughfall spatial patterns. Finally, I attempted to assess the temporal persistence of throughfall and its chemical composition. Research sites and methods: The thesis is based on investigations in a tropical montane rain forest in Ecuador and in lowland rain forest ecosystems in Brazil and Panama. The first two studies investigate both throughfall and throughfall chemistry following a deterministic approach. The third study investigates throughfall patterns with geostatistical methods and hence relies on a stochastic approach. Results and conclusions: Throughfall is highly variable. The variability of throughfall in tropical forests seems to exceed that of many temperate forests. These differences, however, do not solely reflect ecosystem-inherent characteristics; more likely, they also mirror management practices. Apart from biotic factors that influence throughfall variability, rainfall magnitude is an important control. Throughfall solute concentrations and solute deposition are even more variable than throughfall. In contrast to throughfall volumes, the variability of solute deposition shows no clear differences between tropical and temperate forests; hence, biodiversity is not a strong predictor of solute deposition heterogeneity. Many other factors control solute deposition patterns, for instance, the solute concentration in rainfall and the antecedent dry period. The temporal variability of the latter factors partly accounts for the low temporal persistence of solute deposition. In contrast, measurements of throughfall volume are quite stable over time. Results from the Panamanian research site indicate that wet and dry areas outlast consecutive wet seasons. At this research site, throughfall exhibited only weak or pure-nugget autocorrelation structures over the studied lag distances. A close look at the geostatistical tools at hand provided evidence that throughfall datasets, in particular those of large events, require robust variogram estimation if one wants to avoid outlier removal. This finding is important because all geostatistical throughfall studies published so far analyzed their data using the classical, non-robust variogram estimator.
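A minimal sketch contrasting the classical (Matheron) variogram estimator with the robust Cressie-Hawkins estimator referred to above; the coordinates and values fed to this function would be synthetic placeholders, not the throughfall data:

```python
import numpy as np
from itertools import combinations

def empirical_variograms(coords, values, bins):
    """coords: (n, 2) array of sample locations; values: (n,) measurements;
    bins: lag-distance bin edges. Returns (classical, robust) estimates."""
    pairs = list(combinations(range(len(values)), 2))
    d = np.array([np.linalg.norm(coords[i] - coords[j]) for i, j in pairs])
    dz = np.array([values[i] - values[j] for i, j in pairs])
    gamma_classical, gamma_robust = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (d >= lo) & (d < hi)
        n = sel.sum()
        if n == 0:
            gamma_classical.append(np.nan)
            gamma_robust.append(np.nan)
            continue
        diffs = dz[sel]
        # Matheron: squared differences make it sensitive to outliers
        gamma_classical.append(0.5 * np.mean(diffs ** 2))
        # Cressie-Hawkins: fourth power of the mean root absolute difference,
        # with the standard bias-correction term 0.457 + 0.494/N
        ch = np.mean(np.sqrt(np.abs(diffs))) ** 4
        gamma_robust.append(0.5 * ch / (0.457 + 0.494 / n))
    return np.array(gamma_classical), np.array(gamma_robust)
```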
This thesis describes the observations of the Galactic center Quintuplet cluster and the spectral analysis of the cluster's Wolf-Rayet stars of the nitrogen sequence to determine their fundamental stellar parameters, and discusses the obtained results in a general context. The Quintuplet cluster was discovered in one of the first infrared surveys of the Galactic center region (Okuda et al. 1987, 1989) and was observed for this project with the ESO-VLT near-infrared integral field instrument SINFONI-SPIFFI. The subsequent data reduction was performed in part with a self-written pipeline to obtain flux-calibrated spectra of all objects detected in the imaged field of view. First results of the observation were compiled and published in a spectral catalog of 160 flux-calibrated $K$-band spectra in the range of 1.95 to 2.45\,$\mu$m, containing 85 early-type (OB) stars, 62 late-type (KM) stars, and 13 Wolf-Rayet stars. About 100 of these stars are cataloged for the first time. The main part of the thesis project concentrated on the analysis of the WR stars of the nitrogen sequence and one further identified emission-line star (Of/WN) with tailored Potsdam Wolf-Rayet (PoWR) models for expanding atmospheres (Hamann et al. 1995), which were applied to derive the stellar parameters of these stars. For this purpose, the atomic input data of the PoWR models had to be extended by further line transitions in the near-infrared spectral range to enable adequate model spectra to be calculated. These models were then fitted to the observed spectra, revealing typical parameters for this class of stars. A significant amount of hydrogen, up to $X_\text{H} \sim 0.2$ by mass fraction, is still present in their stellar atmospheres. The stars are also found to be very luminous ($\log{(L/L_\odot)} > 6.0$) and show mass-loss rates and wind characteristics typical of radiation-driven winds. By comparison with stellar evolutionary models (Meynet \& Maeder 2003a; Langer et al. 1994), the initial masses were estimated; they indicate that the Quintuplet WN stars are descendants of the most massive O stars with $M_\text{init} > 60 M_\odot$, and their ages correspond to a cluster age of 3-5\,million years. The analysis of the individual WN stars revealed an average extinction of $A_K = 3.1 \pm 0.5$\,mag ($A_V = 27 \pm 4$\,mag) towards the Quintuplet cluster. This extinction was applied to derive the stellar luminosities of the remaining early-type and late-type stars in the catalog, and a Hertzsprung-Russell diagram could be compiled. Surprisingly, two stellar populations are found: a group of main-sequence OB stars and a group of evolved late-type stars, i.e. red supergiants (RSGs). The main-sequence stars indicate a cluster age of 4 million years, which would be too young for red supergiants to be present already. A star formation event lasting for a few million years might explain the Quintuplet's population, in which case the cluster would still be considered coeval. However, the unexpected simultaneous presence of red supergiants and Wolf-Rayet stars in the cluster indicates that the details of star formation and cluster evolution are not yet well understood for the Quintuplet cluster.
Modern acquisition of seismic data on receiver networks worldwide produces an increasing amount of continuous wavefield recordings. Hence, in addition to manual data inspection, seismogram interpretation requires new processing utilities for event detection, signal classification and data visualization. Various machine learning algorithms, which can be adapted to seismological problems, have been suggested in the field of pattern recognition. This can be done either by means of supervised learning using manually defined training data or by unsupervised clustering and visualization. The latter allows the recognition of wavefield patterns, such as short-term transients and long-term variations, with a minimum of domain knowledge. Besides classical earthquake seismology, investigations of temporal patterns in seismic data also concern novel approaches such as noise cross-correlation and ambient seismic vibration analysis in general, which have moved into focus within the last decade. In order to find records suitable for the respective approach, or simply for quality control, unsupervised preprocessing becomes important and valuable for large data sets. Machine learning techniques require the parametrization of the data using feature vectors; applied to seismic recordings, wavefield properties have to be computed from the raw seismograms. For an unsupervised approach, all potential wavefield features have to be considered to reduce subjectivity to a minimum. Furthermore, automatic dimensionality reduction, i.e. feature selection, is required in order to decrease computational cost, enhance interpretability and improve discriminative power. This study presents an unsupervised feature selection and learning approach for the discovery, imaging and interpretation of significant temporal patterns in seismic single-station or network recordings. In particular, techniques permitting an intuitive, quickly interpretable and concise overview of the available records are suggested. For this purpose, the data is parametrized by real-valued feature vectors for short time windows, using standard seismic analysis tools such as frequency-wavenumber, polarization and spectral analysis as feature generation methods. The choice of the time window length depends on the expected durations of the patterns to be recognized or discriminated. We use Self-Organizing Maps (SOMs) for a data-driven feature selection, visualization and clustering procedure, which is particularly suitable for high-dimensional data sets. Using synthetics composed of Rayleigh and Love waves and three different types of real-world data sets, we show the robustness and reliability of our unsupervised learning approach with respect to the effects of algorithm parameters and data set properties. Furthermore, we confirm the capability of the clustering and imaging techniques. For all data, we find improved discriminative power of our feature selection procedure compared to feature subsets manually selected from individual wavefield parametrization methods. In particular, enhanced performance is observed compared to the most favorable individual feature generation method, which is found to be the frequency spectrum. The method is applied to regional earthquake records from the European Broadband Network with the aim of defining suitable features for earthquake detection and seismic phase classification. For the latter, we find that a combination of spectral and polarization features favors S wave detection at a single receiver.
However, SOM-based visualization of phase discrimination shows that clustering applied to the records of only two stations allows merely onset or P wave detection. In order to improve the discrimination of S waves on receiver networks, we recommend additionally considering the temporal context of the feature vectors. The application to continuous recordings of seismicity close to an active volcano (Mount Merapi, Java, Indonesia) shows that two typical volcano-seismic event types (VTB and Guguran) can be detected and distinguished by clustering, whereas so-called MP events cannot be discriminated. Comparable results in terms of selected features and recognition rates are obtained with respect to a previously implemented supervised classification system. Finally, we test the ability of wavefield clustering to improve common ambient vibration analysis methods such as the estimation of dispersion curves and horizontal-to-vertical spectral ratios. It is found that, in general, the identified short- and long-term patterns have no significant impact on those estimates. However, for individual sites, effects of local sources can be identified; leaving out the corresponding clusters reduces uncertainties or improves the estimation of dispersion curves.
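As a rough illustration of the clustering step at the core of this approach, here is a minimal, self-contained Self-Organizing Map trainer for feature vectors; the grid size, learning rate and exponential decay schedule are illustrative choices, not the parameters used in the study:

```python
import numpy as np

def train_som(X, grid=(8, 8), epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Train a rectangular SOM on feature vectors X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    sigma0 = sigma0 or max(rows, cols) / 2.0
    # Map coordinates of the units, used by the neighbourhood function.
    units = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    # Initialise codebook vectors from randomly drawn training samples.
    W = X[rng.choice(len(X), rows * cols)].astype(float)
    t, n_steps = 0, epochs * len(X)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            lr = lr0 * np.exp(-t / n_steps)        # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_steps)  # shrinking neighbourhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((units - units[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian neighbourhood
            W += lr * h[:, None] * (x - W)         # pull codebooks towards x
            t += 1
    return W, units
```

Each time window would then be assigned to its best-matching unit, and the resulting map inspected for clusters of wavefield patterns.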
This work presents mathematical and computational approaches covering various aspects of metabolic network modelling, especially regarding the limited availability of detailed kinetic knowledge on reaction rates. It is shown that precise mathematical formulations of problems are needed i) to find appropriate and, if possible, efficient algorithms to solve them, and ii) to determine the quality of the approximate solutions found. Furthermore, some means are introduced to gain insights into dynamic properties of metabolic networks, either directly from the network structure or by additionally incorporating steady-state information. Finally, an approach to identify key reactions in a metabolic network is introduced, which helps to develop simple yet useful kinetic models. The rise of novel techniques renders genome sequencing increasingly fast and cheap. In the near future, this will allow biological networks to be analyzed not only for species but also for individuals, and automatic reconstruction of metabolic networks offers a means of evaluating this huge amount of experimental data. A mathematical formulation as an optimization problem is presented, taking into account existing knowledge and experimental data as well as the probabilistic predictions of various bioinformatics methods. The reconstructed networks are optimized for having large connected components of high accuracy, hence avoiding fragmentation into small isolated subnetworks. The usefulness of this formalism is exemplified by the reconstruction of the sucrose biosynthesis pathway in Chlamydomonas reinhardtii. The problem is shown to be computationally demanding and therefore necessitates efficient approximation algorithms. The problem of minimal nutrient requirements for genome-scale metabolic networks is analyzed next. Given a metabolic network and a set of target metabolites, the inverse scope problem has as its objective determining a minimal set of metabolites that have to be provided in order to produce the target metabolites. These target metabolites might stem from experimental measurements, and therefore are known to be produced by the metabolic network under study, or they are given as the desired end products of a biotechnological application. The inverse scope problem is shown to be computationally hard to solve. However, I conjecture that the complexity strongly depends on the number of directed cycles within the metabolic network, which might guide the development of efficient approximation algorithms. Assuming mass-action kinetics, chemical reaction network theory (CRNT) allows conclusions about multistability to be drawn directly from the structure of metabolic networks. Although CRNT was originally based on mass-action kinetics, it is shown how further reaction schemes can be incorporated by emulating molecular enzyme mechanisms. CRNT is used to compare several models of the Calvin cycle, which differ in size and level of abstraction. Definite results are obtained for small models, but the available set of theorems and algorithms provided by CRNT cannot be applied to larger models due to the computational limitations of the currently available implementations. Given the stoichiometry of a metabolic network together with steady-state fluxes and concentrations, structural kinetic modelling allows the dynamic behavior of the metabolic network to be analyzed, even if the explicit rate equations are not known.
In particular, this sampling approach is used to study the stabilizing effects of allosteric regulation in a model of human erythrocytes. Furthermore, the reactions of that model can be ranked according to their impact on the stability of the steady state. The most important reactions in that respect are identified as hexokinase, phosphofructokinase and pyruvate kinase, which are known to be highly regulated and almost irreversible. Kinetic modelling approaches using standard rate equations are then compared and evaluated against reference models for erythrocytes and hepatocytes. These simplified kinetic models simulate the temporal behavior acceptably for small changes around a given steady state, but fail to capture important characteristics for larger changes. The aforementioned approach of ranking reactions according to their influence on stability is used to identify a small number of key reactions. These reactions are modelled in detail, including knowledge about allosteric regulation, while all other reactions are still described by simplified reaction rates. The resulting so-called hybrid models capture the characteristics of the reference models significantly better than the simplified models alone. Such hybrid models might serve as a good starting point for kinetic modelling of genome-scale metabolic networks, as they provide reasonable results in the absence of experimental data on, for instance, allosteric regulation, for the vast majority of enzymatic reactions.
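For readers unfamiliar with the scope concept behind the inverse scope problem, the following toy sketch (hypothetical network, not the thesis code) computes the forward scope, i.e. all metabolites producible from a given seed set by iterative network expansion; the inverse problem then asks for a minimal seed whose scope contains all targets:

```python
def scope(reactions, seed):
    """Network expansion: all metabolites producible from a seed set.
    reactions: list of (substrates, products) pairs, each a set of metabolite ids."""
    available = set(seed)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            # A reaction can fire once all of its substrates are available.
            if substrates <= available and not products <= available:
                available |= products
                changed = True
    return available

# Hypothetical toy network: producing sucrose needs an extra seed metabolite.
reactions = [({"glc"}, {"g6p"}), ({"g6p"}, {"f6p"}),
             ({"f6p", "udp-glc"}, {"sucrose"})]
print(scope(reactions, {"glc"}))             # {'glc', 'g6p', 'f6p'}
print(scope(reactions, {"glc", "udp-glc"}))  # additionally contains 'sucrose'
```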
Dispersal behavior plays an important role in the geographical distribution and population structure of any given species. An individual's fitness, reproductive and competitive ability, and dispersal behavior can be determined by its age, and age-dependent as well as density-dependent dispersal patterns are common in many bird species. In this thesis, I first present age-dependent breeding ability and natal site fidelity in white storks (Ciconia ciconia), migratory birds breeding in large parts of Europe. I predicted that both the proportion of breeding birds and natal site fidelity increase with age. Since the 1970s, following a steep population decline, a recovery of the white stork population has been observed in many regions of Europe. The increasing population density of white storks in Eastern Germany, especially after 1983, allowed me to examine density-dependent as well as age-dependent breeding dispersal patterns. Second, therefore, I examine whether young birds show breeding dispersal more often and over longer distances than old birds, and whether the frequency of dispersal events increases with increasing population density, especially among young storks. Third, I present age- and density-dependent preferences in dispersal direction in the given population. I asked whether and how the major spring migration direction interacts with the dispersal directions of white storks at different ages and under different population densities. The proportion of breeding individuals increased over the first 22 years of life and then decreased, suggesting senescent decay in aging storks. Young storks were more faithful to their natal sites than old storks, probably owing to their innate migratory direction and distance. Young storks dispersed more frequently than old storks in general, but not over longer distances. The proportion of dispersing individuals increased significantly with increasing population density, indicating density-dependent dispersal behavior in white storks. Moreover, a significant interaction effect between the age of dispersing birds and year (1980-2006) suggests that older birds dispersed more from their previous nest sites over time owing to increased competition. Both young and old storks dispersed along their spring migration direction; however, the directional preferences differed between young and old storks. Young storks tended to settle down before reaching their previous nest sites (leading to south-eastward dispersal), while old birds tended to keep migrating along the migration direction after reaching their previous nest sites (leading to north-westward dispersal). The cues triggering dispersal events may thus be age-dependent. Changes in dispersal direction over time were also observed: the directional preference became obscured during the second half of the observation period (1993-2006), suggesting that an increase in competition may affect dispersal behavior in storks. I discuss the potential role of age in the observed age-dependent dispersal behavior, and of competition in the density-dependent dispersal behavior. This Ph.D. thesis contributes significantly to the understanding of the population structure and geographical distribution of white storks. Moreover, the presented age- and density (competition)-dependent dispersal behavior helps in understanding the mechanisms underpinning dispersal behavior in bird species.
To date, positive relationships between diversity and community biomass have mainly been found, especially in terrestrial ecosystems, due to complementarity and/or dominance effects. In this thesis, the effect of diversity on the performance of terrestrial plant and phytoplankton communities was investigated to gain a better understanding of the underlying mechanisms in the biodiversity-ecosystem functioning context. In a large grassland biodiversity experiment, the Jena Experiment, the effect of community diversity on individual plant performance was investigated for all species. The species pool consisted of 60 plant species belonging to 4 functional groups (grasses, small herbs, tall herbs, legumes). The experiment included 82 large plots which differed in species richness (1-60), functional richness (1-4), and community composition. Individual plant height increased with increasing species richness, suggesting stronger competition for light in more diverse communities. The aboveground biomass of individual plants decreased with increasing species richness, indicating stronger competition in more species-rich communities. Moreover, in more species-rich communities plant individuals were less likely to flower and had fewer inflorescences, which may result from a trade-off between resource allocation to vegetative height growth and to reproduction. Responses to changing species richness differed strongly between functional groups and between species of the same functional group. In conclusion, individual plant performance can largely depend on the diversity of the surrounding community. Positive diversity effects on biomass have mainly been found for substrate-bound plant communities. Therefore, the effect of diversity on the community biomass of phytoplankton was studied using microcosms. The communities consisted of 8 algal species belonging to 4 functional groups (green algae, diatoms, cyanobacteria, phytoflagellates) and were grown at different functional richness levels (1-4). Functional richness and community biomass were negatively correlated, and every community biomass was lower than the average monoculture biomass of its component species, revealing community underyielding. This was mainly caused by the dominance of a fast-growing species which built up low biomass in both monoculture and mixture. A trade-off between biomass and growth rate in monoculture was found for all species: fast-growing species built up low biomass and slow-growing species reached high biomass in monoculture. As the fast-growing, low-productive species monopolised nutrients in the mixtures, they became dominant, resulting in the observed community underyielding. These findings suggest community overyielding when the biomasses of the component species are positively correlated with their growth rates in monoculture. Aquatic microcosm experiments with an extensive design were then performed to obtain a broad range of community responses. The phytoplankton communities differed in species diversity (1, 2, 4, 8, and 12 species), functional diversity (1, 2, 3, and 4 functional groups) and community composition. Species and functional diversity positively affected community biomass, revealing overyielding in most of the communities. This was mainly caused by a positive complementarity effect, which can be attributed to resource use complementarity and/or facilitative interactions among the species.
Overyielding of more diverse communities occurred when the biomass of the component species was positively correlated with their growth rates in monoculture and thus fast-growing, high-productive species were dominant in the mixtures. Together with the study mentioned above, this generated an emergent pattern for community overyielding and underyielding based on the relationship between biomass and growth rate in monoculture, as long as the initial community structure prevailed. Invasive species can strongly affect ecosystem processes, while invasion itself is also influenced by diversity. To date, studies have revealed both negative and positive diversity effects on invasibility (the susceptibility of a community to invasion by new species). The effects of productivity (nutrient concentrations ranging from 10 to 640 µg P L⁻¹), herbivory (presence/absence of a generalist feeder) and diversity (3, 4, or 6 species randomly chosen from the resident species pool) on the invasibility of phytoplankton communities consisting of 10 resident species were investigated using semi-continuous microcosms. Two functionally different invaders were chosen: the filamentous and less edible cyanobacterium C. raciborskii and the unicellular, readily edible phytoflagellate Cryptomonas sp. The phytoflagellate indirectly benefited from the grazing pressure of herbivores, whereas C. raciborskii suffered more from it. Diversity did not affect the invasibility of the phytoplankton communities; rather, invasibility was strongly influenced by the functional traits of the resident and invasive species.
This work presents the synthesis and the self-assembly of symmetrical amphiphilic ABA and BAB triblock copolymers in dilute, semi-concentrated and highly concentrated aqueous solution. A series of new bifunctional bistrithiocarbonates was used as RAFT agents to synthesise these triblock copolymers, which are characterised by a long hydrophilic middle block and relatively small but strongly hydrophobic end blocks. As hydrophilic A blocks, poly(N-isopropylacrylamide) (PNIPAM) and poly(methoxy diethylene glycol acrylate) (PMDEGA) were employed, while as hydrophobic B blocks, poly(4-tert-butyl styrene), polystyrene, poly(3,5-dibromo benzyl acrylate), poly(2-ethylhexyl acrylate), and poly(octadecyl acrylate) were explored as building blocks with different hydrophobicities and glass transition temperatures. The five bifunctional trithiocarbonates synthesised belong to two classes: the first are RAFT agents which position the active group of the growing polymer chain at the outer ends of the polymer (Z-C(=S)-S-R-S-C(=S)-Z, type I); the second class places the active groups in the middle of the growing polymer chain (R-S-C(=S)-Z-C(=S)-S-R, type II). These RAFT agents enable the straightforward synthesis of amphiphilic triblock copolymers in only two steps, allowing the nature of the hydrophobic blocks as well as the lengths of the hydrophobic and hydrophilic blocks to be varied broadly, with good molar mass control and narrow polydispersities. Specific side reactions were observed for some RAFT agents, including the elimination of ethylenetrithiocarbonate in the early stage of the polymerisation of styrene mediated by certain agents of type II, while the use of RAFT agents of type I resulted in retardation of the chain extension of PNIPAM with styrene. These results underline the need for a careful choice of RAFT agents for a given task. The various copolymers self-assemble in dilute and semi-concentrated aqueous solution into small flower-like micelles. No indication of the formation of micellar clusters was found; only at high concentration are physical hydrogels formed. The reversible thermoresponsive behaviour of the ABA and BAB type copolymer solutions in water with A blocks made of PNIPAM was examined by turbidimetry and dynamic light scattering (DLS). The cloud point of the copolymers was nearly identical to that of the homopolymer and varied between 28-32 °C for concentrations from 0.01 to 50 wt%. This is attributed to the formation of micelles in which the hydrophobic blocks are shielded from direct contact with water, so that the hydrophobic interactions of the copolymers are nearly the same as for pure PNIPAM. Dynamic light scattering measurements showed the presence of small micelles at ambient temperature. The aggregate size increased dramatically above the cloud point, indicating a change of aggregate morphology into clusters due to the thermosensitivity of the PNIPAM block. The rheological behaviour of the amphiphilic BAB triblock copolymers demonstrated the formation of hydrogels at high concentrations, typically above 30-35 wt%. The minimum concentration needed to induce hydrogels decreased with increasing glass transition temperature and increasing length of the end blocks. The weak tendency to form hydrogels was attributed to only a small share of bridged micelles, due to the strong segregation regime.
In order to learn about the role of the nature of the thermoresponsive block in the aggregation, a new BAB triblock copolymer consisting of short polystyrene end blocks and PMDEGA as the stimuli-responsive middle block was prepared and investigated. In contrast to PNIPAM, dilute aqueous solutions of PMDEGA and of its block copolymers showed reversible phase transition temperatures with a strong dependence on the polymer composition. Moreover, the PMDEGA block copolymer formed physical hydrogels at lower concentrations, i.e. from 20 wt% upwards. This result suggests that PMDEGA has a higher degree of water swellability than PNIPAM.
Analytical ultracentrifugation (AUC) has made an important contribution to polymer and particle characterization since its invention by Svedberg in 1923 (Svedberg and Nichols 1923; Svedberg and Pederson 1940). In 1926, Svedberg won the Nobel Prize for his scientific work on disperse systems, including work with AUC. The first important discovery made with AUC was to show the existence of macromolecules. Since that time, AUC has become an important tool for studying polymers in biophysics and biochemistry. AUC is an absolute technique that does not need any standard. Molar masses between 200 and $10^{14}$ g/mol and particle sizes between 1 and 5000 nm can be detected by AUC. A sample can be fractionated into its components according to molar mass, particle size, structure or density without any stationary phase, as would be required in chromatographic techniques. This very property earns AUC an important status in the analysis of polymers and particles: distributions of molar mass, particle size and density can be measured via this fractionation. Different types of experiments yield complementary physicochemical parameters; for example, sedimentation equilibrium experiments give access to pure thermodynamics. For complex mixtures, AUC is the main method capable of analyzing the system. Interactions between molecules can be studied at different concentrations without destroying the chemical equilibrium (Kim et al. 1977), and biologically relevant weak interactions can also be monitored (K ≈ 10-100 M⁻¹). An analytical ultracentrifuge experiment can yield the following information:
- molecular weight of the sample
- number of components in the sample, if it is not a single component
- homogeneity of the sample
- molecular weight distribution, if the sample is not a single component
- size and shape of macromolecules and particles
- aggregation and interaction of macromolecules
- conformational changes of macromolecules
- sedimentation coefficient and density distributions
Such an extremely wide application area allows the investigation of any sample consisting of a solvent and a dispersed or dissolved substance, including gels, microgels, dispersions, emulsions and solutions. Moreover, there is no solvent or pH limitation for this method, and although the technique is 80 years old, new application areas are still flourishing. In the 1970s, about 1500 analytical ultracentrifuges were operational throughout the world. At that time, owing to the limitations of detection technology, experimental results were obtained from photographic records. As time passed, faster techniques such as size exclusion chromatography (SEC), light scattering (LS) and SDS gel electrophoresis occupied the same research fields as AUC, and AUC began to lose its importance; in the 1980s, only a few instruments were in use throughout the world. At the beginning of the 1990s, a modern AUC, the Optima XL-A, was released by Beckman Instruments (Giebeler 1992), equipped with a modern computerized scanning absorption detector. The later addition of Rayleigh interference optics resulted in the XL-I AUC. Furthermore, major developments in computing made the analysis easier with the help of new analysis software. Today, about 400 XL-I instruments exist worldwide. They are used in the pharmaceutical, biopharmaceutical and polymer industries as well as in academic research fields such as biochemistry, biophysics, molecular biology and materials science.
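For reference, the molar mass determination mentioned above rests on the Svedberg equation, a standard textbook relation (not specific to this thesis) connecting the measured sedimentation coefficient $s$ and diffusion coefficient $D$ to the molar mass $M$:

\[ s = \frac{M\,(1-\bar{v}\rho)}{N_A\,f}, \qquad M = \frac{s\,R\,T}{D\,(1-\bar{v}\rho)}, \]

where $\bar{v}$ is the partial specific volume of the solute, $\rho$ the solvent density, $f$ the frictional coefficient (related to $D$ by the Einstein relation $D = k_\text{B}T/f$), $N_A$ Avogadro's number, $R$ the gas constant and $T$ the absolute temperature.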
About 350 core scientific publications using analytical ultracentrifugation are published every year (source: SciFinder 2008), with an increasing number of references (436 references in 2008). Tremendous progress has been made in methods and analysis software since the digitalization of experimental data with the release of the XL-I. In comparison to the previous decade, data analysis has become more efficient and reliable. Today, AUC labs can routinely use sophisticated data analysis methods for the determination of sedimentation coefficient distributions (Demeler and van Holde 2004; Schuck 2000; Stafford 1992), molar mass distributions (Brookes and Demeler 2008; Brookes et al. 2006; Brown and Schuck 2006), interaction constants (Cao and Demeler 2008; Schuck 1998; Stafford and Sherwood 2004), particle size distributions with Angstrom resolution (Cölfen and Pauck 1997) and the simultaneous determination of size and shape distributions from sedimentation velocity experiments (Brookes and Demeler 2005; Brookes et al. 2006). These methods are available in powerful software packages that combine various approaches, such as Ultrascan (Demeler 2005), Sedfit/Sedphat (Schuck 1998; Vistica et al. 2004) and Sedanal (Stafford and Sherwood 2004). All of these packages are free of charge. Furthermore, Ultrascan's source code is licensed under the GNU Public License (http://www.gnu.org/copyleft/gpl.html), so Ultrascan can be further improved by any research group, and workshops are organized to support these software packages. Despite the tremendous developments in data analysis, the hardware of the system has not developed much. Although various user-developed detectors exist in research laboratories, they are not commercially available. Since 1992, only one new optical system, the fluorescence optics (Schmidt and Reisner 1992; MacGregor et al. 2004; MacGregor 2006; Laue and Kroe, in press), has been commercialized; apart from that, there has been no commercially available improvement of the optical systems. The remarkable fact about the current XL-I hardware is that it is 20 years old, although the last 20 years have seen enormous developments in microelectronics, software and optical systems that could be utilized for improved detectors. As examples of user-developed detectors, Bhattacharyya (2006) described a multiwavelength analytical ultracentrifuge (MWL-AUC), a Raman detector and a small-angle laser light scattering detector in his PhD thesis. The MWL-AUC became operational, but a very high noise level prevented work with real samples. Tests with the Raman detector were not successful because the low light intensity required long integration times. The small-angle laser light scattering detector could detect latex particles but failed to detect smaller particles and molecules owing to the low sensitivity of the detector (a photodiode). The primary motivation of this work is to construct a detector which can measure new physico-chemical properties with AUC on a well-fractionated sample in the cell. The final goal is to obtain a multiwavelength detector for the AUC that measures complementary quantities. Instrument development is an option for a scientist only when there is a huge potential benefit but no commercial enterprise developing appropriate equipment, or when there is not enough financial support to buy it. The first case was our motivation for developing detectors for AUC.
Our aim is to use today's technological advances in microelectronics, programming and mechanics in order to develop new detectors for AUC and to bring the existing MWL detector to routine operation. The project has mechanical, electronic, optical, software, hardware, chemical, industrial and biological aspects; hence, by its nature, it is a multidisciplinary project and carries the structural problem typical of its kind: determining the exact discipline to follow at each new step, with the risk of becoming lost in some direction. Keeping this in mind, we have chosen the simplest possible solution to every optical, mechanical, electronic, software or hardware problem we encountered, and we have always tried to keep the overall picture in view. In this research, we designed and tested the CCD-C-AUC (a CCD camera UV/Vis absorption detector for AUC) and the SLS-AUC (a static light scattering detector for AUC). One of the SLS-AUC designs produced successful test results, but the design could not be brought to the operational stage. However, the Multiwavelength Analytical Ultracentrifuge (MWL-AUC), an important detector for chemistry, biology and industry, was developed to an operational state and is introduced in this thesis. Three applications of the MWL-AUC to the aforementioned disciplines are then presented. First, an application of the MWL-AUC to a biological system, a mixture of the proteins IgG, aldolase and BSA, is presented. Next, an application of the MWL-AUC to a mass-produced industrial sample (β-carotene gelatin composite particles) manufactured by BASF AG is shown. Finally, it is demonstrated how the MWL-AUC can impact nanoparticle science, by investigating the quantum size effect of CdTe and its growth mechanism. In this thesis, mainly the relation between new technological developments and detector development for AUC is investigated, and pioneering results are obtained that indicate the possible direction for the future of AUC. For example, each MWL-AUC data set contains thousands of wavelengths and thus spectral information at each radial point. The data can be separated into single-wavelength files and analyzed classically with existing software packages, since all existing packages, including Ultrascan, Sedfit and Sedanal, can analyze only single-wavelength data; fundamentally new software developments are therefore needed. As a first attempt, Emre Brookes and Borries Demeler developed a multiwavelength module to analyze MWL-AUC data, which analyzes each wavelength separately and independently; we thank them for this important contribution to the software development. Unfortunately, this module requires a huge amount of computing power and does not take the spectral information into account during the analysis. New software algorithms are needed that exploit the spectral information and analyze all wavelengths jointly. We would also like to invite the programmers of Ultrascan, Sedfit, Sedanal and the other programs to develop new algorithms in this direction.
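The single-wavelength separation mentioned above amounts to slicing the radius-by-wavelength absorbance matrix column by column. A minimal sketch follows; the function name, file naming and two-column layout are purely illustrative and do not reproduce the real XL-I or Ultrascan formats:

```python
import numpy as np

def split_wavelengths(radius, wavelengths, absorbance, prefix="scan"):
    """Write one single-wavelength radial profile per file from an
    MWL-AUC scan given as a (n_radii, n_wavelengths) absorbance matrix."""
    for j, wl in enumerate(wavelengths):
        profile = np.column_stack([radius, absorbance[:, j]])
        # Illustrative two-column text format: radius, absorbance.
        np.savetxt(f"{prefix}_{wl:.0f}nm.dat", profile,
                   header=f"radius_cm  absorbance_at_{wl:.0f}nm")
```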
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there is great interest in covering these applications with diode laser emission. Despite modern band gap engineering, not all wavelengths can be accessed with diode laser radiation; especially in the visible spectral range between 480 nm and 630 nm, no emission from diode lasers is available yet. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation with extraordinary spatial, temporal and spectral quality is required to pump frequency conversion. Broad area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They belong to the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized. However, these diode laser structures demand additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to that of a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow stripe emitters realized in close proximity. Due to the overlap of the fields of neighboring emitters, or the presence of leaky waves, a strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, in the free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization. This leads to power fluctuations, spectral instabilities and poor beam quality. Thus, it was necessary to study the emission properties of these stripe-arrays to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation with high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose, a comprehensive analysis of the stripe-array devices was carried out here. The physical effects that give rise to the emission characteristics were investigated theoretically and experimentally. In this context, numerical models could be verified and extended, and good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on mathematical simulations and experimental work, it was possible to design novel external cavities that select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz and a very good beam quality with M² < 1.5. This is a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion, constituting a substantial improvement over existing concepts.
In the next step, newly available periodically poled crystals were used for second harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as the pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion. This was feasible because of the improvements made to the infrared stripe-array diode lasers through the development of new physical concepts.
With the rise of electronic integration between organizations, the need for a precise specification of interaction behavior increases. Information systems, which replace interactions previously carried out by humans via phone, fax and email, require a precise specification for handling all possible situations. Such interaction behavior is described in process choreographies. Choreographies enumerate the roles involved, the allowed interactions, the message contents and the behavioral dependencies between interactions. A choreography serves as an interaction contract and is the starting point for adapting existing business processes and systems or for implementing new software components. As a thorough analysis and comparison of choreography modeling languages is missing in the literature, this thesis introduces a requirements framework for choreography languages and uses it to compare current choreography languages. Language proposals for overcoming the identified limitations are given for choreography modeling on both the conceptual and the technical level. In the interconnection modeling style, behavioral dependencies are defined on a per-role basis and the different roles are interconnected using message flow. This thesis reveals a number of modeling "anti-patterns" for interconnection modeling, motivating further investigation of choreography languages following the interaction modeling style. Here, interactions are seen as atomic building blocks and the behavioral dependencies between them are defined globally. Two novel language proposals are put forward for this modeling style, which have already influenced industrial standardization initiatives. While avoiding many of the pitfalls of interconnection modeling, new anomalies can arise in interaction models: a choreography might not be realizable, i.e. there is no set of interacting roles that collectively realizes the specified behavior. For example, if an interaction between roles A and B must precede an interaction between roles C and D, then C and D have no local means of observing when the first interaction has taken place. This thesis investigates different dimensions of realizability.
Fiscal federalism has been an important topic among public finance theorists over the last four decades. There is a series of arguments that decentralization of government enhances growth by improving allocative efficiency. However, empirical studies have shown mixed results for industrialized and developing countries, and some have demonstrated that there might be a threshold level of economic development below which decentralization is not effective. Developing and transition countries have adopted a variety of forms of fiscal decentralization as a possible strategy to achieve effective and efficient governmental structures. Owing to country-specific circumstances, no generalized principle of decentralization exists. Therefore, decentralization has taken place in different forms in various countries at different times, and even exactly the same extent of decentralization may have had different impacts under different conditions. The purpose of this study is to investigate the current state of fiscal decentralization in Mongolia and to develop policy recommendations for an efficient and effective intergovernmental fiscal relations system for Mongolia. Within this perspective, the analysis concentrates on the scope and structure of the public sector, the expenditure and revenue assignments, and the design of intergovernmental transfers and sub-national borrowing. The study is based on data for the twenty-one provinces and the capital city of Mongolia for the period from 2000 to 2009. As a former socialist country, Mongolia had a highly centralized governmental sector. The analysis reveals that Mongolia has introduced a number of decentralization measures over the last decade, which followed a top-down approach and were implemented slowly, without any integrated decentralization strategy. As a result, Mongolia became a de-concentrated state with fiscal centralization. The revenue assignment lacks a very important element, namely significant revenue autonomy for sub-national governments, which is vital for efficient service delivery at the local level. Under the current assignment of expenditure and revenue responsibilities, most of the provinces are unable to provide a certain national standard of public goods supply. Hence, intergovernmental transfers from the central jurisdiction to the sub-national jurisdictions play an important role in equalizing the vertical and horizontal imbalances in Mongolia. The critical problem associated with intergovernmental transfers is that there is no stable, predictable and transparent system of transfer allocation. The amount of transfers to sub-national governments is determined largely by political decisions on an ad hoc basis and disregards local differences in needs and fiscal capacity. Thus, a fiscal equalization system based on the fiscal needs of the provinces should be implemented. Such equalization transfers would at least partly offset the regional disparities in revenues and enable the sub-national governments to provide a national minimum standard of local public goods.
Throughout its empirical research history, eye movement research has always been aware of the differences in reading behavior induced by individual differences and task demands. This work introduces a novel comprehensive concept of reading strategy, comprising individual differences in reading style and reading skill as well as reader goals. In a series of sentence reading experiments recording eye movements, the influence of reading strategies on reader- and word-level effects, assuming distributed processing, was investigated. The results provide evidence for strategic, top-down influences on eye movement control that extend our understanding of eye guidance in reading.
Although the basic structure of biological membranes is provided by the lipid bilayer, most of their specific functions are carried out by membrane proteins (MPs) such as channels, ion pumps and receptors. Additionally, it is known that mutations in MPs are directly or indirectly involved in many diseases. Thus, structure determination of MPs is of major interest not only in structural biology but also in pharmacology, especially for drug development. Advances in the structural biology of MPs have been strongly supported by the success of three leading techniques: X-ray crystallography, electron microscopy and solution NMR spectroscopy. However, X-ray crystallography and electron microscopy require highly diffracting 3D or 2D crystals, respectively. Today, structure determination of non-crystalline solid protein preparations has been made possible through the rapid progress of solid-state MAS NMR methodology for biological systems. Castellani et al. solved and refined the first structure of a microcrystalline protein using only solid-state MAS NMR spectroscopy. This successful application opens up perspectives for accessing systems that are difficult to crystallise or that form large heterogeneous complexes and insoluble aggregates, for example ligands bound to an MP receptor, protein fibrils and heterogeneous protein aggregates. Solid-state MAS NMR spectroscopy is in principle well suited to studying MPs at atomic resolution. In this thesis, different types of MP preparations were tested for their suitability for study by solid-state MAS NMR. Proteoliposomes, poorly diffracting 2D crystals and a PEG precipitate of the outer membrane protein G (OmpG) were prepared as a model system for large MPs. Results from this work, combined with data from the literature, show that highly diffracting crystalline material is not a prerequisite for the structural analysis of MPs by solid-state MAS NMR. Instead, it is possible to use non-diffracting 3D crystals, MP precipitates, poorly diffracting 2D crystals and proteoliposomes. For the latter two types of preparation, the MP is reconstituted into a lipid bilayer, which allows structural investigation in a quasi-native environment. In addition, to prepare an MP sample for solid-state MAS NMR, it is possible to use screening methods that are well established for the 3D and 2D crystallisation of MPs. These findings will hopefully open a fourth avenue for the structural investigation of MPs. The prerequisite for structural studies by NMR in general, and the most time-consuming step, is always the assignment of resonances to specific nuclei within the protein. Over the last few years, an ever-increasing number of assignments from solid-state MAS NMR of uniformly carbon- and nitrogen-labelled samples has been reported, mostly for small proteins of up to around 150 amino acids in length. However, the complexity of the spectra increases with the molecular weight of the protein; thus, the conventional assignment strategies developed for small proteins do not yield a sufficiently high degree of assignment for the large MP OmpG (281 amino acids). Therefore, a new assignment strategy for finding starting points in large MPs was devised. The assignment procedure is based on a sample with [2,3-13C, 15N]-labelled Tyr and Phe and uniformly labelled alanine and glycine. This labelling pattern reduces the spectral overlap as well as the number of assignment possibilities.
In order to extend the assignment, four other specifically labelled OmpG samples were used. The assignment procedure starts with the identification of the spin systems of each labelled amino acid using 2D 13C-13C and 3D NCACX correlation experiments. In a second step, 2D and 3D NCOCX type experiments are used for the sequential assignment of the observed resonances to specific nuclei in the OmpG amino acid sequence. Additionally, it was shown in this work that biosynthetically site-directed labelled samples, which are normally used to observe long-range correlations, were helpful for confirming the assignment. Another approach to finding assignment starting points in large protein systems is the use of spectroscopic filtering techniques. A filtering block that selects methyl resonances was used to find further assignment starting points for OmpG. Combining all these techniques, it was possible to assign nearly 50% of the observed signals to the OmpG sequence. Using this information, a prediction of the secondary structure elements of OmpG was possible. Most of the calculated motifs were in good agreement with the crystal structures of OmpG. The approaches presented here should be applicable to a wide variety of MPs and MP complexes and should thus open a new avenue for the structural biology of MPs.
The adaptive evolutionary potential of a species or population to cope with omnipresent environmental challenges is based on its genetic variation. Variability at immune genes, such as the genes of the major histocompatibility complex (MHC), is assumed to be a very powerful and effective tool for keeping pace with diverse and rapidly evolving pathogens. In my thesis, I studied natural levels of variation at the MHC genes, which play a key role in immune defence, and parasite burden in different small mammal species. I assessed the importance of MHC variation for parasite burden in small mammal populations in their natural environment. To understand the processes shaping different patterns of MHC variation, I focused on evidence of selection exerted by pathogens upon the host. Further, I addressed the issue of low MHC diversity in populations or species, which could potentially arise as a result of habitat fragmentation and isolation. Despite its key role in mammalian evolution, the marsupial MHC has rarely been investigated. Studies on primarily captive or laboratory-bred individuals indicated very little or even no polymorphism at the marsupial MHC class II genes. However, natural levels of marsupial MHC diversity and selection are unknown to date, as studies on wild populations are virtually absent. I investigated MHC class II variation in two Neotropical marsupial species endemic to the threatened Brazilian Atlantic Forest (Gracilinanus microtarsus, Marmosops incanus) to test whether the predicted low marsupial MHC class II polymorphism proves true under natural conditions. For the first time in marsupials, I confirmed characteristics of MHC selection that were so far known only from eutherian mammals, birds and fish: positive selection on specific codon sites, recombination, and trans-species polymorphism. Beyond that, the two marsupial species revealed considerable differences in their MHC class II diversity. Diversity was rather low in M. incanus but tenfold higher in G. microtarsus, disproving the predicted generally low marsupial MHC class II variation. As pathogens are believed to be very powerful drivers of MHC diversity, I studied parasite burden in both host species to understand the reasons for these remarkable differences in MHC diversity. In both marsupial species, specific MHC class II variants were associated with either high or low parasite load, highlighting the importance of the marsupial MHC class II in pathogen defence. I developed two alternative scenarios with regard to MHC variation, parasite load and parasite diversity. In the ‘evolutionary equilibrium’ scenario, I assumed the species with low MHC diversity, M. incanus, to be under relaxed pathogenic selection and expected low parasite diversity. Alternatively, low MHC diversity could be the result of a recent loss of genetic variation through a genetic bottleneck event. Under this ‘unbalanced situation’ scenario, I assumed a high parasite burden in M. incanus due to a lack of resistance alleles. The parasitological results clearly reject the first scenario and point to the second: M. incanus is distinctly more heavily parasitised, while its parasite diversity is roughly equal to that of G. microtarsus. Hence, I suggest that the parasite load in M. incanus is the consequence rather than the cause of its low MHC diversity. MHC variation and its associations with parasite burden have typically been studied within single populations, whereas MHC variation between populations has rarely been taken into account.
To gain scientific insight into this issue, I chose a common European rodent species, the yellow-necked mouse (Apodemus flavicollis), and investigated the effects of genetic diversity on parasite load not at the individual but at the population level. I included populations which possess different levels of variation at the MHC as well as at neutrally evolving genetic markers (microsatellites). I was able to show that mouse populations with a high MHC allele diversity are better armed against high parasite burdens, highlighting the significance of adaptive genetic diversity in the field of conservation genetics. An individual will not directly benefit from its population’s large MHC allele pool in terms of parasite resistance. But confronted with the multitude of pathogens present in the wild, a population with a large MHC allele reservoir is more likely to possess individuals with resistance alleles. These results deepen our understanding of the complex causes and processes of evolutionary adaptation between hosts and pathogens.
‘Heterosis’ is a term used in genetics and breeding to refer to hybrid vigour, the superiority of hybrids over their parents in traits such as size, growth rate, biomass, fertility, yield, nutrient content, disease resistance or tolerance to biotic and abiotic stress. Parental plants, two different inbred (pure) lines that have desired traits, are crossed to obtain hybrids. Maximum heterosis is observed in the first generation (F1) of such crosses. Heterosis has been utilised in plant and animal breeding programmes for at least 90 years: by the end of the 20th century, 65% of worldwide maize production was hybrid-based. It is generally believed that an understanding of the molecular basis of heterosis will allow the creation of new superior genotypes, which could either be used directly as F1 hybrids or form the basis for future breeding selection programmes. Two selected accessions of the research model plant Arabidopsis thaliana (thale cress) were crossed to obtain hybrids. These typically exhibited a 60-80% increase in biomass compared with the average weight of the two parents. This PhD project focused on investigating the role of selected regulatory genes, given their potentially key involvement in heterosis. In the first part of the project, the most appropriate developmental stage for this heterosis study was determined by metabolite level measurements and growth observations in parents and hybrids. At the selected stage, around 60 candidate regulatory genes (i.e. genes differentially expressed in hybrids compared with the parents) were identified. The majority of these were transcription factors, genes that coordinate the expression of other genes. Subsequent expression analyses of the candidate genes in biomass-heterotic hybrids of other Arabidopsis accessions revealed differential expression in a subset of the genes, highlighting their relevance for heterosis. Moreover, a fraction of the candidate regulatory genes were found within DNA regions closely linked to the genes that underlie biomass or growth heterosis. Additional analyses to validate the role of selected candidate regulatory genes proved insufficient to conclusively establish their role in heterosis, which uncovered a need for novel approaches, as discussed in the thesis. Taken together, the work provides an insight into studies on the molecular mechanisms underlying heterosis. Although studies on heterosis date back more than one hundred years, this project, like many others, revealed that more investigation will be needed to fully explain this phenomenon.
Active continental margins are affected by complex feedbacks between tectonic, climatic and surface processes, the intricate relations of which are still a matter of discussion. The Chilean convergent margin, forming the outstanding Andean subduction orogen, constitutes an ideal natural laboratory for the investigation of climate, tectonics and their interactions. In order to study both processes, I examined marine and lacustrine sediments from different depositional environments on- and offshore the south-central Chilean coast (38-40°S), combining sedimentological, geochemical and isotopic analyses to identify climatic and tectonic signals within the sedimentary records. The investigation of marine trench sediments (ODP Site 1232, SONNE core 50SL) focused on frequency changes of turbiditic event layers since the late Pleistocene. In the active margin setting of south-central Chile, these layers are considered to reflect periodically occurring earthquakes and thus to constitute an archive of regional paleoseismicity. The new results indicate glacial-interglacial changes in turbidite frequency during the last 140 kyr, with short recurrence times (~200 years) during glacial and long recurrence times (~1000 years) during interglacial periods. Hence, the generation of turbidites appears to be strongly influenced by climate and sea-level changes, which control the amount of sediment delivered to the shelf edge and thus the stability of the continental slope: more stable slope conditions during interglacial periods entail lower turbidite frequencies than during glacial periods. Since glacial turbidite recurrence times are consistent with earthquake recurrence times derived from the historical record and other paleoseismic archives of the region, I concluded that only during cold stages were sediment availability and slope instability sufficient for the complete series of large earthquakes to be recorded. Sediment transport to the shelf region is not only driven by climate but is also influenced by local forearc tectonics. Accelerating uplift along major tectonic structures caused drainage anomalies and river-flow reversals, which markedly altered the sediment supply to the Pacific Ocean. Two examples of such tectonic damming of fluvial systems are the coastal lakes Lago Lanalhue and Lago Lleu Lleu. Both lakes developed within former river valleys that once discharged towards the Pacific and were dammed by tectonically uplifted sills at ~8000 yr BP. Sediment cores from both lakes show similar successions, with marine/brackish deposits at the base overlain by lacustrine sediments. Dating the transitions between these units and comparing them with global sea-level curves allowed me to calculate local Holocene uplift rates, which are distinctly higher for the upraised sills (Lanalhue: 8.83 ± 2.7 mm/yr, Lleu Lleu: 11.36 ± 1.77 mm/yr) than for the lake basins (Lanalhue: 0.42 ± 0.71 mm/yr, Lleu Lleu: 0.49 ± 0.44 mm/yr). I therefore consider the sills to be the surface expression of a blind thrust associated with a prominent reverse fault that controls regional uplift and folding. After the final separation of Lago Lanalhue and Lago Lleu Lleu from the Pacific, continuous lacustrine deposition preserved records of local environmental change.
Sequences from both lakes indicate a long-term climate trend, with a significant shift from more arid conditions during the Mid-Holocene (8000-4200 cal yr BP) to more humid conditions during the Late Holocene (4200 cal yr BP to present). This trend is consistent with other regional paleoclimatic data and is interpreted to reflect changes in the strength and/or position of the Southern Westerly Winds. For the past ~5000 years, the sediments of Lago Lleu Lleu have been marked by numerous intercalated detrital layers that recur with a mean interval of ~210 years. Deposition of these layers may be triggered by local tectonics (i.e. earthquakes), but may also originate from changes in the local climate (e.g. the onset of modern ENSO conditions). During the last 2000 years, pronounced variations in the terrigenous sediment supply to both lakes suggest important hydrological changes on centennial time-scales as well. Lower input of terrigenous matter points to less humid phases between 200 cal yr BC - 150 cal yr AD, 900-1350 cal yr AD, and 1850 cal yr AD to the present (broadly corresponding to the Roman, Medieval, and Modern Warm Periods), whereas more humid conditions persisted from 150-900 cal yr AD and 1350-1850 cal yr AD (broadly corresponding to the Dark Ages and the Little Ice Age). In conclusion, the combined investigation of marine and lacustrine sediments is a feasible approach for reconstructing climatic and tectonic processes on different time scales. It allows climate and tectonics to be explored in one and the same archive and is largely transferable to other active margins worldwide.
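The uplift-rate estimate described above reduces to a simple calculation: the dated marine-lacustrine transition marks the time at which the sill passed through sea level, so the uplift rate is the present elevation of the dated horizon, minus the coeval eustatic sea level, divided by the horizon's age. A minimal sketch (the numbers are illustrative; the thesis derives its rates from dated cores and published sea-level curves):

    # Local Holocene uplift rate from a dated marine-lacustrine
    # transition. Numbers are illustrative, not data from the thesis.
    def uplift_rate_mm_per_yr(elevation_now_m: float,
                              paleo_sea_level_m: float,
                              age_yr: float) -> float:
        """Uplift rate in mm/yr for a horizon of known age.

        elevation_now_m   -- present elevation of the dated horizon (m a.s.l.)
        paleo_sea_level_m -- eustatic sea level at deposition (m vs. today)
        age_yr            -- age of the horizon in years
        """
        uplift_m = elevation_now_m - paleo_sea_level_m
        return 1000.0 * uplift_m / age_yr

    # A sill now at 60 m a.s.l. that lay at sea level (-10 m eustatic)
    # 8000 years ago has been rising at ~8.75 mm/yr.
    print(uplift_rate_mm_per_yr(60.0, -10.0, 8000.0))  # 8.75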
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton: they are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which carry cargo such as organelles along cytoskeletal filaments. Cytoskeletal filaments are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy, so that they can be viewed as elastic rods on the nanometer scale which exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors; this happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2 we briefly review the buckling instability of rods on the macroscopic scale and introduce an analytical model for the buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability based on a renormalization-like procedure in terms of the non-linear sigma model, in which we integrate out short-wavelength fluctuations to obtain an effective theory for the longest-wavelength mode governing the buckling instability. We calculate the resulting fluctuation-induced shift of the critical force and find that, in two spatial dimensions, thermal fluctuations increase this force. Furthermore, in the buckled state, thermal fluctuations increase the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which is rounded by thermal fluctuations. Our main result is thus that a buckled filament is effectively stretched by thermal fluctuations. These analytical results are confirmed by Monte Carlo simulations of the buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the fluctuation-induced increase in projected length is less pronounced than in two dimensions and depends strongly on the choice of boundary conditions. In the second part of this work, we present a model for the buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as cargo. The cargo filament is pushed against a wall and eventually buckles. The force-generating motors can stochastically unbind from and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first passage time for the unbinding of all linking motors, which, in a mean-field description, corresponds to the transition back to the unbuckled state of the cargo filament.
Our results show that, for sufficiently short microtubules, the movement of the kinesin-1 motors is affected by the load force generated by the cargo filament. These predictions could be tested in future experiments.
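For orientation, the classical zero-temperature reference point of the first part is Euler's critical buckling force for an elastic rod of bending rigidity kappa and length L; for hinged-hinged boundary conditions it reads (a textbook result, not specific to this thesis):

    % Euler buckling threshold of a hinged-hinged elastic rod
    F_c = \frac{\pi^2 \kappa}{L^2}

Thermal fluctuations shift this threshold; their strength is set by the ratio of contour length to persistence length, which is proportional to kappa/(k_B T) up to a dimension-dependent prefactor, and in two dimensions the thesis finds that the shift increases F_c.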
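The mean first passage time for complete unbinding can be computed from a one-step (birth-death) master equation for the number of bound motors. The following is a minimal numerical sketch under simplified assumptions (independent motors with constant unbinding and rebinding rates; function name and parameter values are illustrative, and the thesis model additionally accounts for the motor-generated load):

    # Mean first passage time (MFPT) to complete motor unbinding in a
    # birth-death model: with i of N motors bound, one motor unbinds at
    # rate i*eps, and an unbound motor rebinds at rate (N - i)*pi_ad.
    # Parameter names and values are illustrative, not from the thesis.
    import numpy as np

    def mfpt_to_unbound(N: int, eps: float, pi_ad: float) -> float:
        """MFPT from i = N bound motors to the absorbing state i = 0."""
        # Unknowns T[1..N], with T[0] = 0. Standard balance equations:
        #   (b_i + d_i) T_i = 1 + b_i T_{i+1} + d_i T_{i-1},  b_N = 0.
        A = np.zeros((N, N))
        rhs = np.ones(N)
        for i in range(1, N + 1):
            d = i * eps              # unbinding rate, state i -> i-1
            b = (N - i) * pi_ad      # rebinding rate, state i -> i+1
            A[i - 1, i - 1] = b + d
            if i > 1:
                A[i - 1, i - 2] = -d   # coupling to T_{i-1}
            if i < N:
                A[i - 1, i] = -b       # coupling to T_{i+1}
        T = np.linalg.solve(A, rhs)
        return T[N - 1]

    # Example: 4 motors, unbinding rate 1/s, rebinding rate 5/s each.
    print(f"MFPT = {mfpt_to_unbound(4, eps=1.0, pi_ad=5.0):.2f} s")

In this picture the buckled state persists as long as at least one motor links the two filaments, so the MFPT to the empty state sets the typical lifetime of the buckled configuration in the mean-field description.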