The Garzón Complex of the Garzón Massif in SW Colombia is composed of the Vergel Granulite Unit (VG) and the Las Margaritas Migmatite Unit (LMM). Previous studies reveal peak temperature conditions for the VG of about 740 °C. The present study considers the remarkable exsolution phenomena in feldspars and pyroxenes and titanium-in-quartz thermometry. Recalculated ternary feldspar compositions indicate temperatures around 900-1,000 °C, just at or above the ultra-high-temperature metamorphism (UHTM) boundary of granulites. The calculated temperature range of exsolved ortho- and clinopyroxenes also supports the existence of a UHTM event. In addition, titanium-in-quartz thermometry points towards ultra-high temperatures. This is the first known UHTM crustal segment in the northern part of South America. Although a mean geothermal gradient of ca. 38 °C km⁻¹ could imply additional heat supply in the lower crust controlling this extreme peak metamorphism, an alternative model is suggested. The Vergel Granulite Unit is thought to have formed in a continental back-arc environment with a thinned and weakened crust behind a magmatic arc (Guapotón-Mancagua Gneiss), followed by collision. In contrast, rocks of the adjacent Las Margaritas Migmatite Unit display "normal" granulite-facies temperatures and formed in a colder lower crust outside the arc, preserved by the Guapotón-Mancagua Gneiss. Back-arc formation was followed by inversion and thickening of the basin. The three units that form the modern-day Garzón Massif were juxtaposed upon each other during collision (at ca. 1,000 Ma) and exhumation. The collision leading to the deformation of the studied area is part of the Grenville orogeny, which led to the amalgamation of Rodinia.
Innovation response behaviour is defined as individuals' novelty-supporting or novelty-impeding action when navigating innovation initiatives through the organization. A typology of innovation response behaviour is developed, distinguishing between active and passive modes of conduct for novelty-supporting and novelty-impeding behaviour, respectively. The antecedents of innovation response behaviour are delineated based on West and Farr's five-factor model of individual innovation. Moreover, we argue that within organizational contexts, individuals often fail to implement their ideas due to innovation barriers, perceived as factors that are beyond their control. Based on the theory of planned behaviour, we reveal how these barriers influence individuals' intentional and exhibited innovation response behaviour. Propositions about proximal and distal antecedents of individuals' innovation response behaviour are derived. Proposing a research framework to study the organizational process of innovation from an actor-based perspective, this paper intends to link existing research on individual innovation with the process of innovation at the organizational level, explicitly accounting for the socio-political dynamics and arising managerial problems associated with successful innovation implementation within organizational realities. Implications for research in innovation management are discussed and avenues for future research outlined.
Insertion of artificial cell surface receptors for antigen-specific labelling of hybridoma cells
(2012)
The actual evapotranspiration is an important, but difficult to determine, element in the water balance of lakes and their catchment areas. Reliable data on evapotranspiration are not available for most lake basins for which paleoclimate reconstructions and modeling have been performed, particularly those in remote parts of Africa. We have used thermal infrared multispectral data for 14 ASTER scenes from the TERRA satellite to estimate the actual evapotranspiration in the 12,800 km² catchment of the Suguta Valley, northern Kenya Rift. Evidence from sediments and paleo-shorelines indicates that, during the African Humid Period (AHP, 14.8 to 5.5 kyr BP), this valley contained a large lake, 280 m deep and covering ~2200 km², which has now virtually disappeared. Evapotranspiration estimates for the Suguta Basin were generated using the Surface Energy Balance Algorithm for Land (SEBAL). Climate data required for the model were extracted from a high-resolution gridded dataset obtained from the Climatic Research Unit (East Anglia, UK). Results suggest significant spatial variations in evapotranspiration within the catchment area (ranging from 450 mm/yr in the basin to the north to 2000 mm/yr in more elevated areas) and precipitation that was ~20% higher during the AHP than in recent times. These results are in agreement with other estimates of paleo-precipitation in East Africa. The extreme response of the lake system (~280 m greater water depth than today, and a lake surface area of 2200 km²) to only moderately higher precipitation illustrates the possible sensitivity of this area to future climate change.
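For readers unfamiliar with SEBAL, its core step is closing the surface energy balance per pixel: latent heat flux (the energy consumed by evapotranspiration) is computed as the residual of net radiation, soil heat flux and sensible heat flux. A minimal sketch of that residual step only (variable names and example numbers are ours, not from the study):

```python
def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
    """SEBAL closes the surface energy balance per pixel:
    LE = Rn - G - H, where LE (W/m^2) is the latent heat flux,
    i.e. the energy available for evapotranspiration."""
    return net_radiation - soil_heat_flux - sensible_heat_flux

# e.g. Rn = 500, G = 50, H = 150 W/m^2 leaves 300 W/m^2 for evapotranspiration
le = latent_heat_flux(500.0, 50.0, 150.0)
```

The full algorithm additionally estimates Rn, G and H from the thermal imagery and meteorological data before this residual is converted to an evapotranspiration rate.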
A temporary seismic network composed of 11 stations was installed in the city of Potenza (Southern Italy) to record local and regional seismicity within the context of a national project funded by the Italian Department of Civil Protection (DPC). Some stations were moved after a certain time in order to increase the number of measurement points, leading to a total of 14 sites within the city by the end of the experiment. Recordings from 26 local earthquakes (Ml 2.2-3.8) were analyzed to compute the site responses at the 14 sites by applying both reference and non-reference site techniques. Furthermore, the Spectral Intensity (SI) for each local earthquake, as well as their ratios with respect to the values obtained at a reference site, were also calculated. In addition, a field survey of 233 single station noise measurements within the city was carried out to increase the information available at localities different from the 14 monitoring sites. By using the results of the correlation analysis between the horizontal-to-vertical spectral ratios computed from noise recordings (NHV) at the 14 selected sites and those derived by the single station noise measurements within the town as a proxy, the spectral intensity correction factors for site amplification obtained from earthquake analysis were extended to the entire city area. This procedure allowed us to provide a microzonation map of the urban area that can be directly used when calculating risk scenarios for civil defence purposes. The amplification factors estimated following this approach show values increasing along the main valley toward the east, where the detrital and alluvial complexes reach their maximum thickness.
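The horizontal-to-vertical spectral ratio (NHV) mentioned above can be sketched as follows. This is a generic single-station H/V computation, not the authors' exact processing chain (windowing, smoothing and averaging over multiple records are omitted):

```python
import numpy as np

def hv_ratio(h1, h2, v, dt):
    """Single-station horizontal-to-vertical spectral ratio:
    geometric mean of the two horizontal amplitude spectra
    divided by the vertical amplitude spectrum."""
    H = np.sqrt(np.abs(np.fft.rfft(h1)) * np.abs(np.fft.rfft(h2)))
    V = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=dt)
    return freqs, H / np.maximum(V, 1e-12)  # guard against division by zero
```

As a sanity check, feeding the same record into all three components yields a ratio of 1 at the dominant frequency; site amplification shows up as peaks above 1 in real horizontal components.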
The safe upper limit for inclusion of vitamin A in complete diets for growing dogs is uncertain, with the result that current recommendations range from 5.24 to 104.80 µmol retinol (5000 to 100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy (ME). The aim of the present study was to determine the effect of feeding four concentrations of vitamin A to puppies from weaning until 1 year of age. A total of forty-nine puppies, of two breeds, Labrador Retriever and Miniature Schnauzer, were randomly assigned to one of four treatment groups. Following weaning at 8 weeks of age, puppies were fed a complete food supplemented with retinyl acetate diluted in vegetable oil and fed at 1 ml oil/100 g diet to achieve an intake of 5.24, 13.10, 78.60 and 104.80 µmol retinol (5000, 12 500, 75 000 and 100 000 IU vitamin A)/4184 kJ (1000 kcal) ME. Fasted blood and urine samples were collected at 8, 10, 12, 14, 16, 20, 26, 36 and 52 weeks of age and analysed for markers of vitamin A metabolism and markers of safety including haematological and biochemical variables, bone-specific alkaline phosphatase, cross-linked carboxyterminal telopeptides of type I collagen and dual-energy X-ray absorptiometry. Clinical examinations were conducted every 4 weeks. Data were analysed by means of a mixed model analysis with Bonferroni corrections for multiple endpoints. There was no effect of vitamin A concentration on any of the parameters, with the exception of total serum retinyl esters, and no effect of dose on the number, type and duration of adverse events. We therefore propose that 104.80 µmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal) is a suitable safe upper limit for use in the formulation of diets designed for puppy growth.
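The Bonferroni correction used above is simple to state: with m endpoints, each test is run at alpha/m, or equivalently each raw p-value is multiplied by m before comparison with the family-wise alpha. A minimal sketch of the correction itself, not the authors' full mixed-model analysis (the p-values below are made up):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m multiple endpoints: adjusted
    p-values are min(1, m*p); an endpoint is significant when its
    adjusted p-value falls below the family-wise alpha."""
    m = len(p_values)
    adjusted = [min(1.0, m * p) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Three hypothetical endpoints: only the first survives correction
adj, sig = bonferroni([0.004, 0.03, 0.40])
```

The correction controls the family-wise error rate at the cost of power, which is why it is typically reserved for a pre-specified set of endpoints, as in the study above.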
The reaction of the German labor market to the Great Recession 2008/09 was relatively mild – especially compared to other countries. The reason lies not only in the specific type of the recession – which was favorable for the structure of the German economy – but also in a series of labor market reforms initiated between 2002 and 2005 altering, inter alia, labor supply incentives. However, irrespective of the mild response to the Great Recession, there are a number of substantial future challenges the German labor market will soon have to face. Female labor supply still lies well below that of other countries and a massive demographic change over the next 50 years will have substantial effects on labor supply as well as the pension system. In addition, due to a skill-biased technological change over the next decades, firms will face problems of finding employees with adequate skills. The aim of this paper is threefold. First, we outline why the German labor market reacted in such a mild fashion, describe current economic trends of the labor market in light of general trends in the European Union, and reveal some of the main associated challenges. Thereafter, the paper analyzes recent reforms of the main institutional settings of the labor market which influence labor supply. Finally, based on the status quo of these institutional settings, the paper gives a brief overview of strategies to adequately combat the challenges in terms of labor supply and to ensure economic growth in the future.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA) together with its extension, dynamic FBA, have proven instrumental for analyzing the robustness and dynamics of metabolic networks by employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
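For readers unfamiliar with FBA: it treats the metabolic steady state S·v = 0 as the constraint set of a linear program and optimizes the flux vector v against an objective. A toy sketch of plain FBA using a made-up three-reaction network (not the Calvin-Benson or carbohydrate models from the paper; the ROOM/dynamic extensions discussed above add temporal constraints on top of this core):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> export. Columns are the fluxes
# v1 (uptake of A), v2 (A -> B), v3 (export of B).
S = np.array([
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
])
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units
c = np.array([0, 0, -1])                  # maximize v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# Steady state S @ v = 0 forces v1 = v2 = v3, so the optimum is v3 = 10.
```

Dynamic FBA solves a sequence of such programs over time steps, updating concentrations from the fluxes; the methods above additionally penalize abrupt flux changes between steps.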
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies in the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
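The coupling of a cell-level binding model with a compartmental PK model can be illustrated with a deliberately simplified ODE system (all parameters are hypothetical, not the anti-EGFR values from the study): free drug is eliminated first-order while also binding its receptor reversibly, and receptor occupancy stands in for the F(ab)-mediated inhibitory effect:

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only): first-order elimination,
# reversible drug-receptor binding, conserved total receptor.
k_el, k_on, k_off, R_tot = 0.1, 1.0, 0.01, 1.0

def rhs(t, y):
    D, C = y                       # free drug, drug-receptor complex
    R_free = R_tot - C
    binding = k_on * D * R_free - k_off * C
    return [-k_el * D - binding,   # PK level: elimination + loss to binding
            binding]               # cell level: complex formation

sol = solve_ivp(rhs, (0.0, 50.0), [10.0, 0.0], rtol=1e-8, atol=1e-10)
occupancy = sol.y[1][-1] / R_tot   # fraction of receptor still inhibited
```

The saturation effect described in the abstract appears in such models as occupancy approaching R_tot once k_on·D dominates k_off, so further affinity gains change the effect little.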
Parafoveal preview benefit (PB) is an implicit measure of lexical activation in reading. PB has been demonstrated for orthographic and phonological but not for semantically related information in English. In contrast, semantic PB is obtained in German and Chinese. We propose that these language differences reveal differential resource demands and timing of phonological and semantic decoding in different orthographic systems.
This study provides a detailed analysis of the mid-Holocene to present-day precipitation change in the Asian monsoon region. We compare for the first time results of high resolution climate model simulations with a standardised set of mid-Holocene moisture reconstructions. Changes in the simulated summer monsoon characteristics (onset, withdrawal, length and associated rainfall) and the mechanisms causing the Holocene precipitation changes are investigated. According to the model, most parts of the Indian subcontinent received more precipitation (up to 5 mm/day) at mid-Holocene than at present day. This is related to a stronger Indian summer monsoon accompanied by an intensified vertically integrated moisture flux convergence. The East Asian monsoon region exhibits local inhomogeneities in the simulated annual precipitation signal. The sign of this signal depends on the balance of decreased pre-monsoon and increased monsoon precipitation at mid-Holocene compared to the present day. Hence, rainfall changes in the East Asian monsoon domain are not solely associated with modifications in the summer monsoon circulation but also depend on changes in the mid-latitudinal westerly wind system that dominates the circulation during the pre-monsoon season. The proxy-based climate reconstructions confirm the regional dissimilarities in the annual precipitation signal and agree well with the model results. Our results highlight the importance of including the pre-monsoon season in climate studies of the Asian monsoon system and point out the complex response of this system to the Holocene insolation forcing. The comparison with a coarse climate model simulation reveals that this complex response can only be resolved in high resolution simulations.
The closer the better
(2012)
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
This paper presents a highly effective compactor architecture for processing test responses with a high percentage of x-values. The key component is a hierarchical configurable masking register, which allows the compactor to dynamically adapt to and provide excellent performance over a wide range of x-densities. A major contribution of this paper is a technique that enables the efficient loading of the x-masking data into the masking logic in a parallel fashion using the scan chains. A method for eliminating the requirement for dedicated mask control signals using automated test equipment timing flexibility is also presented. The proposed compactor is especially suited to multisite testing. Experiments with industrial designs show that the proposed compactor enables compaction ratios exceeding 200x.
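The principle behind x-masking can be sketched independently of the paper's specific hierarchical register: per scan-out cycle, a mask forces chains known to carry unknown (x) values to a constant before XOR compaction, so a single x does not corrupt the whole signature. A minimal behavioural model (our own simplification, not the proposed architecture):

```python
X = None  # marker for an unknown (x) value in a test response

def xor_compact(scan_slice, mask):
    """Compact one scan-out cycle across parallel scan chains.
    Chains whose mask bit is set contribute a constant 0, so their
    x-values cannot corrupt the XOR signature; any unmasked x
    still poisons the result."""
    acc = 0
    for bit, masked in zip(scan_slice, mask):
        if masked:
            continue            # masked chain contributes nothing
        if bit is X:
            return X            # an unmasked x makes the cycle unobservable
        acc ^= bit
    return acc

# Masking chain 1 rescues the signature: 1 ^ 0 ^ 1 = 0
sig = xor_compact([1, X, 0, 1], [False, True, False, False])
```

The engineering problem the paper addresses is delivering these per-cycle mask bits cheaply, which it does by loading masking data through the scan chains themselves rather than through dedicated control pins.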
Background: Mediterranean temporary water bodies are important reservoirs of biodiversity and host a unique assemblage of diapausing aquatic invertebrates. These environments are currently vanishing because of increasing human pressure. Chirocephalus kerkyrensis is a fairy shrimp typical of temporary water bodies in Mediterranean plain forests and has undergone a substantial decline in number of populations in recent years due to habitat loss. We assessed patterns of genetic connectivity and phylogeographic history in the seven extant populations of the species from Albania, Corfu Is. (Greece), Southern and Central Italy.
Methodology/Principal Findings: We analyzed sequence variation at two mitochondrial DNA genes (Cytochrome Oxidase I and 16s rRNA) in all the known populations of C. kerkyrensis. We used multiple phylogenetic, phylogeographic and coalescence-based approaches to assess connectivity and historical demography across the whole distribution range of the species. C. kerkyrensis is genetically subdivided into three main mitochondrial lineages; two of them are geographically localized (Corfu Is. and Central Italy) and one encompasses a wide geographic area (Albania and Southern Italy). Most of the detected genetic variation (approximate to 81%) is apportioned among the aforementioned lineages.
Conclusions/Significance: Multiple analyses of mismatch distributions consistently supported both past demographic and spatial expansions with the former predating the latter; demographic expansions were consistently placed during interglacial warm phases of the Pleistocene while spatial expansions were restricted to cold periods. Coalescence methods revealed a scenario of past isolation with low levels of gene flow in line with what is already known for other co-distributed fairy shrimps and suggest drift as the prevailing force in promoting local divergence. We recommend that these evolutionary trajectories should be taken in proper consideration in any effort aimed at protecting Mediterranean temporary water bodies.
In this study we used molecular markers to screen for the occurrence and prevalence of the three most common haemosporidian genera (Haemoproteus, Plasmodium, and Leucocytozoon) in blood samples of the Philippine Bulbul (Hypsipetes philippinus), a thrush-size passerine bird endemic to the Philippine Archipelago. We then used molecular data to ask whether the phylogeographic patterns in this insular host-parasite system might follow similar evolutionary trajectories or not. We took advantage of a previous study describing the pattern of genetic structuring in the Philippine Bulbul across the Central Philippine Archipelago (6 islands, 7 populations and 58 individuals; three mitochondrial DNA genes). The very same birds were here screened for the occurrence of parasites by species-specific PCR assays of the mitochondrial cytochrome b gene (471 base pairs). Twenty-eight of the 58 analysed birds (48%) had Haemoproteus infections, while just 2% of the birds were infected with either Leucocytozoon or Plasmodium. Sixteen of the 28 birds carrying Haemoproteus had multiple infections. The phylogeography of the Philippine Bulbul mostly reflects the geographical origin of samples and it is consistent with the occurrence of two different subspecies on (1) Semirara and (2) Carabao, Boracay, North Gigante, Panay, and Negros, respectively. Haemoproteus phylogeography shows very little geographical structure, suggesting extensive gene flow among locations. While movements of birds among islands seem very sporadic, we found co-occurring evolutionary divergent parasite lineages. We conclude that historical processes have played a major role in shaping the host phylogeography, while they have left no signature in that of the parasites. Here ongoing population processes, possibly multiple reinvasions mediated by other hosts, are predominant.
Hidden diversity in diatoms of Kenyan Lake Naivasha: a genetic approach detects temporal variation
(2012)
This study provides insights into the morphological and genetic diversity in diatoms occurring in core sediments from tropical lakes in Kenya. We developed a genetic survey technique specific for diatoms utilizing a short region (7667 bp) of the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL) gene as genetic barcode. Our analyses (i) validated the use of rbcL as a barcoding marker for diatoms, applied to sediment samples, (ii) showed a significant correlation between the results obtained by morphological and molecular data and (iii) indicated temporal variation in diatom assemblages on the inter- and intra-specific level. Diatom assemblages from a short core from Lake Naivasha show a drastic shift over the last 200 years, as littoral species (e.g. Navicula) are replaced by more planktonic ones (e.g. Aulacoseira). Within that same period, we detected periodic changes in the respective frequencies of distinct haplotype groups of Navicula, which coincide with wet and dry periods of Lake Naivasha between 1820 and 1938 AD. Our genetic analyses on historical lake sediments revealed inter- and intra-specific variation in diatoms, which is partially hidden behind single morphotypes. The occurrence of particular genetic lineages is probably correlated with environmental factors.
In this study, we report the genetic population structure of the Fire-bellied toad Bombina bombina in Brandenburg (East Germany) in the context of conservation. We analysed 298 samples originating from 11 populations in Brandenburg using mitochondrial control region sequences and six polymorphic microsatellite loci. For comparison, we included one population each from Poland and Ukraine into our analysis. Within Brandenburg, we detected a moderate variability in the mitochondrial control region (19 different haplotypes) and at microsatellite loci (9-12 alleles per locus). These polymorphisms revealed a clear population structure among toads in Brandenburg, despite a relatively high overall population density and the moderate size of single populations (100-2000 individuals). The overall genetic population structure is consistent with a postglacial colonization from South East-Europe and a subsequent population expansion. Based on genetic connectivity, we infer Management Units (MUs) as targets for conservation. Our genetic survey identified MUs, within which human infrastructure is currently preventing any genetic exchange. We also detect an unintentional translocation from South East to North West Brandenburg, presumably in the course of fish stocking activities. Provided suitable conservation measures are taken, Brandenburg should continue to harbor large populations of this critically endangered species.
Fourteen microsatellite markers were isolated and characterized for the endangered Visayan tarictic hornbill (Penelopides panini, Aves: Bucerotidae). In an analysis of 76 individuals, the number of alleles per locus varied from one to 12. Expected and observed heterozygosity ranged from 0.00 to 0.87 and from 0.00 to 0.89, respectively. All primers also amplify microsatellite loci in Luzon tarictic hornbill (Penelopides manillae), Mindanao tarictic hornbill (Penelopides affinis), the critically endangered Walden's hornbill (Aceros waldeni) and the near-threatened writhed hornbill (Aceros leucocephalus). Two loci which are monomorphic in P. panini were found polymorphic in at least one of the other species. These 14 new microsatellite markers specifically developed for two genera of Philippine hornbills, in combination with those already available for the hornbill genera Buceros and Bucorvus, comprise a reasonable number of loci to genetically analyse wild and captive populations of these and probably other related, often endangered hornbills.
Rates of multiple paternities were investigated in the sailfin molly (Poecilia latipinna), using eight microsatellite loci. Genotyping was performed for offspring and mothers in 40 broods from four allopatric populations from the south-eastern U.S.A. along a geographic stretch of 1200 km in west-east direction and approximately 200 km from north to south. No significant differences regarding rates of multiple paternities were found between populations despite sample populations stemming from ecologically divergent habitats. Even the most conservative statistical approach revealed a minimum of 70% of the broods being sired by at least two males, with an average of 1.80-2.95 putative fathers per brood. Within broods, one male typically sired far more offspring than would be expected under an assumed equal probability of all detected males siring offspring.
We tested the utility of a 230 base pair intron fragment of the highly conserved nuclear gene Elongation Factor 1-alpha (EF1-alpha) as a proper marker to reconstruct the phylogeography of the marine amphipod Pontogammarus maeoticus (Sowinsky, 1894) from the Caspian and Black Seas. As a prerequisite for further analysis, we confirmed by Southern blot analysis that EF1-alpha is encoded at a single locus in P. maeoticus. We included 15 populations and 60 individuals in the study. Both the phylogeny of the 27 unique alleles found and population genetic analyses revealed a significant differentiation between populations from the aforementioned sea basins. Our results are in remarkable agreement with recent studies on a variety of species from the same area, which invariably support a major phylogeographic break between the Caspian and Black Seas. We thus conclude that our EF1-alpha intron is an informative marker for phylogeographic studies in amphipods at the shallow population level.
Background: The Visayan Tarictic Hornbill (Penelopides panini) and the Walden's Hornbill (Aceros waldeni) are two threatened hornbill species endemic to the western islands of the Visayas that constitute - between Luzon and Mindanao - the central island group of the Philippine archipelago. In order to evaluate their genetic diversity and to support efforts towards their conservation, we analyzed genetic variation in ~600 base pairs (bp) of the mitochondrial control region I and at 12-19 nuclear microsatellite loci. The sampling covered extant populations, still occurring only on two islands (P. panini: Panay and Negros, A. waldeni: only Panay), and it was augmented with museum specimens of extinct populations from neighboring islands. For comparison, their less endangered (= more abundant) sister taxa, the Luzon Tarictic Hornbill (P. manillae) from the Luzon and Polillo Islands and the Writhed Hornbill (A. leucocephalus) from Mindanao Island, were also included in the study. We reconstructed the population history of the two Penelopides species and assessed the genetic population structure of the remaining wild populations in all four species.
Results: Mitochondrial and nuclear data concordantly show a clear genetic separation according to the island of origin in both Penelopides species, but also unravel sporadic over-water movements between islands. We found evidence that deforestation in the last century influenced these migratory events. Both classes of markers and the comparison to museum specimens reveal a genetic diversity loss in both Visayan hornbill species, P. panini and A. waldeni, as compared to their more abundant relatives. This might have been caused by local extinction of genetically differentiated populations together with the dramatic decline in the abundance of the extant populations.
Conclusions: We demonstrated a loss in genetic diversity of P. panini and A. waldeni as compared to their sister taxa P. manillae and A. leucocephalus. Because of the low potential for gene flow and population exchange across islands, saving of the remaining birds of almost extinct local populations - be it in the wild or in captivity - is particularly important to preserve the species' genetic potential.
The ongoing global amphibian decline calls for an increase of habitat and population management efforts. Pond restoration and construction is more and more accompanied by breeding and translocation programs. However, the appropriateness of translocations as a tool for conservation has been widely debated, as it can cause biodiversity loss through genetic homogenization and can disrupt local adaptation, eventually leading to outbreeding depression. In this study, we investigated the genetic structure of two translocated populations of the critically endangered fire-bellied toad Bombina bombina at its north-western distribution edge using supposedly neutral genetic markers (variation in the mitochondrial control region and microsatellites) as well as a marker under selection (major histocompatibility complex (MHC) genes). While one of the newly established populations showed the typical genetic composition of surrounding populations, the other was extremely diverged without clear affinity to its putative source. In this population we detected a profound impact of allochthonous individuals: 100% of the analyzed individuals exhibited a highly divergent mitochondrial haplotype which was otherwise found in Austria. 83% of them were also assigned to Austria by the analysis of microsatellites. Interestingly, for the adaptive marker (MHC) local alleles were predominant in this population, while only very few alleles were shared with the Austrian population. Probably Mendelian inheritance has reshuffled genotypes such that adaptive local alleles are maintained (here, MHC), while presumably neutral allochthonous alleles dominate at other loci. The release of allochthonous individuals generally increased the genetic variability of the affected population without wiping out locally adaptive genotypes.
Thus, outbreeding depression might be less apparent than sometimes thought and natural selection appears strong enough to maintain locally adaptive alleles, at least in functionally important immune system genes.
We examined the prevalence and host fidelity of avian haemosporidian parasites belonging to the genera Haemoproteus, Leucocytozoon and Plasmodium in the central Philippine islands by sampling 23 bird families (42 species). Using species-specific PCR assays of the mitochondrial cytochrome b gene (471 base pairs, bp), we detected infections in 91 of the 215 screened individuals (42%). We also discriminated between single and multiple infections. Thirty-one infected individuals harbored a single Haemoproteus lineage (14%), 18 a single Leucocytozoon lineage (8%) and 12 a single Plasmodium lineage (6%). Of the 215 screened birds, 30 (14%) presented different types of multiple infections. Intrageneric mixed infections were generally more common (18 Haemoproteus/Haemoproteus, 3 Leucocytozoon/Leucocytozoon, and 1 Plasmodium/Plasmodium) than intergeneric mixed infections (7 Haemoproteus/Leucocytozoon and 1 Haemoproteus/Leucocytozoon/Plasmodium). We recovered 81 unique haemosporidian mitochondrial haplotypes. These clustered in three strongly supported monophyletic clades that correspond to the three haemosporidian genera. Related lineages of Haemoproteus and Leucocytozoon were more likely to derive from the same host family than predicted by chance; however, this was not the case for Plasmodium. These results indicate that switches between host families are more likely to occur in Plasmodium. We conclude that Haemoproteus has undergone a recent diversification across well-supported host-family specific clades, while Leucocytozoon shows a longer association with its host(s). This study supports previous evidence of a higher prevalence and stronger host-family specificity of Haemoproteus and Leucocytozoon compared to Plasmodium.
Talitrids are semiterrestrial crustacean amphipods inhabiting sandy and rocky beaches; they generally show limited active dispersal over long distances. In this study we assessed levels of population genetic structure and variability in the talitrid amphipod Orchestia montagui, a species strictly associated with stranded decaying heaps of the seagrass Posidonia oceanica. The study is based on six populations (153 individuals) and covers five basins of the Mediterranean Sea (Tyrrhenian, Ionian, Adriatic, Western and Eastern basins). Samples were screened for polymorphisms at a fragment of the mitochondrial DNA (mtDNA) coding for the cytochrome oxidase subunit I gene (COI; 571 base pairs) and at eight microsatellite loci. MtDNA revealed a relatively homogeneous haplogroup, which clustered together the populations from the Western, Tyrrhenian and Eastern basins, but not the populations from the Adriatic and Ionian ones; microsatellites detected two clusters, one including the Adriatic and Ionian populations, the second grouping all the others. We found a weak geographic pattern in the genetic structuring of the species, with a lack of isolation by distance at either class of markers. Results are discussed in terms of the probability of passive dispersal over long distances through heaps of seagrass.
Duplicate detection is the task of identifying all groups of records within a data set that represent the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition behind such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
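The adaptive-window idea can be sketched as follows. The sorting key, the toy similarity function, and the particular adaptation rule (grow the window while the records at its boundary are still similar) are illustrative assumptions, not the exact strategies evaluated in the thesis.

```python
# Sketch of the Sorted Neighborhood Method (SNM) with a simple adaptive window.
# Key, similarity measure, and adaptation rule are illustrative assumptions.

def similarity(a: str, b: str) -> float:
    """Toy similarity: fraction of aligned positions with equal characters."""
    n = max(len(a), len(b))
    if n == 0:
        return 1.0
    return sum(x == y for x, y in zip(a, b)) / n

def snm_adaptive(records, key=lambda r: r, w_min=2, threshold=0.8):
    """Sort by key, then slide a window that grows while the records at its
    boundary are still similar (a region of high similarity)."""
    data = sorted(records, key=key)
    duplicates = set()
    i = 0
    while i < len(data) - 1:
        w = w_min
        # grow the window while the boundary records look similar
        while i + w < len(data) and \
                similarity(key(data[i + w - 1]), key(data[i + w])) >= threshold:
            w += 1
        # compare all record pairs inside the current window
        for a in range(i, min(i + w, len(data))):
            for b in range(a + 1, min(i + w, len(data))):
                if similarity(key(data[a]), key(data[b])) >= threshold:
                    duplicates.add((data[a], data[b]))
        i += max(w - 1, 1)  # advance past the adaptively sized window
    return duplicates

records = ["smith", "smitt", "smyth", "jones", "jonas"]
pairs = snm_adaptive(records)
```

With these toy records the window grows around the similar "smi…" block and shrinks elsewhere, so only nearby, similar pairs are ever compared.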
Developing Critical Thinking
(2012)
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has been largely in the context of decentralization reforms, in which central governments transfer (share) political, administrative, fiscal and economic powers and functions to sub-national units. Despite the great international support and advocacy for participatory governance, where citizens’ voice plays a key role in decision making in decentralized service delivery, there is a notable dearth of empirical evidence as to the effect of such participation. This is the question this study sought to answer, based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya. Such participation is formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure citizens play a central role in the planning and budgeting, implementation and monitoring of locally identified services towards improving livelihoods and reducing poverty. The influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. The study finds that the participation of citizens is minimal and the resulting influence on decentralized service delivery negligible. It concludes that despite the dismal performance of citizen participation, LASDAP has played a key role in institutionalizing citizen participation that future structures will build on.
It recommends that an effective framework of citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and in which citizens have legal recourse; and one that obliges LA officials both to implement those citizens’ proposals that meet the set criteria and to account for their actions in the management of public resources.
Developing critical thinking
(2012)
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country – what institutional arrangements exist at the outset and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) impact on good governance. Three characteristics central to good governance – transparency, participation and accountability – are studied in the research.
A number of key findings emerged. Governance in Hanoi and Berlin represents the two extremes of the scale: while governance in Berlin is almost at the top, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is formal and forced, elections in Berlin are fair and free. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include citizenry budget, citizen activity, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Lacking transparency and participation, the quality of public service in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" - in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear to see how transparency, participation and accountability are interconnected and influence each other. Without a free and fair election as well as participation of non-governmental organisations, civil organisations, and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key differences in formal institutions (regulative and cognitive) between Berlin and Hanoi reflect the three main principles: rule of law vs. rule by law, pluralism vs. monopoly Party in politics and social market economy vs. market economy with socialist orientation.
In Berlin the logic of appropriateness and codes of conduct are respect for laws, respect for individual freedom and ideas, and awareness of community development. People in Berlin take for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the minds of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with habits from the centrally planned economy (lying, dependence, passivity) and with traditional values (hierarchy, harmony, family, collectivism), influencing the behaviour of those involved.
In Hanoi, “doing the right thing”, such as complying with the law, does not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
We develop the method of Fischer-Riesz equations for general boundary value problems elliptic in the sense of Douglis-Nirenberg. To this end we reduce them to a boundary problem for a (possibly overdetermined) first-order system whose classical symbol has a left inverse. For such a problem there is a uniquely determined boundary value problem which is adjoint to the given one with respect to the Green formula. Using a well-elaborated theory of approximation by solutions of the adjoint problem, we find the Cauchy data of solutions of our problem.
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering, where one seeks to find a model which automatically assigns new, previously unseen emails to class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning. Many learning methods assume that these training data are governed by the same distribution as the test data which the predictive model will be exposed to at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering. Here, email service providers employ spam filters and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters. Most of the existing work casts such situations as learning robust models which are insensitive to small changes of the data generation process. The models are constructed under the worst-case assumption that these changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach cannot realistically model the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: we model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action.
We model the players' interests as minimizing their own cost functions, each of which depends on both players' actions. The learner's action is to choose the model parameters, and the data generator's action is to perturb the training data, which reflects the modification of the data generation process with respect to the past data. We extensively study three instances of prediction games which differ regarding the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrium prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this data set shift. In case studies on email spam filtering we empirically explore properties of all derived models as well as several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game outperform other existing baseline methods in the majority of cases.
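A toy illustration of the simultaneous-move setting can make the idea of an equilibrium prediction model concrete. The quadratic cost functions, the parameter values, and the best-response iteration below are assumptions chosen for tractability, not the models derived in the thesis: the learner picks a scalar parameter w, the generator picks a data perturbation x, each minimizes its own cost given the other's action, and iterated best responses converge to the unique Nash equilibrium.

```python
# Toy one-shot prediction game with quadratic costs (illustrative assumption,
# not the actual cost functions from the thesis).
#
# Learner cost:   C_l(w, x) = (w - x)^2 + lam * w^2
#                 (fit the perturbed data, stay regularized)
# Generator cost: C_g(w, x) = -(w - x)^2 + gam * (x - x0)^2
#                 (hurt the fit, but pay for moving the data away from x0)
lam, gam, x0 = 1.0, 2.0, 1.0

def learner_best_response(x):
    # argmin_w (w - x)^2 + lam * w^2   =>   w = x / (1 + lam)
    return x / (1 + lam)

def generator_best_response(w):
    # argmin_x -(w - x)^2 + gam * (x - x0)^2  (convex since gam > 1)
    # =>  x = (gam * x0 - w) / (gam - 1)
    return (gam * x0 - w) / (gam - 1)

# Iterated best responses: a contraction here, converging to the unique
# Nash equilibrium (w*, x*) = (2/3, 4/3) for these parameters.
w, x = 0.0, 0.0
for _ in range(50):
    w = learner_best_response(x)
    x = generator_best_response(w)
```

At the fixed point neither player can lower its own cost by deviating unilaterally, which is exactly the Nash condition the abstract refers to.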
Education in the knowledge society is challenged by many problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learners’ learning and satisfaction (Prammanee, 2005), where “to teach is to communicate, to communicate is to interact, to interact is to learn” (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcome and learners’ satisfaction on the other, some basic problems regarding a new learning culture using social networking software are discussed. Most educational institutions pay considerable attention to equipment and to emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments because they expect that by doing so they will improve the outcome of the learning process. Despite this, the learning outcome as reported in most studies is very limited, because the expectations of self-directed learning are much higher than the reality. Findings from an empirical study (investigating the role of teacher-learner interaction through the new digital medium wiki in higher education and its effect on learning outcome and learner satisfaction) are presented, together with recommendations on the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the necessity for significant changes in the approach of vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning and treat the wiki as a tool for knowledge construction instead of a tool for gathering information.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
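The run-length enhancement described above can be illustrated with a minimal two-state Markov model; the model structure and all rates below are illustrative simplifications, not the thesis's actual stochastic description. In state A the active motor pulls the cargo at speed v and unbinds at rate eps; in state P only the passive motor tethers the cargo (no net motion), and either the active motor rebinds (rate pi_re) or the passive motor also unbinds (rate eps_p), ending the run. For this toy model the mean run length is (v/eps)·(1 + pi_re/eps_p), i.e., the tether multiplies the single-motor run length v/eps.

```python
# Monte Carlo sketch of run-length enhancement by a passive tether
# (two-state toy model with made-up rates; not the thesis's full model).
import random

def run_length(v, eps, pi_re, eps_p, rng):
    length, state = 0.0, "A"
    while True:
        if state == "A":
            # distance covered before the active motor unbinds (rate eps)
            length += v * rng.expovariate(eps)
            state = "P"
        else:
            # race between rebinding (rate pi_re) and passive unbinding (eps_p)
            if rng.random() < pi_re / (pi_re + eps_p):
                state = "A"
            else:
                return length  # both motors unbound: run ends

rng = random.Random(42)
v, eps, pi_re, eps_p = 1.0, 1.0, 4.0, 1.0
runs = [run_length(v, eps, pi_re, eps_p, rng) for _ in range(20000)]
mean_run = sum(runs) / len(runs)
# Analytical mean for this toy model: (v / eps) * (1 + pi_re / eps_p) = 5.0,
# a five-fold enhancement over the single-motor run length v / eps = 1.0.
```

The simulated mean agrees with the closed-form expression, mirroring the qualitative effect (though not the exact formula) derived in the thesis.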
Gravitational waves are among the most exciting predictions of Einstein's theory of gravitation that have not yet been confirmed experimentally by a direct detection. These are tiny distortions of spacetime itself, and a world-wide effort to measure them directly for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative scheme based in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations made using any complete waveform family.
In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The different relevant types of uncertainties can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice for the used approach, we devise a requirements catalogue for the modeling and the analysis of probabilistic real-time systems with uncertainties and provide evidence that the uncertainties of type (1) and (2) in the targeted systems have a major impact on the used models and require distinguished analysis approaches. The formal model we use in this report is Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides both enough expressiveness for a realistic and modular specification of the targeted class of systems, and suitable formal methods for analyzing properties, such as safety and reliability properties, in a quantitative manner. As technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA.
Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
The dissertation examines the use of performance information by public managers. “Use” is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but which have thus far been disregarded. The second part models performance data use by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if we include other sources of “unsystematic” feedback in our analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitude of the managers and of their immediate peers. 2) Surprisingly, regular users of performance data are not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take on ownership of performance information at an early stage in the measurement process are also more likely to use this data when it is reported to them. 4) Performance reports are only one source of information among many. Public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports.
The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
This thesis contains several theoretical studies on optomechanical systems, i.e. physical devices where mechanical degrees of freedom are coupled with optical cavity modes. This optomechanical interaction, mediated by radiation pressure, can be exploited for cooling and controlling mechanical resonators in a quantum regime. The goal of this thesis is to propose several new ideas for preparing mesoscopic mechanical systems (of the order of 10^15 atoms) into highly non-classical states. In particular we have shown new methods for preparing optomechanical pure states, squeezed states and entangled states. At the same time, procedures for experimentally detecting these quantum effects have been proposed. In particular, a quantitative measure of non-classicality has been defined in terms of the negativity of phase space quasi-distributions. An operational algorithm for experimentally estimating the non-classicality of quantum states has been proposed and successfully applied in a quantum optics experiment. The research has been performed with relatively advanced mathematical tools related to differential equations with periodic coefficients, classical and quantum Bochner’s theorems and semidefinite programming. Nevertheless the physics of the problems and the experimental feasibility of the results have been the main priorities.
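One standard way in the literature to turn the negativity of a phase-space quasi-distribution into a quantitative measure (an assumption here; the thesis may use a different definition) is the negativity volume of the Wigner function:

```latex
% Negativity volume of the Wigner function W_rho of a state rho:
% it vanishes exactly when W_rho is a genuine (non-negative) probability density,
% and grows with the total weight of its negative regions.
\delta(\rho) \;=\; \int \left| W_\rho(x,p) \right| \, dx \, dp \;-\; 1
```

Since the Wigner function always integrates to one, any negative region forces the integral of its absolute value above one, making δ(ρ) > 0 a witness of non-classicality.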
Complex networks have been successfully employed to represent different levels of biological systems, ranging from gene regulation to protein-protein interactions and metabolism. Network-based research has mainly focused on identifying unifying structural properties, including small average path length, large clustering coefficient, heavy-tail degree distribution, and hierarchical organization, viewed as requirements for efficient and robust system architectures. Existing studies estimate the significance of network properties using a generic randomization scheme - a Markov-chain switching algorithm - which generates unrealistic reactions in metabolic networks, as it does not account for the physical principles underlying metabolism. Therefore, it is unclear whether the properties identified with this generic approach are related to the functions of metabolic networks. Within this doctoral thesis, I have developed an algorithm for mass-balanced randomization of metabolic networks, which runs in polynomial time and samples networks almost uniformly at random. The properties of biological systems result from two fundamental origins: ubiquitous physical principles and a complex history of evolutionary pressure. The latter determines the cellular functions and abilities required for an organism’s survival. Consequently, the functionally important properties of biological systems result from evolutionary pressure. By employing randomization under physical constraints, the salient structural properties, i.e., the small-world property, degree distributions, and biosynthetic capabilities of six metabolic networks from all kingdoms of life are shown to be independent of physical constraints, and thus likely to be related to evolution and functional organization of metabolism. This stands in stark contrast to the results obtained from the commonly applied switching algorithm.
In addition, a novel network property is devised to quantify the importance of reactions by simulating the impact of their knockout. The relevance of the identified reactions is verified by the findings of existing experimental studies demonstrating the severity of the respective knockouts. The results suggest that the novel property may be used to determine the reactions important for viability of organisms. Next, the algorithm is employed to analyze the dependence between mass balance and thermodynamic properties of Escherichia coli metabolism. The thermodynamic landscape in the vicinity of the metabolic network reveals two regimes of randomized networks: those with thermodynamically favorable reactions, similar to the original network, and those with less favorable reactions. The results suggest that there is an intrinsic dependency between thermodynamic favorability and evolutionary optimization. The method is further extended to optimizing metabolic pathways by introducing novel chemically feasible reactions. The results suggest that, in three organisms of biotechnological importance, introduction of the identified reactions may allow for optimizing their growth. The approach is general and allows identifying chemical reactions which modulate the performance with respect to any given objective function, such as the production of valuable compounds or the targeted suppression of pathway activity. These theoretical developments can find applications in metabolic engineering or disease treatment. The developed randomization method proposes a novel approach to measuring the significance of biological network properties, and establishes a connection between large-scale approaches and biological function. The results may provide important insights into the functional principles of metabolic networks, and open up new possibilities for their engineering.
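For contrast with the mass-balanced approach, the generic Markov-chain switching baseline criticized in the abstract can be sketched in a few lines. The sketch below is the standard double-edge-swap randomization (an illustration, not the thesis's algorithm): it preserves every node's in- and out-degree but knows nothing about chemistry, which is precisely why it can produce physically impossible "reactions" in a metabolic network.

```python
# Sketch of the generic Markov-chain switching (double-edge-swap) randomization.
# Degree-preserving, but blind to mass balance and stoichiometry.
import random

def switch_randomize(edges, n_swaps, seed=0):
    """Randomize a directed edge list by repeated swaps
    (a, b), (c, d) -> (a, d), (c, b), rejecting swaps that would
    create self-loops or duplicate edges."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if a == d or c == b:
            continue  # would create a self-loop
        if (a, d) in edge_set or (c, b) in edge_set:
            continue  # would create a duplicate edge
        i, j = edges.index((a, b)), edges.index((c, d))
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    """Return (out-degree, in-degree) dictionaries of a directed edge list."""
    out_deg, in_deg = {}, {}
    for u, v in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
        in_deg[v] = in_deg.get(v, 0) + 1
    return out_deg, in_deg

original = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3), (2, 4)]
randomized = switch_randomize(original, n_swaps=100)
```

Mass-balanced randomization additionally constrains each rewired reaction to conserve atomic composition, which this generic scheme cannot express.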
Since available phosphate (Pi) resources in soil are limited, symbiotic interactions between plant roots and arbuscular mycorrhizal (AM) fungi are a widespread strategy to improve plant phosphate nutrition. The repression of AM symbiosis by a high plant Pi-status indicates a link between Pi homeostasis signalling and AM symbiosis development. This assumption is supported by the systemic induction of several microRNA399 (miR399) primary transcripts in shoots and a simultaneous accumulation of mature miR399 in roots of mycorrhizal plants. However, the physiological role of this miR399 expression pattern is still elusive and raises the question of whether other miRNAs are also involved in AM symbiosis. Therefore, a deep sequencing approach was applied to investigate miRNA-mediated posttranscriptional gene regulation in M. truncatula mycorrhizal roots. Degradome analysis revealed that 185 transcripts were cleaved by miRNAs, of which the majority encoded transcription factors and disease resistance genes, suggesting a tight control of transcriptional reprogramming and a downregulation of defence responses by several miRNAs in mycorrhizal roots. Interestingly, 45 of the miRNA-cleaved transcripts were significantly differentially regulated between mycorrhizal and non-mycorrhizal roots. In addition, key components of the Pi homeostasis signalling pathway were analyzed with respect to their expression during AM symbiosis development. MtPhr1 overexpression and time course expression data suggested a strong interrelation between the components of the PHR1-miR399-PHO2 signalling pathway and AM symbiosis, predominantly during later stages of symbiosis. In situ hybridizations confirmed accumulation of mature miR399 in the phloem and in arbuscule-containing cortex cells of mycorrhizal roots. Moreover, a novel target of the miR399 family, named MtPt8, was identified by the above-mentioned degradome analysis.
MtPt8 encodes a Pi-transporter exclusively transcribed in mycorrhizal roots, and its promoter activity was restricted to arbuscule-containing cells. At a low Pi-status, MtPt8 transcript abundance correlated inversely with the mature miR399 expression pattern. Increased MtPt8 transcript levels were accompanied by elevated symbiotic Pi-uptake efficiency, indicating its impact on balancing plant and fungal Pi-acquisition. In conclusion, this study provides evidence for a direct link between the regulatory mechanisms of plant Pi-homeostasis and AM symbiosis at a cell-specific level. The results of this study, especially the interaction of miR399 and MtPt8, provide a fundamental step for future studies of plant-microbe interactions with regard to agricultural and ecological aspects.
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, the consistency currently has to be reestablished in a cumbersome manual step. In this report, we present, in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.], how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization. Besides synchronization, the approach comprises tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest.
We present the model synchronization algorithm based on triple graph grammars in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.
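The core idea of keeping two models consistent via explicit correspondence links can be sketched in a few lines. This is a deliberate simplification, not the triple-graph-grammar formalism of the report: models are reduced to flat dictionaries, and the naming scheme for created target elements is hypothetical:

```python
# Sketch: propagate additions, attribute changes, and deletions from a
# source model to a target model via a correspondence map (illustrative
# simplification of correspondence-based model synchronization).

def synchronize(source, target, corr):
    """source/target: dict element-id -> attribute dict.
    corr: dict mapping source ids to target ids."""
    for sid, attrs in source.items():
        if sid not in corr:
            tid = "t_" + sid          # hypothetical naming scheme
            corr[sid] = tid
            target[tid] = dict(attrs)  # create corresponding element
        else:
            target[corr[sid]].update(attrs)  # propagate attribute changes
    # remove target elements whose source counterpart vanished
    for sid in [s for s in corr if s not in source]:
        target.pop(corr.pop(sid), None)
    return target, corr

sysml = {"Block1": {"name": "Engine"}, "Block2": {"name": "Brake"}}
autosar, corr = synchronize(sysml, {}, {})

sysml["Block1"]["name"] = "EngineCtrl"   # change in the source model
del sysml["Block2"]                      # element removed
autosar, corr = synchronize(sysml, autosar, corr)
print(autosar)  # {'t_Block1': {'name': 'EngineCtrl'}}
```

A TGG-based solution goes well beyond this sketch (declarative rules, incremental updates, bidirectionality), but the correspondence map playing the role of the "glue" between the two models is the shared idea.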
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with respect to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic + anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements, pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capability of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, as nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles.
The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
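The forward direction of this Fresnel-based analysis, computing reflectance and transmittance from a complex dielectric function, can be sketched for the simplest case of a single film at normal incidence. This is a minimal stand-in, not the thesis's multilayer recursion; the dielectric value, thickness, and ambient indices below are hypothetical:

```python
import cmath
import math

def film_RT(eps_film, d, lam, n_in=1.0, n_out=1.5):
    """Normal-incidence reflectance R and transmittance T of a single
    (possibly absorbing) film between two transparent media, using the
    standard Fresnel/Airy formulas for a three-medium stack."""
    n1, n3 = n_in, n_out
    n2 = cmath.sqrt(eps_film)                  # complex refractive index
    r12 = (n1 - n2) / (n1 + n2)                # interface amplitude coefficients
    r23 = (n2 - n3) / (n2 + n3)
    t12 = 2 * n1 / (n1 + n2)
    t23 = 2 * n2 / (n2 + n3)
    beta = 2 * math.pi * n2 * d / lam          # complex phase thickness
    denom = 1 + r12 * r23 * cmath.exp(2j * beta)
    r = (r12 + r23 * cmath.exp(2j * beta)) / denom
    t = t12 * t23 * cmath.exp(1j * beta) / denom
    R = abs(r) ** 2
    T = (n3 / n1) * abs(t) ** 2                # flux-normalized transmittance
    return R, T

# Hypothetical gold-like dielectric value in the visible
R, T = film_RT(eps_film=-5 + 1.5j, d=20e-9, lam=550e-9)
print(R, T, 1 - R - T)  # 1 - R - T gives the absorbed fraction
```

Inverting this relation, i.e. deducing the complex dielectric function from measured R and T, has no closed form in general and is done numerically; the thesis extends the same Fresnel machinery recursively to the full multilayer stack.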
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve a high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved in order to render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism concerns the regulation of the stability of mRNAs. All organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques make it possible to obtain quantitative data on a global scale by the parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on an exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation, which provides a general framework of mRNA expression and allows specific degradation mechanisms to be described. We have made an attempt to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete and, furthermore, most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region.
In addition, it is often coupled to the level of maturity and the polysome size of an mRNA. Both for S. cerevisiae and E. coli, our descriptions lead to a considerable improvement in the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of the molecular aging of mRNAs. Within our theory, we find adequate ways to address this much-debated topic from a theoretical perspective. The improved understanding of mRNA degradation can be readily applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or the application of stress. Also, the role of mRNA decay can be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of the different mechanisms. Once more quantitative data become available, our description will allow the role of each mechanism to be elaborated by devising a suitable model.
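The difference between simple exponential decay and an age-dependent degradation rate can be made concrete with a small numerical sketch. The survival of a cohort of mRNAs all transcribed at time zero is S(t) = exp(-∫₀ᵗ k(a) da); a constant k recovers the exponential, while any age dependence changes the shape of the decay curve. The particular aging law below is hypothetical, chosen only to illustrate the effect, and is not the thesis's specific model:

```python
import math

def survival(rate, t, steps=1000):
    """Fraction of an mRNA cohort (all born at t=0) surviving to age t,
    S(t) = exp(-integral of k(a) da from 0 to t), via trapezoid rule."""
    da = t / steps
    integral = sum(0.5 * (rate(i * da) + rate((i + 1) * da)) * da
                   for i in range(steps))
    return math.exp(-integral)

k_const = lambda a: 0.2                   # constant rate -> exponential decay
k_aging = lambda a: 0.2 * a / (a + 2.0)   # hypothetical rate rising with age

for t in (1, 5, 10, 20):
    print(t, round(survival(k_const, t), 4), round(survival(k_aging, t), 4))
```

With the aging rate, young molecules are degraded more slowly than under constant-rate decay, so the survival curve is flat at early times and steepens later, the kind of non-exponential decay pattern that an age-dependent rate can capture and a single exponential cannot.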
This paper develops a spatial model to analyze the stability of a market sharing agreement between two firms. We find that the stability of the cartel depends on the relative market size of each firm. Collusion is not attractive for firms with a small home market, but the incentive for collusion increases as a firm's home market grows larger relative to the home market of the competitor. The highest stability of a cartel, and additionally the highest social welfare, is found when regions are symmetric. Furthermore, we show that a monetary transfer can stabilize the market sharing agreement.
Climate is the principal driving force of hydrological extremes like floods, and attributing their generating mechanisms is an essential prerequisite for understanding past, present, and future flood variability. Radiative forcing, successively enhanced under global warming, increases the atmospheric water-holding capacity and is expected to increase the likelihood of strong floods. In addition, natural climate variability affects the frequency and magnitude of these events on annual to millennial time-scales. Particularly in the mid-latitudes of the Northern Hemisphere, correlations between meteorological variables and hydrological indices suggest significant effects of changing climate boundary conditions on floods. To date, however, understanding of flood responses to changing climate boundary conditions is limited due to the scarcity of hydrological data in space and time. Exploring paleoclimate archives like annually laminated (varved) lake sediments allows this gap in knowledge to be filled, offering precisely dated time series of flood variability spanning millennia. During river floods, detrital catchment material is eroded and transported in suspension by fluid turbulence into downstream lakes. In the water body, the transport capacity of the inflowing turbidity current successively diminishes, leading to the deposition of detrital layers on the lake floor. Intercalated into annual laminations, these detrital layers can be dated down to seasonal resolution. Microfacies analyses and X-ray fluorescence scanning (µ-XRF) at 200 µm resolution were conducted on the varved Mid- to Late Holocene interval of two sediment profiles from pre-alpine Lake Ammersee (southern Germany), located in a proximal (AS10prox) and distal (AS10dist) position relative to the main tributary, the River Ammer.
To shed light on sediment distribution within the lake, particular emphasis was placed on (1) the detection of intercalated detrital layers and their micro-sedimentological features, and (2) the intra-basin correlation of these deposits. Detrital layers were dated down to the season by microscopic varve counting and determination of their microstratigraphic position within a varve. The resulting chronology is verified by accelerator mass spectrometry (AMS) 14C dating of 14 terrestrial plant macrofossils. Since ~5500 varve years before present (vyr BP), in total 1573 detrital layers were detected in one or both of the investigated sediment profiles. Based on their microfacies, geochemistry, and proximal-distal deposition pattern, the detrital layers were interpreted as River Ammer flood deposits. Calibration of the flood layer record using instrumental daily River Ammer runoff data from AD 1926 to 1999 shows that the flood layer succession represents a significant time series of major River Ammer floods in spring and summer, the flood season in the Ammersee region. Flood layer frequency trends are in agreement with decadal variations of the East Atlantic-Western Russia (EA-WR) atmospheric pattern back to 200 yr BP (the end of the available atmospheric data) and with solar activity back to 5500 vyr BP. Enhanced flood frequency corresponds to the negative EA-WR phase and reduced solar activity. These common links point to a central role of varying large-scale atmospheric circulation over Europe for flood frequency in the Ammersee region and suggest that these atmospheric variations, in turn, are likely modified by solar variability during the past 5500 years. Furthermore, the flood layer record indicates three shifts in mean layer thickness and frequency, manifested differently in the two sediment profiles, at ~5500, ~2800, and ~500 vyr BP. Combining information from both sediment profiles made it possible to interpret these shifts as stepwise increases in mean flood intensity.
Likely triggers of these shifts are the gradual reduction of Northern Hemisphere orbital summer forcing and long-term solar activity minima. The hypothesized atmospheric response to this forcing is hemispheric cooling, which enhances the equator-to-pole temperature gradient and the potential energy in the troposphere. This energy is transferred into stronger westerly cyclones, more extreme precipitation, and intensified floods at Lake Ammersee. Interpretation of flood layer frequency and thickness data, in combination with reanalysis models and time-series analysis, allowed the flood history to be reconstructed and the flood-triggering climate mechanisms in the Ammersee region to be deciphered throughout the past 5500 years. Flood frequency and intensity are not stationary but are influenced by multi-causal climate forcing of large-scale atmospheric modes on time-scales from years to millennia. These results challenge future projections that propose an increase in floods as the Earth warms based solely on the assumption of an enhanced hydrological cycle.
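Deriving a flood frequency time series from a list of dated detrital layers amounts to counting layers in a sliding window of varve years. The sketch below illustrates this bookkeeping step only; the window length and the layer ages are hypothetical examples, not values from the study:

```python
# Sketch: flood layer frequency as a sliding-window count over dated layers
# (illustrative; window length and layer ages are made-up examples).

def flood_frequency(layer_years, window=31):
    """Count detrital layers within a centered window (in varve years)
    around every year of the record. Returns dict year -> count."""
    if not layer_years:
        return {}
    start, end = min(layer_years), max(layer_years)
    half = window // 2
    return {yr: sum(1 for y in layer_years if abs(y - yr) <= half)
            for yr in range(start, end + 1)}

# Hypothetical layer ages in varve years BP: a small cluster, a large
# cluster (a flood-rich episode), and an isolated event
layers = [10, 12, 13, 40, 41, 42, 43, 44, 90]
freq = flood_frequency(layers)
print(freq[12], freq[42], freq[90])  # -> 3 5 1
```

Such a series is what gets compared against runoff data for calibration and against atmospheric or solar indices for attribution; shifts in mean layer thickness are tracked analogously by windowed averages of thickness instead of counts.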