Stellar magnetic fields, as a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or are relics of the formation process, and whether they behave like the Sun's field or very differently, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides the possibility to observe surface topologies on distant stars indirectly, by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method to retrieve the magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions have been reconstructed by ZDI. However, implementations of this method often rely on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex, small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to make use of all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of the newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star. To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis (PCA) and Artificial Neural Networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as this one are potentially ill-posed and require regularization. We propose a new regularization scheme that implements a local entropy function accounting for the peculiarities of reconstructing localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on PCA. In contrast to other multi-line techniques, which extract a sort of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of the approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validate the capability of our inversion code and also assess the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived surface distributions of temperature and magnetic field from spectropolarimetric data obtained in 2004 and 2007, yielding for the first time the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
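The fast-synthesis idea can be sketched as follows. This is a toy illustration, not the iMap implementation: the stand-in profile model, the parameter ranges, and the network size are all assumptions chosen only to show how a PCA plus neural-network surrogate can replace an expensive radiative transfer solver.

```python
# Illustrative sketch: train a small neural network to map local atmospheric
# parameters to PCA-compressed line profiles, replacing a costly solver.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_local_stokes(params, wav):
    """Stand-in for a polarized radiative transfer solver: maps local
    parameters (temperature proxy t, field proxy b) to a line profile."""
    t, b = params
    return 1.0 - t * np.exp(-0.5 * ((wav - 0.1 * b) / (0.05 + 0.02 * t)) ** 2)

wav = np.linspace(-1, 1, 200)
X = rng.uniform([0.2, -2.0], [0.9, 2.0], size=(2000, 2))   # parameter samples
Y = np.array([toy_local_stokes(p, wav) for p in X])        # "expensive" profiles

pca = PCA(n_components=8).fit(Y)                           # compress the profiles
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, pca.transform(Y))                               # parameters -> PCA scores

# Fast surrogate: network evaluation plus PCA back-transform.
Y_fast = pca.inverse_transform(net.predict(X[:5]))
print(np.max(np.abs(Y_fast - Y[:5])))                      # approximation error
```

Once trained, a network evaluation plus the PCA back-transform stands in for the per-surface-element radiative transfer solve inside the inversion loop.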
This research is about local actors' responses to problems of uneven development and unemployment. Policies to combat these problems are usually connected to socio-economic regeneration in England and to economic and employment promotion (Wirtschafts- und Beschäftigungsförderung) in Germany. The main result of this project is a description of the factors which support the emergence of local socio-economic initiatives aimed at job creation. Eight social and formal economy initiatives have been examined, and the ways in which their emergence has been influenced by institutional factors have been analysed. The roles of local actors and forms of governance, as well as wider regional and national policy frameworks, have been taken into account. Socio-economic initiatives have been defined as non-routine local projects or schemes with the objective of direct job creation. Such initiatives often focus on specific local assets for the formal or the social economy. Socio-economic initiatives are grounded in ideas of local economic development and the creation of local jobs for local people. The adopted understanding of governance focuses on the processes of decision taking. It is thus broadly construed to include the ways in which actors in addition to traditional government manage urban development, with a focus on 'strategic' forms of decision taking about both long-term objectives and short-term action linked to socio-economic regeneration. Four old industrial towns in northern England and East Germany have been selected for case studies because of their particular socio-economic background. These towns, with between 10,000 and 70,000 inhabitants, are located outside the main agglomerations and serve central functions for their hinterland. The approach has been comparative, with a focus on examining common themes rather than gaining in-depth knowledge of a single case. Until now, most urban governance studies have analysed the impacts of particular forms of governance, such as regeneration partnerships. This project looks at particular initiatives and asks to what extent their emergence can be understood as a result of particular forms of governance, local institutional factors, or regional and national contexts.
The acquisition of phonological alternations involves many aspects, as discussions in the relevant literature show. There are contrary findings about the role of naturalness. Natural processes are grounded in phonetics; they are easy to learn, even in second language acquisition when adults have to learn processes that do not occur in their native language. There is also evidence that unnatural, arbitrary rules can be learned. Current work on the acquisition of morphophonemic alternations suggests that their probability of occurrence is a crucial factor in acquisition. I conducted an experiment to investigate the effects of naturalness as well as of probability of occurrence with 80 adult native speakers of German. It uses the Artificial Grammar paradigm: two artificial languages were constructed, each with a particular alternation. In one language the alternation is natural (vowel harmony); in the other it is arbitrary (a vowel alternation depends on the sonorancy of the first consonant of the stem). The participants were divided into two groups: one group listened to the natural alternation and the other to the unnatural alternation. Each group was divided into two subgroups; one subgroup was then presented with material in which the alternation occurred frequently, the other with material in which the alternation occurred infrequently. After this exposure phase, every participant was asked to produce new words during the test phase. Knowledge about the language-specific alternation pattern was needed to produce the forms correctly, as the phonological contexts demanded certain alternants. The group performances were compared with respect to the effects of naturalness and probability of occurrence. The natural rule was learned more easily than the unnatural one. Frequently presented rules were not learned more easily than rules presented less frequently. Moreover, participants did not learn the unnatural rule at all; whether it was presented frequently or infrequently did not matter. There was a tendency for the natural rule to be learned more easily when presented frequently than when presented infrequently, but it was not significant due to variability across participants.
Business process management is experiencing large uptake in industry, and process models play an important role in the analysis and improvement of processes. As an increasing number of staff become involved in actual modeling practice, it is crucial to assure model quality and homogeneity and to provide suitable aids for creating models. In this paper we consider the problem of offering recommendations to the user during the act of modeling. Our key contribution is a concept for defining and identifying so-called action patterns: chunks of actions that often appear together in business processes. In particular, we specify action patterns and demonstrate how they can be identified from existing process model repositories using association rule mining techniques. Action patterns can then be used to suggest additional actions for a process model. The approach is validated by applying it to the collection of process models from the SAP Reference Model.
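As an illustration of the mining step, the following sketch derives simple pairwise association rules from a toy repository of process models. The model collection, the thresholds, and the restriction to single-action rule sides are assumptions made for brevity, not the paper's actual algorithm or data.

```python
# Mine "actions that often co-occur" from a set of process models with
# simple association rules (support and confidence over action pairs).
from itertools import combinations
from collections import Counter

models = [  # each process model reduced to its set of action labels
    {"check invoice", "approve invoice", "post payment"},
    {"check invoice", "approve invoice", "archive invoice"},
    {"check invoice", "approve invoice", "post payment"},
    {"receive goods", "check invoice", "post payment"},
]

min_support, min_confidence = 0.5, 0.8
n = len(models)
item_count = Counter(a for m in models for a in m)
pair_count = Counter(p for m in models for p in combinations(sorted(m), 2))

# Rule {a} -> {b}: support = P(a and b), confidence = P(b | a).
for (a, b), c in pair_count.items():
    for lhs, rhs in ((a, b), (b, a)):
        support = c / n
        confidence = c / item_count[lhs]
        if support >= min_support and confidence >= min_confidence:
            print(f"{{{lhs}}} -> {{{rhs}}}  support={support:.2f} conf={confidence:.2f}")
```

A recommender built on such rules could then propose the right-hand action whenever the left-hand action is already present in the model being edited.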
The adaptive evolutionary potential of a species or population to cope with omnipresent environmental challenges is based on its genetic variation. Variability at immune genes, such as the genes of the major histocompatibility complex (MHC), is assumed to be a very powerful and effective tool to keep pace with diverse and rapidly evolving pathogens. In my thesis, I studied natural levels of variation at the MHC genes, which play a key role in immune defence, and parasite burden in different small mammal species. I assessed the importance of MHC variation for parasite burden in small mammal populations in their natural environment. To understand the processes shaping different patterns of MHC variation, I focused on evidence of selection through pathogens upon the host. Further, I addressed the issue of low MHC diversity in populations or species, which could potentially arise as a result of habitat fragmentation and isolation. Despite its key role in mammalian evolution, the marsupial MHC has rarely been investigated. Studies on primarily captive or laboratory-bred individuals indicated very little or even no polymorphism at the marsupial MHC class II genes. However, natural levels of marsupial MHC diversity and selection are unknown to date, as studies on wild populations are virtually absent. I investigated MHC II variation in two Neotropical marsupial species endemic to the threatened Brazilian Atlantic Forest (Gracilinanus microtarsus, Marmosops incanus) to test whether the predicted low marsupial MHC class II polymorphism holds under natural conditions. For the first time in marsupials, I confirmed characteristics of MHC selection that were so far only known from eutherian mammals, birds, and fish: positive selection on specific codon sites, recombination, and trans-species polymorphism. Beyond that, the two marsupial species revealed considerable differences in their MHC class II diversity. Diversity was rather low in M. incanus but tenfold higher in G. microtarsus, disproving the predicted generally low marsupial MHC class II variation. As pathogens are believed to be very powerful drivers of MHC diversity, I studied parasite burden in both host species to understand the reasons for these remarkable differences. In both marsupial species, specific MHC class II variants were associated with either high or low parasite load, highlighting the importance of the marsupial MHC class II in pathogen defence. I developed two alternative scenarios with regard to MHC variation, parasite load, and parasite diversity. In the 'evolutionary equilibrium' scenario I assumed the species with low MHC diversity, M. incanus, to be under relaxed pathogenic selection and expected low parasite diversity. Alternatively, low MHC diversity could be the result of a recent loss of genetic variation through a genetic bottleneck. Under this 'unbalanced situation' scenario, I assumed a high parasite burden in M. incanus due to a lack of resistance alleles. The parasitological results clearly reject the first scenario and point to the second: M. incanus is distinctly more heavily parasitised, while parasite diversity is roughly equal in the two species. Hence, I suggest that the parasite load in M. incanus is the consequence rather than the cause of its low MHC diversity. MHC variation and its associations with parasite burden have typically been studied within single populations, whereas MHC variation between populations was rarely taken into account. To gain scientific insight into this issue, I chose a common European rodent species, the yellow-necked mouse (Apodemus flavicollis), and investigated the effects of genetic diversity on parasite load not on the individual but on the population level. I included populations that possess different levels of variation at the MHC as well as at neutrally evolving genetic markers (microsatellites). I was able to show that mouse populations with a high MHC allele diversity are better armed against high parasite burdens, highlighting the significance of adaptive genetic diversity in the field of conservation genetics. An individual itself will not directly benefit from its population's large MHC allele pool in terms of parasite resistance. But confronted with the multitude of pathogens present in the wild, a population with a large MHC allele reservoir is more likely to possess individuals with resistance alleles. These results deepen our understanding of the complex causes and processes of evolutionary adaptations between hosts and pathogens.
The comprehension of figurative language: electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERPs), brain activity during the comprehension of ironic utterances was measured and analyzed relative to equivalent non-ironic utterances. Moreover, the impact of various language-accompanying cues, e.g., prosody or the use of punctuation marks, as well as non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e., the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.
Classical semiconductor physics has continuously improved electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric ones, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Due to their rather simple device structure, mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize than small-molecule organic devices: functions such as charge injection and transport are usually handled by the same material, which thus needs to be highly optimized. The present work contributes to expanding the knowledge of the physical mechanisms determining device performance by analyzing the role of charge injection and transport in the efficiency of blue- and white-emitting devices based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the knowledge of how charges drift through the polymer layer to finally find neutral emissive trap states, and it thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts under variation of driving voltage and temperature as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately bound species as well as to determine trapped charge densities. This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction by linearly increasing voltage (CELIV) is investigated in some detail, correcting errors in the published approach and highlighting the role of the non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of the charge mobility more accurately. Finally, it is shown that the neglect of charge recombination has led to a misunderstanding of experimental results in terms of a time-dependent mobility relaxation.
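For orientation, the CELIV mobility relation as commonly quoted in the literature is sketched below. This is the standard published estimate whose error sources the thesis examines, not the corrected expressions derived in it, and the exact form of the correction factor should be checked against the original publications.

```latex
% Commonly quoted CELIV mobility estimate (sketch; verify against the
% original literature). d: active-layer thickness, A: voltage ramp rate,
% t_max: time of the extraction-current maximum, j(0): capacitive
% displacement current step, \Delta j: extraction peak height above j(0).
\[
  \mu \approx \frac{2\,d^{2}}{3\,A\,t_{\max}^{2}
      \left[ 1 + 0.36\,\frac{\Delta j}{j(0)} \right]}
\]
```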
Pectic polysaccharides, a class of plant cell wall polymers, form one of the most complex networks known in nature. Despite their complex structure and their importance in plant biology, little is known about the molecular mechanisms of their biosynthesis, modification, and turnover, particularly their structure-function relationship. One way to gain insight into pectin metabolism is the identification of mutants with an altered pectin structure. Such mutants were obtained by a recently developed pectinase-based genetic screen. Arabidopsis thaliana seedlings grown in liquid medium containing pectinase solutions exhibit particular phenotypes: they are dwarfed and slightly chlorotic. However, when genetically different A. thaliana seed populations (random T-DNA insertional populations as well as EMS-mutagenized populations and natural variants) were subjected to this treatment, individuals were identified that exhibit a different visible phenotype compared to wild type or other ecotypes and may thus have a different pectin structure (pec mutants). After confirming that the altered phenotype occurs only when the pectinase is present, the EMS mutants were subjected to a detailed cell wall analysis with particular emphasis on pectins. The suite of mutants identified in this study is a valuable resource for further analysis of how the pectin network is regulated, synthesized and modified. Flanking sequences of some of the T-DNA lines have pointed toward several interesting genes, one of which is PEC100. This gene encodes a putative sugar transporter which, based on our data, is implicated in rhamnogalacturonan-I synthesis. The subcellular localization of PEC100 was studied by GFP fusion, and the protein was found to be localized to the Golgi apparatus, the organelle where pectin biosynthesis occurs. The Arabidopsis ecotype C24 was identified as susceptible when grown with pectinases in liquid culture and had a different oligogalacturonide mass profile compared to ecotype Col-0. Pectic oligosaccharides have been postulated to be signal molecules involved in plant pathogen defense mechanisms. Indeed, C24 showed elevated accumulation of reactive oxygen species upon pectinase elicitation and an altered response to the pathogen Alternaria brassicicola in comparison to Col-0. Using a recombinant inbred line population, three major QTLs were identified as responsible for the susceptibility of C24 to pectinases. In a reverse genetic approach, members of the qua2 (putative pectin methyltransferase) family were tested as potential target genes that affect pectin methyl-esterification. The list of these genes was determined by an in silico study of the expression and co-expression patterns of all 34 members of this family, resulting in 6 candidate genes. For only one of the 6 analyzed genes was a difference in the oligogalacturonide mass profile observed in the corresponding knock-out lines, confirming the hypothesis that the methyl-esterification pattern of pectin is fine-tuned by members of this gene family. This study of pectic polysaccharides through forward and reverse genetic screens gave new insight into how pectin structure is regulated and modified, and how these modifications could influence pectin-mediated signalling and pathogenicity.
The scientist as Weltbürger
(2001)
Active continental margins are affected by complex feedbacks between tectonic, climatic and surface processes, the intricate relations of which are still a matter of discussion. The Chilean convergent margin, forming the outstanding Andean subduction orogen, constitutes an ideal natural laboratory for the investigation of climate, tectonics and their interactions. In order to study both processes, I examined marine and lacustrine sediments from different depositional environments onshore and offshore the south-central Chilean coast (38-40°S). I combined sedimentological, geochemical and isotopic analyses to identify climatic and tectonic signals within the sedimentary records. The investigation of marine trench sediments (ODP Site 1232, SONNE core 50SL) focused on frequency changes of turbiditic event layers since the late Pleistocene. In the active-margin setting of south-central Chile, these layers are considered to reflect periodically occurring earthquakes and to constitute an archive of the regional paleoseismicity. The new results indicate glacial-interglacial changes in turbidite frequencies during the last 140 kyr, with short recurrence times (~200 years) during glacial and long recurrence times (~1000 years) during interglacial periods. Hence, the generation of turbidites appears to be strongly influenced by climate and sea-level changes, which control the amount of sediment delivered to the shelf edge and thereby the stability of the continental slope: more stable slope conditions during interglacial periods entail lower turbidite frequencies than in glacial periods. Since glacial turbidite recurrence times are congruent with earthquake recurrence times derived from the historical record and other paleoseismic archives of the region, I concluded that only during cold stages did sediment availability and slope instability enable the complete series of large earthquakes to be recorded. The sediment transport to the shelf region is not only driven by climate conditions but also influenced by local forearc tectonics. Accelerating uplift rates along major tectonic structures caused drainage anomalies and river-flow reversals, which substantially altered the sediment supply to the Pacific Ocean. Two examples of the tectonic blocking of fluvial systems are the coastal lakes Lago Lanalhue and Lago Lleu Lleu. Both lakes developed within former river valleys, which once discharged towards the Pacific and were dammed by tectonically uplifted sills at ~8000 yr BP. Analyses of sediment cores from the lakes showed similar successions of marine/brackish deposits at the bottom, covered by lacustrine sediments on top. Dating the transitions between these units and comparing them with global sea-level curves allowed me to calculate local Holocene uplift rates, which are distinctly higher for the upraised sills (Lanalhue: 8.83 ± 2.7 mm/yr, Lleu Lleu: 11.36 ± 1.77 mm/yr) than for the lake basins (Lanalhue: 0.42 ± 0.71 mm/yr, Lleu Lleu: 0.49 ± 0.44 mm/yr). I therefore consider the sills to be the surface expression of a blind thrust associated with a prominent reverse fault that controls regional uplift and folding. After the final separation of Lago Lanalhue and Lago Lleu Lleu from the Pacific, constant deposition of lacustrine sediments preserved continuous records of local environmental changes. Sequences from both lakes indicate a long-term climate trend with a significant shift from more arid conditions during the Mid-Holocene (8000-4200 cal yr BP) to more humid conditions during the Late Holocene (4200 cal yr BP to present). This trend is consistent with other regional paleoclimatic data and is interpreted to reflect changes in the strength and position of the Southern Westerly Winds. For the last ~5000 years, the sediments of Lago Lleu Lleu are marked by numerous intercalated detrital layers that recur with a mean frequency of ~210 years. Deposition of these layers may be triggered by local tectonics (i.e. earthquakes), but may also originate from changes in the local climate (e.g. the onset of modern ENSO conditions). During the last 2000 years, pronounced variations in the terrigenous sediment supply to both lakes suggest important hydrological changes on the centennial time scale as well. A lower input of terrigenous matter points to less humid phases between 200 cal yr BC - 150 cal yr AD, 900 - 1350 cal yr AD, and 1850 cal yr AD to present (broadly corresponding to the Roman, Medieval, and Modern Warm Periods). More humid periods persisted from 150 - 900 cal yr AD and 1350 - 1850 cal yr AD (broadly corresponding to the Dark Ages and the Little Ice Age). In conclusion, the combined investigation of marine and lacustrine sediments is a viable approach for the reconstruction of climatic and tectonic processes on different time scales. My approach allows exploring both climate and tectonics in one and the same archive and is largely transferable to other active margins worldwide.
Contents:
- Social stereotypes and responsibility attributions to victims of rape
- Attributing responsibility to rape victims: a German study
- Rape myth acceptance and responsibility judgments: a British study
- Police officers' definitions of rape
- A study on cognitive prototypes of rape
- Conclusion
- References
This work analyzes the saving and consumption behavior of agents faced with the possibility of unemployment in a dynamic and stochastic life cycle model. The intertemporal optimization is based on dynamic programming with a backward recursion algorithm. Unlike in traditional life cycle models, uncertainty is not implemented through income shocks; instead, employment status follows a Markov chain in which the probability of the agent's next employment status depends on the current status. The utility function combines constant relative risk aversion (CRRA) with a constant elasticity of substitution (CES) aggregator and features several consumption goods, a subsistence level, money, and a bequest function.
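A minimal sketch of the backward-recursion dynamic programming described above: a two-state employment Markov chain with CRRA utility, one consumption good, and a simple asset grid. All parameter values, the grid, and the zero terminal value (no bequest) are illustrative assumptions; the thesis's model additionally features CES aggregation over several goods, a subsistence level, money, and a bequest function.

```python
# Backward induction over a finite horizon with Markov employment risk.
import numpy as np

T, gamma, beta, r = 10, 2.0, 0.96, 0.03
P = np.array([[0.9, 0.1],        # employed   -> (employed, unemployed)
              [0.4, 0.6]])       # unemployed -> (employed, unemployed)
income = np.array([1.0, 0.3])    # wage vs. unemployment benefit
grid = np.linspace(0.0, 10.0, 201)   # asset grid

def u(c):
    safe = np.maximum(c, 1e-9)                      # guard against c <= 0
    return np.where(c > 1e-9, safe ** (1 - gamma) / (1 - gamma), -1e12)

V = np.zeros((2, grid.size))     # terminal value (no bequest motive here)
policy = np.zeros((T, 2, grid.size))

for t in reversed(range(T)):     # backward recursion
    V_new = np.empty_like(V)
    for s in range(2):
        cash = grid + income[s]                       # resources in status s
        c = cash[:, None] - grid[None, :] / (1 + r)   # consume what is not saved
        EV = P[s] @ V                                 # expected continuation value
        value = u(c) + beta * EV[None, :]
        V_new[s] = value.max(axis=1)
        policy[t, s] = grid[value.argmax(axis=1)]     # optimal next-period assets
    V = V_new

print(V[0, :3])   # value function of an employed agent at low asset levels
```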
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data are governed by exactly the same distribution the model will later be exposed to. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population; for example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive, while data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers on learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating the two distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
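One way to picture the rescaling weights is the following sketch: a probabilistic classifier is trained to discriminate training from test examples, and its output is converted into importance weights w(x) proportional to p(test|x)/p(train|x), so that the reweighted training sample mimics the test distribution. The synthetic data and the choice of logistic regression are illustrative assumptions, not the thesis's exact models.

```python
# Discriminative estimation of rescaling weights under covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))   # training distribution
X_test = rng.normal(0.7, 1.0, size=(500, 2))    # shifted test distribution

X = np.vstack([X_train, X_test])
s = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]  # 1 = "test"

clf = LogisticRegression().fit(X, s)
p_test = clf.predict_proba(X_train)[:, 1]
weights = p_test / (1.0 - p_test)               # density-ratio estimate
weights *= len(weights) / weights.sum()         # normalize to mean 1

# A learner trained on (X_train, y_train, sample_weight=weights) then
# approximately minimizes the loss under the test distribution.
print(weights[:5])
```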
This document presents the results of the seminar "Conceptual Architecture Patterns" held in the winter term 2002 at the Hasso-Plattner-Institut. It is a compilation of the students' elaborations on conceptual architecture patterns that can be found in the literature. One important focus lay on the runtime structures and on the presentation of the patterns.
1 Introduction
1.1 The Seminar
1.2 Literature
2 Pipes and Filters (André Langhorst and Martin Steinle)
3 Broker (Konrad Hübner and Einar Lück)
4 Microkernel (Eiko Büttner and Stefan Richter)
5 Component Configurator (Stefan Röck and Alexander Gierak)
6 Interceptor (Marc Förster and Peter Aschenbrenner)
7 Reactor (Nikolai Cieslak and Dennis Eder)
8 Half-Sync/Half-Async (Robert Mitschke and Harald Schubert)
9 Leader/Followers (Dennis Klemann and Steffen Schmidt)
This document is an analysis of the Java Language Conversion Assistant (JLCA). It also covers a language analysis of the Java programming language as well as a survey of related work concerning Java and C# interoperability on the one hand and language conversion in general on the other. Part I deals with the language analysis. Part II covers the JLCA tool and the tests used to analyse it, and additionally gives an overview of the above-mentioned related work. Part III presents a complete project that has been translated using the JLCA.
The Apache Modeling Project
(2004)
This document presents an introduction to the Apache HTTP Server, covering both an overview and implementation details. It presents results of the Apache Modeling Project carried out by research assistants and students of the Hasso-Plattner-Institut in 2001, 2002 and 2003. The Apache HTTP Server was used to introduce students to the application of the modeling technique FMC, a method that supports transporting knowledge about complex systems in the domain of information processing (software as well as hardware). After an introduction to HTTP servers in general, we focus on protocols and web technology. Then we discuss Apache, its operational environment and its extension capabilities: the module API. Finally, we guide the reader through parts of the Apache source code and explain the most important pieces.
1 Introduction
1.1 Project formulation
1.2 Our contribution
2 Pedagogical Aspect
2.1 Modern teaching
2.2 Our contribution
2.2.1 Autonomous and exploratory learning
2.2.2 Human machine interaction
2.2.3 Short multimedia clips
3 Ontology Aspect
3.1 Ontology driven expert systems
3.2 Our contribution
3.2.1 Ontology language
3.2.2 Concept taxonomy
3.2.3 Knowledge base annotation
3.2.4 Description logics
4 Natural Language Approach
4.1 Natural language processing in computer science
4.2 Our contribution
4.2.1 Explored strategies
4.2.2 Word equivalence
4.2.3 Semantic interpretation
4.2.4 Various problems
5 Information Retrieval Aspect
5.1 Modern information retrieval
5.2 Our contribution
5.2.1 Semantic query generation
5.2.2 Semantic relatedness
6 Implementation
6.1 Prototypes
6.2 Semantic layer architecture
6.3 Development
7 Experiments
7.1 Description of the experiments
7.2 General characteristics of the three sessions, instructions and procedure
7.3 First session
7.4 Second session
7.5 Third session
7.6 Discussion and conclusion
8 Conclusion and Future Work
8.1 Conclusion
8.2 Open questions
A Description Logics
B Probabilistic Context-Free Grammars
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used instead of real computers to provide safe user work environments. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases the accessibility of laboratory resources and greatly reduces investment and maintenance costs. Within the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configuration are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are covered by a virtual machine management framework, which provides the monitoring and administration services needed to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused to compromise production networks, we present security management solutions that prevent misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
1. Design and Composition of 3D Geoinformation Services – Benjamin Hagedorn
2. Operating System Abstractions for Service-Based Systems – Michael Schöbel
3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications – Matthias Uflacker
4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction – Flavius Copaciu
5. Asynchronicity and Loose Coupling in Service-Oriented Architectures – Nikola Milanovic
Dynamics in urban environments encompasses complex processes and phenomena such as those related to movement (e.g., traffic, people) and development (e.g., construction, settlement). This paper presents novel methods for creating human-centric illustrative maps for visualizing movement dynamics in virtual 3D environments. The methods allow a viewer to gain rapid insight into traffic density and flow. The illustrative maps represent vehicle behavior as light threads, a familiar visual metaphor caused by moving light sources producing streaks in a long-exposure photograph. A vehicle's front and rear lights produce light threads that convey its direction of motion as well as its velocity and acceleration. The accumulation of light threads allows a viewer to quickly perceive traffic flow and density. The light-thread technique is a key element of effective visualization systems for analytic reasoning, exploration, and monitoring of geospatial processes.
Submarine landslides can generate local tsunamis posing a hazard to human lives and coastal facilities. Two major related problems are (i) the quantitative estimation of tsunami hazard and (ii) the early detection of the most dangerous landslides. This thesis addresses both issues by providing numerical modeling of landslide-induced tsunamis and by suggesting and justifying a new method for the fast detection of tsunamigenic landslides by means of tiltmeters. Due to the proximity of the Sunda subduction zone, Indonesian coasts are prone not only to earthquake tsunamis but also to landslide tsunamis. The aim of the GITEWS project (German-Indonesian Tsunami Early Warning System) is to provide fast and reliable tsunami warnings, but also to deepen the knowledge about tsunami hazards. New bathymetric data at the Sunda Arc provide the opportunity to evaluate the hazard potential of landslide tsunamis for the adjacent Indonesian islands. I present nine large mass movements in the proximity of Sumatra, Java, Sumbawa and Sumba, the largest of which displaced 20 km³ of sediments. Using numerical modeling, I compute the tsunami generated by each event, its propagation, and its runup at the coast. Moreover, I investigate the age of the largest slope failures by relating them to the great 1977 Sumba earthquake. Continental slopes off northwest Europe are well known for their history of huge underwater landslides. The current geological situation west of Spitsbergen is comparable to that of the continental margin off Norway after the last glaciation, when the large tsunamigenic Storegga slide took place. The influence of Arctic warming on the stability of the Svalbard glacial margin is discussed. Based on new geophysical data, I present four possible landslide scenarios and compute the generated tsunamis. Waves of 6 m height would be capable of reaching northwest Europe, threatening coastal areas. I present a novel technique to detect large submarine landslides using an array of tiltmeters, as a possible tool for future tsunami early warning systems. The dislocation of a large amount of sediment during a landslide produces a permanent elastic response of the earth. I analyze this response with a mathematical model and calculate the theoretical tilt signal. Applications to the hypothetical Spitsbergen event and the historical Storegga slide show tilt signals exceeding 1000 nrad. The amplitude of landslide tsunamis is controlled by the product of slide volume and maximal velocity (the slide's tsunamigenic potential). I introduce an inversion routine that provides the slide location and tsunamigenic potential based on tiltmeter measurements. The accuracy of the inversion and of the estimated tsunami height near the coast depends on the noise level of the tiltmeter measurements, the distance of the tiltmeters from the slide, and the slide's tsunamigenic potential. Finally, I estimate the applicability of this method by applying it to known landslide events worldwide.
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and analyzed with special weight on distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (cavity) is reviewed, and previous work on the contribution to the Casimir energy from the surface plasmons present in all metallic cavities is generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained, both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. These effects are equally encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents. This leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction. The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to access experimentally the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect; here, differently from corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
Erster Deutscher IPv6 Gipfel
(2008)
Contents: Communiqué – Welcome address – Program – Background and facts – Speakers: biographies & talk abstracts
1. The First German IPv6 Summit at the Hasso Plattner Institute in Potsdam – Prof. Dr. Christoph Meinel, Viviane Reding
2. IPv6, Its Time Has Come – Vinton Cerf
3. The Significance of IPv6 for Public Administration in Germany – Martin Schallbruch
4. Towards the Future of the Internet – Prof. Dr. Lutz Heuser
5. IPv6 Strategy & Deployment Status in Japan – Hiroshi Miyata
6. IPv6 Strategy & Deployment Status in China – Prof. Wu Hequan
7. IPv6 Strategy and Deployment Status in Korea – Dr. Eunsook Kim
8. IPv6 Deployment Experiences in the Greek School Network – Athanassios Liakopoulos
9. IPv6 Network Mobility and Its Usage – Jean-Marie Bonnin
10. IPv6 – Toolkit for Operators & ISPs: IPv6 Deployment and Strategies of Deutsche Telekom – Henning Grote
11. View from the IPv6 Deployment Frontline – Yves Poppe
12. Deploying IPv6 in Mobile Environments – Wolfgang Fritsche
13. Production-Ready IPv6 from the Customer LAN to the Internet – Lutz Donnerhacke
14. IPv6 – The Basis for Network-Centric Operations (NetOpFü) in the Bundeswehr: Challenges, Use-Case Considerations, Activities – Carsten Hatzig
15. Windows Vista & IPv6 – Bernd Ourghanlian
16. IPv6 & Home Networking: Technical and Business Challenges – Dr. Tayeb Ben Meriem
17. DNS and DHCP for Dual-Stack Networks – Lawrence Hughes
18. Car Industry: German Experience with IPv6 – Amardeo Sarma
19. IPv6 & Autonomic Networking – Ranganai Chaparadza
20. P2P & Grid Using IPv6 and Mobile IPv6 – Dr. Latif Ladid
Contents:
1. Styling for Service-Based 3D Geovisualization – Benjamin Hagedorn
2. The Windows Monitoring Kernel – Michael Schöbel
3. A Resource-Oriented Information Network Platform for Global Design Processes – Matthias Uflacker
4. Federation in SOA – Secure Service Invocation across Trust Domains – Michael Menzel
5. KStruct: A Language for Kernel Runtime Inspection – Alexander Schmidt
6. Deconstructing Resources – Hagen Overdick
7. FMC-QE – Case Studies – Stephan Kluth
8. A Matter of Trust – Rehab Al-Nemr
9. From Semi-automated Service Composition to Semantic Conformance – Harald Meyer
Motivation and research objectives: During the passage of rain water through a forest canopy, two main processes take place: first, the water is redistributed, and second, its chemical properties change substantially. The redistribution of rain water and its brief contact with plant surfaces result in a large variability of both throughfall and its chemical composition. Since throughfall and its chemistry influence a range of physical, chemical and biological processes at or below the forest floor, understanding throughfall variability and predicting throughfall patterns can improve the understanding of near-surface processes in forest ecosystems. This thesis comprises three main research objectives: first, to determine the variability of throughfall and its chemistry and to investigate some of its controlling factors; second, to explore throughfall spatial patterns; and third, to assess the temporal persistence of throughfall and its chemical composition. Research sites and methods: The thesis is based on investigations in a tropical montane rain forest in Ecuador and in lowland rain forest ecosystems in Brazil and Panama. The first two studies investigate both throughfall and throughfall chemistry following a deterministic approach. The third study investigates throughfall patterns with geostatistical methods and hence relies on a stochastic approach. Results and conclusions: Throughfall is highly variable. The variability of throughfall in tropical forests seems to exceed that of many temperate forests. These differences, however, do not solely reflect ecosystem-inherent characteristics; more likely they also mirror management practices. Apart from the biotic factors that influence throughfall variability, rainfall magnitude is an important control. Throughfall solute concentrations and solute deposition are even more variable than throughfall. In contrast to throughfall volumes, the variability of solute deposition shows no clear differences between tropical and temperate forests; hence, biodiversity is not a strong predictor of solute deposition heterogeneity. Many other factors control solute deposition patterns, for instance, solute concentrations in rainfall and the antecedent dry period. The temporal variability of the latter factors partly accounts for the low temporal persistence of solute deposition. In contrast, measurements of throughfall volume are quite stable over time. Results from the Panamanian research site indicate that wet and dry areas outlast consecutive wet seasons. At this research site, throughfall exhibited only weak or pure-nugget autocorrelation structures over the studied lag distances. A close look at the available geostatistical tools provided evidence that throughfall datasets, in particular those of large events, require robust variogram estimation if one wants to avoid outlier removal. This finding is important because all geostatistical throughfall studies published so far analyzed their data using the classical, non-robust variogram estimator.
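The difference between the classical and a robust variogram estimator can be sketched as follows. The synthetic transect, the outlier mechanism (mimicking drip points), the binning, and the use of the Cressie-Hawkins estimator are assumptions for illustration, not the thesis's data or exact workflow.

```python
# Classical (Matheron) vs. robust (Cressie-Hawkins) empirical variograms.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 300)                     # sampling locations (1-D)
z = np.sin(x / 15.0) + 0.3 * rng.normal(size=x.size)
z[rng.choice(x.size, 5, replace=False)] += 5.0   # a few outliers ("drip points")

def variograms(x, z, bins):
    h = np.abs(x[:, None] - x[None, :])          # pairwise lag distances
    d = z[:, None] - z[None, :]                  # pairwise value differences
    iu = np.triu_indices(x.size, k=1)
    h, d = h[iu], d[iu]
    classical, robust = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (h >= lo) & (h < hi)
        n = sel.sum()
        classical.append(0.5 * np.mean(d[sel] ** 2))
        # Cressie-Hawkins: fourth power of the mean root absolute difference,
        # divided by a bias-correction term; far less sensitive to outliers.
        ch = np.mean(np.sqrt(np.abs(d[sel]))) ** 4 / (0.457 + 0.494 / n)
        robust.append(0.5 * ch)
    return np.array(classical), np.array(robust)

cls, rob = variograms(x, z, np.linspace(0, 30, 7))
print(np.round(cls, 2), np.round(rob, 2))        # robust curve is less outlier-driven
```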
Duplicate detection is the task of identifying different representations of the same real-world objects in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case, where relationships form a graph, research has mainly focused on duplicate detection quality and effectiveness; scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (the amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computations can be performed efficiently. Experiments on data an order of magnitude larger than the data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
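To make the scale problem concrete, here is a toy sketch of the pairwise-comparison core of duplicate detection, with a simple blocking step to curb the quadratic candidate space. The records, the blocking key, and the string similarity are assumptions; the sketch deliberately ignores the graph relationships and the relational database support that are the paper's actual contribution.

```python
# Blocking plus pairwise similarity: the expensive core that DDG must scale.
from itertools import combinations
from difflib import SequenceMatcher
from collections import defaultdict

records = [
    (1, "Jon Smith"), (2, "John Smith"), (3, "J. Smyth"),
    (4, "Mary Major"), (5, "M. Major"),
]

blocks = defaultdict(list)                # block on the initial letter
for rid, name in records:
    blocks[name[0].lower()].append((rid, name))

# Only records sharing a block are compared, not all n*(n-1)/2 pairs.
candidates = [p for block in blocks.values() for p in combinations(block, 2)]
for (id1, n1), (id2, n2) in candidates:
    sim = SequenceMatcher(None, n1.lower(), n2.lower()).ratio()
    if sim > 0.6:
        print(f"possible duplicates: {id1} <-> {id2} (sim={sim:.2f})")
```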
Contents: Artem Polyvanny, Sergey Smirnow, and Mathias Weske – The Triconnected Abstraction of Process Models
1 Introduction
2 Business Process Model Abstraction
3 Preliminaries
4 Triconnected Decomposition
4.1 Basic Approach for Process Component Discovery
4.2 SPQR-Tree Decomposition
4.3 SPQR-Tree Fragments in the Context of Process Models
5 Triconnected Abstraction
5.1 Abstraction Rules
5.2 Abstraction Algorithm
6 Related Work and Conclusions
It has long been enigmatic which processes control the accretion of North American terranes to the Pacific plate and the landward migration of the San Andreas plate boundary. One theory suggests that the Pacific plate first cools and captures the upwelling mantle in the slab window, and then causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires 3D thermomechanical numerical modeling; however, a software tool suitable for such modeling is at present not available in the geodynamic modeling community. The major aim of the presented work therefore comprises two interconnected tasks. The first is the development and testing of a research finite element code with sufficiently advanced facilities to perform three-dimensional, geological-time-scale simulations of lithospheric deformation. The second is the application of the developed tool to the Neogene deformation of the crust and mantle along the San Andreas Fault System in central and northern California. Geological-time-scale modeling of lithospheric deformation poses numerous conceptual and implementation challenges, among them the necessity to handle the brittle-ductile transition within a single computational domain, to adequately represent rock rheology over a broad range of temperatures and stresses, and to resolve extreme deformations of the free surface and internal boundaries. In the framework of this thesis, the new finite element code SLIM3D has been successfully developed and tested. The code provides a coupled thermomechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. It incorporates an Arbitrary Lagrangian Eulerian formulation with a free surface and Winkler boundary conditions. The modeling technique developed is used to study the factors influencing the Neogene lithospheric deformation in central and northern California. The model setup focuses on the interaction between the three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenospheric upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, a simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones, together with the transpression related to fault stepping, renders cooling in the slab window alone incapable of explaining the eastward migration of the plate boundary. From the viewpoint of thermomechanical modeling, the results confirm the geological concept that a series of microplate capture events has been the primary reason for the inland migration of the San Andreas plate boundary over the last 20 Ma. The remnants of the Farallon slab, stalled in the fossil subduction zone, create a much stronger heterogeneity in the mantle than the cooling of the upwelling asthenosphere, providing a more efficient and direct way of transferring the North American terranes to the Pacific plate. The models demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization in the brittle crust. The friction coefficient inferred from the modeling is about 0.075, far less than the typical values of 0.6-0.8 obtained from a variety of borehole stress measurements and laboratory data. The model results presented in this thesis therefore provide an additional independent constraint supporting the "weak-fault" hypothesis in the long-standing debate over the strength of major faults in the SAFS.
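To give a flavor of the rheology evaluated at each integration point of such a code, the sketch below combines a dislocation-creep effective viscosity with a Mohr-Coulomb yield cap. The olivine-like material constants are generic textbook-style values assumed for illustration, not the parameters used in the thesis; only the friction value echoes the weak-fault estimate quoted above.

```python
# Dislocation-creep viscosity with a Mohr-Coulomb stress cap (illustrative).
import numpy as np

R = 8.314                        # gas constant, J/(mol K)
A, n, E = 1e-16, 3.5, 530e3      # pre-factor (Pa^-n s^-1), stress exponent, activation energy

def dislocation_viscosity(strain_rate, T):
    """eta = 0.5 * A^(-1/n) * eps^((1-n)/n) * exp(E/(n R T)), T in K."""
    return 0.5 * A ** (-1.0 / n) * strain_rate ** ((1.0 - n) / n) * np.exp(E / (n * R * T))

def mohr_coulomb_cap(eta, strain_rate, pressure, cohesion=20e6, friction=0.075):
    """Limit the creep stress by the yield stress tau_y = C + mu * P."""
    tau_creep = 2.0 * eta * strain_rate
    tau_yield = cohesion + friction * pressure
    return eta if tau_creep <= tau_yield else tau_yield / (2.0 * strain_rate)

eta = dislocation_viscosity(1e-14, T=800 + 273.15)
print(f"creep viscosity: {eta:.2e} Pa s")
print(f"after yield cap: {mohr_coulomb_cap(eta, 1e-14, pressure=200e6):.2e} Pa s")
```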
This thesis presents investigations on sediments from two African lakes which have been recording changes in their surrounding environmental and climate conditions since more than 200,000 years. Focus of this work is the time of the last Glacial and the Holocene (the last ~100,000 years before present [in the following 100 kyr BP]). One important precondition for this kind of research is a good understanding of the present ecosystems in and around the lakes and of the sediment formation under modern climate conditions. Both studies therefore include investigations on the modern environment (including organisms, soils, rocks, lake water and sediments). A 90 m long sediment sequence was investigated from Lake Tswaing (north-eastern South Africa) using geochemical analyses. These investigations document alternating periods of high detrital input and low (especially autochthonous) organic matter content and periods of low detrital input, carbonatic or evaporitic sedimentation and high autochthonous organic matter content. These alternations are interpreted as changes between relatively humid and arid conditions, respectively. Before c. 75 kyr BP, they seem to follow changes in local insolation whereas afterwards they appear to be acyclic and are probably caused by changes in ocean circulation and/or in the mean position of the Inter-Tropical Convergence Zone (ITCZ). Today, these factors have main influence on precipitation in this area where rainfall occurs almost exclusively during austral summer. All modern organisms were analysed for their biomarker and bulk organic and compound-specific stable carbon isotope composition. The same investigations on sediments from the modern lake floor document the mixed input of the investigated individual organisms and reveal additional influences by methanotrophic bacteria. A comparison of modern sediment characteristics with those of sediments covering the time 14 to 2 kyr BP shows changes in the productivity of the lake and the surrounding vegetation which are best explained by changes in hydrology. More humid conditions are indicated for times older than 10 kyr BP and younger than 7.5 kyr BP, whereas arid conditions prevailed in between. These observations agree with the results from sediment composition and indications from other climate archives nearby. The second lake study deals with Lake Challa, a small, deep crater lake on the foot of Mount Kilimanjaro. In this lake form mm-scale laminated sediments which were analyses with micro-XRF scanning for changes in the element composition. By comparing these results with investigations on thin sections, results from ongoing sediment trap studies, meteorological data, and investigations on the surrounding rocks and soils, I develop a model for seasonal variability in the limnology and sedimentation of Lake Challa. The lake appears to be stratified during the warm rain seasons (October – December and March – May) during which detrital material is delivered to the lake and carbonates precipitate. On the lake floor forms a dark lamina with high contents of Fe and Ti and high Ca/Al and low Mn/Fe ratios. Diatoms bloom during the cool and windy season (June – September) when mixing down to c. 60 m depth provides easily bio-available nutrients. Contemporaneously, Fe and Mn-oxides are precipitating which cause high Mn/Fe ratios in the light diatom-rich laminae of the sediments. 
Trends in the Mn/Fe ratio of the sediments are interpreted to reflect changes in the intensity or duration of seasonal mixing in Lake Challa. This interpretation is supported by parallel changes in the organic matter and biogenic silica contents observed in the 22 m long profile recovered from Lake Challa, which covers the last 25 kyr BP. It documents a transition around 16 kyr BP from relatively well-mixed conditions with high detrital input during glacial times to more strongly stratified conditions, probably related to rising lake levels in Challa and generally more humid conditions in East Africa. Intensified mixing is recorded for the time of the Younger Dryas and for the period between 11.4 and 10.7 kyr BP. For these periods, a reduced intensity of the SW monsoon and an intensified NE monsoon are reported from archives of the Indian-Asian monsoon region, arguing for the latter as a probable source of wind mixing in Lake Challa. This connection is probably also responsible for contemporaneous events in the Mn/Fe ratios of the Lake Challa sediments and in other records of northern hemisphere monsoon intensity during the Holocene, and it underlines the close interconnection of global low-latitude atmospheric circulation.
The bibliographic project 'Renaissance Linguistics Archive' (R.L.A.) aimed at establishing a comprehensive database of secondary sources covering the linguistic ideas developed by Renaissance scholars in Europe. The project, founded in 1986 by Mirko Tavoni (Pisa) and transferred to Gerda Haßler (Potsdam) in 1994, has so far resulted in three printed instalments, each containing 1000 records. It is the aim of this website to publish the results of the collective efforts undertaken thus far (R.L.A. 1.0, 1986-1999).
This work presents mathematical and computational approaches covering various aspects of metabolic network modelling, especially regarding the limited availability of detailed kinetic knowledge on reaction rates. It is shown that precise mathematical formulations of problems are needed i) to find appropriate and, if possible, efficient algorithms to solve them, and ii) to determine the quality of the approximate solutions found. Furthermore, some means are introduced to gain insights into dynamic properties of metabolic networks either directly from the network structure or by additionally incorporating steady-state information. Finally, an approach to identify key reactions in a metabolic network is introduced, which helps to develop simple yet useful kinetic models. The rise of novel techniques renders genome sequencing increasingly fast and cheap. In the near future, this will make it possible to analyze biological networks not only for species but also for individuals. Hence, automatic reconstruction of metabolic networks offers a means for evaluating this huge amount of experimental data. A mathematical formulation as an optimization problem is presented, taking into account existing knowledge and experimental data as well as the probabilistic predictions of various bioinformatic methods. The reconstructed networks are optimized for having large connected components of high accuracy, hence avoiding fragmentation into small isolated subnetworks. The usefulness of this formalism is exemplified by the reconstruction of the sucrose biosynthesis pathway in Chlamydomonas reinhardtii. The problem is shown to be computationally demanding and therefore necessitates efficient approximation algorithms. The problem of minimal nutrient requirements for genome-scale metabolic networks is also analyzed. Given a metabolic network and a set of target metabolites, the inverse scope problem has as its objective the determination of a minimal set of metabolites that have to be provided in order to produce the target metabolites. These target metabolites might stem from experimental measurements, and therefore are known to be produced by the metabolic network under study, or they are given as the desired end-products of a biotechnological application. The inverse scope problem is shown to be computationally hard to solve. However, I conjecture that the complexity depends strongly on the number of directed cycles within the metabolic network, which might guide the development of efficient approximation algorithms. Assuming mass-action kinetics, chemical reaction network theory (CRNT) allows conclusions about multistability to be drawn directly from the structure of metabolic networks. Although CRNT was originally based on mass-action kinetics, it is shown how further reaction schemes can be incorporated by emulating molecular enzyme mechanisms. CRNT is used to compare several models of the Calvin cycle, which differ in size and level of abstraction. Definite results are obtained for small models, but the available set of theorems and algorithms provided by CRNT cannot be applied to larger models due to the computational limitations of the currently available implementations. Given the stoichiometry of a metabolic network together with steady-state fluxes and concentrations, structural kinetic modelling makes it possible to analyze the dynamic behavior of the metabolic network, even if the explicit rate equations are not known.
In particular, this sampling approach is used to study the stabilizing effects of allosteric regulation in a model of human erythrocytes. Furthermore, the reactions of that model can be ranked according to their impact on the stability of the steady state. The most important reactions in this respect are identified as hexokinase, phosphofructokinase and pyruvate kinase, which are known to be highly regulated and almost irreversible. Kinetic modelling approaches using standard rate equations are compared and evaluated against reference models for erythrocytes and hepatocytes. The resulting simplified kinetic models reproduce the temporal behavior acceptably for small changes around a given steady state, but fail to capture important characteristics for larger changes. The aforementioned approach of ranking reactions according to their influence on stability is used to identify a small number of key reactions. These reactions are modelled in detail, including knowledge about allosteric regulation, while all other reactions are still described by simplified rate equations. The resulting so-called hybrid models capture the characteristics of the reference models significantly better than the simplified models alone. Such hybrid models might serve as a good starting point for kinetic modelling of genome-scale metabolic networks, as they provide reasonable results in the absence of experimental data regarding, for instance, allosteric regulation, for the vast majority of enzymatic reactions.
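To make the scope concept underlying the inverse scope problem concrete, the following minimal sketch (my own illustration in Python, not code from the thesis) computes the forward scope of a seed set, i.e. all metabolites producible by iteratively firing reactions whose substrates are already available. In contrast to the computationally hard inverse problem, this forward computation runs in polynomial time:

import itertools

def scope(reactions, seeds):
    """reactions: list of (substrates, products) pairs of metabolite sets;
    returns all metabolites reachable from the seed set."""
    available = set(seeds)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            # A reaction fires once all its substrates are available.
            if substrates <= available and not products <= available:
                available |= products
                changed = True
    return available

# Toy network: A + B -> C, C -> D
toy = [({"A", "B"}, {"C"}), ({"C"}, {"D"})]
assert scope(toy, {"A", "B"}) == {"A", "B", "C", "D"}
assert scope(toy, {"A"}) == {"A"}   # B missing: nothing can be produced

The inverse scope problem then asks for a smallest seed set whose scope covers a given set of targets, which in the worst case requires searching over the subsets of candidate seeds.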
The Earth’s magnetic field (EMF) is generated by convection in the electrically conducting, liquid, iron-rich outer core, modified by the Earth’s rotation. A drastic manifestation of the dynamics of this fluid body is the occurrence of geomagnetic field reversals in the Earth’s history, but also of geomagnetic excursions, which are more frequent features of otherwise stable polarity chrons, yet often poorly constrained in the geological record. To better understand the origin of the field, we need to know how it has varied on different geological timescales. This includes not only information about changes in the ancient field’s direction but also about its absolute intensity (palaeointensity) and age; the palaeointensity record is needed for compiling a full-vector description of the field. Palaeomagnetic and palaeointensity studies on lava flows allow insights to be gained into the evolution of the EMF through time and space. However, constraining the EMF evolution over different geological timescales remains a difficult objective due to the paucity of available palaeointensity data. One new alternative approach in palaeointensity studies is the recently proposed multispecimen parallel differential pTRM (MS) method, which potentially has several advantages over the commonly used Thellier method: it is in theory independent of magnetic domain state, less prone to biasing effects such as thermal alteration, and significantly faster to perform in the laboratory. A study of highly active volcanic regions, such as the Trans-Mexican Volcanic Belt, seems promising when attempting a full-vector reconstruction or when looking for field excursions. One aim of this thesis was to gain new information about the occurrence and global validity of geomagnetic excursions from the Brunhes and Matuyama Chrons. For this purpose, some 75 lava flows from within the Trans-Mexican Volcanic Belt were sampled for palaeomagnetic analyses. The scatter of virtual geomagnetic poles from lavas younger than 1.7 Ma was used for estimating palaeosecular variation and was found to be consistent with the latitude-dependent Model G and other high-quality palaeomagnetic data from Mexico. The palaeomagnetic mean vectors of 56 lavas were correlated to the Geomagnetic Polarity Timescale, supplemented with information on geomagnetic excursions. On the grounds of their associated radioisotopic ages, four lavas were tentatively correlated with known excursions from marine records. Two lava flows dating from the Brunhes Chron were associated with the Big Lost and Delta/Stage 17 excursions, respectively. Of two further flows dating from the Matuyama Chron, one was associated with either the Santa Rosa or the Kamikatsura excursion, while the other could have been emplaced during the Gilsa excursion. The most significant outcome was the finding that both Brunhes excursional flows display nearly fully reversed directions that deviate by almost 180° from the expected normal polarity direction. This observation could indicate that, in particular, the Big Lost and Delta/Stage 17 excursions may represent further short periods during which the field completed a full reversal for a short time, as was previously found for other, older cryptochrons or tiny wiggles. Another focus of this thesis was set on estimating the feasibility of the new MS method for routine palaeointensity determination.
This was accomplished by applying the MS method to samples from 11 historical lava flows from Mexico and Iceland for which the actual field intensity was either known from contemporary observatory data or deduced from magnetic field models. Comparing observed with expected intensity values allowed the accuracy of the MS method to be tested. It was found that the majority of palaeointensity estimates obtained with the MS method were very close to, or indistinguishable within the range of uncertainty from, the expected values. However, a general trend towards an overestimate of the palaeointensity was also observed, which, on the grounds of corroborating rock magnetic analyses, was associated with multidomain material. This observation was taken as first evidence that the MS method is not entirely independent of magnetic domain state, as was originally claimed. A second experiment, in which a modification of the most widely used Thellier method was applied to sister samples from 5 Icelandic flows, revealed that the Thellier-type method produced more accurate and statistically better defined palaeointensities than the MS method. Nevertheless, from these first results, the MS method appeared to be a viable alternative for future palaeointensity studies. Subsequently, it was attempted to corroborate the directional record from the Mexican lavas with palaeointensity data. It was possible to acquire palaeointensity estimates for 32 out of 51 investigated lava flows. These new results revealed that the new MS palaeointensities for Mexico are, with a high degree of statistical significance, around 30% higher than expected. The generally high palaeointensities seem to corroborate the results obtained from the historical lava flows in this study and from previous studies on synthetic samples, where domain-state effects were found to cause overestimates of up to 30% in the MS method. The primary process leading to this overestimate is attributed to an asymmetry between the demagnetisation and remagnetisation processes. Yet, this overestimate is expected to be no larger than the bias expected from Thellier experiments performed on samples with a comparable degree of multidomain behaviour.
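For reference, the latitude-dependent Model G mentioned above describes the angular dispersion S of virtual geomagnetic poles as a function of site latitude \lambda through two fitted shape parameters a and b (the parameter values are fit-dependent and are not quoted here):

S(\lambda) = \sqrt{a^{2} + (b\,\lambda)^{2}}

so that the VGP scatter observed for the Mexican lavas can be compared against the dispersion predicted for their latitude.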
Modern anthropogenic forcing of atmospheric chemistry poses the question of how the Earth System will respond as thousands of gigatons of greenhouse gases are rapidly added to the atmosphere. A similar, albeit nonanthropogenic, situation occurred during the early Paleogene, when a catastrophic release of carbon to the atmosphere triggered an abrupt increase in global temperatures. The best documented of these events is the Paleocene-Eocene Thermal Maximum (PETM, ~55 Ma), when the magnitude of carbon addition to the oceans and atmosphere was similar to that expected for the future. This event initiated global warming, changes in hydrological cycles, biotic extinctions and migrations. A recently proposed hypothesis concerning changes in marine ecosystems suggests that this global warming strongly influenced the shallow-water biosphere, triggering extinctions and turnover in the Larger Foraminifera (LF) community and the demise of corals. The successions of the Adriatic Carbonate Platform (SW Slovenia) represent an ideal location to test the hypothesis of a possible causal link between the PETM and the evolution of shallow-water organisms, because they record continuous sedimentation from the Late Paleocene to the Early Eocene and are characterized by a rich biota, especially LF, fundamental for detailed biostratigraphic studies. In order to reconstruct paleoenvironmental conditions during deposition, I focused on sedimentological analysis and a paleoecological study of benthic assemblages. During the Late Paleocene to earliest Eocene, sedimentation occurred on a shallow-water carbonate ramp system characterized by enhanced nutrient levels. LF represent the common constituent of the benthic assemblages that thrived in this setting throughout the Late Paleocene to the Early Eocene. With detailed biostratigraphic and chemostratigraphic analyses documenting the most complete record available to date for the PETM event in a shallow-water marine environment, I correlated chemostratigraphically for the first time the evolution of LF with the δ¹³C curves. This correlation demonstrated that no major turnover in the LF communities occurred synchronously with the PETM; thus the evolution of LF was mainly controlled by endogenous biotic forces. The study of Late Thanetian metre-sized microbialite-coral mounds, which developed in the middle part of the ramp, documented the first Cenozoic occurrence of microbially-cemented mounds. The development of these mounds, with temporary dominance of microbial communities over corals, suggests environmentally-triggered “phase shifts” related to frequent fluctuations of nutrient/turbidity levels during recurrent wet phases which preceded the extreme greenhouse conditions of the PETM. The paleoecological study of the coral community in the microbialite-coral mounds, the study of corals from an Early Eocene platform in SW France, and a critical, extensive literature survey of Late Paleocene – Early Eocene coral occurrences from the Tethyan, Atlantic and Caribbean realms suggested that these coral types, even if not forming extensive reefs, are common in the biofacies as small isolated colonies, piles of rubble or small patch reefs. These corals might have developed ‘alternative’ life strategies to cope with harsh conditions (high/fluctuating nutrients/turbidity, extreme temperatures, perturbation of the aragonite saturation state) during the greenhouse times of the early Paleogene, representing a good fossil analogue to modern corals thriving close to their thresholds for survival.
These results demonstrate the complexity of biological responses to extreme conditions, not only in terms of temperature but also of nutrient supply and physical disturbance, and of their temporal variability and oscillating character.
Dietary antioxidants are believed to play an important role in the prevention and treatment of a variety of diseases associated with oxidative stress. Although there is a wide range of dietary antioxidants, the bulk of the research to date has focused on the nutrient antioxidants vitamins C and E and the carotenoids. Certain relatively uncommon antioxidants such as lipoic acid (LA) and phenolic compounds such as (-)-epicatechin (EC), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), and (-)-epigallocatechin gallate (EGCG) have not been extensively investigated, although they may exert greater antioxidant potency than carotenoids and vitamins. Extracts from selected plants and plant byproducts may represent rich sources of one or more such antioxidants and may therefore exhibit stronger effects than a single antioxidant, due to synergistic effects between these antioxidants. Indeed, in the last decade a number of epidemiological, animal and in vitro studies have suggested a protective and therapeutic potency of these antioxidants in a broad range of diseases such as cancer, diabetes, atherosclerosis, cataract and acute and chronic neurological disorders. Inflammation, the response of the host toward any infection or injury, plays a central role in the development of many chronic diseases. Several lines of evidence demonstrate that different types of cancer can arise from sites of inflammation. This suggests that active oxygen species and some cytokines generated in inflamed tissues can cause injury to DNA and ultimately lead to carcinogenesis. Diethylnitrosamine (DEN) is one of the most important environmental carcinogens; it is present in a variety of foods, alcoholic beverages and tobacco smoke, and it can be synthesized endogenously. In addition to the liver, it can induce carcinogenesis in other organs such as kidney, trachea, lung, esophagus, forestomach, and nasal cavity. Several epidemiological and laboratory studies indicate that nitroso compounds including DEN may induce hyperplasia and chronic inflammation, which is closely associated with the development of hepatocellular carcinoma. Despite increasing evidence on the potential of antioxidants to modulate the etiology of chronic diseases, little is known about their role in inflammation and the acute phase response (APR). Therefore, the aim of the present work was to study the protective effect of water and solvent extracts of eight plants and plant byproducts, including green tea, artichoke, spinach, broccoli, onion, eggplant, and orange and potato peels, as well as eight antioxidant agents, including EC, EGC, ECG, EGCG, ascorbic acid (AA), N-acetylcysteine (NAC), α-LA, and alpha-tocopherol (α-TOC), against acute inflammation induced by interleukin-6 (IL-6) and hepatotoxicity induced by DEN in vitro. The negative acute phase proteins (APP) transthyretin (TTR) and retinol-binding protein (RBP) were used as inflammatory biomarkers, analyzed by ELISA, whereas the neutral red assay was used for evaluating cytotoxicity. All experiments were performed in vitro using the human hepatocarcinoma cell line HepG2. Additionally, the antioxidant activity was measured by TEAC and FRAP assays; the phenolic content was measured by the Folin–Ciocalteu method and characterized by HPLC. Moreover, the microheterogeneity of TTR was examined using an immunoprecipitation assay combined with SELDI-TOF MS.
The results of the present study showed that HepG2 cells provide a simple, sensitive in vitro system for studying the regulation of the negative APPs TTR and RBP under normal and inflammatory conditions. IL-6, a potent proinflammatory cytokine, at a concentration of 25 ng/ml was able to reduce TTR and RBP secretion by approximately 50-60% after 24 h of incubation. With the exception of broccoli and the water extract of onion, which showed pro-inflammatory effects in this study, all other plant extracts, at specific concentrations, were able to elevate TTR secretion under normal conditions and even under treatment with IL-6, although the effect was markedly smaller in the latter case. Green tea, followed by artichoke and potato peel, exhibited the highest elevation in TTR concentration, which reached 1.1- and 2.5-fold of the control in the presence and absence of IL-6, respectively. In general, the plant extracts can be ranked according to their anti-inflammatory potency as follows: water extracts: green tea > artichoke > potato peel > orange peel > spinach > eggplant peel; solvent extracts: green tea > artichoke > potato peel > spinach > eggplant peel > onion > orange peel. The anti-inflammatory effects of the water extracts of green tea, artichoke and orange peel were significantly higher than those of their corresponding solvent extracts, whereas the water extracts of eggplant peel, potato peel and spinach showed lower effects than their solvent extracts. Among the pure antioxidants, α-LA, followed by EGCG and ECG, exhibited the highest elevation in TTR concentration. The relation between the anti-inflammatory potential and the antioxidant activity and phenolic content of the investigated substances was generally weak. This may suggest the involvement of mechanisms other than antioxidant properties in the observed effects. TTR secreted by HepG2 cells has a molecular structure quite similar to the purified standard and to serum TTR, in which all three main variants are contained: native, S-cysteinylated and S-glutathionylated TTR. Interestingly, a variant with a molecular mass of 13453.8 ± 8.3 Da was detected only in TTR secreted by HepG2 cells. Among all investigated antioxidants and plant extracts, six substances were able to elevate the preferable native TTR variant. The potency of these substances can be ordered as follows: α-LA > NAC > onion > AA > EGCG > green tea. A weak correlation between the elevation of TTR and the shift to the native form was observed; a similarly weak correlation was observed between antioxidant activity and the elevation of native TTR. Although DEN was able to induce cell death in a concentration-dependent manner, it required considerably higher concentrations for its effects, especially after 24 h. This may be attributed to a lack of cytochrome P450 enzymes in HepG2 cells. At selected concentrations, some antioxidants and plant extracts significantly attenuated DEN cytotoxicity, in the following order: spinach > α-LA > artichoke > orange peel > eggplant peel > α-TOC > onion > AA. In contrast, all other substances, especially green tea, broccoli, potato peel, and ECG, stimulated DEN toxicity. In conclusion, this study demonstrated that selected antioxidants and plant extracts may attenuate the inflammatory process, not only through their antioxidant potency but also through other mechanisms which remain unclear. They may also play a vital role in stabilizing the tetrameric structure of TTR and may thereby help prevent amyloid diseases. In this study, lipoic acid showed a unique protective function against both inflammation and hepatotoxicity.
Despite the protective effects demonstrated by the investigated substances, attention should also be given to the pro-oxidant and potentially cytotoxic effects produced at higher concentrations.
The study of biological interaction networks is a central theme of systems biology. Here, we investigate common as well as differentiating principles of molecular interaction networks associated with different levels of molecular organization, including metabolic pathway maps, protein-protein interaction networks and kinase interaction networks. First, we present an integrated analysis of metabolic pathway maps and protein-protein interaction networks (PIN). It has long been established that successive enzymatic steps are often catalyzed by physically interacting proteins forming permanent or transient multi-enzyme complexes. Inspecting high-throughput PIN data, it has recently been shown that, indeed, enzymes involved in successive reactions are generally more likely to interact than other protein pairs. In this study, we expanded this line of research to compare the respective underlying network topologies and to investigate whether the spatial organization of enzyme interactions correlates with metabolic efficiency. Analyzing yeast data, we detected long-range correlations between shortest paths in both network types, suggesting a mutual correspondence of the two network architectures. We discovered that the organizing principles of physical interactions between metabolic enzymes differ from those of the general PIN of all proteins. While physical interactions between proteins are generally disassortative, enzyme interactions were observed to be assortative: enzymes frequently interact with other enzymes of similar rather than different degree. Enzymes carrying high flux loads are more likely to physically interact than enzymes with lower metabolic throughput. In particular, enzymes associated with catabolic pathways as well as enzymes involved in the biosynthesis of complex molecules were found to exhibit high degrees of physical clustering. Single proteins were identified that connect major components of cellular metabolism and hence might be essential for the structural integrity of several biosynthetic systems. Besides metabolic aspects of PINs, we investigated the characteristic topological properties of protein interactions involved in signaling and regulatory functions mediated by kinase interactions. Characteristic topological differences between PINs associated with metabolism and those describing phosphorylation networks were revealed and shown to reflect the different modes of biological operation of the two network types. The construction of phosphorylation networks is based on the identification of specific kinase-target relations, including the determination of the actual phosphorylation sites (P-sites). The computational prediction of P-sites as well as the identification of the involved kinases still suffers from insufficient accuracy and specificity of the underlying prediction algorithms, and experimental identification at genome scale is not (yet) feasible. Computational prediction methods have focused primarily on extracting predictive features from the local, one-dimensional sequence information surrounding P-sites. However, the recognition of such motifs by the respective kinases is a spatial event. Therefore, we characterized the spatial distributions of amino acid residue types around P-sites and extracted signature 3D-profiles. We then tested the added value of spatial information on the prediction performance. When compared to sequence-only based predictors, a consistent performance gain was obtained.
The availability of reliable training data of experimentally determined P-sites is critical for the development of computational prediction methods. As part of this thesis, we provide an assessment of false-positive rates of phosphoproteomic data.
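As an illustration of the assortativity measure underlying the comparison above (a generic sketch in Python, not the analysis code of the thesis; the generated graph is a stand-in for the real yeast networks), the degree assortativity coefficient r can be computed with the networkx package; r > 0 indicates assortative mixing, as reported for enzyme interactions, and r < 0 disassortative mixing, as is typical for the overall PIN:

import networkx as nx

# Toy stand-in for a protein-protein interaction network; real PIN data
# would instead be loaded as an edge list.
G = nx.barabasi_albert_graph(1000, 2, seed=1)

# Pearson correlation of degrees at the two ends of every edge:
# r > 0 -> assortative, r < 0 -> disassortative.
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity r = {r:.3f}")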
Water shortage is a serious threat for many societies worldwide. In drylands, water management measures such as the construction of reservoirs are affected by eroded sediments transported in the rivers. Thus, the capability of assessing water and sediment fluxes at the river basin scale is of vital importance to support management decisions and policy making. This subject was addressed by the DFG-funded SESAM project (Sediment Export from large Semi-Arid catchments: Measurements and Modelling). As a part of this project, this thesis focuses on (1) the development and implementation of an erosion module for a meso-scale catchment model, (2) the development of upscaling and generalization methods for the parameterization of such a model, (3) the execution of measurements to obtain the data required for the modelling, and (4) the application of the model to different study areas and its evaluation. The research was carried out in two meso-scale dryland catchments in NE Spain: Ribera Salada (200 km²) and Isábena (450 km²). Addressing objective 1, WASA-SED, a spatially semi-distributed model for water and sediment transport at the meso-scale, was developed. The model simulates runoff and erosion processes at the hillslope scale, transport processes of suspended and bedload fluxes in the river reaches, and retention and remobilisation processes of sediments in reservoirs. This thesis introduces the model concept, presents current model applications and discusses its capabilities and limitations. Modelling at larger scales faces the dilemma of describing relevant processes while maintaining a manageable demand for input data and computation time. WASA-SED addresses this challenge by employing an innovative catena-based upscaling approach: the landscape is represented by characteristic toposequences. For deriving these toposequences with regard to multiple attributes (e.g. topography, soils, vegetation), the LUMP algorithm (Landscape Unit Mapping Program) was developed, addressing objective 2. It incorporates an algorithm to retrieve representative catenas and their attributes, based on a digital elevation model and supplemental spatial data. These catenas are classified to provide the discretization for the WASA-SED model. For objective 3, water and sediment fluxes were monitored at the catchment outlet of the Isábena and at some of its sub-catchments. For sediment yield estimation, the intermittent measurements of suspended sediment concentration (SSC) had to be interpolated. This thesis presents a comparison of traditional sediment rating curves (SRCs), generalized linear models (GLMs) and non-parametric regression using Random Forests (RF) and Quantile Regression Forests (QRF). The observed SSCs are highly variable and range over six orders of magnitude. For these data, traditional SRCs performed poorly, as did GLMs, despite the inclusion of other relevant process variables (e.g. rainfall intensities, discharge characteristics). RF and QRF proved to be very robust and performed favourably in reproducing sediment dynamics; QRF additionally excels in providing estimates of the accuracy of the predictions. Subsequent analysis showed that most of the sediment was exported during intense storms of late summer; later floods yielded successively less sediment. Comparing sediment generation to yield at the outlet suggested considerable storage effects within the river channel. Addressing objective 4, the WASA-SED model was parameterized for the two study areas in NE Spain and applied with different foci.
For Ribera Salada, the uncalibrated model yielded reasonable results for runoff and sediment. It provided quantitative measures of the change in runoff and sediment yield for different land uses. Additional land management scenarios were presented and compared to the impacts of climate change projections. In contrast, the application to the Isábena focussed on exploring the full potential of the model's predictive capabilities. The calibrated model achieved an acceptable performance for the validation period in terms of water and sediment fluxes. The inadequate representation of the lower sub-catchments caused considerable reductions in model performance, while results for the headwater catchments showed good agreement despite stark contrasts in sediment yield. In summary, the application of WASA-SED to three catchments proved the model framework to be a practicable multi-scale approach. It successfully links the hillslope to the catchment scale and integrates the three components hillslope, river and reservoir in one model. Thus, it provides a feasible approach for tackling issues of water and sediment yield at the meso-scale. The crucial role of processes such as transmission losses and sediment storage in the river has been identified. Further advances can be expected when the representation of the connectivity of water and sediment fluxes (intra-hillslope, hillslope-river, intra-river) is refined and input data improve.
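To make the compared SSC estimators concrete, the following Python sketch (synthetic data and all parameter choices are my own, not the SESAM measurements) fits a classical power-law sediment rating curve SSC = a * Q**b by log-log regression and, for comparison, a random forest that can exploit ancillary predictors; the spread of the per-tree predictions serves here as a crude stand-in for the prediction intervals that Quantile Regression Forests provide:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
q = rng.lognormal(mean=1.0, sigma=1.0, size=500)       # discharge (synthetic)
rain = rng.exponential(scale=5.0, size=500)            # rainfall intensity
ssc = 10 * q**1.5 * np.exp(0.05 * rain) * rng.lognormal(0, 0.8, 500)

# Sediment rating curve: linear least squares in log-log space.
b, log_a = np.polyfit(np.log(q), np.log(ssc), 1)
ssc_src = np.exp(log_a) * q**b

# Random forest using discharge and rainfall as features; quantiles over
# the per-tree predictions approximate a QRF-style uncertainty band.
X = np.column_stack([q, rain])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ssc)
per_tree = np.stack([tree.predict(X) for tree in rf.estimators_])
q10, q90 = np.quantile(per_tree, [0.1, 0.9], axis=0)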
The papers contained in this issue share the insight that the different components of the grammar sometimes impose conflicting requirements on the grammar’s output, and that, in order to handle such conflicts, it seems advantageous to combine aspects from minimalist and OT modelling. The papers show that this can be undertaken in a multiplicity of ways, by using varying proportions of each framework, and offer a broad range of perspectives for future research.
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, roles may join and leave the collaboration at runtime, so complex structural dynamics can result, which makes it very hard to ensure correct and safe operation. In this paper, we present our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results on the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme that enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
In this thesis, the properties of nonlinear disordered one-dimensional lattices are investigated. Part I gives an introduction to the phenomenon of Anderson localization, to the Discrete Nonlinear Schrödinger Equation and its properties, and to the generalization of this model obtained by introducing the nonlinear index α. In Part II, the spreading behavior of initially localized states in large disordered chains under the influence of nonlinearity is studied. To this end, different methods to measure localization are discussed, and the structural entropy is introduced as a measure of the peak structure of probability distributions. Finally, the spreading exponent is determined numerically for several nonlinear indices and compared with analytical approximations. Part III deals with thermalization in short disordered chains. First, the term thermalization and its application to the system in use are explained. Then, results of numerical simulations on this topic are presented, with particular focus on the energy dependence of the thermalization properties. A connection with so-called breathers is drawn.
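By way of illustration, two common localization measures for a normalized on-site probability distribution p_k can be computed as follows (a self-contained Python sketch in my own conventions, not code from the thesis): the participation number P = 1/Σ p_k² counts the effectively populated sites, and the structural entropy S_str = S − ln P, with S the Shannon entropy (in the sense of Pipek and Varga), isolates the contribution of the peak structure:

import numpy as np

def participation_number(p):
    """Effective number of populated sites of a normalized distribution."""
    return 1.0 / np.sum(p**2)

def structural_entropy(p):
    """S_str = S - ln(P), the peak-structure part of the Shannon entropy."""
    q = p[p > 0]                       # avoid log(0); zeros carry no weight
    shannon = -np.sum(q * np.log(q))
    return shannon - np.log(participation_number(p))

# Example: an exponentially localized profile on a chain of 101 sites.
k = np.arange(-50, 51)
p = np.exp(-np.abs(k) / 5.0)
p /= p.sum()
print(participation_number(p), structural_entropy(p))

Tracking how P grows with time for an initially single-site excitation then gives direct access to the spreading exponent discussed above.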
Music is a powerful and reliable means to stimulate the percept of both intense pleasantness and unpleasantness in the perceiver. However, everyone’s social experience with music suggests that the same music piece may elicit a very different valence percept in different individuals. A comparison of music from different historical periods suggests that enculturation modulates the valence percept of intervals and harmonies, and thus possibly also relatively basic feature extraction processes. Strikingly, it is still largely unknown how much the valence percept depends on physical properties of the stimulus, and is thus mediated by a universal perceptual mechanism, and how much it depends on cultural imprinting. The current thesis investigates the neurophysiology of the valence percept and the modulating influence of culture on several distinguishable sub-processes, so-called functional modules of music processing, engaged in mediating the valence percept.
In my dissertation on 'Security Cooperation as a Way to Stop the Spread of Nuclear Weapons? Nuclear Nonproliferation Policies of the United States towards the Federal Republic of Germany and Israel, 1945-1968', I study the use of security assistance as nonproliferation policy. I use insights from the Structural Realist and Rational Institutionalist theories of International Relations to explain, respectively, important foreign policy goals and the basic orientation of policies, on the one hand, and the practical workings and effects of security cooperation on states’ behavior, on the other. Moreover, I consider the relations of the United States (US) with the two states in light of bargaining theory to explain the degree to which the US was able to press other states towards its preferred courses of action. The study is thus a combination of theory proposal and testing with historical description and explanation. It is also policy-relevant, as I seek general lessons regarding the use of security cooperation as nonproliferation policy. I show that the US sought to keep the Federal Republic of Germany (FRG) from acquiring nuclear weapons in order to avoid crises with Moscow and threats to the cohesion of NATO. But the US also saw it as necessary to credibly guarantee the security of the FRG and to treat it well in order to ensure that it would remain satisfied as an ally and without nuclear weapons of its own. Through various institutionalized security cooperation schemes, the US succeeded in this – though the FRG did acquire an option to produce nuclear weapons. The US opposed Israel’s nuclear weapon ambitions, in turn, because of an expectation that Arab states’ reactions could otherwise result in greater tension, risks of escalation and a worse balance of power in the area. But as a US-Israel alliance could also have led to stronger Arab-Soviet ties and thus a worse balance of power, and as it was not in the US interest to be tied to Israel’s side in all regional issues, the US was not prepared to guarantee Israel’s security in a formal, credible way as it did in West Germany’s case. The US failed to persuade Israel to forgo producing nuclear weapons, but gradually an opaque nuclear status, combined with US arms sales that helped Israel maintain a conventional military advantage over the Arab states, emerged as a solution for Israel’s security strategy. Because of perceptions that Israel and the FRG also had options other than cooperation with the US, and because the US ability to punish them for unwanted action was limited, these states were able to offer resistance when the US pressed its nonproliferation stance on them.
The fat-soluble vitamin A, chemically referred to as retinol (ROH), is known to be essential for vision and the immune system, but also for cell differentiation and proliferation. Recently, ROH itself has been reported to be involved in adipogenesis, and a ROH transport protein, retinol-binding protein 4 (RBP4), in insulin resistance and type 2 diabetes. However, there is still considerable scientific debate about this relation. With the increasing number of studies investigating the role of ROH in obesity and type 2 diabetes, basic research is an essential prerequisite for interpreting their results. This thesis enhances the knowledge of this relation by examining ROH metabolism at the extra- and intracellular levels. Aim 1: In the blood stream, ROH is transported to the target cells in a complex with RBP4 and a second protein, transthyretin (TTR). The levels of RBP4 and TTR are influenced by several factors, but mainly by liver and kidney function, because the liver and the kidneys are the sites of RBP4 synthesis and catabolism, respectively. Interestingly, obesity and type 2 diabetes involve disorders of the liver and the kidneys. Therefore, the aim was to investigate factors that influence RBP4 and TTR levels in relation to obesity and type 2 diabetes (Part 1). Aim 2: Once it arrives in the target cell, ROH is bound to cellular retinol-binding protein type I (CRBP-I) and metabolised: ROH can either be stored as retinylesters or be oxidised to retinoic acid (RA). By acting as a transcription factor in the nucleus, RA may influence processes such as adipogenesis. Therefore, vitamin A has been postulated to be involved in obesity and type 2 diabetes. CRBP-I is known to mediate the storage of ROH in the liver, but the extra-hepatic metabolism and functions of CRBP-I are not well known. This was investigated in Part 2 of this work. Material & Methods: RBP4 and TTR levels were investigated by ELISA in serum samples of human subjects with overweight, type 2 diabetes, or kidney or liver dysfunction. Molecular alterations of the RBP4 and TTR protein structures were analysed by MALDI-TOF mass spectrometry. The functions of intracellular CRBP-I were investigated in CRBP-I knock-out mice in liver and extra-hepatic tissues by measuring ROH levels as well as the levels of its storage form, the retinylesters, using reverse-phase HPLC. The postprandial uptake of ROH into tissues was analysed using labelled ROH. The mRNA levels of enzymes that metabolize ROH were examined by real-time polymerase chain reaction (PCR). Results: The previously published results showing increased RBP4 levels in type 2 diabetic patients could not be confirmed in this work. However, it could be shown that RBP4 levels are increased during kidney dysfunction and that RBP4 and TTR levels are decreased during liver dysfunction. The important new finding of this work is that RBP4 levels in type 2 diabetic mice were increased when kidney function was decreased. Thus, an increase in RBP4 levels in type 2 diabetes may be the effect of reduced kidney function, which is common in type 2 diabetes. Interestingly, during severe kidney dysfunction the molecular structures of RBP4 and TTR were altered in a specific manner, which was not the case in liver diseases and type 2 diabetes. This underlines the important function of the kidneys in RBP4 metabolism.
CRBP-I was confirmed to be responsible for ROH storage in the liver, since CRBP-I knock-out mice had decreased levels of ROH and retinylesters (the storage form of ROH) in the liver. Interestingly, in adipose tissue (the second-largest ROH storage tissue in the body), ROH and retinylester levels were higher in the CRBP-I knock-out than in the wild-type mice. It could be shown in this work that a different ROH binding protein, cellular retinol-binding protein type III, is upregulated in CRBP-I knock-out mice. Moreover, enzymes were identified which very efficiently mediate ROH esterification in the adipose tissue of the knock-out mice. In the pancreas, there was a higher postprandial ROH uptake in the CRBP-I knock-out compared to wild-type mice. Even under a vitamin A-deficient diet, the knock-out animals had ROH and retinylester levels comparable to those of wild-type animals. These results underline the important role of ROH for insulin secretion in the pancreas. Summing up, there is evidence that RBP4 levels are determined more by kidney function than by type 2 diabetes and that specific molecular modifications occur during kidney dysfunction. The results in adipose tissue and pancreas of CRBP-I knock-out mice support the hypothesis that ROH plays an important role in glucose and lipid metabolism.
New ABC triblock copolymers were synthesized by controlled free-radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT). Compared to amphiphilic diblock copolymers, the prepared materials formed more complex self-assembled structures in water owing to their three different functional blocks. Two strategies were followed. The first approach relied on double-thermoresponsive triblock copolymers exhibiting Lower Critical Solution Temperature (LCST) behavior in water. While the first phase transition triggers the self-assembly of the triblock copolymers upon heating, the second one allows the self-assembled state to be modified. The stepwise self-assembly was followed by turbidimetry, dynamic light scattering (DLS) and ¹H NMR spectroscopy, as these methods reflect the behavior on the macroscopic, mesoscopic and molecular scales. Although the first phase transition could easily be monitored due to the onset of self-assembly, it was difficult to identify the second phase transition unambiguously, as the changes are either marginal or coincide with the slow response of the self-assembled system to relatively fast changes of temperature. The second approach towards advanced polymeric micelles exploited the thermodynamic incompatibility of “triphilic” block copolymers – namely polymers bearing a hydrophilic, a lipophilic and a fluorophilic block – as the driving force for self-assembly in water. The self-assembly of these polymers in water produced polymeric micelles comprising a hydrophilic corona and a microphase-separated micellar core with lipophilic and fluorophilic domains – so-called multi-compartment micelles. The association of the triblock copolymers in water was studied by ¹H NMR spectroscopy, DLS and cryogenic transmission electron microscopy (cryo-TEM). Direct imaging of the polymeric micelles in solution by cryo-TEM revealed different morphologies depending on the block sequence and the preparation conditions. While polymers with the sequence hydrophilic-lipophilic-fluorophilic formed core-shell-corona micelles with a fluorinated core, block copolymers with the hydrophilic block in the middle formed spherical micelles in which single or multiple fluorinated domains “float” as disks on the surface of the lipophilic core. Increasing the temperature during micelle preparation, or annealing the aqueous solutions at higher temperatures after preparation, occasionally induced a change of the micelle morphology or of the particle size distribution. RAFT polymerization not only allowed the desired polymeric architectures to be realized, but additionally provided a valuable tool for molar mass characterization. The thiocarbonylthio moieties, which are present at the chain ends of polymers prepared by RAFT, absorb light in the UV and visible range and were employed for end-group analysis by UV-vis spectroscopy. A variety of dithiobenzoate and trithiocarbonate RAFT agents with differently substituted initiating R groups were synthesized. The investigation of their absorption characteristics showed that the intensity of the absorptions depends sensitively on the substitution pattern next to the thiocarbonylthio moiety and on the solvent polarity. Based on these results, the conditions for a reliable and convenient end-group analysis by UV-vis spectroscopy were optimized.
As end-group analysis by UV-vis spectroscopy is insensitive to the potential association of polymers in solution, it was advantageously exploited for the molar mass characterization of the prepared amphiphilic block copolymers.
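The principle behind this end-group method can be stated compactly (a generic textbook formulation in my own notation, not equations quoted from the thesis): by the Beer-Lambert law, the molar concentration of thiocarbonylthio end groups follows from the absorbance A at the band maximum as c_{end} = A / (\varepsilon \ell), with \varepsilon the molar absorptivity of the chromophore and \ell the optical path length. Assuming exactly one thiocarbonylthio group per chain, the number-average molar mass then follows from the mass concentration of the dissolved polymer:

M_n = \frac{c_{polymer}}{c_{end}} = \frac{c_{polymer}\, \varepsilon\, \ell}{A}

where c_{polymer} is given in g/L, so that M_n is obtained in g/mol.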
Since the end of Apartheid, international tourism in South Africa has gained increasing importance for the national economy. This PKS issue focuses on a particular form of tourism: township tourism, i.e. guided tours to the residential areas of the black population. About 300,000 tourists per year visit the townships of Cape Town. The tours are also called Cultural, Social, or Reality Tours. The different aspects of township tourism in Cape Town were the subject of a geographic field study undertaken during a student research project of Potsdam University in 2007. The text at hand presents the empirical results of the field study and demonstrates how townships are constructed as spaces of tourism.
Model-driven software development requires techniques to consistently propagate modifications between different related models in order to realize its full potential. For large-scale models, efficiency is essential in this respect. In this paper, we present an improved model synchronization algorithm based on triple graph grammars that is highly efficient and can therefore synchronize large-scale models sufficiently fast. We show that the overall algorithm has optimal complexity when rule matching is the dominating factor, and we present extensive measurements that demonstrate the efficiency of the presented model transformation and synchronization technique.
Modern acquisition of seismic data on receiver networks worldwide produces an increasing amount of continuous wavefield recordings. Hence, in addition to manual data inspection, seismogram interpretation requires new processing utilities for event detection, signal classification and data visualization. Various machine learning algorithms, which can be adapted to seismological problems, have been suggested in the field of pattern recognition. This can be done either by means of supervised learning using manually defined training data, or by unsupervised clustering and visualization. The latter allows the recognition of wavefield patterns, such as short-term transients and long-term variations, with a minimum of domain knowledge. Besides classical earthquake seismology, investigations of temporal patterns in seismic data also concern novel approaches such as noise cross-correlation or ambient seismic vibration analysis in general, which have moved into focus within the last decade. In order to find records suitable for the respective approach, or simply for quality control, unsupervised preprocessing becomes important and valuable for large data sets. Machine learning techniques require the parametrization of the data by feature vectors. Applied to seismic recordings, wavefield properties have to be computed from the raw seismograms. For an unsupervised approach, all potential wavefield features have to be considered to reduce subjectivity to a minimum. Furthermore, automatic dimensionality reduction, i.e. feature selection, is required in order to decrease computational cost, enhance interpretability and improve discriminative power. This study presents an unsupervised feature selection and learning approach for the discovery, imaging and interpretation of significant temporal patterns in seismic single-station or network recordings. In particular, techniques permitting an intuitive, quickly interpretable and concise overview of the available records are suggested. For this purpose, the data are parametrized by real-valued feature vectors for short time windows, using standard seismic analysis tools such as frequency-wavenumber, polarization, and spectral analysis as feature generation methods. The choice of the time window length depends on the expected durations of the patterns to be recognized or discriminated. We use Self-Organizing Maps (SOMs) for a data-driven feature selection, visualization and clustering procedure, which is particularly suitable for high-dimensional data sets. Using synthetics composed of Rayleigh and Love waves and three different types of real-world data sets, we show the robustness and reliability of our unsupervised learning approach with respect to the effects of algorithm parameters and data set properties. Furthermore, we demonstrate the capability of the clustering and imaging techniques. For all data, we find improved discriminative power of our feature selection procedure compared to feature subsets manually selected from individual wavefield parametrization methods. In particular, enhanced performance is observed compared to the most favorable individual feature generation method, which is found to be the frequency spectrum. The method is applied to regional earthquake records from the European Broadband Network with the aim of defining suitable features for earthquake detection and seismic phase classification. For the latter, we find that a combination of spectral and polarization features favors S wave detection at a single receiver.
However, SOM-based visualization of the phase discrimination shows that clustering applied to the records of two stations only allows onset or P wave detection, respectively. In order to improve the discrimination of S waves on receiver networks, we recommend additionally considering the temporal context of the feature vectors. The application to continuous recordings of seismicity close to an active volcano (Mount Merapi, Java, Indonesia) shows that two typical volcano-seismic event types (VTB and Guguran) can be detected and distinguished by clustering, whereas so-called MP events cannot be discriminated. Regarding the selected features and the recognition rates, the results are comparable to those of a previously implemented supervised classification system. Finally, we test the ability of wavefield clustering to improve common ambient vibration analysis methods such as the estimation of dispersion curves and horizontal-to-vertical spectral ratios. It is found that, in general, the identified short- and long-term patterns have no significant impact on these estimates. However, for individual sites, effects of local sources can be identified; leaving out the corresponding clusters yields reduced uncertainties or improved estimates of the dispersion curves.
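The general workflow can be sketched as follows (an illustrative Python sketch, not the implementation used in this study: the simple log band-power features stand in for the frequency-wavenumber, polarization and spectral analyses described above, and the third-party MiniSom package replaces the SOM code actually used):

import numpy as np
from scipy.signal import welch
from minisom import MiniSom

def spectral_features(window, fs, n_bands=8):
    """Log band powers of one time window as a feature vector."""
    freqs, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    bands = np.array_split(psd, n_bands)
    return np.log([band.mean() + 1e-12 for band in bands])

# Synthetic continuous record: noise with one embedded transient signal.
fs, rng = 100.0, np.random.default_rng(0)
trace = rng.normal(size=100_000)
trace[50_000:50_500] += 5 * np.sin(2 * np.pi * 10 * np.arange(500) / fs)

win = 1_000   # 10 s windows at 100 Hz sampling
X = np.array([spectral_features(trace[i:i + win], fs)
              for i in range(0, len(trace) - win, win)])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the features

# Train a small SOM and assign each time window to its best-matching node;
# windows mapped to the same node form one wavefield pattern cluster.
som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)
clusters = [som.winner(x) for x in X]

On the real data sets, feature selection would additionally rank and prune the generated features before training, and the resulting map can be colored by cluster to obtain the kind of concise visual overview of a continuous record described above.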