User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstorming sessions, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving content and sharing it with others who cannot be in the same location. They also lack digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for extended periods of time in real-world contexts. The goal of this thesis is to investigate the enablers of and obstacles to the adoption of digital whiteboard systems. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from former research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I will present three studies on the use of Tele-Board with different user groups and foci. I will use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases it is not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer.
However, I found that these perceived benefits are very different for each user and usage context. If a tool can be used in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries, with a constantly growing user base. Its use, advantages, and disadvantages will be described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I will present a detailed analysis of digital whiteboard use in different contexts, with design implications for future systems.
HPI Future SOC Lab
(2013)
The “HPI Future SOC Lab” is a cooperation of the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners. The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies. This technical report presents the results of research projects executed in 2012. Selected projects presented their results on June 18 and November 26, 2012, at the Future SOC Lab Day events.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on the unifiability of predicates and is also able to use a linguistic approach for the selection. The technique aims to reduce the set of formulae and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation, and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes, and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it; it can be used for higher-order and modal logic, too, with minimal adaptations. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results and of the benchmarks with the problems of the 2012 CASC (CASC-J6) shows that the concept of the system has a positive impact on the performance of automated theorem provers. Also, the benchmarks with two different theorem provers which use different calculi have shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven competitive to some extent with the concept of SinE and even helped one of the theorem provers to solve problems that were not solved (or were solved more slowly) with SinE selection in the CASC.
Finally, the evaluation implies that the combination of the unification-based and linguistic selection yields further improved results, even though no optimisation was done for the problems.
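The linguistic selection sketched above ranks formulae by lexeme frequency with tf-idf weighting. A minimal illustration of that idea follows; this is not the actual TEMPLAR implementation, and the tokenisation into lexeme lists and the cosine ranking are assumptions made for the sketch:

```python
import math
from collections import Counter

def tfidf_relevance(formulae, conjecture):
    """Rank axiom formulae by tf-idf similarity to the conjecture.

    Each formula is given as a list of lexemes (tokens); a real system
    would extract these from predicate and function symbols.
    """
    docs = formulae + [conjecture]
    n = len(docs)
    # document frequency of each lexeme across all formulae + conjecture
    df = Counter(lex for doc in docs for lex in set(doc))

    def vector(doc):
        tf = Counter(doc)
        return {lex: (count / len(doc)) * math.log(n / df[lex])
                for lex, count in tf.items()}

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    cvec = vector(conjecture)
    scores = [(cosine(vector(f), cvec), i) for i, f in enumerate(formulae)]
    # highest-scoring formulae are considered most relevant
    return [i for score, i in sorted(scores, reverse=True)]

axioms = [["subset", "member", "set"],
          ["group", "inverse", "identity"],
          ["member", "union", "set"]]
conj = ["member", "subset"]
ranking = tfidf_relevance(axioms, conj)  # axiom 0 shares most rare lexemes
```

A preprocessor would then pass only the top-ranked formulae to the prover, shrinking the search space.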
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents in order to understand more about how certain information sources play a role during pronoun resolution. I consider how both structural information and information provided by the prior discourse are used during online processing. This is investigated through several eye-tracking-during-reading experiments that are complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models: some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent’s features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008), based on work by Reuland (2001), which is a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference.
I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then go on to explore whether these findings could be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role the factor of recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al. 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent’s appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages, and I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model of pronoun resolution; however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
LCST-type synthetic thermoresponsive polymers can reversibly respond to certain stimuli in aqueous media with a massive change of their physical state. When fluorophores that are sensitive to such changes are incorporated into the polymeric structure, the response can be translated into a fluorescence signal. Based on this idea, this thesis presents sensing schemes which transduce the stimuli-induced variations in the solubility of polymer chains with covalently bound fluorophores into a well-detectable fluorescence output. Benefiting from the principles of different photophysical phenomena, i.e. fluorescence resonance energy transfer and solvatochromism, such fluorescent copolymers enabled monitoring of stimuli such as solution temperature and ionic strength, but also of association/disassociation mechanisms with other macromolecules or of biochemical binding events, through remarkable changes in their fluorescence properties. For instance, an aqueous ratiometric dual sensor for temperature and salts was developed, relying on the delicate supramolecular assembly of a thermoresponsive copolymer with a thiophene-based conjugated polyelectrolyte. Alternatively, by taking advantage of the sensitivity of solvatochromic fluorophores, an increase in solution temperature or the presence of analytes was signaled as an enhancement of the fluorescence intensity. A simultaneous use of the sensitivity of the chains towards temperature and a specific antibody allowed monitoring of more complex phenomena such as competitive binding of analytes. The use of different thermoresponsive polymers, namely poly(N-isopropylacrylamide) and poly(meth)acrylates bearing oligo(ethylene glycol) side chains, revealed that the responsive polymers differed widely in their ability to perform a particular sensing function.
In order to address questions regarding the impact of the chemical structure of the host polymer on the sensing performance, the macromolecular assembly behavior below and above the phase transition temperature was evaluated by a combination of fluorescence and light scattering methods. It was found that although the temperature-triggered changes in the macroscopic absorption characteristics were similar for these polymers, properties such as the degree of hydration or the extent of interchain aggregation differed substantially. Therefore, in addition to demonstrating strategies for fluorescence-based sensing with thermoresponsive polymers, this work highlights the influence of the chemical structure of these two popular thermoresponsive polymers on the fluorescence response. The results are fundamentally important for the rational choice of polymeric materials for a specific sensing strategy.
This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of the discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's Discourse Representation Theory (1993). For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms used were J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement of classification quality.
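The classification setup described above can be illustrated with a toy linear classifier. The sketch below uses a perceptron as a stand-in for the linear SVMs mentioned in the abstract, and the feature vectors (definiteness, prior mention of the head, pronominality, a specificity score) and labels are invented for illustration, not taken from the thesis or its corpora:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a linear classifier for discourse-givenness.

    samples: one feature vector per noun phrase; labels: 1 = given, 0 = new.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:  # update weights only on misclassification
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features per NP: [definite article, head mentioned before,
# is pronoun, specificity score]
X = [[1, 1, 0, 0.2], [0, 0, 0, 0.9], [1, 1, 1, 0.1], [0, 0, 0, 0.8],
     [1, 1, 0, 0.3], [0, 0, 0, 0.7]]
y = [1, 0, 1, 0, 1, 0]
w, b = train_perceptron(X, y)
```

In the thesis the analogous vectors are extracted from annotated corpora and fed to J48, Ripper, or an SVM; the point here is only the shape of the feature-to-label mapping.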
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots and of cortex cells of non-inoculated roots (cor) were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that even nac cells seem to be prepared for the upcoming fungal colonization. The mycorrhizal- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares a high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) revealed a successful gene silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots.
The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic, entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the contextual information of video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation.
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic exploration of videos.
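The confidence-ordered disambiguation strategy described above can be sketched in a few lines. This is a simplified illustration, not the thesis's actual algorithm: the candidate lookup, the summed-relatedness score, and the toy knowledge base are all assumptions:

```python
def disambiguate_metadata(items, candidates_for, relatedness):
    """Disambiguate video metadata items in descending confidence order.

    items: list of (text, confidence) pairs.
    candidates_for(text) -> list of candidate entity ids.
    relatedness(entity, context_entity) -> float in [0, 1].
    Entities resolved early (high confidence) serve as context for
    later, less confident items.
    """
    resolved = []  # entities chosen so far, used as reference points
    result = {}
    for text, conf in sorted(items, key=lambda it: it[1], reverse=True):
        candidates = candidates_for(text)
        if not candidates:
            continue  # item stays unannotated
        if not resolved:
            best = candidates[0]  # no context yet: take the top candidate
        else:
            best = max(candidates, key=lambda e: sum(
                relatedness(e, c) for c in resolved))
        resolved.append(best)
        result[text] = best
    return result

# Toy knowledge: "jaguar" is ambiguous, "animal documentary" is not.
catalog = {"jaguar": ["Jaguar_(animal)", "Jaguar_(car)"],
           "animal documentary": ["Animal_documentary"]}
related = {("Jaguar_(animal)", "Animal_documentary"): 0.9,
           ("Jaguar_(car)", "Animal_documentary"): 0.1}

res = disambiguate_metadata(
    [("jaguar", 0.4), ("animal documentary", 0.9)],
    lambda t: catalog.get(t, []),
    lambda a, b: related.get((a, b), related.get((b, a), 0.0)))
```

Because the unambiguous, high-confidence item is resolved first, it pulls the ambiguous "jaguar" towards the animal reading, which mirrors the hypothesis that processing order should follow confidence.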
Passive plant actuators have fascinated researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), where stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case of the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms by up to four times in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start off, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but "follows" the structure as it deforms), it results in the cellular structure acquiring a "spontaneous" shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure itself. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic, non-convex, re-entrant tilings, large expansions can be achieved in each individual cell.
The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls' Young's modulus E and the internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. Therefore, the potential of these pressurized structures as soft actuators is outlined. This approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offer a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells tend to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local connectivity of the cells. This has repercussions at both the macroscopic (lattice) and microscopic (cell) scales. At the macroscopic scale, these non-convex lattices can experience large anisotropic (similarly to the diamond-shaped honeycomb) or perfectly isotropic principal expansions, large shearing deformations, or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape.
Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they either build up or mutually compensate and hence give rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics or morphing structures.
Measuring the metabolite profile of plants can be a strong phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbation of the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on profiling of primary metabolites by gas chromatography coupled to mass spectrometry (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. In order to allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is done to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette in different developmental stages, both metabolically and regarding carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably.
In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different metabolite pool size pattern across single leaves of one Arabidopsis rosette compared to plants grown under normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
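The "simple relative calculation" mentioned above is not spelled out in the abstract. One common way of expressing carbon allocation from a 13C-labelled precursor is the relative enrichment of heavier mass isotopomers; the formula and the example intensities below are therefore illustrative assumptions, not the method actually used in the work:

```python
def relative_13c_enrichment(intensities):
    """Fractional 13C label in a metabolite from mass isotopomer
    intensities.

    intensities[i] is the measured abundance of the isotopologue with
    i heavy carbons (M+0, M+1, ..., M+n). The enrichment is the
    abundance-weighted fraction of labelled carbon positions.
    """
    n = len(intensities) - 1            # number of carbon atoms
    total = sum(intensities)
    if total == 0 or n == 0:
        return 0.0
    labelled = sum(i * x for i, x in enumerate(intensities))
    return labelled / (n * total)

# Example: a 3-carbon metabolite, mostly unlabelled with some M+3
spectrum = [700.0, 50.0, 50.0, 200.0]   # M+0 .. M+3 abundances
enrichment = relative_13c_enrichment(spectrum)  # 0.25 for this spectrum
```

Comparing such relative enrichments across leaves (rather than raw pool sizes) is what allows pool size and 13C allocation to be assessed independently, as done for fumaric acid above.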
The habilitation thesis covers theoretical investigations of light-induced processes in molecules. The study focuses on changes of the molecular electronic structure and geometry, caused either by photoexcitation in the course of a spectroscopic analysis, or by selective control with shaped laser pulses. The applied and developed methods are predominantly based on quantum chemistry as well as on electron and nuclear quantum dynamics, and in part on molecular dynamics. The studied scientific problems deal with stereoisomerism and the question of how to either switch or distinguish chiral molecules using laser pulses, and with the essentials for simulating the spectroscopic response of biochromophores in order to unravel their photophysics. The accomplished findings not only explain experimental results and extend existing approaches, but also contribute significantly to the basic understanding of the investigated light-driven molecular processes. The main achievements can be divided into three parts. First, a quantum theory for enantio- and diastereoselective or, in general, stereoselective laser pulse control was developed and successfully applied to influence the chirality of molecular switches. The proposed axially chiral molecules possess different numbers of "switchable" stable chiral conformations, with one particular switch even featuring a truly achiral "off" state, which makes it possible to enantioselectively "turn on" its chirality. Furthermore, surface-mounted chiral molecular switches with several well-defined orientations were treated, where a newly devised, highly flexible stochastic pulse optimization technique provides high stereoselectivity and efficiency at the same time, even for coupled chirality-changing degrees of freedom.
Despite the model character of these studies, the proposed types of chiral molecular switches and, all the more, the developed basic concepts are generally applicable to the design of laser-pulse-controlled catalysts for asymmetric synthesis, or to achieving selective changes in the chirality of liquid crystals or in chiroptical nanodevices, implementable in information processing or as data storage. Second, laser-driven electron wavepacket dynamics based on ab initio calculations, namely time-dependent configuration interaction, was extended by the explicit inclusion of magnetic field-magnetic dipole interactions for the simulation of the qualitative and quantitative distinction of enantiomers in mass spectrometry by means of circularly polarized ultrashort laser pulses. The developed approach makes it possible not only to explain the origin of the experimentally observed influence of the pulse duration on the detected circular dichroism in the ion yield, but also to predict laser pulse parameters for an optimal distinction of enantiomers by ultrashort shaped laser pulses. Moreover, these investigations, in combination with the previous ones, provide a fundamental understanding of the relevance of electric and magnetic interactions between linearly or non-linearly polarized laser pulses and (pro-)chiral molecules, for either control by enantioselective excitation or distinction by enantiospecific excitation. Third, for selected light-sensitive biological systems of central importance, such as antenna complexes of photosynthesis, simulations of the processes which take place during and after photoexcitation of their chromophores were performed, in order to explain experimental (spectroscopic) findings as well as to understand the underlying photophysical and photochemical principles.
In particular, aspects of normal-mode mixing due to geometrical changes upon photoexcitation, and their impact on (time-dependent) vibronic and resonance Raman spectra as well as on intramolecular energy redistribution, were addressed. In order to explain unresolved experimental findings, a simulation program for the calculation of vibronic and resonance Raman spectra, accounting for changes in both vibrational frequencies and normal modes, was created based on a time-dependent formalism. In addition, the influence of the biochemical environment on the electronic structure of the chromophores was studied via electrostatic interactions and mechanical embedding using hybrid quantum-classical methods. Environmental effects were found to be important, in particular for the excitonic coupling of chromophores in light-harvesting complex II. Although simulations for such highly complex systems are still restricted by various approximations, the improved approaches and obtained results have proven to be important contributions to a better understanding of light-induced processes in biosystems, which also supports efforts toward their artificial reproduction.
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared with a template process, in which, as a first step, monodisperse silica particles were vertically deposited onto glass slides. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. As a second step, the template was embedded in a matrix consisting of biocompatible, thermoresponsive hydrogels. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystals as a template. The space in between the template particles was filled with the monomer solution, and the hydrogel was cured via UV polymerisation. The particles were then chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems were denoted inverse hydrogel opals. A pore diameter of several hundred nanometres, as well as interconnections between the pores, should facilitate the diffusion of larger (bio)molecules, which until now has been a challenge in such systems. The copolymer composition was chosen to result in a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a coupling group for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin, as a recognition unit for avidin, was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay.
When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, thereby visualising the binding event. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the distance between the crystal planes, which determines the colour of the reflection. These findings open up the possibility of creating sensor materials for further biomolecules in the size range of avidin.
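The colour change described above can be rationalised with the standard Bragg-Snell relation for opal films (a textbook relation, not quoted from the thesis): the peak reflection wavelength scales with the spacing of the crystal planes, so any swelling or shrinking of the hydrogel that changes the plane spacing shifts the reflection.

```latex
% Standard Bragg-Snell relation for opal films (illustrative):
% d_{111} = spacing of the (111) crystal planes,
% n_eff   = effective refractive index, theta = angle of incidence.
\lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}
```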
The Prussian geologist Leopold von Buch was a lifelong friend of Alexander von Humboldt and had a significant influence on Humboldt’s geological ideas. In a talk held in Berlin in 1831, which is published here for the first time, von Buch presented the Duria Antiquior of 1830 by the English geologist Henry De La Beche. The Duria Antiquior is widely regarded as the earliest depiction of a scene of prehistoric life from deep time. The print raised new questions about the processes of geohistorical change. The talk reveals that Leopold von Buch was a true scientist of the Romantic Age. His descriptions of geohistorical transformations of organisms draw on pictorial examples from the classical literature. The talk also illustrates how influential English geologists were for geohistorical reconstructions in Germany.
The goal of this work was to explore two different synthesis pathways using green chemistry. The first part of this thesis focuses on the urea-glass route towards single-phase manganese nitride and manganese nitride/oxide nano-composites embedded in carbon, while the second part focuses on the “saccharide route” (using cellulose, sucrose, glucose and lignin) towards metal (Ni0), metal alloy (Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5, Cu0.5Ni0.5 and W0.15Ni0.85) and ternary carbide (Mn0.75Fe2.25C) nanoparticles embedded in carbon. With battery applications in mind, MnN0.43 nanoparticles surrounded by a graphitic shell and embedded in carbon with a high surface area (79 m^2/g) were synthesized, following a previously established route. A comparison of the material characteristics before and after discharge showed no remarkable difference in composition and only slight morphological differences, meaning the particles are stable but agglomerate. The graphitic shell contributes to the resistance of the material and leads to good cyclic stability of 230 mAh/g over 140 cycles after the first charge/discharge, with coulombic efficiencies close to 100%. Owing to the low voltage versus Li/Li+ and the low polarization, it might be an attractive anode material for lithium-ion batteries. However, the capacity is still noticeably lower than the theoretical value for MnN0.43. A mixture of MnN0.43 and MnO nanoparticles embedded in carbon (surface area 93 m^2/g) improved the cyclic stability to over 160 cycles with a capacity of 811 mAh/g, considerably higher than that of the conventional anode material graphite (372 mAh/g). This nano-composite seems to agglomerate less during discharge.
Interestingly, although the capacity is much higher than that of the single-phase manganese nitride, the nano-composite seems to contain only MnN0.43 nanoparticles after discharge, with no oxide phase to be found. For catalysis applications, different metal, metal alloy, and metal carbide nanoparticles were synthesized using the saccharide route. First, systems that had been investigated before, namely Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5 and Mn0.75Fe2.25C with cellulose as the carbon source, were prepared and tested in an alkylation reaction of toluene with benzyl chloride. Unexpectedly, the metal alloys did not show any catalytic activity, but the ternary carbide Mn0.75Fe2.25C showed good catalytic activity, with 98% conversion after a reaction time of 9 hours (110 °C). In a second step, the saccharide route was modified towards other carbon sources and carbon-to-metal ratios in order to improve the homogeneity of the samples and the accessibility of the particle surfaces. The carbon sources sucrose and glucose share the basic carbohydrate structure of cellulose but have shorter (polymeric) chain lengths. Indeed, cellulose could be successfully replaced by sucrose and glucose. A lower carbon-to-metal ratio was found to influence the size, homogeneity and accessibility (as evidenced by TEM) of the samples. Since sucrose is a foodstuff, glucose is the better choice as a carbon source. Using glucose, the synthesis of Cu0.5Ni0.5 and W0.15Ni0.85 nano-composites was also possible, although the latter was never obtained as a pure phase. These alloy nano-composites were tested, along with Ni0 nanoparticles also prepared with glucose, for their catalytic activity in the reduction of phenylacetylene. The results obtained suggest that any (poly)saccharide, including lignin, could be used as a carbon source.
The Ni0 nano-composites prepared with lignin as a carbon source were tested, along with those prepared with cellulose and sucrose, for their catalytic activity in the transfer hydrogenation of nitrobenzene (the results were compared with bare nickel nanoparticles and nickel supported on carbon), leading to very promising results. Based on the urea-glass route and the saccharide route, simple equipment and transition metals, a one-pot synthesis with scale-up potential was possible, towards new materials that can be applied in catalysis and battery systems.
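The theoretical capacities quoted above follow from Faraday's law; as a check, the standard relation (a textbook formula, not taken from the thesis) reproduces the quoted 372 mAh/g for graphite, which stores one Li per C6 unit (M ≈ 72.06 g/mol):

```latex
% Theoretical gravimetric capacity from Faraday's law:
% n = electrons transferred per formula unit, F = Faraday constant,
% M = molar mass of the host formula unit, factor 3.6 converts C/g to mAh/g.
Q_{\mathrm{theo}} = \frac{n\,F}{3.6\,M}
\quad\Rightarrow\quad
Q_{\mathrm{LiC_6}} = \frac{1 \times 96485\ \mathrm{C\,mol^{-1}}}
                          {3.6 \times 72.06\ \mathrm{g\,mol^{-1}}}
\approx 372\ \mathrm{mAh\,g^{-1}}
```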
The sharply rising level of atmospheric carbon dioxide resulting from anthropogenic emissions is one of the greatest environmental concerns facing our civilization today. Metal-organic frameworks (MOFs) are a new class of materials constructed from metal-containing nodes bonded to organic bridging ligands. MOFs could serve as an ideal platform for the development of next-generation CO2 capture materials owing to their large capacity for the adsorption of gases and their structural and chemical tunability. The ability to rationally select the framework components is expected to allow the affinity of the internal pore surface toward CO2 to be precisely controlled, facilitating materials properties that are optimized for the specific type of CO2 capture to be performed (post-combustion capture, pre-combustion capture, or oxy-fuel combustion) and potentially even for the specific power plant in which the capture system is to be installed. For this reason, significant effort has been made in recent years to improve the gas separation performance of MOFs, and some studies evaluating the prospects of deploying these materials in real-world CO2 capture systems have begun to emerge. We have synthesized, characterized and applied for gas storage six new MOFs, denoted IFPs (IFP-5, -6, -7, -8, -9, -10; IFP = Imidazolate Framework Potsdam), and two hydrogen-bonded molecular building blocks (MBBs; 1 and 2, Zn- and Co-based, respectively). The IFP structures possess 1D hexagonal channels. The metal centre and the substituent at the C2 position of the linker protrude into the open channels and determine their accessible diameter. Interestingly, the channel diameters (range: 0.3 to 5.2 Å) of the IFP structures are tuned by the metal centre (Zn, Co and Cd) and the substituent at the C2 position of the imidazolate linker. Moreover, the hydrogen-bonded MBBs 1 and 2 are formed by in situ functionalization of a ligand under solvothermal conditions.
Two different types of channels are observed for 1 and 2. Both materials contain solvent-accessible void space; the solvent could easily be removed under high vacuum, and the porous frameworks maintained their crystalline integrity even without solvent molecules. N2, H2, CO2 and CH4 gas sorption isotherms were measured. The gas uptake capacities are comparable with those of other frameworks and are reduced when the channel diameter is narrow. For example, the channel diameter of IFP-5 (3.8 Å) is slightly smaller than that of IFP-1 (4.2 Å); hence, its gas uptake capacity and Brunauer-Emmett-Teller (BET) surface area are slightly lower than those of IFP-1. The selectivity depends not only on the size of the gas components (kinetic diameter: CO2 3.3 Å, N2 3.6 Å and CH4 3.8 Å) but also on the polarizability of the surface and of the gas components. IFP-5 and -6 have potential applications for the separation of CO2 and CH4 from N2-containing gas mixtures and from gas mixtures containing CO2 and CH4. The gas sorption isotherms of IFP-7, -8, -9 and -10 exhibited hysteretic behavior due to flexible alkoxy (e.g., methoxy and ethoxy) substituents. Such a phenomenon is a kind of gate effect that is rarely observed in microporous MOFs. IFP-7 (Zn-centred) has a flexible methoxy substituent; this is the first example in which a flexible methoxy substituent shows gate-opening behavior in a MOF. Owing to the presence of methoxy groups in the hexagonal channels, IFP-7 acted as a molecular gate for N2 gas, and the polar methoxy groups and channel walls gave rise to a wide hysteretic isotherm during gas uptake. The estimated BET surface area for 1 is 471 m2 g-1 and the Langmuir surface area is 570 m2 g-1. This surface area is slightly higher than those of azolate-based hydrogen-bonded supramolecular assemblies, and comparable to or higher than those of some hydrogen-bonded porous organic molecules.
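The quoted BET surface areas are obtained by fitting the linearised BET equation to the low-pressure region of the N2 sorption isotherm (the standard form is shown here for reference; it is not specific to this thesis):

```latex
% Linearised BET equation (standard form):
% v   = adsorbed gas volume at relative pressure p/p_0,
% v_m = monolayer volume, c = BET constant.
\frac{1}{v\left[(p_0/p) - 1\right]}
  = \frac{c-1}{v_m c}\,\frac{p}{p_0} + \frac{1}{v_m c}
% The surface area then follows from v_m, the adsorption
% cross-section of N2, and Avogadro's number.
```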
We consider infinite-dimensional diffusions where the interaction between the coordinates has a finite extent both in space and time. In particular, it is not assumed to be smooth or Markov. The initial state of the system is Gibbsian, given by a strongly summable interaction. If the strength of this initial interaction is below a suitable level, and if the dynamical interaction is suitably bounded from above, we prove that the law of the diffusion at any time t is a Gibbs measure with an absolutely summable interaction. The main tool is a cluster expansion in space, uniform in time, of the Girsanov factor coming from the dynamics, together with the exponential ergodicity of the free dynamics towards an equilibrium product measure.
Within the course of this thesis, I have investigated the complex interplay between electron and lattice dynamics in nanostructures of perovskite oxides. Femtosecond hard X-ray pulses were utilized to probe the evolution of atomic rearrangement directly, which is driven by ultrafast optical excitation of electrons. The physics of complex materials with a large number of degrees of freedom can be interpreted once the exact fingerprint of ultrafast lattice dynamics in time-resolved X-ray diffraction experiments for a simple model system is well known. The motion of atoms in a crystal can be probed directly and in real-time by femtosecond pulses of hard X-ray radiation in a pump-probe scheme. In order to provide such ultrashort X-ray pulses, I have built up a laser-driven plasma X-ray source. The setup was extended by a stable goniometer, a two-dimensional X-ray detector and a cryogen-free cryostat. The data acquisition routines of the diffractometer for these ultrafast X-ray diffraction experiments were further improved in terms of signal-to-noise ratio and angular resolution. The implementation of a high-speed reciprocal-space mapping technique allowed for a two-dimensional structural analysis with femtosecond temporal resolution. I have studied the ultrafast lattice dynamics, namely the excitation and propagation of coherent phonons, in photoexcited thin films and superlattice structures of the metallic perovskite SrRuO3. Due to the quasi-instantaneous coupling of the lattice to the optically excited electrons in this material a spatially and temporally well-defined thermal stress profile is generated in SrRuO3. This enables understanding the effect of the resulting coherent lattice dynamics in time-resolved X-ray diffraction data in great detail, e.g. the appearance of a transient Bragg peak splitting in both thin films and superlattice structures of SrRuO3. 
In addition, a comprehensive simulation toolbox to calculate the ultrafast lattice dynamics and the resulting X-ray diffraction response in photoexcited one-dimensional crystalline structures was developed in this thesis work. With the powerful experimental and theoretical framework at hand, I have studied the excitation and propagation of coherent phonons in more complex material systems. In particular, I have revealed strongly localized charge carriers after above-bandgap femtosecond photoexcitation of the prototypical multiferroic BiFeO3, which are the origin of a quasi-instantaneous and spatially inhomogeneous stress that drives coherent phonons in a thin film of the multiferroic. In a structurally imperfect thin film of the ferroelectric Pb(Zr0.2Ti0.8)O3, the ultrafast reciprocal-space mapping technique was applied to follow a purely strain-induced change of mosaicity on a picosecond time scale. These results point to a strong coupling of in- and out-of-plane atomic motion exclusively mediated by structural defects.
1. Motivation and introduction
2. International asset allocation
   2.1 Risk and return drivers in international asset allocation
   2.2 Passive and active investment approaches
   2.3 Is international diversification advantageous?
3. Case
4. Interaction levels of the exchange rate dimension
   4.1 Role of the reference currency
   4.2 Decision on hedging exchange rate risks
   4.3 Role of the investment currency
   4.4 Role of the investment claim
5. Conclusion
1. Introduction of China’s bank reform
   1.1 Stage 1 (1978–1993): Rebuilding the financial system
   1.2 Stage 2 (1994–1997): Regulating the financial system
   1.3 Stage 3 (1998–2002): Deepening reform of state-owned commercial banks
   1.4 Stage 4 (2003–present): Public listing of state-owned banks
2. The roles of SWF in China’s bank reform
3. Future challenges
1. Introduction
2. The role of banks and what is different in banks?
3. Corporate governance and risk management
4. Risk taking and executive board composition
5. Compensation structures – how to improve models for banks?
6. Banking supervision and regulation
7. Reform of European institutions for financial stability
1. Introduction
2. The architecture of financial market regulation in Europe prior to the crisis
3. The new architecture of financial market regulation in Europe
4. Current issues of the political discussion on further needs to adapt the regulation and the structure of the financial markets in Europe
5. A brief summary
1. Porter strategic competitive analysis
2. A Porter analysis of the competitive advantage of banks in business lending and proprietary trading
3. Summary: competitive advantage of banks in business lending and proprietary trading
4. JPMorgan’s “London Whale” speculation
5. A common misapprehension about hedged positions in corporate debt
6. Conclusion
Banking System in Russia (2013)
1. Introduction
2. The growth of China’s SMBs and the changes of the banking market structure – a land of small- and medium-sized companies
   2.1 The characteristics of China’s banking market structure
   2.2 The growth of China’s SMBs
   2.3 The changes of China’s banking market structure
3. The opportunities and challenges facing SMBs in China
   3.1 Opportunities
   3.2 Challenges
4. Conclusion
1. Introduction
2. Analysis of the implementation of Basel III in China
   2.1 Implementation of capital adequacy rules
   2.2 Implementation of leverage ratio rules
   2.3 Implementation of liquidity management rules
3. Suggestions for further development of China’s banking industry
   3.1 Promoting capital structure adjustment and broadening capital supplement channels
   3.2 Transforming business models and developing intermediary and off-balance-sheet business
   3.3 Increasing the intensity of risk management and refining its standards
1. Abstract
2. Introduction to the main monetary policy tools in China
   2.1 Reserve requirements
   2.2 Open market operations
   2.3 Interest rate policy
   2.4 Credit policy and window guidance
   2.5 Real estate credit control
3. Loosening monetary policy and its effect on banking
   3.1 Loosening monetary policy measures
   3.2 The effect of the expansionary monetary policy on banking
4. Sound monetary policy with a tight trend and its effect on banking
   4.1 Main measures of the sound monetary policy with a tight trend
   4.2 The effect of the sound monetary policy with a tight trend on banking
5. Conclusion
The German Banking System (2013)
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference, due to consolidation in the cloud environment, complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side approach that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scale thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement. To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim.
In the simulator, we implemented scalability components mirroring those of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource provisioning overhead with only a 9% increase in cost.
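To illustrate the kind of threshold-based proactive scaling evaluated above, the following is a minimal sketch. The moving-average forecaster and the threshold values are placeholders for illustration only; they are not the dissertation's actual forecasting algorithm or its optimized thresholds.

```python
# Illustrative sketch of threshold-based proactive horizontal scaling.
# The forecaster is a simple moving average stand-in; the thesis
# proposes its own forecasting algorithm and a multi-goal threshold
# optimization, which are not reproduced here.

def forecast_next(history, window=3):
    """Predict next-interval utilization as a moving average."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scale_decision(history, vms, scale_out=0.75, scale_in=0.30):
    """Return the new VM count based on the *predicted* utilization,
    so that capacity is added before the workload spike arrives."""
    predicted = forecast_next(history)
    if predicted > scale_out:
        return vms + 1          # provision ahead of the spike
    if predicted < scale_in and vms > 1:
        return vms - 1          # release idle capacity to save cost
    return vms

# Example: rising utilization triggers a proactive scale-out.
util = [0.60, 0.80, 0.95]       # last three measured CPU utilizations
print(scale_decision(util, vms=2))  # 3
```

A reactive policy would act only after the measured utilization crosses the threshold; acting on the forecast instead shifts the provisioning overhead ahead of the demand.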
Lakes are increasingly being recognized as an important component of the global carbon cycle, yet anthropogenic activities that alter their community structure may change the way they transport and process carbon. This research focuses on the relationship between carbon cycling and community structure of primary producers in small, shallow lakes, which are the most abundant lake type in the world, and furthermore subject to intense terrestrial-aquatic coupling due to their high perimeter:area ratio. Shifts between macrophyte and phytoplankton dominance are widespread and common in shallow lakes, with potentially large consequences to regional carbon cycling. I thus compared a lake with clear-water conditions and a submerged macrophyte community to a turbid, phytoplankton-dominated lake, describing differences in the availability, processing, and export of organic and inorganic carbon. I furthermore examined the effects of increasing terrestrial carbon inputs on internal carbon cycling processes. Pelagic diel (24-hour) oxygen curves and independent fluorometric approaches of individual primary producers together indicated that the presence of a submerged macrophyte community facilitated higher annual rates of gross primary production than could be supported in a phytoplankton-dominated lake at similar nutrient concentrations. A simple model constructed from the empirical data suggested that this difference between regime types could be common in moderately eutrophic lakes with mean depths under three to four meters, where benthic primary production is a potentially major contributor to the whole-lake primary production. It thus appears likely that a regime shift from macrophyte to phytoplankton dominance in shallow lakes would typically decrease the quantity of autochthonous organic carbon available to lake food webs. 
Sediment core analyses indicated that a regime shift from macrophyte to phytoplankton dominance was associated with a four-fold increase in carbon burial rates, signalling a major change in lake carbon cycling dynamics. Carbon mass balances suggested that increasing carbon burial rates were not due to an increase in primary production or allochthonous loading, but instead were due to a higher carbon burial efficiency (carbon burial / carbon deposition). This, in turn, was associated with diminished benthic mineralization rates and an increase in calcite precipitation, together resulting in lower surface carbon dioxide emissions. Finally, a period of unusually high precipitation led to rising water levels, resulting in a feedback loop linking increasing concentrations of dissolved organic carbon (DOC) to severely anoxic conditions in the phytoplankton-dominated system. High water levels and DOC concentrations diminished benthic primary production (via shading) and boosted pelagic respiration rates, diminishing the hypolimnetic oxygen supply. The resulting anoxia created redox conditions which led to a major release of nutrients, DOC, and iron from the sediments. This further transformed the lake metabolism, providing a prolonged summertime anoxia below a water depth of 1 m, and leading to the near-complete loss of fish and macroinvertebrates. Pelagic pH levels also decreased significantly, increasing surface carbon dioxide emissions by an order of magnitude compared to previous years. Altogether, this thesis adds an important body of knowledge to our understanding of the significance of the benthic zone to carbon cycling in shallow lakes. The contribution of the benthic zone towards whole-lake primary production was quantified, and was identified as an important but vulnerable site for primary production. 
Benthic mineralization rates were furthermore found to influence carbon burial and surface emission rates, and benthic primary productivity played an important role in determining hypolimnetic oxygen availability, thus controlling the internal sediment loading of nutrients and carbon. This thesis also uniquely demonstrates that the ecological community structure (i.e. stable regime) of a eutrophic, shallow lake can significantly influence carbon availability and processing. By changing carbon cycling pathways, regime shifts in shallow lakes may significantly alter the role of these ecosystems with respect to the global carbon cycle.
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user. Data privacy continues to be a very important topic, as our dependency on electronic communication maintains its current growth, and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increase the likelihood of data misuse. Early privacy protection techniques, such as anonymous email and payment systems, have focused on data avoidance and anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over the further distribution and use of personal private data. Previous efforts to integrate privacy awareness into data processing workflows have focused on the extension of existing access control frameworks with privacy-aware functions or have analysed specific individual problems such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist whose effectiveness for privacy protection can be studied. Second-level issues that stem from the practical application of the implemented mechanisms, such as usability, life-time data management and changes in trustworthiness, have received very little attention so far, mainly because they require actual implementations to be studied. Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released.
Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy. We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC was proposed by McCollum et al. as an alternative access control mechanism that leaves the authority over access decisions with the originator of the data. The data owner is given control over the release policy for his or her personal data, and he or she can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow deterministic policy evaluation by different entities. The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont et al. and others. We developed a unique policy combination approach that takes usability aspects for the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: a Default Policy provides basic privacy protection if no specific rules have been entered by the data owner; an Owner Policy allows the customisation of the default policy by the data owner; and a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies, which, for example, would exclude him or her from further access to the private data. The combined evaluation of these three policy parts yields the necessary access decision. The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation.
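The three-part policy combination described above can be sketched as follows. The rule representation and the evaluation order are illustrative assumptions, not the thesis's actual policy language.

```python
# Illustrative sketch of the Default / Owner / Safety policy
# combination. Policies are modelled as simple request->decision maps;
# the real framework uses a policy language and a reference monitor.

DENY, ALLOW = "deny", "allow"

def evaluate(request, default_policy, owner_policy, safety_policy):
    """Combine the three policy parts into one access decision."""
    # 1. The Safety Policy wins: it guarantees the data owner cannot
    #    lock him- or herself out, so it cannot be overridden.
    if request in safety_policy:
        return safety_policy[request]
    # 2. The Owner Policy customises the defaults.
    if request in owner_policy:
        return owner_policy[request]
    # 3. The Default Policy provides baseline protection otherwise.
    return default_policy.get(request, DENY)

default_p = {"read": DENY, "share": DENY}
owner_p   = {"read": ALLOW}              # owner grants read access
safety_p  = {"owner-access": ALLOW}      # owner always keeps access

print(evaluate("read", default_p, owner_p, safety_p))          # allow
print(evaluate("share", default_p, owner_p, safety_p))         # deny
print(evaluate("owner-access", default_p, owner_p, safety_p))  # allow
```

The ordering encodes the guarantee from the text: owner customisation can relax the defaults, but never the safety rules.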
We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework. Privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager. When we later extended our work to implement server-side protection mechanisms, we found several drawbacks for the privacy enforcement through the Java Security Framework. We solved this problem by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications and provide a way to enforce data owner-defined privacy policies for business applications.
The correction of software failures tends to be very cost-intensive because their debugging is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: Starting with a test case that reproduces the observable failure they have to follow failure causes on the infection chain back to the root cause (defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis-testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. 
Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers’ needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
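The anomaly analysis over passing and failing test cases can be illustrated with a standard spectrum-based suspiciousness metric; the Ochiai coefficient is used here purely as a stand-in, since the thesis's Path Tools framework (for Squeak/Smalltalk) may compute anomalies differently.

```python
# Illustrative spectrum-based ranking of suspicious entities from
# passing/failing test coverage, using the standard Ochiai metric.
from math import sqrt

def ochiai(failed_cov, passed_cov, total_failed):
    """failed_cov / passed_cov: number of failing / passing tests
    covering the entity; returns a score in [0, 1], higher = more
    suspicious (more correlated with failures)."""
    denom = sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# Hypothetical coverage of three methods by 2 failing, 3 passing tests.
coverage = {
    "Parser>>parse":  (2, 1),   # covered by both failing tests
    "Lexer>>scan":    (1, 3),
    "Printer>>print": (0, 2),
}
ranking = sorted(coverage,
                 key=lambda m: ochiai(*coverage[m], total_failed=2),
                 reverse=True)
print(ranking[0])  # Parser>>parse is ranked most suspicious
```

Such a ranking answers the "where to start debugging" question: entities executed predominantly by failing tests are examined first.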
Background: Increased numbers of intestinal E. coli are observed in inflammatory bowel disease, but the reasons for this proliferation and its exact role in intestinal inflammation are unknown. The aim of this PhD project was to identify E. coli proteins involved in E. coli’s adaptation to the inflammatory conditions in the gut and to investigate whether these factors affect the host. Furthermore, the molecular basis for strain-specific differences between probiotic and harmful E. coli in their response to intestinal inflammation was investigated. Methods: Using mice monoassociated either with the adherent-invasive E. coli (AIEC) strain UNC or with the probiotic E. coli Nissle, two different mouse models of intestinal inflammation were analysed: on the one hand, severe inflammation was induced by treating mice with 3.5% dextran sodium sulphate (DSS); on the other hand, a very mild intestinal inflammation was generated by associating interleukin-10-deficient (IL-10-/-) mice with E. coli. Differentially expressed proteins in the E. coli strains collected from the caecal contents of these mice were identified by two-dimensional fluorescence difference gel electrophoresis. Results of the DSS experiment: All DSS-treated mice revealed signs of a moderate caecal and a severe colonic inflammation; however, mice monoassociated with E. coli Nissle were less affected. In both E. coli strains, acute inflammation led to a downregulation of pathways involved in carbohydrate breakdown and energy generation. Accordingly, DSS-treated mice had lower caecal concentrations of bacterial fermentation products than the control mice. Differentially expressed proteins also included the Fe-S cluster repair protein NfuA, the tryptophanase TnaA, and the uncharacterised protein YggE. NfuA was upregulated nearly 3-fold in both E. coli strains after DSS administration. Reactive oxygen species produced during intestinal inflammation damage Fe-S clusters and thereby lead to an inactivation of Fe-S proteins.
In vitro data indicated that the repair of Fe-S proteins by NfuA is a central mechanism by which E. coli survives oxidative stress. Expression of YggE, which has been reported to reduce the intracellular level of reactive oxygen species, was 4- to 8-fold higher in E. coli Nissle than in E. coli UNC under both control and inflammatory conditions. In vitro growth experiments confirmed these results, indicating that E. coli Nissle is better equipped to cope with oxidative stress than E. coli UNC. Additionally, E. coli Nissle isolated from DSS-treated and control mice had TnaA levels 4- to 7-fold higher than E. coli UNC. In turn, caecal indole concentrations resulting from the cleavage of tryptophan by TnaA were higher in E. coli Nissle-associated control mice than in the respective mice associated with E. coli UNC. Because of its anti-inflammatory effect, indole is hypothesised to be involved in the extension of the remission phase in ulcerative colitis described for E. coli Nissle. Results of the IL-10-/- experiment: Only IL-10-/- mice monoassociated with E. coli UNC for 8 weeks exhibited signs of a very mild caecal inflammation. In agreement with this weak inflammation, the variations in the bacterial proteome were small. As in the DSS experiment, the proteins downregulated by inflammation belonged mainly to the central energy metabolism. In contrast to the DSS experiment, no upregulation of chaperone proteins or NfuA was observed, indicating that these are strategies to overcome the adverse effects of strong intestinal inflammation. The inhibitor of vertebrate C-type lysozyme, Ivy, was 2- to 3-fold upregulated at the mRNA and protein levels in E. coli Nissle in comparison to E. coli UNC isolated from IL-10-/- mice. By overexpressing ivy, it was demonstrated in vitro that Ivy contributes to the higher lysozyme resistance observed for E. coli Nissle, supporting the role of Ivy as a potential fitness factor in this E. coli strain.
Conclusions: The results of this PhD study demonstrate that intestinal bacteria sense even minimal changes in the health status of their host. While some bacterial adaptations to inflammatory conditions are shared between strong and mild intestinal inflammation, other reactions are unique to a specific disease state. In addition, probiotic and colitogenic E. coli differ in their response to intestinal inflammation and may thereby influence the host in different ways.
We consider a (generally non-coercive) mixed boundary value problem in a bounded domain for a second-order elliptic differential operator A. The differential operator is assumed to be in divergence form, and the boundary operator B is of Robin type. The boundary is assumed to be a Lipschitz surface. In addition, we distinguish a closed subset of the boundary and control the growth of solutions near this set. We prove that the pair (A,B) induces a Fredholm operator L in suitable weighted spaces of Sobolev type, the weight function being a power of the distance to the singular set. Moreover, we prove the completeness of the root functions related to L.
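For orientation, one common convention for such power-weighted Sobolev-type norms is sketched below; this is only an illustrative assumption, and the exact spaces, exponents, and derivative orders used in the paper may differ.

```latex
\|u\|_{H^{1,\gamma}(D)}^{2}
  = \int_{D} \rho(x)^{2\gamma}\,\bigl(|u(x)|^{2} + |\nabla u(x)|^{2}\bigr)\,dx,
\qquad \rho(x) = \operatorname{dist}(x, S),
```

where S denotes the distinguished closed subset of the boundary and the exponent gamma controls the admissible growth of solutions near S; the Fredholm property of L is then established in scales of such weighted spaces.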
Kotzo shel yod by Y. L. Gordon (1832–1892) – one of the prominent intellectuals of the Jewish Enlightenment period – is a well-known Hebrew poem. The poem is characterized by daring, sharp criticism of the traditional Jewish institutions, which the author felt required a critical shake-up. Gordon's literary works were inspired by the Jewish Ashkenazi world. This unique and pioneering literary work was translated into Judeo-Spanish (Ladino). The aim of this article is to present the Sephardic version of Gordon's poem. The article examines the motives behind the translation of this work into Ladino, the reception of the translated work by its readership, and the challenges faced by the anonymous translator, who sought to make this work accessible to the Ladino-reading public in the clear knowledge that this version was quite far removed from the Ashkenazi original from which it sprang.
In this article we analyze several examples of the syntactic structure ansí un... (Eng. 'such a...'), apparently calqued from the German expression so ein..., that can be found in various Judeo-Spanish texts from the second half of the 19th century onwards. Although the earliest examples appeared in Judeo-Spanish translations of German novels published in Vienna – which suggests that they could merely derive from translations that stayed too close to the original – we also find further examples in Sephardic texts produced outside the German-speaking area (Bosnia, Bulgaria, etc.) that are not necessarily translations of a German original. On the basis of all these cases, we attempt to trace (and explain) the spread of the ansí un structure in modern Judeo-Spanish prose.
The political and social changes with which the 19th century began in the Balkans, after a great part of the region's territories was taken over by the Austrian Empire, also spurred social and intellectual activity and created a new framework for the relationship with the Ottoman Empire. Vienna became a refuge for many people from the Balkans, who, through their contact with central European culture, became transmitters of innovation to their compatriots. The members of the Jewish communities participated in this process as much as members of other ethnic and social groups. The most prominent of these Jews was Israel Hayim de Belogrado ('of Belgrade'), who carried out important intellectual work in the Austrian capital between 1813 and 1837. He even reformed Judeo-Spanish spelling and introduced new methodologies for learning Hebrew as a second language, based on the use of a trilingual nomenclature (Hebrew, Judeo-Spanish, German) to present the lexical repertoire.
Academic entrepreneurship
(2013)
Research on the entrepreneurial motivation of university scientists is often conducted with quantitative methods that do not take context-related influences into account. According to various studies, entrepreneurial scientists found spin-off companies for motives such as independence, market opportunity, money, or the risk of unemployment (short-term contracts). To give a comprehensive explanation, it is important to adopt a qualitative research perspective that considers the academic rank, norms, and values of university scientists. The author interviewed 35 natural scientists, asking professors and research fellows about their entrepreneurial motivation. The results of this study are used to develop a typology of entrepreneurial and non-entrepreneurial scientists within German universities. This paper presents the key findings of the study (Sass 2011).
The paper discusses the distribution and meaning of the additive particle -m@s in Ishkashimi. -m@s receives different semantic associations while staying in the same syntactic position. Thus, structurally combined with an object, it can semantically associate with the focused object or with the whole focused VP; similarly, combined with the subject it can semantically associate with the focused subject and with the whole focused sentence.
In a production experiment and two follow-up perception experiments on read German, we investigated the (de-)coding of discourse-new, inferentially accessible, textually accessible, and given discourse referents by prosodic means. The results reveal that a decrease in a referent's level of givenness is reflected by an increase in its prosodic prominence (expressed by differences in the status and type of accent used), providing evidence for the relevance of different intermediate types of information status between the poles of given and new. Furthermore, the perception data indicate that the degree of prosodic prominence can serve as the decisive cue for decoding a referent's level of givenness.
In this paper, doubling in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) is discussed. In both sign languages, different constituents (including verbs, nouns, adjectives, adverbs, and whole clauses) can be doubled. It is shown that doubling in the two languages has common functions and exhibits a similar structure, despite some differences. On this basis, a unified pragmatic explanation for many doubling phenomena at both the discourse and the clause-internal level is provided, namely that the main function of doubling in both RSL and NGT is the foregrounding of the doubled information.
Avatime, a Kwa language of Ghana, has an additive particle tsyɛ that at first sight looks similar to additive particles such as too and also in English. However, on closer inspection, the Avatime particle behaves differently. Contrary to what is usually claimed about additive particles, tsyɛ does not only associate with focused elements. Moreover, unlike its English equivalents, tsyɛ does not come with a requirement of identity between the expressed proposition and an alternative. Instead, it indicates that the proposition it occurs in is similar to or compatible with a presupposed alternative proposition.
Scrambling and interfaces
(2013)
This paper proposes a novel analysis of the Russian OVS construction and argues that the parametric variation in the availability of OVS cross-linguistically depends on the type of relative interpretative argument prominence that a language encodes via syntactic structure. When thematic and information-structural prominence relations do not coincide, only one of them can be structurally/linearly represented. The relation that is not structurally/linearly encoded must be made visible at the PF interface either via prosody or morphology.
Recent models of Information Structure (IS) identify a low-level contrast feature that functions within the topic and focus of the utterance. This study investigates the exact nature of this feature on the basis of empirical evidence from a controlled read-speech experiment on the prosodic realization of different levels of contrast in Modern Greek. The results indicate that only correction is truly contrastive and that it is realized similarly in both topic and focus, suggesting that contrast is an independent IS dimension. A non-default focus position is further identified as a parameter that triggers a prosodically marked rendition, similar to correction.