This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of discourse-givenness of noun phrases for English and German. Definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's Discourse Representation Theory (1993). For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms comprise J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement of classification quality.
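As an illustration of this classification setup, the sketch below trains a minimal linear classifier (a perceptron, standing in for the linear SVMs mentioned above) on invented specificity and context features; the feature names and data are hypothetical and not taken from the corpora:

```python
# Minimal sketch (not the thesis's actual classifiers): a linear classifier
# separating discourse-given from new noun phrases on toy features.
# Features and data are invented for illustration.

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a perceptron; labels are +1 (given) / -1 (new)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:  # misclassified -> update weights
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# Hypothetical features: [is_definite, is_pronoun, sentence_position]
X = [[1, 1, 0.1], [1, 0, 0.3], [0, 0, 0.8], [0, 0, 0.6]]
y = [1, 1, -1, -1]  # +1 = discourse-given, -1 = new
w, b = train_perceptron(X, y)
print([predict(w, b, xi) for xi in X])  # → [1, 1, -1, -1]
```

In a real setup the feature vector would encode the specificity and context features proposed in the thesis rather than these placeholders.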
It sometimes happens that we finish reading a passage of text just to realize that we have no idea what we just read. During these episodes of mindless reading our mind is elsewhere, yet the eyes still move across the text. The phenomenon of mindless reading is common and seems to be widely recognized in lay psychology. However, the scientific investigation of mindless reading has long been underdeveloped. Recent progress in research on mindless reading has been based on self-report measures and on treating it as an all-or-none phenomenon (dichotomy hypothesis). Here, we introduce the levels-of-inattention hypothesis, proposing that mindless reading is graded and occurs at different levels of cognitive processing. Moreover, we introduce two new behavioral paradigms to study mindless reading at different levels in the eye-tracking laboratory. First (Chapter 2), we introduce shuffled text reading as a paradigm to approximate states of weak mindless reading experimentally and compare it to reading of normal text. Results from statistical analyses of eye movements that subjects perform in this task qualitatively support the ‘mindless’ hypothesis that cognitive influences on eye movements are reduced and the ‘foveal load’ hypothesis that the response of the zoom lens of attention to local text difficulty is enhanced when reading shuffled text. We introduce and validate an advanced version of the SWIFT model (SWIFT 3) incorporating the zoom lens of attention (Chapter 3) and use it to explain eye movements during shuffled text reading. Simulations of the SWIFT 3 model provide fully quantitative support for the ‘mindless’ and the ‘foveal load’ hypotheses. They moreover demonstrate that the zoom lens is an important concept for explaining eye movements across reading and mindless reading tasks.
Second (Chapter 4), we introduce the sustained attention to stimulus task (SAST) to catch episodes when external attention spontaneously lapses (i.e., attentional decoupling or mind wandering) via the overlooking of errors in the text and via signal detection analyses of error detection. Analyses of eye movements in the SAST revealed reduced influences from cognitive text processing during mindless reading. Based on these findings, we demonstrate that it is possible to predict states of mindless reading from eye movement recordings online. That cognition is not always needed to move the eyes supports autonomous mechanisms for saccade initiation. Results from analyses of error detection and eye movements provide support for our levels-of-inattention hypothesis that errors at different levels of the text assess different levels of decoupling. Analyses of pupil size in the SAST (Chapter 5) provide further support for the levels-of-inattention hypothesis and for the decoupling hypothesis that off-line thought is a distinct mode of cognitive functioning that demands cognitive resources and is associated with deep levels of decoupling. The present work demonstrates that the elusive phenomenon of mindless reading can be rigorously investigated in the cognitive laboratory and further incorporated into the theoretical framework of cognitive science.
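The signal-detection analyses of error detection can be illustrated with the standard sensitivity measure d′; the hit and false-alarm counts below are invented for illustration:

```python
# Illustrative sketch of a signal-detection analysis of error detection:
# sensitivity d' = z(hit rate) - z(false-alarm rate).
# The counts below are invented, not data from the SAST experiments.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from raw detection counts."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g. 40 errors noticed, 10 overlooked; 5 false alarms, 90 correct rejections
print(round(d_prime(40, 10, 5, 90), 2))  # → 2.46
```

Higher d′ indicates better discrimination of error-containing from error-free text; during decoupling, d′ for deeper text levels would be expected to drop.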
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots and of cortex cells of non-inoculated roots (cor) were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. The mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) achieved successful silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots.
The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The contractile vacuole (CV) is an osmoregulatory organelle found exclusively in algae and protists. In addition to expelling excessive water out of the cell, it also expels ions and other metabolites and thereby contributes to the cell's metabolic homeostasis. The interest in the CV reaches beyond its immediate cellular roles. The CV's function is tightly related to basic cellular processes such as membrane dynamics and vesicle budding and fusion; several physiological processes in animals, such as synaptic neurotransmission and blood filtration in the kidney, are related to the CV's function; and several pathogens, such as the causative agents of sleeping sickness, possess CVs, which may serve as pharmacological targets. The green alga Chlamydomonas reinhardtii has two CVs. They are the smallest known CVs in nature, and they remain relatively untouched in the CV-related literature. Many genes that have been shown to be related to the CV in other organisms have close homologues in C. reinhardtii. We attempted to silence some of these genes and observe the effect on the CV. One of our genes, VMP1, caused striking, severe phenotypes when silenced. Cells exhibited defective cytokinesis and aberrant morphologies. The CV, incidentally, remained unscathed. In addition, mutant cells showed some evidence of disrupted autophagy. Several important regulators of the cell cycle as well as autophagy were found to be underexpressed in the mutant. Lipidomic analysis revealed many meaningful changes between wild-type and mutant cells, reinforcing the compromised-autophagy observation. VMP1 is a singular protein, with homologues in numerous eukaryotic organisms (aside from fungi), but usually with no relatives in each particular genome. Since its first characterization in 2002 it has been associated with several cellular processes and functions, namely autophagy, programmed cell-death, secretion, cell adhesion, and organelle biogenesis. 
It has been implicated in several human diseases: pancreatitis, diabetes, and several types of cancer. Our results reiterate some of the observations in VMP1's six reported homologues, but, importantly, show for the first time an involvement of this protein in cell division. The mechanisms underlying this involvement in Chlamydomonas, as well as other key aspects, such as VMP1's subcellular localization and interaction partners, still await elucidation.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the contextual information of video documents originates from multiple sources with different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item. The lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation.
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the semantic exploration of videos.
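The confidence-ordered processing can be sketched as follows; the loop and the toy candidate catalogue are invented for illustration and do not reproduce the thesis's actual disambiguation algorithm:

```python
# Illustrative sketch (names invented): metadata items are processed in
# descending order of confidence, and each disambiguated item joins the
# context used for the less reliable items that follow.
def analyze(items, disambiguate):
    """items: list of (text, confidence); returns text -> chosen entity."""
    context = []
    results = {}
    for text, conf in sorted(items, key=lambda it: it[1], reverse=True):
        entity = disambiguate(text, context)
        results[text] = entity
        context.append(entity)  # reference point for subsequent items
    return results

# Toy disambiguation: pick the candidate sharing most entities with the context.
CANDIDATES = {
    "Jaguar": [{"id": "Jaguar_(car)", "rel": {"Car"}},
               {"id": "Jaguar_(animal)", "rel": {"Cat"}}],
    "Car": [{"id": "Car", "rel": set()}],
}
def toy_disambiguate(text, context):
    context_ids = {e["id"] for e in context}
    return max(CANDIDATES[text], key=lambda c: len(c["rel"] & context_ids))

items = [("Jaguar", 0.4), ("Car", 0.9)]  # noisy tag vs. reliable title
print(analyze(items, toy_disambiguate)["Jaguar"]["id"])  # → Jaguar_(car)
```

Because the reliable item "Car" is resolved first, the ambiguous "Jaguar" is pulled toward the car sense, mirroring the high-to-low confidence ordering hypothesis.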
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without any human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). Extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access the services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary has been developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in the Italian Sign Language as output. The dictionary contains 3082 signs as a set of avatar animations in which each sign is linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN).
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users’ understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN made it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS has been transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure has been implemented in the Java Application Building Center (jABC), which is a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone who is unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the re-usability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction. In another study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users and compared different help modes, including textual, vocal and virtual-character help, with respect to the performance of semi-literate users.
In our newly designed interfaces the instructions were automatically translated into Swahili. All the interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhanced the application utility by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all in the underlying domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
Passive plant actuators have fascinated researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), in which stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case in the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms up to four times in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start off, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but “follows” the structure as it deforms), it results in the cellular structure acquiring a “spontaneous” shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure itself. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic, non-convex, re-entrant tilings, large expansions can be achieved in each individual cell.
The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls’ Young’s modulus E and internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. The potential of these pressurized structures as soft actuators is therefore outlined. This approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offer a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells are prone to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local cell connectivity. This has repercussions at both the macroscopic (lattice-level) and microscopic (cell-level) scales. At the macroscopic scale, these non-convex lattices can experience large anisotropic (similarly to the diamond-shaped honeycomb) or perfectly isotropic principal expansions, large shearing deformations, or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape.
Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they can build up or mutually compensate and hence give rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics or morphing structures.
Measuring the metabolite profile of plants can be a strong phenotyping tool, but changes of metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbation of the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. To allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is done to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette in different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably.
In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different metabolite pool size pattern across single leaves of one Arabidopsis rosette, compared to plants grown under normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
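The kind of simple relative calculation described above can be illustrated with a minimal sketch; the mass-isotopologue intensities are invented, and corrections for natural isotope abundance are omitted for brevity:

```python
# Illustrative sketch (invented intensities): a simple relative 13C
# calculation from GC-MS mass-isotopologue signals, giving the fraction
# of carbon atoms in a metabolite pool that carry the 13C label.
def c13_fraction(intensities, n_carbons):
    """intensities[i] = signal of the M+i isotopologue (i = number of 13C)."""
    total = sum(intensities)
    labeled_carbons = sum(i * s for i, s in enumerate(intensities))
    return labeled_carbons / (n_carbons * total)

# e.g. a four-carbon metabolite such as fumarate, intensities M+0 .. M+4
print(round(c13_fraction([60, 10, 10, 10, 10], 4), 3))  # → 0.25
```

A real analysis would additionally correct for natural 13C abundance and normalise against the labelling of the fed sucrose.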
Today, it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting. It provides a wealth of information about gaseous material flowing towards and away from galaxies and about their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast-moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle in our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST/STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption-line parameters, including covering fractions of different weakly, intermediately and highly ionized metals, with a particular focus on SiII and MgII. Owing to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0.
Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that the contribution of infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies to the cross section of strong MgII absorbers is 34%. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines using newly obtained ultraviolet spectra from HST/COS. We find clear evidence for a bimodal distribution in the HI column density of the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high-column-density end). With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977 we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities.
We find that the intervening absorption systems studied here have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find that, for example, the typical halo size for SiIII is ∼ 160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines and at similar redshifts as the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed being at impact parameters ρ ≤ 160 kpc.
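A covering fraction of the kind used in these analyses is simply the fraction of sightlines showing absorption of a given ion; the sketch below, with invented detection counts, adds an approximate binomial (Wilson) uncertainty:

```python
# Illustrative sketch (invented counts): an ion's covering fraction is
# the fraction of halo sightlines with a detection, with a Wilson-score
# interval as a simple binomial uncertainty estimate.
from math import sqrt

def covering_fraction(detections, sightlines, z=1.0):
    """Return (fraction, ~1-sigma Wilson interval half-width)."""
    p = detections / sightlines
    n = sightlines
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre, half

# e.g. an ion detected along 14 of 40 sightlines (hypothetical numbers)
fc, err = covering_fraction(14, 40)
print(f"fc = {fc:.2f} +/- {err:.2f}")  # → fc = 0.35 +/- 0.07
```

The Wilson interval is one reasonable choice for small-sample binomial fractions; the thesis's actual statistical treatment may differ.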
Organizations try to gain competitive advantages and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of required freedom of action. This situation is often encountered in hospitals, where most business processes are manually enacted. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation are prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur but are not observed by the monitoring environment. These unobserved process events can serve as a basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to manage timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "when will the case be finished?", or "when did we perform the activity that we forgot to document?") in this thesis.
As our main contribution, we introduce an advanced probabilistic model for business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to using the model effectively along the business process lifecycle. To this end, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by missing-data management. We propose mechanisms to optimize the configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data from a hospital. Additionally, we show its more general applicability in other domains by applying the approach to process data from logistics and finance.
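The idea of predicting a case's termination time from historical observations can be illustrated with a much-simplified sketch: instead of a stochastic Petri net, it Monte-Carlo samples the remaining activities' durations from invented historical data (activity names and values are hypothetical):

```python
# Illustrative sketch (not the thesis's Petri-net model): estimate the
# expected remaining time of a case by sampling each remaining activity's
# duration from invented historical observations.
import random
import statistics

random.seed(0)  # deterministic for reproducibility

# Hypothetical historical durations (hours) per remaining activity.
history = {
    "lab_test":  [2.0, 3.5, 2.5, 4.0],
    "review":    [1.0, 1.5, 0.5, 1.0],
    "discharge": [0.5, 0.5, 1.0, 0.5],
}

def predict_remaining(remaining, n=10_000):
    """Expected remaining time: mean over sampled path durations."""
    samples = [sum(random.choice(history[a]) for a in remaining)
               for _ in range(n)]
    return statistics.mean(samples)

est = predict_remaining(["lab_test", "review", "discharge"])
print(f"expected remaining time: {est:.1f} h")
```

A stochastic Petri net additionally captures concurrency, choices, and unobserved transitions, which this naive sequential sketch deliberately ignores.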
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared with a template process, for which monodisperse silica particles were vertically deposited onto glass slides as the first step. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. As a second step, the template was embedded in a matrix consisting of biocompatible, thermoresponsive hydrogels. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould, which contained the colloidal crystals as a template. The space in between the template particles was filled with the monomer solution, and the hydrogel was cured via UV polymerisation. The particles were chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems are denoted as inverse hydrogel opals. A pore diameter of several hundred nanometres, as well as interconnections between the pores, should facilitate the diffusion of larger (bio)molecules, which had previously been a challenge in such systems. The copolymer composition was chosen so as to result in a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a coupling group for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay.
When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, thereby visualising the binding event. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the spacing of the crystal planes, which is responsible for the colour of the reflection. These findings open up the possibility of creating sensor materials for further biomolecules in the size range of avidin.
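The colour readout described above follows standard opal optics. For first-order reflection from the (111) planes of an opal lattice, the reflected wavelength is commonly written with the Bragg-Snell relation (a textbook relation quoted here for illustration, not taken from the thesis):

```latex
% Bragg-Snell relation for the first-order opal reflection:
% d_{111}  spacing of the (111) crystal planes,
% n_eff    effective refractive index of hydrogel plus pore filling,
% \theta   angle of incidence.
\lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}
```

Swelling or shrinking of the hydrogel changes d_111 (and, to a lesser extent, n_eff), which is why the analyte-induced volume change appears directly as a shift of the reflection wavelength.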
The sharply rising level of atmospheric carbon dioxide resulting from anthropogenic emissions is one of the greatest environmental concerns facing our civilization today. Metal-organic frameworks (MOFs) are a new class of materials constructed from metal-containing nodes bonded to organic bridging ligands. MOFs could serve as an ideal platform for the development of next-generation CO2 capture materials owing to their large capacity for the adsorption of gases and their structural and chemical tunability. The ability to rationally select the framework components is expected to allow the affinity of the internal pore surface toward CO2 to be precisely controlled, yielding material properties that are optimized for the specific type of CO2 capture to be performed (post-combustion capture, pre-combustion capture, or oxy-fuel combustion) and potentially even for the specific power plant in which the capture system is to be installed. For this reason, significant effort has been devoted in recent years to improving the gas-separation performance of MOFs, and studies evaluating the prospects of deploying these materials in real-world CO2 capture systems have begun to emerge. We have developed six new MOFs, denoted IFPs (IFP-5, -6, -7, -8, -9, -10; IFP = Imidazolate Framework Potsdam), and two hydrogen-bonded molecular building blocks (MBBs, named 1 and 2 for the Zn- and Co-based variants, respectively), which were synthesized, characterized and applied for gas storage. The IFP structures possess 1D hexagonal channels. The metal centre and the substituent at the C2 position of the linker protrude into the open channels and determine their accessible diameter. Interestingly, the channel diameters (range: 0.3 to 5.2 Å) of the IFP structures are tuned by the metal centre (Zn, Co and Cd) and by the C2 substituent of the imidazolate linker. Moreover, the hydrogen-bonded MBBs 1 and 2 are formed by in situ functionalization of a ligand under solvothermal conditions.
Two different types of channels are observed in 1 and 2. Both materials contain solvent-accessible void space; the solvent can easily be removed under high vacuum, and the porous frameworks maintain their crystalline integrity even without solvent molecules. N2, H2, CO2 and CH4 gas sorption isotherms were measured. The gas uptake capacities are comparable with those of other frameworks and are reduced when the channel diameter is narrow. For example, the channel diameter of IFP-5 (3.8 Å) is slightly smaller than that of IFP-1 (4.2 Å); hence, its gas uptake capacity and Brunauer-Emmett-Teller (BET) surface area are slightly lower than those of IFP-1. The selectivity depends not only on the size of the gas components (kinetic diameter: CO2 3.3 Å, N2 3.6 Å and CH4 3.8 Å) but also on the polarizability of the surface and of the gas components. IFP-5 and -6 have potential applications for separating CO2 and CH4 from N2-containing gas mixtures and from CO2- and CH4-containing gas mixtures. The gas sorption isotherms of IFP-7, -8, -9 and -10 exhibited hysteretic behavior due to flexible alkoxy (e.g., methoxy and ethoxy) substituents. Such behavior is a gate effect, which is rarely observed in microporous MOFs. IFP-7 (Zn-centred) has a flexible methoxy substituent; this is the first example of a flexible methoxy substituent showing gate-opening behavior in a MOF. Owing to the methoxy groups present in the hexagonal channels, IFP-7 acted as a molecular gate for N2: because of the polar methoxy group and the channel walls, a wide hysteretic N2 isotherm was observed during gas uptake. The estimated BET surface area of 1 is 471 m2 g-1 and the Langmuir surface area is 570 m2 g-1. This surface area is slightly higher than those of azolate-based hydrogen-bonded supramolecular assemblies and comparable to, or higher than, those of some hydrogen-bonded porous organic molecules.
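The abstract quotes BET surface areas derived from N2 sorption isotherms. As an illustration of how such numbers are obtained, here is a minimal sketch of the standard BET linearisation (function name, input conventions and constants are our illustrative choices, not taken from the thesis):

```python
import numpy as np

N_A = 6.022e23          # Avogadro's number, mol^-1
V_STP = 22414.0         # molar gas volume at STP, cm^3/mol
SIGMA_N2 = 0.162e-18    # cross-sectional area of one N2 molecule, m^2

def bet_surface_area(p_rel, v_ads):
    """Estimate the BET surface area (m^2/g) from an N2 isotherm.

    p_rel : relative pressures p/p0 (ideally within 0.05-0.30)
    v_ads : adsorbed gas volumes at STP, cm^3/g
    """
    p_rel = np.asarray(p_rel, dtype=float)
    v_ads = np.asarray(v_ads, dtype=float)
    # Linearised BET equation: p_rel / (v (1 - p_rel)) = slope*p_rel + intercept
    y = p_rel / (v_ads * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)          # monolayer capacity, cm^3/g
    # One monolayer of N2 molecules covers v_m/V_STP mol * N_A * sigma
    return (v_m / V_STP) * N_A * SIGMA_N2
```

A monolayer capacity of roughly 100 cm3 g-1, for instance, corresponds to a surface area of about 435 m2 g-1, which is the order of magnitude reported for 1.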
Within the course of this thesis, I have investigated the complex interplay between electron and lattice dynamics in nanostructures of perovskite oxides. Femtosecond hard X-ray pulses were utilized to probe the evolution of atomic rearrangement directly, which is driven by ultrafast optical excitation of electrons. The physics of complex materials with a large number of degrees of freedom can be interpreted once the exact fingerprint of ultrafast lattice dynamics in time-resolved X-ray diffraction experiments for a simple model system is well known. The motion of atoms in a crystal can be probed directly and in real-time by femtosecond pulses of hard X-ray radiation in a pump-probe scheme. In order to provide such ultrashort X-ray pulses, I have built up a laser-driven plasma X-ray source. The setup was extended by a stable goniometer, a two-dimensional X-ray detector and a cryogen-free cryostat. The data acquisition routines of the diffractometer for these ultrafast X-ray diffraction experiments were further improved in terms of signal-to-noise ratio and angular resolution. The implementation of a high-speed reciprocal-space mapping technique allowed for a two-dimensional structural analysis with femtosecond temporal resolution. I have studied the ultrafast lattice dynamics, namely the excitation and propagation of coherent phonons, in photoexcited thin films and superlattice structures of the metallic perovskite SrRuO3. Due to the quasi-instantaneous coupling of the lattice to the optically excited electrons in this material a spatially and temporally well-defined thermal stress profile is generated in SrRuO3. This enables understanding the effect of the resulting coherent lattice dynamics in time-resolved X-ray diffraction data in great detail, e.g. the appearance of a transient Bragg peak splitting in both thin films and superlattice structures of SrRuO3. 
In addition, a comprehensive simulation toolbox to calculate the ultrafast lattice dynamics and the resulting X-ray diffraction response in photoexcited one-dimensional crystalline structures was developed in this thesis work. With the powerful experimental and theoretical framework at hand, I have studied the excitation and propagation of coherent phonons in more complex material systems. In particular, I have revealed strongly localized charge carriers after above-bandgap femtosecond photoexcitation of the prototypical multiferroic BiFeO3, which are the origin of a quasi-instantaneous and spatially inhomogeneous stress that drives coherent phonons in a thin film of the multiferroic. In a structurally imperfect thin film of the ferroelectric Pb(Zr0.2Ti0.8)O3, the ultrafast reciprocal-space mapping technique was applied to follow a purely strain-induced change of mosaicity on a picosecond time scale. These results point to a strong coupling of in- and out-of-plane atomic motion exclusively mediated by structural defects.
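The mechanism described above, a quasi-instantaneous stress profile launching coherent strain waves, can be illustrated with a toy one-dimensional chain of masses and springs. All parameters below are arbitrary illustrative values; the thesis' simulation toolbox of course works with real unit-cell properties and computes the X-ray response:

```python
import numpy as np

def simulate_chain(n=200, n_excited=20, k=1.0, m=1.0, dt=0.05, steps=400):
    """Toy 1D linear-chain model of coherent phonon generation.

    At t = 0 the equilibrium length of every spring in the first
    `n_excited` cells is suddenly increased (quasi-instantaneous
    thermal stress after photoexcitation); a coherent strain front
    then travels into the unexcited part of the chain at the sound
    velocity sqrt(k/m). Returns the final strain profile.
    """
    u = np.zeros(n)                  # cell displacements
    v = np.zeros(n)                  # cell velocities
    e = np.zeros(n - 1)              # equilibrium elongation per spring
    e[:n_excited] = 0.01             # step-like stress in excited layer

    def forces(u):
        t = k * (np.diff(u) - e)     # tension in each spring
        f = np.zeros(n)
        f[:-1] += t                  # stretched spring pulls cell i forward
        f[1:] -= t                   # ...and its right neighbour backward
        return f

    f = forces(u)
    for _ in range(steps):           # velocity-Verlet integration
        u += v * dt + 0.5 * (f / m) * dt**2
        f_new = forces(u)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
    return np.diff(u)                # strain = nearest-neighbour differences
```

After 20 time units the strain front has travelled about 20 cells beyond the excited layer, while the far end of the chain is still unperturbed, the discrete analogue of the well-defined propagating strain fronts probed by ultrafast X-ray diffraction.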
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference due to consolidation in the cloud environment complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource-provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scale thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement. To evaluate our approaches and the designed algorithms at a large scale, we developed a simulator called ScaleSim.
In the simulator, we implemented scalability components mimicking those of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from the real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource-provisioning overhead with only a 9% increase in cost.
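The combination of threshold-based scaling and forecasting described above can be sketched in a few lines. The forecasting method below (Holt's linear-trend smoothing) and all thresholds are illustrative stand-ins; the dissertation proposes its own forecasting algorithm and optimizes the thresholds:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear-trend exponential smoothing, forecasting
    `horizon` steps ahead (illustrative stand-in for the thesis'
    time-series forecasting algorithm)."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + horizon * trend

def scale_decision(cpu_history, n_vms, upper=70.0, lower=30.0):
    """Proactive horizontal scaling: the decision is taken on the
    *forecast* utilisation rather than the current one, so that new
    VMs are already provisioned when the load actually arrives."""
    predicted = holt_forecast(cpu_history)
    if predicted > upper:
        return n_vms + 1             # scale out ahead of the spike
    if predicted < lower and n_vms > 1:
        return n_vms - 1             # scale in to save cost
    return n_vms
```

Acting on the forecast rather than the instantaneous reading is what hides the resource-provisioning overhead: by the time the threshold would have been crossed reactively, the additional VM is already running.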
Lakes are increasingly being recognized as an important component of the global carbon cycle, yet anthropogenic activities that alter their community structure may change the way they transport and process carbon. This research focuses on the relationship between carbon cycling and community structure of primary producers in small, shallow lakes, which are the most abundant lake type in the world, and furthermore subject to intense terrestrial-aquatic coupling due to their high perimeter:area ratio. Shifts between macrophyte and phytoplankton dominance are widespread and common in shallow lakes, with potentially large consequences to regional carbon cycling. I thus compared a lake with clear-water conditions and a submerged macrophyte community to a turbid, phytoplankton-dominated lake, describing differences in the availability, processing, and export of organic and inorganic carbon. I furthermore examined the effects of increasing terrestrial carbon inputs on internal carbon cycling processes. Pelagic diel (24-hour) oxygen curves and independent fluorometric approaches of individual primary producers together indicated that the presence of a submerged macrophyte community facilitated higher annual rates of gross primary production than could be supported in a phytoplankton-dominated lake at similar nutrient concentrations. A simple model constructed from the empirical data suggested that this difference between regime types could be common in moderately eutrophic lakes with mean depths under three to four meters, where benthic primary production is a potentially major contributor to the whole-lake primary production. It thus appears likely that a regime shift from macrophyte to phytoplankton dominance in shallow lakes would typically decrease the quantity of autochthonous organic carbon available to lake food webs. 
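The diel (24-hour) oxygen approach mentioned above estimates whole-lake metabolism from the daily rise and fall of dissolved oxygen. The sketch below shows the basic Odum-style bookkeeping; function name, inputs and the omission of air-water gas exchange are our simplifications, not the thesis' actual analysis:

```python
def diel_metabolism(do_mg_l, is_day, z_mix=1.0):
    """Free-water diel oxygen method, simplified.

    do_mg_l : hourly dissolved-oxygen readings (mg/L), one more
              reading than there are hourly intervals
    is_day  : one boolean per hourly interval, True during daylight
    z_mix   : mixed-layer depth (m), converting mg/L to g O2 per m^2

    Night-time oxygen decline gives the hourly respiration rate R
    (assumed constant over 24 h); daytime net change plus daytime R
    gives gross primary production. Air-water gas exchange is
    ignored here for brevity; a real analysis corrects each
    interval for it.
    """
    changes = [(b - a) * z_mix for a, b in zip(do_mg_l, do_mg_l[1:])]
    night = [dc for dc, day in zip(changes, is_day) if not day]
    r_hourly = -sum(night) / len(night)
    day_changes = [dc for dc, day in zip(changes, is_day) if day]
    gpp = sum(day_changes) + r_hourly * len(day_changes)
    return gpp, r_hourly * 24        # GPP and daily R, g O2 m^-2 d^-1
```

Comparing such daily GPP estimates between the clear, macrophyte-dominated lake and the turbid, phytoplankton-dominated lake is what underlies the annual production comparison in this thesis.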
Sediment core analyses indicated that a regime shift from macrophyte to phytoplankton dominance was associated with a four-fold increase in carbon burial rates, signalling a major change in lake carbon cycling dynamics. Carbon mass balances suggested that increasing carbon burial rates were not due to an increase in primary production or allochthonous loading, but instead were due to a higher carbon burial efficiency (carbon burial / carbon deposition). This, in turn, was associated with diminished benthic mineralization rates and an increase in calcite precipitation, together resulting in lower surface carbon dioxide emissions. Finally, a period of unusually high precipitation led to rising water levels, resulting in a feedback loop linking increasing concentrations of dissolved organic carbon (DOC) to severely anoxic conditions in the phytoplankton-dominated system. High water levels and DOC concentrations diminished benthic primary production (via shading) and boosted pelagic respiration rates, diminishing the hypolimnetic oxygen supply. The resulting anoxia created redox conditions which led to a major release of nutrients, DOC, and iron from the sediments. This further transformed the lake metabolism, providing a prolonged summertime anoxia below a water depth of 1 m, and leading to the near-complete loss of fish and macroinvertebrates. Pelagic pH levels also decreased significantly, increasing surface carbon dioxide emissions by an order of magnitude compared to previous years. Altogether, this thesis adds an important body of knowledge to our understanding of the significance of the benthic zone to carbon cycling in shallow lakes. The contribution of the benthic zone towards whole-lake primary production was quantified, and was identified as an important but vulnerable site for primary production. 
Benthic mineralization rates were furthermore found to influence carbon burial and surface emission rates, and benthic primary productivity played an important role in determining hypolimnetic oxygen availability, thus controlling the internal sediment loading of nutrients and carbon. This thesis also uniquely demonstrates that the ecological community structure (i.e. stable regime) of a eutrophic, shallow lake can significantly influence carbon availability and processing. By changing carbon cycling pathways, regime shifts in shallow lakes may significantly alter the role of these ecosystems with respect to the global carbon cycle.
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user. Data privacy continues to be a very important topic, as our dependency on electronic communication maintains its current growth, and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increase the likelihood of data misuse. Early privacy protection techniques, such as anonymous email and payment systems, focused on data avoidance and the anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over the further distribution and use of personal private data. Previous efforts to integrate privacy awareness into data processing workflows have focused on extending existing access control frameworks with privacy-aware functions or have analysed specific individual problems, such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist that can be studied to prove their effectiveness for privacy protection. Second-level issues that stem from the practical application of the implemented mechanisms, such as usability, life-time data management and changes in trustworthiness, have received very little attention so far, mainly because they require actual implementations to be studied. Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released.
Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy. We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC was proposed by McCollum et al. as an alternative access control mechanism that leaves the authority over access decisions with the originator of the data. The data owner is given control over the release policy for his or her personal data, and he or she can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow deterministic policy evaluation by different entities. The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont et al. and others. We developed a unique policy combination approach that takes usability aspects for the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: a Default Policy provides basic privacy protection if no specific rules have been entered by the data owner; an Owner Policy allows the customisation of the default policy by the data owner; and a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies which, for example, exclude him or her from further access to the private data. The combined evaluation of these three policy parts yields the necessary access decision. The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation.
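The three-part policy combination can be sketched as a simple precedence chain. The rule representation below (dictionaries keyed by subject/action pairs) is our illustrative simplification; the thesis defines its own policy language and evaluation semantics:

```python
def evaluate(request, default_policy, owner_policy, safety_policy):
    """Combine Safety, Owner and Default policies into one decision.

    Safety rules win (they stop the owner from locking him- or
    herself out), owner rules override the defaults, and the
    Default Policy answers when nothing more specific matches.
    Each policy maps a (subject, action) pair to "permit" or
    "deny"; an absent key means "no rule for this request".
    """
    for policy in (safety_policy, owner_policy, default_policy):
        decision = policy.get(request)
        if decision is not None:
            return decision
    return "deny"    # deny-by-default when no policy part matches
```

Note how an accidental self-lockout in the Owner Policy is neutralised by the Safety Policy, which is exactly the guarantee the three-part design is meant to provide.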
We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework: privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager. When we later extended our work to implement server-side protection mechanisms, we found several drawbacks to privacy enforcement through the Java Security Framework. We solved these problems by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications, providing a way to enforce data-owner-defined privacy policies for business applications.
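The interception idea behind the reference monitor can be illustrated language-agnostically. The thesis realises it with the Java Security Manager and AOP advice; the Python decorator below only sketches the pattern of a sticky policy travelling with the data and being checked on every access (all names are our illustrative choices):

```python
class PolicyViolation(Exception):
    """Raised when a call is blocked by the sticky policy."""

def monitored(action):
    """Reference-monitor sketch: intercept calls that access
    protected data and check them against the policy attached to
    the object, in the spirit of AOP advice around data accesses."""
    def wrap(func):
        def guard(protected_obj, *args, **kwargs):
            if not protected_obj.policy.get(action, False):
                raise PolicyViolation(f"{action} denied by sticky policy")
            return func(protected_obj, *args, **kwargs)
        return guard
    return wrap

class ProtectedDataObject:
    """Data bundled with its owner-defined policy (Sticky Policy idea)."""
    def __init__(self, data, policy):
        self.data = data
        self.policy = policy        # e.g. {"read": True, "forward": False}

    @monitored("read")
    def read(self):
        return self.data

    @monitored("forward")
    def forward(self, recipient):
        return f"sent to {recipient}"
```

Because the check sits in the interception layer rather than in the business code, existing applications need not be rewritten to respect the owner's policy, which is the same motivation given for the AOP-based design.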
The correction of software failures tends to be very cost-intensive because debugging them is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: starting with a test case that reproduces the observable failure, they have to follow the failure causes along the infection chain back to the root cause (defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition, and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures.
Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers’ needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
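The anomaly analysis over passing and failing test cases can be illustrated with a spectrum-based suspiciousness ranking. The Ochiai metric used below is a well-known choice from the fault-localization literature; the thesis' own anomaly measure may differ, so treat this as an illustration of the principle, not the Path Tools implementation:

```python
from math import sqrt

def suspiciousness(coverage, outcomes):
    """Rank code units by how strongly they correlate with failures.

    coverage : dict mapping unit -> set of tests that execute it
    outcomes : dict mapping test -> True if it passed, False if failed
    Returns units sorted from most to least suspicious (Ochiai).
    """
    failed = {t for t, ok in outcomes.items() if not ok}
    scores = {}
    for unit, tests in coverage.items():
        ef = len(tests & failed)        # failing tests touching the unit
        ep = len(tests) - ef            # passing tests touching the unit
        denom = sqrt(len(failed) * (ef + ep))
        scores[unit] = ef / denom if denom else 0.0
    return sorted(scores, key=scores.get, reverse=True)
```

A unit executed by the failing test but by few passing tests ranks highest, which is precisely the kind of anomaly a breadth-first fault search would visit first.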
Background: Increased numbers of intestinal E. coli are observed in inflammatory bowel disease, but the reasons for this proliferation and its exact role in intestinal inflammation are unknown. The aim of this PhD project was to identify E. coli proteins involved in E. coli's adaptation to the inflammatory conditions in the gut and to investigate whether these factors affect the host. Furthermore, the molecular basis for strain-specific differences between probiotic and harmful E. coli in their response to intestinal inflammation was investigated. Methods: Using mice monoassociated either with the adherent-invasive E. coli (AIEC) strain UNC or the probiotic E. coli Nissle, two different mouse models of intestinal inflammation were analysed: on the one hand, severe inflammation was induced by treating mice with 3.5% dextran sodium sulphate (DSS); on the other hand, a very mild intestinal inflammation was generated by associating interleukin-10-deficient (IL-10-/-) mice with E. coli. Differentially expressed proteins in the E. coli strains collected from caecal contents of these mice were identified by two-dimensional fluorescence difference gel electrophoresis. Results of the DSS experiment: All DSS-treated mice revealed signs of a moderate caecal and a severe colonic inflammation. However, mice monoassociated with E. coli Nissle were less affected. In both E. coli strains, acute inflammation led to a downregulation of pathways involved in carbohydrate breakdown and energy generation. Accordingly, DSS-treated mice had lower caecal concentrations of bacterial fermentation products than the control mice. Differentially expressed proteins also included the Fe-S cluster repair protein NfuA, the tryptophanase TnaA, and the uncharacterised protein YggE. NfuA was upregulated nearly 3-fold in both E. coli strains after DSS administration. Reactive oxygen species produced during intestinal inflammation damage Fe-S clusters and thereby lead to an inactivation of Fe-S proteins.
In vitro data indicated that the repair of Fe-S proteins by NfuA is a central mechanism by which E. coli survives oxidative stress. Expression of YggE, which has been reported to reduce the intracellular level of reactive oxygen species, was 4- to 8-fold higher in E. coli Nissle than in E. coli UNC under control and inflammatory conditions. In vitro growth experiments confirmed these results, indicating that E. coli Nissle is better equipped to cope with oxidative stress than E. coli UNC. Additionally, E. coli Nissle isolated from DSS-treated and control mice had TnaA levels 4- to 7-fold higher than E. coli UNC. In turn, caecal indole concentrations resulting from the cleavage of tryptophan by TnaA were higher in E. coli Nissle-associated control mice than in the respective mice associated with E. coli UNC. Because of its anti-inflammatory effect, indole is hypothesised to be involved in the extension of the remission phase in ulcerative colitis described for E. coli Nissle. Results of the IL-10-/- experiment: Only IL-10-/- mice monoassociated with E. coli UNC for 8 weeks exhibited signs of a very mild caecal inflammation. In agreement with this weak inflammation, the variations in the bacterial proteome were small. As in the DSS experiment, the proteins downregulated by inflammation belong mainly to the central energy metabolism. In contrast to the DSS experiment, no upregulation of chaperone proteins or NfuA was observed, indicating that these are strategies to overcome the adverse effects of strong intestinal inflammation. The inhibitor of vertebrate C-type lysozyme, Ivy, was 2- to 3-fold upregulated at the mRNA and protein levels in E. coli Nissle in comparison to E. coli UNC isolated from IL-10-/- mice. By overexpressing ivy, it was demonstrated in vitro that Ivy contributes to the higher lysozyme resistance observed for E. coli Nissle, supporting the role of Ivy as a potential fitness factor in this E. coli strain.
Conclusions: The results of this PhD-study demonstrate that intestinal bacteria sense even minimal changes in the health status of the host. While some bacterial adaptations to the inflammatory conditions are equal in response to strong and mild intestinal inflammation, other reactions are unique to a specific disease state. In addition, probiotic and colitogenic E. coli differ in their response to the intestinal inflammation and thereby may influence the host in different ways.
Agriculture is one of the most important human activities, providing food and other agricultural goods for seven billion people around the world, and is of special importance in sub-Saharan Africa. The majority of people there depend on the agricultural sector for their livelihoods and will suffer from negative climate change impacts on agriculture towards the middle and end of the 21st century, even more so if weak governments, economic crises or violent conflicts endanger the countries' food security. The impact of temperature increases and changing precipitation patterns on agricultural vegetation motivated this thesis in the first place. Analyzing the potential for reducing negative climate change impacts by adapting crop management to changing climate is a second objective of the thesis. As a precondition for simulating climate change impacts on agricultural crops with a global crop model, the timing of sowing in the tropics was first improved and validated, as this is an important factor determining the length and timing of the crops' development phases, the occurrence of water stress and final crop yield. Crop yields are projected to decline in most regions, as is evident from the results of this thesis, but the uncertainties that exist in climate projections and in the efficiency of adaptation options because of political, economic or institutional obstacles have to be considered. The effects of temperature increases and of changing precipitation patterns on crop yields can be analyzed separately and vary in space across the continent. Southern Africa is clearly the region most susceptible to climate change, especially to precipitation changes.
The Sahel north of 13° N and parts of Eastern Africa with short growing seasons below 120 days and limited wet-season precipitation of less than 500 mm are also vulnerable to precipitation changes, while in most other parts of East and Central Africa, in contrast, the effect of temperature increase on crops outweighs the precipitation effect and is most pronounced in a band stretching from Angola to Ethiopia in the 2060s. The results of this thesis confirm the findings of previous studies on the magnitude of climate change impacts on crops in sub-Saharan Africa but, beyond that, help to understand the drivers of these changes and the potential of certain management strategies for adaptation in more detail. Crop yield changes depend on the initial growing conditions, on the magnitude of climate change, and on the crop, the cropping system and the adaptive capacity of African farmers, which only now becomes evident from this comprehensive study for sub-Saharan Africa. Furthermore, this study improves the representation of tropical cropping systems in a global crop model and considers the major food crops cultivated in sub-Saharan Africa and climate change impacts throughout the continent.
Cellulose is the most abundant biopolymer on earth. In this work it has been used, in various forms ranging from wood to fully processed laboratory-grade microcrystalline cellulose, to synthesise a variety of metal and metal carbide nanoparticles and to establish structuring and patterning methodologies that produce highly functional nano-hybrids. To achieve this, the mechanisms governing the catalytic processes that bring about graphitised carbons in the presence of iron were investigated. It was found that, when infusing cellulose with an aqueous iron salt solution and heating this mixture under an inert atmosphere to 640 °C and above, a liquid eutectic mixture of iron and carbon with an atom ratio of approximately 1:1 forms. The eutectic droplets were monitored by in-situ TEM at the reaction temperature, where they could be seen dissolving amorphous carbon and leaving behind a trail of graphitised carbon sheets and subsequently iron carbide nanoparticles. These transformations turned ordinary cellulose into a conductive and porous matrix that is well suited for catalytic applications. Despite these significant changes on the nanometre scale, the shape of the matrix as a whole was retained with remarkable precision. This was exemplified by folding a sheet of cellulose paper into origami cranes and converting them via the temperature treatment into magnetic facsimiles of those cranes. The study showed that the catalytic mechanisms derived from controlled systems and described in the literature can be transferred to synthetic concepts beyond the lab without loss of generality. Once the processes determining the transformation of cellulose into functional materials were understood, the concept could be extended to other metals and metal combinations. Firstly, the procedure was utilised to produce different ternary iron carbides of the form MxFeyC (M = W, Mn), none of which had thus far been produced in nanoparticle form.
The next part of this work encompassed combinations of iron with cobalt, nickel, palladium and copper. Each of those metals was also probed alone in combination with cellulose. This produced elemental metal and metal alloy particles of low polydispersity and high stability, features that are typically not associated with high-temperature syntheses and that make it possible to combine good size control with a scalable process. Each of the probed reactions resulted in phase-pure, single-crystalline, stable materials. After showing that cellulose is a good stabilising and separating agent for all the investigated types of nanoparticles, the focus of the work at hand shifts towards probing the limits of the structuring and patterning capabilities of cellulose. Moreover, possible post-processing techniques to further broaden the applicability of the materials are evaluated. This showed that, by choosing an appropriate paper, products ranging from stiff, self-sustaining monoliths to ultra-thin and very flexible cloths can be obtained after high-temperature treatment. Furthermore, cellulose has been demonstrated to be a very good substrate for many structuring and patterning techniques, from origami folding to ink-jet printing. The resulting products have been employed as electrodes, which was exemplified by electrodepositing copper onto them. Via ink-jet printing they have additionally been patterned, and the resulting electrodes have also been post-functionalised by electrodeposition of copper onto the graphitised (printed) parts of the samples. Lastly, a preliminary test successfully demonstrated the possibility of printing several metals simultaneously and thereby producing finely tuneable gradients from one metal to another. Starting from these concepts, future experiments were outlined.
The last chapter of this thesis concerned itself with alternative synthesis methods for the iron-carbon composite, thereby testing the robustness of the developed reactions. By performing the synthesis with partly dissolved scrap metal and pieces of raw, dry wood, some progress towards further use of the general synthesis technique was made. For example, by using wood instead of processed cellulose, all the established shaping techniques available for wooden objects, such as CNC milling or 3D prototyping, become accessible to the synthesis path. Also, wood's intrinsic, well-defined porosity and the fact that large monoliths are obtained help expand the prospects of using the composite. It was also demonstrated in this chapter that the resulting material can be applied to the environmentally important issue of waste-water cleansing. In addition to being made from renewable resources by a cheap and easy one-pot synthesis, the material is recyclable, since the pollutants can be recovered by washing with ethanol. Most importantly, this chapter covered experiments where the reaction was performed in a crude, home-built glass vessel, fuelled, with the help of a Fresnel lens, only by directly concentrated sunlight irradiation. This concept carries the thus far presented synthetic procedures from common laboratory syntheses towards real-world application. Based on cellulose, transition metals and simple equipment, this work enabled the easy one-pot synthesis of nano-ceramic and metal nanoparticle composites otherwise not readily accessible. Furthermore, structuring and patterning techniques, as well as synthesis routes involving only renewable resources and environmentally benign procedures, were established here. Thereby it has laid the foundation for a multitude of applications and pointed towards several future projects, ranging from fundamental research to application-focused research; even an industry-relevant engineering project was envisioned.