The study of outcrop modeling is located at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. In recent years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution along a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. In the present study, three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), were evaluated for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using one single algorithm. The operating mode of each algorithm involves capabilities as well as drawbacks and cannot match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analyses of the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, yield similar results. At the basin scale (x 1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. In contrast, at the bedding scale (x 0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase of spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. Scale-dependent types of carbonate heterogeneity are linked with the evaluation of algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of one single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks.
The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the computing flexibility essential to model carbonate systems. Closer feedback between advances in Sedimentology and Computing Geoscience should be promoted during future outcrop simulations to enhance 3-D geological models.
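To make the truncation idea behind TGSim concrete, the following minimal Python sketch (not the thesis workflow; grid size, correlation length and facies proportions are illustrative assumptions) thresholds a spatially correlated Gaussian field at quantiles derived from target facies proportions, which is what produces the ordered facies transitions mentioned above.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

rng = np.random.default_rng(42)
ny, nx = 100, 200                                    # illustrative grid (assumption)
field = gaussian_filter(rng.standard_normal((ny, nx)), sigma=8)  # correlated Gaussian field
field = (field - field.mean()) / field.std()         # re-standardise after smoothing

proportions = [0.3, 0.5, 0.2]                        # assumed target facies proportions
thresholds = norm.ppf(np.cumsum(proportions)[:-1])   # Gaussian truncation thresholds
facies = np.digitize(field, thresholds)              # 0, 1, 2 = ordered facies codes
print(np.bincount(facies.ravel()) / facies.size)     # approximately the target proportions

Because the facies codes arise from thresholding one continuous field, facies 0 and 2 can never be in direct contact; this ordered-transition behaviour is what makes truncation-based simulation suitable for facies associations but not for mosaic-like lithofacies patterns.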
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
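The compensation idea can be illustrated with a small, hypothetical sketch: if calibration data record the offset between touch position and intended target per user and finger pose, a simple correction model subtracts the learned group-wise mean offset. This is only a schematic illustration of offset compensation, not the model developed in the dissertation; the data layout and column names are assumptions.

import numpy as np
import pandas as pd

def fit_offsets(calib: pd.DataFrame) -> pd.DataFrame:
    """calib columns (assumed): user, pose, dx, dy, where (dx, dy) is the
    recorded touch position minus the intended target position."""
    return calib.groupby(["user", "pose"])[["dx", "dy"]].mean()

def correct_touch(touch_xy, user, pose, offsets: pd.DataFrame):
    """Subtract the systematic offset learned for this user and finger pose."""
    return np.asarray(touch_xy, dtype=float) - offsets.loc[(user, pose)].values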
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. Both are closely related and affect each other from small to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time it is difficult to assess the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil. This is a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study in which we used imaging data, aiming to simulate biodegradation under anaerobic, nitrate-reducing conditions.
Adaptation of nature conservation to global change: an ecosystem-based approach to priority-setting
(2013)
Water management and environmental protection are vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers. They are mainly headwater catchments with smaller groundwater storage. Local characteristics are more important at this scale and can increase vulnerability. This thesis mutually evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) Which are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increase of groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge, (ii) increase of water storage with regulated outflow by reservoirs, lakes and wetland water management, and (iii) measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer), for which regional low flow patterns have to be considered during planning. The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff. As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s depending on the scenario. This demonstrates that numerous such small decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to daily discharge of 37 catchments between 1991 and 2006. Flows decreased more in Southeast Brandenburg, in accordance with the meteorological forcing. Low flow risk was highest in a region east of Berlin because of the intersection of a more continental climate and the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r²=0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and the last year increased model performance (r²=0.49, including all four predictors).
Model performance was higher for catchments with low yield and more damped runoff. In catchments with a high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk as well as catchments with a considerable decrease in flows in southeast Brandenburg have the highest demand for adaptation. Measures increasing groundwater recharge are to be preferred. Catchments with a high low flow risk showed relatively deep and decreasing groundwater heads, allowing increased groundwater recharge at recharge areas with higher altitude away from the streams. Low flows are expected to stay low or decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered for the management and planning of measures whose purpose is not only to sustain minimum runoff.
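The predictor-selection step described above can be sketched as follows; this is a hedged illustration with scikit-learn, not the thesis implementation, and the candidate predictor names (e.g. pet_sum_48m) are placeholders chosen to mirror the abstract.

import pandas as pd
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def forward_select(df: pd.DataFrame, target: str, candidates: list, k: int = 4):
    """Greedy forward selection: iteratively add the predictor that most improves
    the cross-validated r² of a non-linear (RBF) support vector regression."""
    selected, history = [], []
    for _ in range(k):
        best = None
        for cand in candidates:
            if cand in selected:
                continue
            model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
            r2 = cross_val_score(model, df[selected + [cand]], df[target],
                                 cv=5, scoring="r2").mean()
            if best is None or r2 > best[1]:
                best = (cand, r2)
        selected.append(best[0])
        history.append(best)
    return history   # list of (predictor, cumulative r²) pairs

# Hypothetical call with placeholder column names:
# forward_select(df, "min30_runoff",
#                ["pet_sum_48m", "pet_sum_3m", "precip_3m", "precip_prev_year"])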
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells), non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis inoculated Medicago truncatula roots and cortex cells of non-inoculated roots (cor) were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. The mycorrhizal- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. An AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) revealed a successful gene silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, the posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots. The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, holding the risk of underestimating the hazard, with disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic approaches, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis the Bayesian network approach is applied to different natural hazard domains (i.e. seismic hazard, flood damage and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes. The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typically arising problems are the handling of continuous variables, incomplete observations and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, thus allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, considering the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. Having to face incomplete observations on top, this may pose a computational burden. Iterative procedures for missing value estimation quickly become infeasible. A more efficient albeit approximate method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable.
The discretization learned for this variable does not always have the required resolution for a good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimations or mixtures of truncated exponential functions. All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments, but in a variety of domains struggling with uncertainties.
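As a purely illustrative sketch of the decomposition such learning builds on (not the extended MAP score with discretization developed in the thesis), the following Python snippet scores a fixed directed acyclic graph on discrete data by summing one Laplace-smoothed log-likelihood term per node given its parents; the example variable names are assumptions loosely inspired by the flood-damage domain.

import numpy as np
import pandas as pd

def family_loglik(df: pd.DataFrame, node: str, parents: list, alpha: float = 1.0) -> float:
    """Log-likelihood contribution of one node given its parents (Laplace smoothing alpha)."""
    if parents:
        table = pd.crosstab([df[p] for p in parents], df[node]) + alpha
    else:
        table = df[node].value_counts().to_frame().T + alpha
    probs = table.div(table.sum(axis=1), axis=0)     # P(node | parent configuration)
    counts = table - alpha                           # observed counts per configuration
    return float((counts * np.log(probs)).values.sum())

def network_loglik(df: pd.DataFrame, structure: dict) -> float:
    """Decomposed data log-likelihood of a DAG given as {node: [parents]}."""
    return sum(family_loglik(df, node, parents) for node, parents in structure.items())

# Hypothetical structure for a flood-damage-like dataset:
# structure = {"water_depth": [], "contamination": [],
#              "building_damage": ["water_depth", "contamination"]}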
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we will focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we will understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as at every point in time systems can join and leave the ensemble. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we will consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and basically relies on collaborations and roles as an abstraction layer above the components. This will allow us to describe SoS at an architectural level. We will also give a formal semantics for our modelling approach which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that will be able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e. adding and removing components and services, is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation and little anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation processes in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters impact the microorganisms and how those are adapted to the extreme conditions of the habitat. To this effect, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies of the microorganisms were studied by determining changes of the cell membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early step of soil formation near the ice tongue in comparison to closely located but rather older and more developed soil from the forefield. At the early step the succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and prevalently depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. The connection of soil characteristics with bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early step of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10^8 colony forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerating temperature fluctuations, and they are not specialised to a certain substrate; therefore they are well adapted to the cold and oligotrophic environment.
The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and PLFA analyses. Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activities were detected in the glacier forefield soils at low temperature (14°C), catalysing the conversion of various compounds and thereby providing necessary substrates; these enzymes may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain the cell membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells is sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed around 24 micrometers per second. We found that the distribution of turning angles is bimodal with a dominating peak around 180 degrees. In approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed by a factor of two on average. Based on the experimentally observed values of mean runtime and rotational diffusion, we presented a model to describe the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, by an extended version of the model, also the negative dip in the directional autocorrelation function as observed in the experiments. The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. As compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% as compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with the motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that a majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, where bacteria near a solid boundary were shown to swim on circular trajectories, an effect which can be attributed to a wall-induced torque.
The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on the cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments. Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the number growth of cells in the microchannel under nutrient-rich medium conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5:50 hours, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes in which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells to the swimming phase is most likely the result of an individual adaptation process, in which syntheses of proteins for flagellar motility are upregulated after a number of division cycles at the surface.
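A minimal simulation of a run-reverse random walker with alternating speeds, in the spirit of the model described above, could look as follows; the parameter values (time step, mean run time, the two speeds, reversal probability) are illustrative stand-ins, not the fitted values from the experiments.

import numpy as np

rng = np.random.default_rng(1)

def simulate(n_steps=20000, dt=0.01, mean_run=1.0, v_fast=32.0, v_slow=16.0, p_rev=0.6):
    """2D run-reverse walker: exponentially distributed run times; a reversal flips
    the direction by 180 degrees and toggles the speed between a fast and a slow mode."""
    pos = np.zeros((n_steps, 2))
    angle, speed = rng.uniform(0, 2 * np.pi), v_fast
    t, next_turn = 0.0, rng.exponential(mean_run)
    for i in range(1, n_steps):
        t += dt
        if t >= next_turn:
            if rng.random() < p_rev:                            # reversal event
                angle += np.pi
                speed = v_slow if speed == v_fast else v_fast   # alternate the speed
            else:                                               # ordinary reorientation
                angle = rng.uniform(0, 2 * np.pi)
            next_turn = t + rng.exponential(mean_run)
        pos[i] = pos[i - 1] + dt * speed * np.array([np.cos(angle), np.sin(angle)])
    return pos

def msd(pos, max_lag=2000):
    """Mean square displacement as a function of time lag."""
    return np.array([np.mean(np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag)])

traj = simulate()
print(msd(traj)[:3])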
In children, lifestyle, nutrition and recreation have changed in recent years and, as a consequence, body composition has shifted as well. It is established that overweight is a global problem. In addition, German children exhibit a less robust skeleton than ten years ago. These developments may elevate the risk of cardiovascular diseases and skeletal modifications. Heredity and environmental factors such as nutrition, socioeconomic status, physical activity and inactivity influence fat accumulation and the skeletal system. Based on these negative developments, the associations between type of body shape, skeletal measures and physical activity, the relations between external skeletal robustness, physical activity and inactivity, BMI and body fat, and also the development of body composition, especially external skeletal robustness, in Russian compared to German children were investigated. In a cross-sectional study, 691 German boys and girls aged 6 to 10 years were examined. Anthropometric measurements were taken and questionnaires about physical activity and inactivity were answered by parents. Additionally, pedometers were worn to determine the physical activity of the children. To compare the body composition of Russian and German children, data from the years 2000 and 2010 were used. The study has shown that pyknomorphic individuals exhibit the highest external skeletal robustness and leptomorphic ones the lowest. Leptomorphic children may have a higher risk for bone diseases in adulthood. Pyknomorphic boys tend to be more physically active. This is assessed as positive because pyknomorphic types display the highest BMI and body fat. Results showed that physical activity may reduce BMI and body fat. In contrast, physical inactivity may lead to an increase of BMI and body fat and may rise with increasing age. Physical activity additionally encourages a robust skeleton. Furthermore, external skeletal robustness is associated with BMI, so that BMI as a measure of overweight should be considered critically. The international 10-year comparison has shown an increase of BMI in Russian children and German boys. Currently, Russian children exhibit a higher external skeletal robustness than German children. However, in Russian boys the skeleton is less robust than ten years ago. This trend should be observed in the future in other countries as well. All in all, several measures should be used to describe the health situation in children and adults. Furthermore, in children it is essential to support physical activity in order to reduce the risk of obesity and to maintain a robust skeleton. In this way, diseases can be prevented in adulthood.
Lakes are increasingly being recognized as an important component of the global carbon cycle, yet anthropogenic activities that alter their community structure may change the way they transport and process carbon. This research focuses on the relationship between carbon cycling and community structure of primary producers in small, shallow lakes, which are the most abundant lake type in the world, and furthermore subject to intense terrestrial-aquatic coupling due to their high perimeter:area ratio. Shifts between macrophyte and phytoplankton dominance are widespread and common in shallow lakes, with potentially large consequences to regional carbon cycling. I thus compared a lake with clear-water conditions and a submerged macrophyte community to a turbid, phytoplankton-dominated lake, describing differences in the availability, processing, and export of organic and inorganic carbon. I furthermore examined the effects of increasing terrestrial carbon inputs on internal carbon cycling processes. Pelagic diel (24-hour) oxygen curves and independent fluorometric approaches of individual primary producers together indicated that the presence of a submerged macrophyte community facilitated higher annual rates of gross primary production than could be supported in a phytoplankton-dominated lake at similar nutrient concentrations. A simple model constructed from the empirical data suggested that this difference between regime types could be common in moderately eutrophic lakes with mean depths under three to four meters, where benthic primary production is a potentially major contributor to the whole-lake primary production. It thus appears likely that a regime shift from macrophyte to phytoplankton dominance in shallow lakes would typically decrease the quantity of autochthonous organic carbon available to lake food webs. Sediment core analyses indicated that a regime shift from macrophyte to phytoplankton dominance was associated with a four-fold increase in carbon burial rates, signalling a major change in lake carbon cycling dynamics. Carbon mass balances suggested that increasing carbon burial rates were not due to an increase in primary production or allochthonous loading, but instead were due to a higher carbon burial efficiency (carbon burial / carbon deposition). This, in turn, was associated with diminished benthic mineralization rates and an increase in calcite precipitation, together resulting in lower surface carbon dioxide emissions. Finally, a period of unusually high precipitation led to rising water levels, resulting in a feedback loop linking increasing concentrations of dissolved organic carbon (DOC) to severely anoxic conditions in the phytoplankton-dominated system. High water levels and DOC concentrations diminished benthic primary production (via shading) and boosted pelagic respiration rates, diminishing the hypolimnetic oxygen supply. The resulting anoxia created redox conditions which led to a major release of nutrients, DOC, and iron from the sediments. This further transformed the lake metabolism, providing a prolonged summertime anoxia below a water depth of 1 m, and leading to the near-complete loss of fish and macroinvertebrates. Pelagic pH levels also decreased significantly, increasing surface carbon dioxide emissions by an order of magnitude compared to previous years. Altogether, this thesis adds an important body of knowledge to our understanding of the significance of the benthic zone to carbon cycling in shallow lakes. 
The contribution of the benthic zone towards whole-lake primary production was quantified, and was identified as an important but vulnerable site for primary production. Benthic mineralization rates were furthermore found to influence carbon burial and surface emission rates, and benthic primary productivity played an important role in determining hypolimnetic oxygen availability, thus controlling the internal sediment loading of nutrients and carbon. This thesis also uniquely demonstrates that the ecological community structure (i.e. stable regime) of a eutrophic, shallow lake can significantly influence carbon availability and processing. By changing carbon cycling pathways, regime shifts in shallow lakes may significantly alter the role of these ecosystems with respect to the global carbon cycle.
Cellulose is the most abundant biopolymer on earth. In this work it has been used, in various forms ranging from wood to fully processed laboratory-grade microcrystalline cellulose, to synthesise a variety of metal and metal carbide nanoparticles and to establish structuring and patterning methodologies that produce highly functional nano-hybrids. To achieve this, the mechanisms governing the catalytic processes that bring about graphitised carbons in the presence of iron have been investigated. It was found that, when infusing cellulose with an aqueous iron salt solution and heating this mixture under inert atmosphere to 640 °C and above, a liquid eutectic mixture of iron and carbon with an atom ratio of approximately 1:1 forms. The eutectic droplets were monitored with in-situ TEM at the reaction temperature, where they could be seen dissolving amorphous carbon and leaving behind a trail of graphitised carbon sheets and subsequently iron carbide nanoparticles. These transformations turned ordinary cellulose into a conductive and porous matrix that is well suited for catalytic applications. Despite these significant changes on the nanometre scale, the shape of the matrix as a whole was retained with remarkable precision. This was exemplified by folding a sheet of cellulose paper into origami cranes and converting them via the temperature treatment into magnetic facsimiles of those cranes. The study showed that the catalytic mechanisms derived from controlled systems and described in the literature can be transferred to synthetic concepts beyond the lab without loss of generality. Once the processes determining the transformation of cellulose into functional materials were understood, the concept could be extended to other metals and metal combinations. Firstly, the procedure was utilised to produce different ternary iron carbides in the form of MxFeyC (M = W, Mn). None of those ternary carbides had thus far been produced in nanoparticle form. The next part of this work encompassed combinations of iron with cobalt, nickel, palladium and copper. All of those metals were also probed alone in combination with cellulose. This produced elemental metal and metal alloy particles of low polydispersity and high stability. Both features are typically not associated with high-temperature syntheses, and this makes it possible to combine good size control with a scalable process. Each of the probed reactions resulted in phase-pure, single-crystalline, stable materials. After showing that cellulose is a good stabilising and separating agent for all the investigated types of nanoparticles, the focus of the work at hand is shifted towards probing the limits of the structuring and patterning capabilities of cellulose. Moreover, possible post-processing techniques to further broaden the applicability of the materials are evaluated. This showed that, by choosing an appropriate paper, products ranging from stiff, self-sustaining monoliths to ultra-thin and very flexible cloths can be obtained after high-temperature treatment. Furthermore, cellulose has been demonstrated to be a very good substrate for many structuring and patterning techniques, from origami folding to ink-jet printing. The resulting products have been employed as electrodes, which was exemplified by electrodepositing copper onto them.
Via ink-jet printing they have additionally been patterned, and the resulting electrodes have also been post-functionalised by electro-deposition of copper onto the graphitised (printed) parts of the samples. Lastly, in a preliminary test, the possibility of printing several metals simultaneously and thereby producing finely tuneable gradients from one metal to another was successfully demonstrated. Starting from these concepts, future experiments were outlined. The last chapter of this thesis concerned itself with alternative synthesis methods for the iron-carbon composite, thereby testing the robustness of the developed reactions. By performing the synthesis with partly dissolved scrap metal and pieces of raw, dry wood, some progress towards further use of the general synthesis technique was made. For example, by using wood instead of processed cellulose, all the established shaping techniques available for wooden objects, such as CNC milling or 3D prototyping, become accessible for the synthesis path. Also, the intrinsic, well-defined porosity of wood and the fact that large monoliths are obtained help expand the prospects of using the composite. It was also demonstrated in this chapter that the resulting material can be applied to the environmentally important issue of waste water cleansing. In addition to being made from renewable resources and by a cheap and easy one-pot synthesis, the material is recyclable, since the pollutants can be recovered by washing with ethanol. Most importantly, this chapter covered experiments in which the reaction was performed in a crude, home-built glass vessel, fuelled only by direct concentrated sunlight irradiation with the help of a Fresnel lens. This concept carries the thus far presented synthetic procedures from being common laboratory syntheses to a real-world application. Based on cellulose, transition metals and simple equipment, this work enabled the easy one-pot synthesis of nano-ceramic and metal nanoparticle composites otherwise not readily accessible. Furthermore, structuring and patterning techniques and synthesis routes involving only renewable resources and environmentally benign procedures were established here. Thereby it has laid the foundation for a multitude of applications and pointed towards several future projects, ranging from fundamental research to application-focussed research; an industry-relevant engineering project was also envisioned.
Within a research project about future sustainable water management options in the Elbe River basin, quasi-natural discharge scenarios had to be provided. The semi-distributed eco-hydrological model SWIM was utilised for this task. According to scenario simulations driven by the stochastic climate model STAR, the region would get distinctly drier. However, this thesis focuses on the challenge of meeting the requirement of high model fidelity even for smaller sub-basins. Usually, the quality of the simulations is lower at inner points than at the outlet. Four research paper chapters and the discussion chapter deal with the reasons for local model deviations and the problem of optimal spatial calibration. Besides other assessments, the Markov Chain Monte Carlo method is applied to show whether evapotranspiration or precipitation should be corrected to minimise runoff deviations, principal component analysis is used in an unusual way to evaluate local precipitation alterations caused by land cover changes, and remotely sensed surface temperatures allow for an independent view on the evapotranspiration landscape. The overall insight is that spatially explicit hydrological modelling of such a large river basin requires a lot of local knowledge. Obtaining such knowledge probably needs more time than is usually available in hydrological modelling studies.
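The MCMC idea mentioned above can be illustrated with a toy random-walk Metropolis sampler over correction factors for precipitation and evapotranspiration in a one-equation water balance; all numbers, the priors and the toy model are placeholders and are not the actual SWIM set-up.

import numpy as np

rng = np.random.default_rng(0)
P, ET, Q_obs = 600.0, 480.0, 110.0           # mm/yr, purely illustrative values
sigma = 10.0                                  # assumed runoff observation error (mm/yr)

def log_post(f_p, f_et):
    """Gaussian misfit of a toy balance Q = f_p*P - f_et*ET, flat priors on [0.8, 1.2]."""
    if not (0.8 <= f_p <= 1.2 and 0.8 <= f_et <= 1.2):
        return -np.inf
    return -0.5 * ((f_p * P - f_et * ET - Q_obs) / sigma) ** 2

current, lp, samples = np.array([1.0, 1.0]), log_post(1.0, 1.0), []
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.02, size=2)    # random-walk proposal
    lp_new = log_post(*proposal)
    if np.log(rng.random()) < lp_new - lp:                # Metropolis acceptance rule
        current, lp = proposal, lp_new
    samples.append(current.copy())
f_p_chain, f_et_chain = np.array(samples)[5000:].T        # discard burn-in
print(f_p_chain.mean(), f_et_chain.mean())                # posterior mean correction factors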
Challenging Khmer citizenship : minorities, the state, and the international community in Cambodia
(2013)
The idea of a distinctly ‘liberal’ form of multiculturalism has emerged in the theory and practice of Western democracies and the international community has become actively engaged in its global dissemination via international norms and organizations. This thesis investigates the internationalization of minority rights, by exploring state-minority relations in Cambodia, in light of Will Kymlicka’s theory of multicultural citizenship. Based on extensive empirical research, the analysis explores the situation and aspirations of Cambodia’s ethnic Vietnamese, highland peoples, Muslim Cham, ethnic Chinese and Lao and the relationships between these groups and the state. All Cambodian regimes since independence have defined citizenship with reference to the ethnicity of the Khmer majority and have - often violently - enforced this conception through the assimilation of highland peoples and the Cham and the exclusion of ethnic Vietnamese and Chinese. Cambodia’s current constitution, too, defines citizenship ethnically. State-sponsored Khmerization systematically privileges members of the majority culture and marginalizes minority members politically, economically and socially. The thesis investigates various international initiatives aimed at promoting application of minority rights norms in Cambodia. It demonstrates that these initiatives have largely failed to accomplish a greater degree of compliance with international norms in practice. This failure can be explained by a number of factors, among them Cambodia’s neo-patrimonial political system, the geo-political fears of a ‘minoritized’ Khmer majority, the absence of effective regional security institutions, the lack of minority access to political decision-making, the significant differences between international and Cambodian conceptions of modern statehood and citizenship and the emergence of China as Cambodia’s most important bilateral donor and investor. Based on this analysis, the dissertation develops recommendations for a sequenced approach to minority rights promotion, with pragmatic, less ambitious shorter-term measures that work progressively towards achievement of international norms in the longer-term.
The adaptation of sectors to changed climatic conditions requires an understanding of regional vulnerabilities. Vulnerability is defined as a function of sensitivity and exposure, which represent the potential impacts of climate change, and the adaptive capacity of systems. Vulnerability studies that quantify these components have become an important tool in climate science. However, from a scientific perspective there is disagreement about how this definition should be implemented in studies. This conflict gives rise to many challenges, above all regarding the quantification and aggregation of the individual components and their appropriate levels of complexity. The present dissertation therefore aims to advance the applicability of the vulnerability concept by translating it into a systematic structure. This structure includes all components and proposes, for each climate impact (e.g. flash floods), a description of the vulnerable system (e.g. settlements) that is directly linked to a particular direction of a relevant climatic stimulus (e.g. stronger impacts with an increase in heavy rainfall days). Regarding the challenging procedure of aggregation, two alternative methods that enable a cross-sectoral overview are presented and their advantages and disadvantages discussed. Subsequently, the developed structure of a vulnerability study is applied, using an indicator-based and deductive approach, exemplarily to municipalities in North Rhine-Westphalia, Germany. A transfer to other regions is nevertheless possible. The quantification for the municipalities relies on information from the literature. Since no suitable indicators were available for many sectors, new indicators are developed and applied in this work, for example for the forestry and health sectors. However, missing empirical data on relevant thresholds constitute a gap, for example regarding which magnitude of climate change causes a significant impact. As a result, the study can only make relative statements about the degree of vulnerability of each municipality compared to the rest of the federal state. To fill this gap, the present and future windthrow hazard of forests is calculated exemplarily for the forestry sector. For this purpose, the properties of the forests are linked to empirical damage data from a past storm event. The resulting sensitivity value is then combined with the wind conditions. Cross-sectoral vulnerability studies require considerable resources, which often hampers their applicability. In a next step, the potential for simplifying the complexity is therefore examined using two sectoral examples. Numerous meteorological indices of widely differing complexity are available for predicting the occurrence of forest fires. With respect to the number of monthly forest fires, relative humidity shows better predictive power than more complex indices for most German federal states, even though it is itself used as an input variable for the more complex indices.
With the help of this single meteorological factor, the forest fire hazard in German regions can thus be expressed with sufficient accuracy, which increases the resource efficiency of studies. The complexity of methods is examined in a similar way with respect to the application of the eco-hydrological model SWIM to the Brandenburg region. The interannual soil water values simulated by this model can only insufficiently be reproduced by a simpler statistical model based on the same input data. Over a time horizon of decades, however, the statistical approach can reproduce soil water satisfactorily and shows a dominance of the soil property field capacity. This suggests that the complexity, in terms of the number of input variables, can be reduced for long-term calculations. However, these statements are limited by the lack of observed soil water values for validation. The present studies on vulnerability and its components have shown that their application is still scientifically challenging. Following the vulnerability definition used here, numerous problems arise during implementation in regional studies. This dissertation has made progress with respect to the identified gaps of previous studies by developing a systematic structure for the description and aggregation of vulnerability components. Several approaches were discussed for this purpose, each with advantages and disadvantages, which should therefore also be weighed carefully before being applied in future studies. Moreover, it has been shown that there is potential to simplify some approaches, but further investigations are needed for this. Overall, the dissertation was able to strengthen the application of vulnerability studies as a tool to support adaptation measures.
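For illustration only, an indicator-based vulnerability index of the kind discussed above might be assembled as in the following sketch; the indicator names, the min-max normalisation and the simple impact-minus-capacity combination are assumptions, not the aggregation methods developed in the dissertation.

import pandas as pd

def normalise(s: pd.Series) -> pd.Series:
    """Min-max normalisation to [0, 1] across all municipalities."""
    return (s - s.min()) / (s.max() - s.min())

def vulnerability(df: pd.DataFrame) -> pd.Series:
    exposure = normalise(df["heavy_rain_days"])           # climatic stimulus (assumed column)
    sensitivity = normalise(df["settlement_area_share"])  # property of the vulnerable system
    capacity = normalise(df["adaptation_budget"])         # adaptive capacity proxy
    potential_impact = exposure * sensitivity
    return (potential_impact - capacity).rank(pct=True)   # relative ranking only, as in the text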
Black shales are sedimentary rocks with a high content of organic carbon, which leads to a dark grayish to black color. Due to their potential to contain oil or gas, black shales are of great interest for supporting the worldwide energy supply. An integrated seismic investigation of the Lower Palaeozoic black shales was carried out on the Danish island of Bornholm to locate the shallow-lying Alum Shale layer and its surrounding formations and to characterize its potential as a source rock. To this end, two seismic experiments along a total of three crossing profiles were carried out in October 2010 and in June 2012 in the southern part of the island. Two different active measurements were conducted with either a weight drop source or a minivibrator. Additionally, the ambient noise field was recorded at the study location over a time interval of about one day, and a laboratory analysis of borehole samples was carried out. The seismic profiles were positioned as close as possible to two scientific boreholes, which were used for comparative purposes. The seismic field data were analyzed with traveltime tomography, surface wave inversion and seismic interferometry to obtain the P-wave and S-wave velocity models of the subsurface. The P-wave velocity models, which were determined for all three profiles, clearly locate the Alum Shale layer between the Komstad Limestone layer on top and the Læså Sandstone Formation at the base of the models. The black shale layer has P-wave velocities around 3 km/s, which are lower than those of the adjacent formations. The very good agreement between the sonic log and the vertical velocity profiles of the two seismic lines that directly cross the borehole where the sonic log was conducted demonstrates the reliability of the traveltime tomography. A correlation of the seismic velocities with the content of organic carbon is an important task for the characterization of the reservoir properties of a black shale formation. It is not possible without calibration, but in combination with a full 2D tomographic image of the subsurface it gives the subsurface distribution of the organic material. The S-wave model obtained with surface wave inversion of the vibroseis data of one of the profiles also images the Alum Shale layer very well, with S-wave velocities around 2 km/s. Although individual 1D velocity models for each of the source positions were determined, the subsurface S-wave velocity distribution is very uniform, with a good match between the single models. A novel approach described here is the application of seismic interferometry to a very small study area and a rather short time interval. Also new is the selective procedure of only using time windows with the best crosscorrelation signals to achieve the final interferograms. Due to the small scale of the interferometry, even P-wave signals can be observed in the final crosscorrelations. In the laboratory measurements, the seismic body waves were recorded for different pressure and temperature stages. For this purpose, samples from different depths of the Alum Shale were available from one of the scientific boreholes at the study location. The measured velocities vary strongly with changing pressure or temperature. Recordings with wave propagation both parallel and perpendicular to the bedding of the samples reveal a great amount of anisotropy for the P-wave velocity, whereas the S-wave velocity is almost independent of the wave direction.
The calculated velocity ratio is also highly anisotropic with very low values for the perpendicular samples and very high values for the parallel ones. Interestingly, the laboratory velocities of the perpendicular samples are comparable to the velocities of the field experiments indicating that the field measurements are sensitive to wave propagation in vertical direction. The velocity ratio is also calculated with the P-wave and S-wave velocity models of the field experiments. Again, the Alum Shale can be clearly separated from the adjacent formations because it shows overall very low vP/vS ratios around 1.4. The very low velocity ratio indicates the content of gas in the black shale formation. With the combination of all the different methods described here, a comprehensive interpretation of the seismic response of the black shale layer can be made and the hydrocarbon source rock potential can be estimated.
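For an isotropic elastic medium, the velocity ratio maps directly onto Poisson's ratio; the standard elasticity relation (quoted here as general background, not a result of this thesis) makes clear why a ratio near 1.4 is read as a gas indicator:

\[
\frac{v_P}{v_S} = \sqrt{\frac{2(1-\nu)}{1-2\nu}}
\qquad\Longleftrightarrow\qquad
\nu = \frac{(v_P/v_S)^2 - 2}{2\left[(v_P/v_S)^2 - 1\right]}
\]

A value of v_P/v_S ≈ 1.4 corresponds to ν ≈ 0, well below the ν ≈ 0.25 (v_P/v_S ≈ 1.73) typical of water-saturated sediments, which is why such low velocity ratios are commonly interpreted as an indication of gas-filled pore space.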
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the information of a context in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item. The lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic video exploration.
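A minimal sketch of the confidence-ordered disambiguation idea described above, assuming hypothetical helper functions candidates_for (candidate entity lookup) and relatedness (scoring a candidate against the current context); it illustrates the processing order only and is not the thesis implementation:

```python
# Sketch: disambiguate video metadata items in descending order of confidence;
# entities resolved earlier serve as context reference points for later items.

def disambiguate_metadata(items, candidates_for, relatedness):
    """items: list of (text, confidence) pairs.
    candidates_for(text) -> list of candidate knowledge-base entities.
    relatedness(entity, context) -> score from descriptions and semantic relations."""
    context = []          # entities resolved so far
    annotations = {}
    for text, confidence in sorted(items, key=lambda item: item[1], reverse=True):
        candidates = candidates_for(text)
        if not candidates:
            continue
        best = max(candidates, key=lambda entity: relatedness(entity, context))
        annotations[text] = best
        context.append(best)    # reference point for subsequent items
    return annotations
```

In the same spirit, and following the second hypothesis, only metadata items falling within the same content-based segment would contribute to one another's context.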
African states are often called corrupt, indicating that the political system in Africa differs from the one prevalent in the economically advanced democracies. This, however, does not give us any insight into what makes corruption the ruling norm of African statehood. We must therefore turn to the largely neglected theoretical work on the political economy of Africa in order to determine how the poverty of governance in Africa is firmly anchored both in Africa’s domestic socioeconomic reality and in the region’s role in the international economic order. Instead of focusing on increased monitoring, enforcement and formal democratic procedures, this book integrates economic analysis with political theory in order to arrive at a better understanding of the political-economic roots of corruption in Sub-Saharan Africa.
This cumulative dissertation explored the detection of the natural background of fast neutrons, the so-called cosmic-ray neutron sensing (CRS) approach, to measure field-scale soil moisture in cropped fields. Primary cosmic rays penetrate the upper atmosphere and interact with atmospheric particles. This interaction results in a cascade of high-energy neutrons, which continue traveling through the atmospheric column. Finally, neutrons penetrate the soil surface and a second cascade is produced, consisting of the so-called secondary cosmic-ray neutrons (fast neutrons). Fast neutrons are partly absorbed by hydrogen (soil moisture). The remaining neutrons scatter back to the atmosphere, where their flux is inversely correlated with the soil moisture content, thus allowing a non-invasive, indirect measurement of soil moisture. The CRS methodology is mainly evaluated based on a field study carried out on a farmland in Potsdam (Brandenburg, Germany) over three crop seasons with corn, sunflower and winter rye, a bare-soil period, and two winter periods. In addition, field monitoring was carried out in the Schaefertal catchment (Harz, Germany) for long-term testing of CRS against ancillary data. At the first experimental site, the CRS method was calibrated and validated using different approaches to soil moisture measurement. In the corn period, local-scale soil moisture was measured near the surface only, while in the subsequent periods (sunflower and winter rye) sensors were placed at three depths (5 cm, 20 cm and 40 cm). The direct transfer of CRS calibration parameters between two vegetation periods led to a large overestimation of soil moisture by the CRS. Part of this overestimation was attributed to an underestimation of the CRS observation depth during the corn period (5-10 cm), which was later recalculated to values between 20 and 40 cm in the other crop periods (sunflower and winter rye). According to the results from these monitoring periods with different crops, vegetation played an important role in the CRS measurements. Water contained in crop biomass, both above and below ground, also produces substantial neutron moderation. This effect was accounted for by a simple model for neutron corrections due to vegetation, which followed crop development and reduced the overall CRS soil moisture error for the sunflower and winter rye periods. At the Potsdam farmland, field-scale soil hydraulic parameters were also estimated inversely, using CRS soil moisture from the sunflower period. A modelling framework coupling HYDRUS-1D and PEST was applied, and the resulting field-scale soil hydraulic properties were compared against local-scale soil properties (modelling and measurements). Successful results were obtained despite the large difference in support volume. This simple modelling framework points to future research directions using CRS soil moisture to parameterize field-scale models. In the Schaefertal catchment, CRS measurements were verified using precipitation and evapotranspiration data. At monthly resolution, CRS soil water storage was well correlated with these two weather variables, but the water balance clearly could not be closed because of missing information from other compartments such as groundwater and catchment discharge. In the catchment, the influence of snow on natural neutrons was also evaluated. As also observed at the Potsdam farmland, the CRS signal was strongly influenced by snowfall and snow accumulation. A simple strategy to measure snow was presented for the Schaefertal case.
The concluding remarks of this dissertation showed that (a) cosmic-ray neutron sensing (CRS) has strong potential to provide feasible measurements of mean soil moisture at the field scale in cropped fields; (b) CRS soil moisture is strongly influenced by other environmental water pools such as vegetation and snow, which should therefore be considered in the analysis; (c) CRS water storage can be used in soil-hydrological modelling for the determination of soil hydraulic parameters; and (d) the CRS approach has strong potential for long-term monitoring of soil moisture and for water balance studies.
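For background, a calibration function widely used in the CRS literature (Desilets et al., 2010) relates the corrected neutron count rate N to gravimetric soil moisture; it is quoted here as context only and is not necessarily the exact form applied in this thesis:

\[
\theta(N) = \frac{a_0}{N/N_0 - a_1} - a_2
\]

where N_0 is the count rate over dry soil under the same reference conditions and the commonly cited shape parameters are a_0 ≈ 0.0808, a_1 ≈ 0.372 and a_2 ≈ 0.115; in practice, N_0 is fixed by site-specific calibration against gravimetric soil sampling within the sensor footprint.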
Crowded field spectroscopy and the search for intermediate-mass black holes in globular clusters
(2013)
Globular clusters are dense and massive star clusters that are an integral part of any major galaxy. Careful studies of their stars (a single cluster may contain several million of them) have revealed that the ages of many globular clusters are comparable to the age of the Universe. These remarkable ages make them valuable probes for the exploration of structure formation in the early universe or the assembly of our own galaxy, the Milky Way. A topic of current research is the question of whether globular clusters harbour massive black holes in their centres. Such black holes would bridge the gap from stellar-mass black holes, which represent the final stage in the evolution of massive stars, to the supermassive ones that reside in the centres of galaxies. For this reason, they are referred to as intermediate-mass black holes. The most reliable method to detect and to weigh a black hole is to study the motion of stars inside its sphere of influence. The measurement of Doppler shifts via spectroscopy allows one to carry out such dynamical studies. However, spectroscopic observations in dense stellar fields such as Galactic globular clusters are challenging. As a consequence of diffraction processes in the atmosphere and the finite resolution of a telescope, observed stars have a finite width characterized by the point spread function (PSF); hence they appear blended in crowded stellar fields. Classical spectroscopy does not preserve any spatial information, so it is impossible to separate the spectra of blended stars and to measure their velocities. Yet methods have been developed to perform imaging spectroscopy. One of these methods is integral field spectroscopy. In the course of this work, the first systematic study of the potential of integral field spectroscopy for the analysis of dense stellar fields is carried out. To this aim, a method is developed to reconstruct the PSF from the observed data and to use this information to extract the stellar spectra. Based on dedicated simulations, predictions are made on the number of stellar spectra that can be extracted from a given data set and on the quality of those spectra. Furthermore, the influence of uncertainties in the recovered PSF on the extracted spectra is quantified. The results clearly show that, compared to traditional approaches, this method makes a significantly larger number of stars accessible to a spectroscopic analysis. This systematic study goes hand in hand with the development of a software package to automate the individual steps of the data analysis. It is applied to data of three Galactic globular clusters, M3, M13, and M92. The data were obtained with the PMAS integral field spectrograph at the Calar Alto observatory with the aim of constraining the presence of intermediate-mass black holes in the centres of the clusters. The application of the new analysis method yields samples of about 80 stars per cluster. These are by far the largest spectroscopic samples obtained so far in the centre of any of the three clusters. In the course of the further analysis, Jeans models are calculated for each cluster that predict the velocity dispersion based on an assumed mass distribution inside the cluster. The comparison to the observed velocities of the stars shows that in none of the three clusters a massive black hole is required to explain the observed kinematics. Instead, the observations rule out any black hole in M13 with a mass higher than 13000 solar masses at the 99.7% level.
For the other two clusters, this limit is at significantly lower masses, namely 2500 solar masses in M3 and 2000 solar masses in M92. In M92, it is possible to lower this limit even further by a combined analysis of the extracted stars and the unresolved stellar component. This component consists of the numerous stars in the cluster that appear unresolved in the integral field data. The final limit of 1300 solar masses is the lowest limit obtained so far for a massive globular cluster.
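The Jeans modelling mentioned above typically rests on the spherical Jeans equation, quoted here in its standard form as general background (ν is the stellar number density, σ_r the radial velocity dispersion, β the velocity anisotropy parameter, and M(r) the enclosed mass):

\[
\frac{1}{\nu}\frac{\mathrm{d}\left(\nu\,\sigma_r^{2}\right)}{\mathrm{d}r} + \frac{2\,\beta\,\sigma_r^{2}}{r} = -\frac{G\,M(r)}{r^{2}}
\]

A putative intermediate-mass black hole enters through M(r) → M_*(r) + M_BH, and the dispersion profile predicted for a given M_BH is compared with the measured stellar velocities to derive upper limits of the kind quoted above.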
Data integration aims to combine data from different sources and to provide users with a unified view on these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery to provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task is the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm enables profiling of entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs that allow a certain amount of values of A to be not included in B, and (iii) prefix and suffix INDs that represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions that satisfy quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
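As a baseline illustration of what a unary IND on instance data means (this is the naive pairwise test whose cost the thesis algorithm avoids, since its complexity depends on the number of attributes rather than attribute pairs), consider the following sketch with invented table and column names:

```python
# Naive unary IND discovery: test every attribute pair (A, B) and check
# whether the set of values of A is contained in the set of values of B.

def unary_inds(columns):
    """columns: dict mapping (table, column) -> iterable of values."""
    value_sets = {attr: set(values) for attr, values in columns.items()}
    inds = []
    for a, a_values in value_sets.items():
        for b, b_values in value_sets.items():
            if a != b and a_values and a_values <= b_values:
                inds.append((a, b))     # read as "a in b"
    return inds

data = {
    ("orders", "customer_id"): [1, 2, 2, 3],
    ("customers", "id"): [1, 2, 3, 4],
}
print(unary_inds(data))   # [(('orders', 'customer_id'), ('customers', 'id'))]
```

The discovered IND is a candidate foreign key from orders.customer_id to customers.id, illustrating how foreign-key extraction can build on the IND result.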
Derived algebraic systems
(2013)
The sharply rising level of atmospheric carbon dioxide resulting from anthropogenic emissions is one of the greatest environmental concerns facing our civilization today. Metal-organic frameworks (MOFs) are a new class of materials constructed from metal-containing nodes bonded to organic bridging ligands. MOFs could serve as an ideal platform for the development of next-generation CO2 capture materials owing to their large capacity for the adsorption of gases and their structural and chemical tunability. The ability to rationally select the framework components is expected to allow the affinity of the internal pore surface toward CO2 to be precisely controlled, facilitating materials properties that are optimized for the specific type of CO2 capture to be performed (post-combustion capture, pre-combustion capture, or oxy-fuel combustion) and potentially even for the specific power plant in which the capture system is to be installed. For this reason, significant effort has been made in recent years to improve the gas separation performance of MOFs, and some studies evaluating the prospects of deploying these materials in real-world CO2 capture systems have begun to emerge. In this work, six new MOFs, denoted IFPs (IFP-5, -6, -7, -8, -9, -10; IFP = Imidazolate Framework Potsdam), and two hydrogen-bonded molecular building blocks (MBBs, named 1 and 2 for the Zn- and Co-based compounds, respectively) were synthesized, characterized and applied for gas storage. The IFP structures possess 1D hexagonal channels. The metal centre and the substituent groups at the C2 position of the linker protrude into the open channels and determine their accessible diameter. Interestingly, the channel diameters (range: 0.3 to 5.2 Å) of the IFP structures are tuned by the metal centre (Zn, Co and Cd) and the substituent at the C2 position of the imidazolate linker. Moreover, the hydrogen-bonded MBBs 1 and 2 are formed by in situ functionalization of a ligand under solvothermal conditions. Two different types of channels are observed for 1 and 2. The materials contain solvent-accessible void space, and the solvent can easily be removed under high vacuum. The porous frameworks maintain their crystalline integrity even without solvent molecules. N2, H2, CO2 and CH4 gas sorption isotherms were measured. The gas uptake capacities are comparable to those of other frameworks and are reduced when the channel diameter is narrow. For example, the channel diameter of IFP-5 (3.8 Å) is slightly smaller than that of IFP-1 (4.2 Å); hence, its gas uptake capacity and Brunauer-Emmett-Teller (BET) surface area are slightly lower than those of IFP-1. The selectivity depends not only on the size of the gas components (kinetic diameter: CO2 3.3 Å, N2 3.6 Å and CH4 3.8 Å) but also on the polarizability of the surface and of the gas components. IFP-5 and -6 have potential applications for the separation of CO2 and CH4 from N2-containing gas mixtures, and of CO2 from CH4-containing gas mixtures. Gas sorption isotherms of IFP-7, -8, -9 and -10 exhibited hysteretic behavior due to their flexible alkoxy (e.g., methoxy and ethoxy) substituents. Such a phenomenon is a kind of gate effect that is rarely observed in microporous MOFs. IFP-7 (Zn-centred) has a flexible methoxy substituent and is the first example in which a flexible methoxy substituent shows gate-opening behavior in a MOF. Owing to the presence of the methoxy functional group in the hexagonal channels, IFP-7 acts as a molecular gate for N2 gas.
Due to the polar methoxy groups and channel walls, a wide hysteretic isotherm was observed during gas uptake. The estimated BET surface area of 1 is 471 m2 g-1 and the Langmuir surface area is 570 m2 g-1. This surface area is slightly higher than those of azolate-based hydrogen-bonded supramolecular assemblies and comparable to, or higher than, those of some hydrogen-bonded porous organic molecules.
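For context, BET surface areas such as the value quoted above are obtained by fitting the linearized Brunauer-Emmett-Teller equation to the low-pressure part of the N2 sorption isotherm (standard textbook form, not a relation specific to this work):

\[
\frac{1}{v\left[(p_0/p) - 1\right]} = \frac{c-1}{v_m\,c}\,\frac{p}{p_0} + \frac{1}{v_m\,c}
\]

where v is the adsorbed gas quantity at relative pressure p/p_0, v_m the monolayer capacity and c the BET constant; the specific surface area then follows from v_m and the cross-sectional area of the adsorbate molecule.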
This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's Discourse Representation Theory (1993). For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms cover J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement of classification quality.
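A minimal sketch of such a classification setup, using scikit-learn's LinearSVC as the linear support vector machine; the feature names and toy values below are invented for illustration and do not reproduce the feature set developed in the thesis:

```python
# Toy discourse-givenness classification with a linear SVM.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Hypothetical features of noun phrases and their context.
train_nps = [
    {"definite": 1, "demonstrative": 0, "head_seen_before": 1, "np_length": 2},
    {"definite": 0, "demonstrative": 0, "head_seen_before": 0, "np_length": 3},
    {"definite": 1, "demonstrative": 1, "head_seen_before": 1, "np_length": 1},
    {"definite": 0, "demonstrative": 0, "head_seen_before": 0, "np_length": 4},
]
labels = ["given", "new", "given", "new"]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(train_nps)
classifier = LinearSVC().fit(X, labels)

test_np = {"definite": 1, "demonstrative": 0, "head_seen_before": 1, "np_length": 2}
print(classifier.predict(vectorizer.transform([test_np])))   # e.g. ['given']
```

In the thesis, feature vectors of this kind would instead be derived from the annotated corpora listed above, with J48 and Ripper trained on identical feature sets for comparison.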
Diurnal regulation of chloroplast gene expression in the green alga Chlamydomonas reinhardtii
(2013)
Given a large set of records in a database and a query record, similarity search aims to find all records sufficiently similar to the query record. To solve this problem, two main aspects need to be considered: First, to perform effective search, the set of relevant records is defined using a similarity measure. Second, an efficient access method is to be found that performs only a few database accesses and comparisons using the similarity measure. This thesis solves both aspects with an emphasis on the latter. In the first part of this thesis, a frequency-aware similarity measure is introduced. Compared record pairs are partitioned according to the frequencies of attribute values. For each partition, a different similarity measure is created: machine learning techniques combine a set of base similarity measures into an overall similarity measure. After that, a similarity index for string attributes is proposed, the State Set Index (SSI), which is based on a trie (prefix tree) that is interpreted as a nondeterministic finite automaton. For processing range queries, the notion of query plans is introduced in this thesis to describe which similarity indexes to access and which thresholds to apply. The query result should be as complete as possible under some cost threshold. Two query planning variants are introduced: (1) Static planning selects a plan at compile time that is used for all queries. (2) Query-specific planning selects a different plan for each query. For answering top-k queries, the Bulk Sorted Access Algorithm (BSA) is introduced, which retrieves large chunks of records from the similarity indexes using fixed thresholds, and which focuses its efforts on records that are ranked high in more than one attribute and are thus promising candidates. The described components form a complete similarity search system. Based on prototypical implementations, this thesis shows comparative evaluation results for all proposed approaches on different real-world data sets, one of which is a large person data set from a German credit rating agency.
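The frequency-aware idea can be sketched as follows; the partition boundary, base measures and weights below are invented for illustration (in the thesis, the per-partition combination is learned by machine learning rather than fixed by hand):

```python
# Sketch of a frequency-aware similarity measure: record pairs are partitioned
# by how frequent the compared attribute values are, and each partition uses
# its own combination of base similarity measures.
from collections import Counter
import difflib

def fuzzy(a, b):                 # base measure 1: character-level similarity
    return difflib.SequenceMatcher(None, a, b).ratio()

def exact(a, b):                 # base measure 2: exact match
    return 1.0 if a == b else 0.0

def make_frequency_aware_similarity(all_values, rare_cutoff=2):
    frequency = Counter(all_values)
    def similarity(a, b):
        if max(frequency[a], frequency[b]) <= rare_cutoff:
            # rare values: a close fuzzy match is strong evidence
            return 0.8 * fuzzy(a, b) + 0.2 * exact(a, b)
        # frequent values: require (near-)exact agreement
        return 0.3 * fuzzy(a, b) + 0.7 * exact(a, b)
    return similarity

names = ["Mueller", "Mueller", "Mueller", "Muller", "Schmidt"]
similarity = make_frequency_aware_similarity(names)
print(similarity("Mueller", "Muller"), similarity("Mueller", "Schmidt"))
```

The learned variant replaces the hand-picked weights with models trained per partition, so that frequent values (which agree by chance more often) are scored more conservatively than rare ones.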
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles to allow easy access and reuse for data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings, some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For the topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approaches are designed to run in a distributed, shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion. Also, we illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from very many sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. Also, this concept alignment method shall execute very fast to serve as a preprocessing step for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed through grouping the input based on comparisons of so-called knowledge representations. Within the groups, we perform complex similarity computations, relation conclusions, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
Explaining change in flood hazard in the Mekong River: the hypothesis of nonstationary variance
(2013)
When we read a text, we obtain information at different levels of representation from abstract symbols. A reader’s ultimate aim is the extraction of the meaning of the words and the text. Research on eye movements in reading covers a broad range of psychological systems, ranging from low-level perceptual and motor processes to high-level cognition. Reading in skilled readers proceeds highly automatically, but it is at the same time a complex phenomenon of interacting subprocesses. The study of eye movements during reading offers the possibility to investigate cognition via behavioral measures during the exercise of an everyday task. The process of reading is not limited to the directly fixated (or foveal) word but also extends to surrounding (or parafoveal) words, particularly the word to the right of the gaze position. This process may be unconscious, but parafoveal information is necessary for efficient reading. There is an ongoing debate on whether processing of the upcoming word encompasses word meaning (or semantics) or only superficial features. To increase the knowledge about how the meaning of one word helps in processing another word, seven experiments were conducted. In these studies, words were exchanged during reading. The degree of relatedness between the word to the right of the currently fixated one and the word subsequently fixated was experimentally manipulated. Furthermore, the time course of the parafoveal extraction of meaning was investigated with two different approaches, an experimental one and a statistical one. As a major finding, fixation times were consistently lower if a semantically related word was presented compared to the presence of an unrelated word. Introducing an experimental technique that allows controlling the duration for which words are available, the time course of processing and integrating meaning was evaluated. Results indicated both facilitation and inhibition due to relatedness between the meanings of words. In a more natural reading situation, the effectiveness of the processing of parafoveal words was sometimes time-dependent and substantially increased with shorter distances between the gaze position and the word. Findings are discussed with respect to theories of eye-movement control. In summary, the results are more compatible with models of distributed word processing. The discussions moreover extend to language differences and technical issues of reading research.
A detailed description of the characteristics of antimicrobial peptides (AMPs) is urgently needed, since resistance against traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system of every organism and are very efficient in the protection against bacteria, viruses, fungi and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to be more active and faster. The lack of an efficient therapy for some cancer types and the development of resistance against existing antitumor agents make AMPs promising in cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physical-chemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited contrasting behavior in biological experiments concerning their activity against bacterial cells, human cells and human cancer cells. LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly applied as model systems in combination with IR spectroscopy and X-ray scattering methods. Circular dichroism spectroscopy in bulk systems completed the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to the structure of the peptides at the air/water interface. In bulk, both peptides adopt an unstructured conformation. Adsorbed and confined at the air-water interface, the peptides differ drastically in their surface activity as well as in their secondary structure. While LL-32 transforms into an α-helix lying flat at the water surface, LL-20 stays partly unstructured. This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models of the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were adapted more closely to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. The physical-chemical properties of the lipid films were determined and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly increased for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the interaction of the peptides with cancer cells was modelled by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from the increased surface charge.
Furthermore, similarities to the physical-chemical properties of melittin, an AMP from bee venom, were demonstrated.
The Arctic tundra, covering approx. 5.5 % of the Earth’s land surface, is one of the last ecosystems remaining close to its untouched condition. Remote sensing is able to provide information at regular time intervals and large spatial scales on the structure and function of Arctic ecosystems. However, almost all natural surfaces exhibit individual anisotropic reflectance behavior, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites as a basis for the extraction of vegetation parameters and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, an understanding of BRDF effects in Arctic tundra is essential for the retrieval of high-quality, consistent and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system that nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS). The outcome of the field spectro-radiometric measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) shows that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results especially reveal separation possibilities between the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and the NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all of the different tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weak or nonexistent green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for understanding the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed even a nearly isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the highest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions, and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm).
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that, with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption-depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can differ significantly from the observed BRDF effects in the original reflectance data. However, the results further show that the NDVI can minimize view-angle effects owing to the contrary spectro-directional effects in the red and NIR bands. For the researched tundra plant communities, the overall difference of the off-nadir NDVI values compared to the nadir value increases with increasing sensor viewing angles, but on average never exceeds 10 %. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly alter the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small, but nonetheless significant, differences in surface reflectance, it is important to include spectro-directional reflectance characteristics in the algorithm development for remote sensing products.
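For reference, the NDVI discussed above is the normalized difference of near-infrared and red reflectance,

\[
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}
\]

and, being a normalized ratio, it partly suppresses directional effects that scale both bands in a similar way, which is consistent with the off-nadir deviations of less than about 10 % reported above.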
The supercapacitor is one of the most important energy storage devices, as its construction addresses many of the drawbacks of batteries, but the low energy density of current systems is a major issue. In this doctoral dissertation, with a view to attaining high-energy-density supercapacitor systems comparable to batteries, new heteroatom-containing carbons in the form of particles and three-dimensional films were investigated. A nitrogen-containing material, acrodam, was chosen as the carbon precursor because of its low cost, high carbonization yield, oligomerizability, and other favourable properties. The carbon particles were prepared from acrodam together with caesium acetate as a meltable flux agent and showed excellent properties in hydroquinone-loaded sulphuric acid electrolyte, with high energy densities (up to 133.0 Wh kg–1) and sufficient cycle stability. These properties are already comparable to those of batteries. In addition, conductive three-dimensional carbon films were fabricated from acrodam oligomer as the precursor by the inexpensive spin-coating method. The films were found to be homogeneous, flat, void- and crack-free, and high conductivities (up to 334 S cm–1) could be obtained at a carbonization temperature of 1000 ºC. Furthermore, a porous three-dimensional carbon film could be formed using an organic template at the first attempt. This finding demonstrates the films’ potential for various applications such as supercapacitor electrodes; the essential absence of contact resistance within the network should contribute to effective electron transport within the electrode. The progress made in this dissertation opens a new route to further enhancing the energy density of supercapacitors, as well as to other applications beyond current capabilities.
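The gravimetric energy density figures quoted above follow from the standard capacitor relation (general background, not a formula specific to this dissertation):

\[
E = \tfrac{1}{2} C V^{2},
\qquad
E_{\mathrm{grav}}\,[\mathrm{Wh\,kg^{-1}}] = \frac{C\,[\mathrm{F}]\cdot V^{2}\,[\mathrm{V^{2}}]}{2 \times 3600 \times m\,[\mathrm{kg}]}
\]

so raising the energy density requires either a larger specific capacitance (for example through heteroatom doping and redox-active electrolyte additives such as the hydroquinone used here) or a wider stable voltage window.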
Initiation and perpetuation of inflammatory bowel diseases (IBD) may result from an exaggerated mucosal immune response to the luminal microbiota in a susceptible host. We proposed that this may be caused either 1) by an abnormal microbial composition or 2) by weakening of the protective mucus layer due to excessive mucus degradation, which may lead to an easy access of luminal antigens to the host mucosa triggering inflammation. We tested whether the probiotic Enterococcus faecium NCIMB 10415 (NCIMB) is capable of reducing chronic gut inflammation by changing the existing gut microbiota composition and aimed to identify mechanisms that are involved in possible beneficial effects of the probiotic. To identify health-promoting mechanisms of the strain, we used interleukin (IL)-10 deficient mice that spontaneously develop gut inflammation and fed these mice a diet containing NCIMB (106 cells g-1) for 3, 8 and 24 weeks, respectively. Control mice were fed an identically composed diet but without the probiotic strain. No clear-cut differences between the animals were observed in pro-inflammatory cytokine gene expression and in intestinal microbiota composition after probiotic supplementation. However, we observed a low abundance of the mucin-degrading bacterium Akkermansia muciniphila in the mice that were fed NCIMB for 8 weeks. These low cell numbers were associated with significantly lower interferon gamma (IFN-γ) and IFN-γ-inducible protein (IP-10) mRNA levels as compared to the NCIMB-treated mice that were killed after 3 and 24 weeks of intervention. In conclusion, NCIMB was not capable of reducing gut inflammation in the IL-10-/- mouse model. To further identify the exact role of A. muciniphila and uncover a possible interaction between this bacterium, NCIMB and the host in relation to inflammation, we performed in vitro studies using HT-29 colon cancer cells. The HT-29 cells were treated with bacterial conditioned media obtained by growing either A. muciniphila (AM-CM) or NCIMB (NCIMB-CM) or both together (COMB-CM) in Dulbecco’s Modified Eagle Medium (DMEM) for 2 h at 37 °C followed by bacterial cell removal. HT-29 cells treated with COMB-CM displayed reduced cell viability after 18 h (p<0.01) and no viable cells were detected after 24 h of treatment, in contrast to the other groups or heated COMB-CM. Detection of activated caspase-3 in COMB-CM treated groups indicated that death of the HT-29 cells was brought about by apoptosis. It was concluded that either NCIMB or A. muciniphila produce a soluble and heat-sensitive factor during their concomitant presence that influences cell viability in an in vitro system. We currently hypothesize that this factor is a protein, which has not yet been identified. Based on the potential effect of A. muciniphila on inflammation (in vivo) and cell-viability (in vitro) in the presence of NCIMB, we investigated how the presence of A. muciniphila affects the severity of an intestinal Salmonella enterica Typhimurium (STm)-induced gut inflammation using gnotobiotic C3H mice with a background microbiota of eight bacterial species (SIHUMI, referred to as simplified human intestinal microbiota). Presence of A. muciniphila in STm-infected SIHUMI (SIHUMI-AS) mice caused significantly increased histopathology scores and elevated mRNA levels of IFN-γ, IP-10, tumor necrosis factor alpha (TNF-α), IL-12, IL-17 and IL-6 in cecal and colonic tissue. 
The number of mucin filled goblet cells was 2- to 3- fold lower in cecal tissue of SIHUMI-AS mice compared to SIHUMI mice associated with STm (SIHUMI-S) or A. muciniphila (SIHUMI-A) or SIHUMI mice. Reduced goblet cell numbers significantly correlated with increased IFN-γ (r2 = -0.86, ***P<0.001) in all infected mice. In addition, loss of cecal mucin sulphation was observed in SIHUMI-AS mice. Concomitant presence of A. muciniphila and STm resulted in a drastic change in microbiota composition of the SIHUMI consortium. The proportion of Bacteroides thetaiotaomicron in SIHUMI, SIHUMI-A and SIHUMI-S mice made up to 80-90% but was completely taken over by STm in SIHUMI-AS mice contributing 94% to total bacteria. These results suggest that A. muciniphila exacerbates STm-induced intestinal inflammation by its ability to disturb host mucus homeostasis. In conclusion, abnormal microbiota composition together with excessive mucus degradation contributes to severe intestinal inflammation in a susceptible host.
Logging and large earthquakes are disturbances that may significantly affect hydrological and erosional processes and process rates, although in decisively different ways. Despite numerous studies that have documented the impacts of both deforestation and earthquakes on water and sediment fluxes, a number of details regarding the timing and type of de- and reforestation; seismic impacts on subsurface water fluxes; or the overall geomorphic work involved have remained unresolved. The main objective of this thesis is to address these shortcomings and to better understand and compare the hydrological and erosional process responses to such natural and man-made disturbances. To this end, south-central Chile provides an excellent natural laboratory owing to its high seismicity and the ongoing conversion of land into highly productive plantation forests. In this dissertation I combine paired catchment experiments, data analysis techniques, and physics-based modelling to investigate: 1) the effect of plantation forests on water resources, 2) the source and sink behavior of timber harvest areas in terms of overland flow generation and sediment fluxes, 3) geomorphic work and its efficiency as a function of seasonal logging, 4) possible hydrologic responses of the saturated zone to the 2010 Maule earthquake and 5) responses of the vadose zone to this earthquake. Re 1) In order to quantify the hydrologic impact of plantation forests, it is fundamental to first establish their water balances. I show that tree species is not significant in this regard, i.e. Pinus radiata and Eucalyptus globulus do not trigger any decisively different hydrologic response. Instead, water consumption is more sensitive to soil-water supply under the local hydro-climatic conditions. Re 2) Contradictory opinions exist about whether timber harvest areas (THAs) generate or capture overland flow and sediment. Although THAs contribute significantly to hydrology and sediment transport because of their spatial extent, little is known about the hydrological and erosional processes occurring on them. I show that THAs may act as both sources and sinks for overland flow, which in turn intensifies surface erosion. Above a rainfall intensity of ~20 mm/h, which corresponds to <10% of all rainfall, THAs may generate runoff, whereas below that threshold they remain sinks. The overall contribution of Hortonian runoff is thus secondary considering the local rainfall regime. The bulk of both runoff and sediment is generated by Dunne (saturation-excess) overland flow. I also show that logging may increase infiltrability on THAs, which may cause an initial decrease in streamflow followed by an increase after the groundwater storage has been refilled. Re 3) I present changes in frequency-magnitude distributions following seasonal logging by applying Quantile Regression Forests at hitherto unprecedented detail. It is clearly the season that controls the hydro-geomorphic work efficiency of clear cutting. Logging, particularly dry-season logging, caused a shift of work efficiency towards less flashy but more frequent moderate rainfall-runoff events. The sediment transport is dominated by Dunne overland flow, which is consistent with physics-based modelling using WASA-SED. Re 4) It is well accepted that earthquakes may affect hydrological processes in the saturated zone. Assuming such flow conditions, consolidation of saturated saprolitic material is one possible response.
Consolidation raises the hydraulic gradients, which may explain the observed increase in discharge following earthquakes. In the process, squeezed-out water saturates the soil, which in turn increases the water accessible for plant transpiration. Post-seismic enhanced transpiration is reflected in the intensification of diurnal cycling. Re 5) Assuming unsaturated conditions, I present the first evidence that the vadose zone may also respond to seismic waves by releasing pore water, which in turn feeds groundwater reservoirs. In this way, water tables along the valley bottoms are elevated, thus providing additional water resources to the riparian vegetation. By inverse modelling, the transient increase in transpiration is found to be 30-60%. Based on the data available, neither hypothesis is testable. Finally, when comparing the hydrological and erosional effects of the Maule earthquake with the impact of planting exotic plantation forests, the overall observed earthquake effects are comparably small and limited to short time scales.
Imaginary Interfaces
(2013)
The size of a mobile device is primarily determined by the size of the touchscreen. As such, researchers have found that the way to achieve ultimate mobility is to abandon the screen altogether. These wearable devices are operated using hand gestures, voice commands or a small number of physical buttons. By abandoning the screen these devices also abandon the currently dominant spatial interaction style (such as tapping on buttons), because, seemingly, there is nothing to tap on. Unfortunately this design prevents users from transferring their learned interaction knowledge gained from traditional touchscreen-based devices. In this dissertation, I present Imaginary Interfaces, which return spatial interaction to screenless mobile devices. With these interfaces, users point and draw in the empty space in front of them or on the palm of their hands. While they cannot see the results of their interaction, they obtain some visual and tactile feedback by watching and feeling their hands interact. After introducing the concept of Imaginary Interfaces, I present two hardware prototypes that showcase two different forms of interaction with an imaginary interface, each with its own advantages: mid-air imaginary interfaces can be large and expressive, while palm-based imaginary interfaces offer an abundance of tactile features that encourage learning. Given that imaginary interfaces offer no visual output, one of the key challenges is to enable users to discover the interface's layout. This dissertation offers three main solutions: offline learning with coordinates, browsing with audio feedback and learning by transfer. The latter I demonstrate with the Imaginary Phone, a palm-based imaginary interface that mimics the layout of a physical mobile phone that users are already familiar with. Although these designs enable interaction with Imaginary Interfaces, they tell us little about why this interaction is possible. In the final part of this dissertation, I present an exploration into which human perceptual abilities are used when interacting with a palm-based imaginary interface and how much each accounts for performance with the interface. These findings deepen our understanding of Imaginary Interfaces and suggest that palm-based Imaginary Interfaces can enable stand-alone eyes-free use for many applications, including interfaces for visually impaired users.
Introduction: Intestinal bacteria influence gut morphology by affecting epithelial cell proliferation, development of the lamina propria, villus length and crypt depth [1]. Gut microbiota-derived factors have been proposed to also play a role in the development of a 30 % longer intestine, that is characteristic of PRM/Alf mice compared to other mouse strains [2, 3]. Polyamines and SCFAs produced by gut bacteria are important growth factors, which possibly influence mucosal morphology, in particular villus length and crypt depth and play a role in gut lengthening in the PRM/Alf mouse. However, experimental evidence is lacking. Aim: The objective of this work was to clarify the role of bacterially-produced polyamines on crypt depth, mucosa thickness and epithelial cell proliferation. For this purpose, C3H mice associated with a simplified human microbiota (SIHUMI) were compared with mice colonized with SIHUMI complemented by the polyamine-producing Fusobacterium varium (SIHUMI + Fv). In addition, the microbial impact on gut lengthening in PRM/Alf mice was characterized and the contribution of SCFAs and polyamines to this phenotype was examined. Results: SIHUMI + Fv mice exhibited an up to 1.7 fold higher intestinal polyamine concentration compared to SIHUMI mice, which was mainly due to increased putrescine concentrations. However, no differences were observed in crypt depth, mucosa thickness and epithelial proliferation. In PRM/Alf mice, the intestine of conventional mice was 8.5 % longer compared to germfree mice. In contrast, intestinal lengths of C3H mice were similar, independent of the colonization status. The comparison of PRM/Alf and C3H mice, both associated with SIHUMI + Fv, demonstrated that PRM/Alf mice had a 35.9 % longer intestine than C3H mice. However, intestinal SCFA and polyamine concentrations of PRM/Alf mice were similar or even lower, except N acetylcadaverine, which was 3.1-fold higher in PRM/Alf mice. When germfree PRM/Alf mice were associated with a complex PRM/Alf microbiota, the intestine was one quarter longer compared to PRM/Alf mice colonized with a C3H microbiota. This gut elongation correlated with levels of the polyamine N acetylspermine. Conclusion: The intestinal microbiota is able to influence intestinal length dependent on microbial composition and on the mouse genotype. Although SCFAs do not contribute to gut elongation, an influence of the polyamines N acetylcadaverine and N acetylspermine is conceivable. In addition, the study clearly demonstrated that bacterial putrescine does not influence gut morphology in C3H mice.
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are particularly expected in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate consideration of secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic-type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks, if available, is correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and could thus be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve the forecasts of off-fault aftershock locations, while the integration of ground shaking data could not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models. This allows quantifying the information gain if this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes, which is introduced in the first part, is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and of segment strike and dip is tested. Some of the tested slip models perform very well; in those cases almost no random slip models are found to perform better. In contrast, for some of the published slip models, almost all random slip models perform better than the published model. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern is still reproducible, whereas different strike and dip values strongly influence the results, depending on the standard deviation applied when randomly selecting the strike and dip values.
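For orientation, the ETAS model referred to above is usually written as a conditional intensity of the following form (standard formulation, quoted as background; the thesis replaces the spatial kernel with alternatives based on the source information listed above):

\[
\lambda(t, \mathbf{x}) = \mu(\mathbf{x}) + \sum_{i:\,t_i < t} \frac{K\, e^{\alpha (m_i - m_c)}}{(t - t_i + c)^{p}}\; f(\mathbf{x} - \mathbf{x}_i; m_i)
\]

where μ is the background seismicity rate, the Omori-Utsu term describes the temporal decay of triggered seismicity after each earthquake of magnitude m_i, and f is the spatial probability distribution of direct aftershocks; it is this f that can be tied to ShakeMap shaking levels, minimum fault distance, or Coulomb stress changes from published or random slip models.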
Measuring the metabolite profile of plants can be a strong phenotyping tool, but the changes of metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbation of the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. In order to allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is performed to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette in different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably. In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different metabolite pool size pattern across single leaves of one Arabidopsis rosette compared to plants grown at normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single leaf metabolite profiles of reil2 differ from those of the WT.
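As an illustration of the kind of simple relative calculation mentioned above (the exact formula used in the thesis is not reproduced here), the fractional 13C enrichment of a metabolite fragment can be estimated from the intensities of its mass isotopologues M+0 ... M+n; the numbers below are invented for the example:

```python
# Illustrative calculation of fractional 13C enrichment from GC-MS
# isotopologue intensities (natural-abundance correction omitted for brevity).

def fractional_enrichment(isotopologue_intensities):
    """isotopologue_intensities: intensities of M+0, M+1, ..., M+n for a
    fragment with n carbon atoms; returns the mean 13C fraction per carbon."""
    n = len(isotopologue_intensities) - 1
    total = sum(isotopologue_intensities)
    labelled_carbons = sum(i * x for i, x in enumerate(isotopologue_intensities))
    return labelled_carbons / (n * total)

# e.g. a four-carbon fragment of fumarate after 13C-sucrose feeding
print(fractional_enrichment([600.0, 250.0, 100.0, 40.0, 10.0]))   # ~0.15
```

Comparing such enrichment values with the conventional pool sizes is what allows pool size and 13C pool to be judged separately, as done for fumaric acid above.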
Background: Increased numbers of intestinal E. coli are observed in inflammatory bowel disease, but the reasons for this proliferation and its exact role in intestinal inflammation are unknown. The aim of this PhD project was to identify E. coli proteins involved in E. coli’s adaptation to the inflammatory conditions in the gut and to investigate whether these factors affect the host. Furthermore, the molecular basis for strain-specific differences between probiotic and harmful E. coli in their response to intestinal inflammation was investigated. Methods: Using mice monoassociated either with the adherent-invasive E. coli (AIEC) strain UNC or with the probiotic E. coli Nissle, two different mouse models of intestinal inflammation were analysed: On the one hand, severe inflammation was induced by treating mice with 3.5% dextran sodium sulphate (DSS). On the other hand, a very mild intestinal inflammation was generated by associating interleukin 10-deficient (IL-10-/-) mice with E. coli. Differentially expressed proteins in the E. coli strains collected from caecal contents of these mice were identified by two-dimensional fluorescence difference gel electrophoresis. Results of the DSS experiment: All DSS-treated mice revealed signs of a moderate caecal and a severe colonic inflammation. However, mice monoassociated with E. coli Nissle were less affected. In both E. coli strains, acute inflammation led to a downregulation of pathways involved in carbohydrate breakdown and energy generation. Accordingly, DSS-treated mice had lower caecal concentrations of bacterial fermentation products than the control mice. Differentially expressed proteins also included the Fe-S cluster repair protein NfuA, the tryptophanase TnaA, and the uncharacterised protein YggE. NfuA was upregulated nearly 3-fold in both E. coli strains after DSS administration. Reactive oxygen species produced during intestinal inflammation damage Fe-S clusters and thereby lead to an inactivation of Fe-S proteins. In vitro data indicated that the repair of Fe-S proteins by NfuA is a central mechanism in E. coli to survive oxidative stress. Expression of YggE, which has been reported to reduce the intracellular level of reactive oxygen species, was 4- to 8-fold higher in E. coli Nissle than in E. coli UNC under control and inflammatory conditions. In vitro growth experiments confirmed these results, indicating that E. coli Nissle is better equipped to cope with oxidative stress than E. coli UNC. Additionally, E. coli Nissle isolated from DSS-treated and control mice had TnaA levels 4- to 7-fold higher than E. coli UNC. In turn, caecal indole concentrations resulting from cleavage of tryptophan by TnaA were higher in E. coli Nissle-associated control mice than in the respective mice associated with E. coli UNC. Because of its anti-inflammatory effect, indole is hypothesised to be involved in the extension of the remission phase in ulcerative colitis described for E. coli Nissle. Results of the IL-10-/- experiment: Only IL-10-/- mice monoassociated with E. coli UNC for 8 weeks exhibited signs of a very mild caecal inflammation. In agreement with this weak inflammation, the variations in the bacterial proteome were small. Similar to the DSS experiment, proteins downregulated by inflammation belong mainly to the central energy metabolism. In contrast to the DSS experiment, no upregulation of chaperone proteins or NfuA was observed, indicating that these are strategies to overcome adverse effects of strong intestinal inflammation.
The inhibitor of vertebrate C-type lysozyme, Ivy, was 2- to 3-fold upregulated at the mRNA and protein level in E. coli Nissle in comparison to E. coli UNC isolated from IL-10-/- mice. By overexpressing ivy, it was demonstrated in vitro that Ivy contributes to the higher lysozyme resistance observed for E. coli Nissle, supporting the role of Ivy as a potential fitness factor in this E. coli strain. Conclusions: The results of this PhD study demonstrate that intestinal bacteria sense even minimal changes in the health status of the host. While some bacterial adaptations to the inflammatory conditions are the same in response to strong and mild intestinal inflammation, other reactions are unique to a specific disease state. In addition, probiotic and colitogenic E. coli differ in their response to intestinal inflammation and thereby may influence the host in different ways.
Information flows in EU policy-making are heavily dependent on personal networks, both within the Brussels sphere and reaching outside the narrow limits of the Belgian capital. These networks develop, for example, in the course of formal and informal meetings or on the sidelines of such meetings. A plethora of committees at European, transnational and regional level provides the basis for the establishment of pan-European networks. By studying affiliation to those committees, basic network structures can be uncovered. These affiliation network structures can then be used to predict EU information flows, assuming that certain positions within the network are advantageous for tapping into streams of information while others are too remote and peripheral to provide access to information early enough. This study has tested those assumptions for the case of the reform of the Common Fisheries Policy for the time after 2012. Through the analysis of an affiliation network based on participation in 10 different fisheries policy committees over two years (2009 and 2010), network data for an EU-wide network of about 1300 fisheries interest group representatives and more than 200 events was collected. The structure of this network showed a number of interesting patterns, such as – not surprisingly – a rather central role of Brussels-based committees, but also close relations of very specific interests to the Brussels cluster and stronger relations between geographically closer maritime regions. The analysis of information flows then focused on access to draft EU Commission documents containing the upcoming proposal for a new basic regulation of the Common Fisheries Policy. It was first documented that it would have been impossible to officially obtain this document and that personal networks were thus the most likely sources for fisheries policy actors to obtain access to these “leaks” in early 2011. A survey of a sample of 65 actors from the initial network supported these findings: Only a very small group had accessed the draft directly from the Commission. Most respondents who obtained access to the draft had received it from other actors, highlighting the networked flow of informal information in EU politics. Furthermore, the testing of the hypotheses connecting network positions and the level of informedness indicated that presence in or connections to the Brussels sphere had advantages both for overall access to the draft document and with regard to timing. Methodologically, the challenges of both the network analysis and the analysis of information flows, but also their relevance for the study of EU politics, have been documented. In summary, this study has laid the foundation for a different way to study EU policy-making by connecting topical and methodological elements – such as affiliation network analysis and EU committee governance – which so far have not been considered together, thereby contributing in various ways to political science and EU studies.
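As an illustration of the affiliation-network approach described above (a minimal sketch with a hypothetical input file and column names, not the study's actual data pipeline), committee participation records can be read into a two-mode network and projected onto the actor level before computing positional indicators:

# Minimal sketch: build an actor-event (affiliation) network from committee
# participation records and project it onto the one-mode actor network.
import csv
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
with open("committee_participation.csv", newline="") as fh:  # hypothetical input
    for row in csv.DictReader(fh):
        actor, event = row["actor"], row["committee_meeting"]  # hypothetical columns
        B.add_node(actor, bipartite=0)
        B.add_node(event, bipartite=1)
        B.add_edge(actor, event)

actors = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
# Actors are tied if they attended the same meeting; edge weights count shared meetings.
G = bipartite.weighted_projected_graph(B, actors)
# Simple positional indicators that could then be related to access to information.
centrality = nx.degree_centrality(G)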
Inhibition, attentional control, and causes of forgetting in working memory: a formal approach
(2013)
In many cognitive activities, the temporary maintenance and manipulation of mental objects is a necessary step in order to reach a cognitive goal. Working memory has been regarded as the process responsible for those cognitive activities. This thesis addresses the question of what limits working-memory capacity (WMC), a question that remains controversial (Barrouillet & Camos, 2009; Lewandowsky, Oberauer, & Brown, 2009). This study attempted to answer this question by proposing that the dynamics between the causes of forgetting and the processes supporting the maintenance and manipulation of the memoranda are the key aspects in understanding the limits of WMC.
Chapter 1 introduced key constructs and the strategy to examine the dynamics between inhibition, attentional control, and the causes of forgetting in working memory.
The study in Chapter 2 tested the performance of children, young adults, and old adults in a working-memory updating task with two conditions: one condition included go steps and the other condition included go and no-go steps. The interference model (IM; Oberauer & Kliegl, 2006), a model proposing interference-related mechanisms as the main cause of forgetting, was used to simultaneously fit the data of these age groups. In addition to the interference-related parameters reflecting interference by feature overwriting and interference by confusion, and in addition to the parameters reflecting the speed of processing, the study included a new parameter that captured the time for switching between go steps and no-go steps. The study indicated that children and young adults were less susceptible than old adults to interference by feature overwriting; children were the most susceptible to interference by confusion, followed by old adults and then by young adults; young adults presented the highest rate of processing, followed by children and then by old adults; and young adults were the fastest group switching from go steps to no-go steps.
Chapter 3 examined the dynamics between causes of forgetting and the inhibition of a prepotent response in the context of three formal models of the limits of WMC: a resources model, a decay-based model, and three versions of the IM. The resources model was built on the assumption that a limited and shared source of activation for the maintenance and manipulation of the objects underlies the limits of WMC. The decay model assumes that memory traces of the working-memory objects decay over time if they are not reactivated via different mechanisms of maintenance. The IM, already described, proposes that interference-related mechanisms explain the limits of WMC. In two experiments and in a reanalysis of the data of the second experiment, one version of the IM received the most statistical support from the data. This version of the IM proposes that interference by feature overwriting and interference by confusion are the main factors underlying the limits of WMC. In addition, the model suggests that experimental conditions involving the inhibition of a prepotent response reduce the speed of processing and promote the involuntary activation of irrelevant information in working memory.
Chapter 4 summed up Chapters 2 and 3, discussed their findings, and presented how this thesis has provided evidence of interference-related mechanisms as the main cause of forgetting and has attempted to clarify the role of inhibition and attentional control in working memory. With the implementation of formal models and experimental manipulations in the framework of nonlinear mixed models, the data offered explanations of the causes of forgetting and of the role of inhibition in WMC at different levels: developmental effects, aging effects, effects related to experimental manipulations, and individual differences in these effects. Thus, the present approach afforded a comprehensive view of a large number of factors limiting WMC.
Phenolic compounds as food components represent the largest group of secondary metabolites in plant foods. Phenolic compounds such as chlorogenic acid (CQA) are susceptible to oxidation by enzymes, especially polyphenol oxidase (PPO), and under alkaline conditions. Both enzymatic and non-enzymatic oxidation occur in the presence of oxygen and produce quinones, which normally react further with other quinones to produce colored compounds (dimers) and are also capable of undergoing nucleophilic addition to proteins. The interactions of proteins with phenolic compounds have received considerable attention in recent years, as plant phenolic compounds have drawn increasing interest due to their antioxidant properties and their noticeable effects in the prevention of various oxidative-stress-associated diseases. Green coffee beans are one of the richest sources of chlorogenic acids. Therefore, a green coffee extract provides a suitable, food-relevant source of phenolic compounds for the modification of proteins. The interaction between 5-CQA and the amino acid lysine showed a decrease in both free CQA and free amino groups, and only a slight, reaction-time-dependent effect on the antioxidative capacity was found. Furthermore, this interaction yielded a large number of intermediary substances of low intensity. The reaction of lysine with 5-CQA in a model system initially leads to the formation of 3-CQA and 4-CQA (both isomers of 5-CQA); oxidation gives rise to a dimer, which subsequently forms an adduct with lysine and finally results in a benzacridine derivative, as reported and confirmed with the aid of HPLC coupled with ESI-MSn. The benzacridine derivative, containing a trihydroxy structural element, was found to be yellow and very reactive with oxygen, yielding semiquinone- and quinone-type products with characteristic green colors. Finally, the optimal conditions for this interaction, as assessed by both the loss of CQA and the loss of free amino groups of lysine, are pH 7 and 25°C, the interaction increasing with incubation time and depending also on the amount of tyrosinase present. Green coffee beans have a higher diversity and content of phenolics; besides the CQA isomers and their esters, other conjugates such as feruloylquinic acids were also identified, thus documenting differences in the phenolic profiles of the two coffee types (Coffea arabica and Coffea robusta). Coffee proteins are modified by interactions with phenolic compounds during the extraction, those from C. arabica being more susceptible to these interactions than those from C. robusta, and the polyphenol oxidase activity seems to be a crucial factor for the formation of these addition products. Moreover, in-gel digestion combined with MALDI-TOF-MS revealed that the protein fractions most reactive and susceptible to covalent reactions are the α-chains of the 11S storage protein. Thus, based on these results and those supplied by other research groups, a tentative list of possible adduct structures was derived. The diversity of the different CQA derivatives present in green coffee beans complicates the series of reactions occurring, providing a broad palette of reaction products. These interactions influence the properties of the proteins, producing changes in the solubility and hydrophobicity of the coffee proteins compared to faba bean proteins (used as a control).
Modification of milk whey protein products (primarily β-lactoglobulin) with coffee-specific phenolics and commercial CQA under enzymatic and alkaline conditions appears to affect their chemical, structural and functional properties, with both modifications leading to reduced free amino and thiol groups and reduced tryptophan content. We propose that the disulfide-thiol exchange in the C-terminus of β-lactoglobulin may be initiated by the redox conditions provided in the presence of CQA. The structure of β-lactoglobulin thereupon becomes more disordered, as simulated by molecular dynamics calculations. This unfolding process may additionally be supported by the reaction of CQA at the proposed sites of modification, the ε-amino groups of lysine (K77, K91, K138, K47) and the thiol group of cysteine (C121). These covalent modifications also decreased the solubility and hydrophobicity of β-lactoglobulin; moreover, the modified protein samples show a high antioxidative power, are thermally more stable as reflected by a higher Td, require less energy to unfold and, when emulsified with lutein esters, exhibit higher stability against UV light. The MALDI-TOF and SDS-PAGE results revealed that proteins treated under alkaline conditions were more strongly modified than those treated under enzymatic conditions. Finally, the results showed a slight change in the emulsifying properties of the modified proteins.
Interactive generation of effective discourse in situated context : a planning-based approach
(2013)
As our modern-built structures are becoming increasingly complex, carrying out basic tasks such as identifying points or objects of interest in our surroundings can consume considerable time and cognitive resources. In this thesis, we present a computational approach to converting contextual information about a person's physical environment into natural language, with the aim of helping this person identify given task-related entities in their environment. Using efficient methods from automated planning - the field of artificial intelligence concerned with finding courses of action that can achieve a goal - we generate discourse that interactively guides a hearer through completing their task. Our approach addresses the challenges of controlling, adapting to, and monitoring the situated context. To this end, we develop a natural language generation system that plans how to manipulate the non-linguistic context of a scene in order to make it more favorable for references to task-related objects. This strategy distributes a hearer's cognitive load of interpreting a reference over multiple utterances rather than one long referring expression. Further, to optimize the system's linguistic choices in a given context, we learn how to distinguish speaker behavior according to its helpfulness to hearers in a certain situation, and we model the behavior of human speakers that has proven helpful. The resulting system combines symbolic with statistical reasoning, and tackles the problem of making non-trivial referential choices in rich context. Finally, we complement our approach with a mechanism for preventing potential misunderstandings after a reference has been generated. Employing remote eye-tracking technology, we monitor the hearer's gaze and find that it provides a reliable index of online referential understanding, even in dynamically changing scenes. We thus present a system that exploits hearer gaze to generate rapid feedback on a per-utterance basis, further enhancing its effectiveness. Though we evaluate our approach in virtual environments, the efficiency of our planning-based model suggests that this work could be a step towards effective conversational human-computer interaction situated in the real world.
Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
(2013)
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods:
• The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area.
• The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively.
• The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye lenses and the combination of planar and non-planar projections.
• The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps to combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents.
• The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.
The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
Landslides are one of the biggest natural hazards in Georgia, a mountainous country in the Caucasus. So far, no systematic monitoring and analysis of the dynamics of landslides in Georgia has been carried out. Especially as landslides are triggered by extrinsic processes, analysing landslides together with precipitation and earthquakes is challenging. In this thesis I describe the advantages and limits of remote sensing for detecting and better understanding the nature of landslides in Georgia. The thesis is written in a cumulative form, comprising a general introduction, three manuscripts, and a summary and outlook chapter. In the present work, I measure the surface displacement due to active landslides with different interferometric synthetic aperture radar (InSAR) methods. Slow landslides (several cm per year) are well detectable with two-pass interferometry. At the same time, extremely slow landslides (several mm per year) can be detected only with time-series InSAR techniques. I exemplify the success of InSAR techniques by showing hitherto unknown landslides located in the central part of Georgia. Both the landslide extent and the displacement rate are quantified. Further, to determine the possible depth and position of potential sliding planes, inverse models were developed. Inverse modeling searches for the source parameters that can reproduce the observed displacement distribution. I also empirically estimate the volume of the investigated landslide using displacement distributions derived from InSAR combined with morphology from aerial photography. I adapted a volume formula for our case and also combined available seismicity and precipitation data to analyze potential triggering factors. A governing question was: What causes the landslide acceleration observed in the InSAR data? The investigated area (central Georgia) is seismically highly active. As an additional product of the InSAR data analysis, a deformation area associated with the 7th September Mw=6.0 earthquake was found. Evidence of surface ruptures directly associated with the earthquake could not be found in the field; however, during and after the earthquake new landslides were observed. The thesis highlights that deformation measured with InSAR may help to map areas prone to landslides triggered by earthquakes, potentially providing a technique that is of relevance for country-wide landslide monitoring, especially as new satellite sensors will emerge in the coming years.
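For reference, the standard relation used to convert an unwrapped interferometric phase difference into line-of-sight surface displacement (after removal of orbital, topographic, and atmospheric contributions) is
d_LOS = -(lambda / (4 pi)) * delta_phi,
where lambda is the radar wavelength; the sign convention depends on the processing conventions. Dividing d_LOS by the time span between the two acquisitions yields the displacement rates (centimetres down to millimetres per year) referred to above.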
This dissertation is about factors that contribute to the surface forms of tones in connected speech in Akan. Akan is an African tone language, which is spoken in Ghana. It has two level tones (low and high), automatic and non-automatic downstep. Downstep is the major factor that influences the surface forms of tones. The thesis shows that downstep is caused by declination. It is argued that declination is an intonational property of Akan, which serves to signal coherence. A phonological representation using a high and a low register tone, associating to the left and right edge of an intonational phrase (IP), respectively, is proposed. Declination/downstep is modelled using a (phonetic) pitch implementation algorithm (Liberman & Pierrehumbert, 1984). An innovative application of the algorithm is presented, which naturally captures the relation between declination and downstep in Akan. Another important factor is the prosodic manifestation of sentence-level pragmatic meanings, such as sentence mode and focus. Regarding the former, the thesis shows that a post-lexical low tone, which associates with the right edge of an IP, signals interrogativity. Additionally, lexical tones in Yes – No questions are realized in a higher pitch register, which does not lead to a reduction of declination. It is claimed that the higher register is not part of the phonological representation in Akan, but that it emerges at the phonetic level to compensate for the ‘unnatural’ form of the question morpheme and to satisfy the Frequency code (Gussenhoven, 2002; 2004). An extension of Rialland’s (2007) typology in terms of a new category called “low tense” question prosody is proposed. Concerning focus marking, it is argued that the use of the morpho-syntactic focus marking strategy is related to extra-grammatical factors, such as hearer expectation, discourse expectability (Zimmermann, 2007) and emphasis (Hartmann, 2008). If a speaker of Akan wants to highlight a particular element in a sentence in-situ, i.e. by means of prosody, the default prosodic structure is modified in such a way that the focused element forms its own phonological phrase (pP). If it is already contained in a pP, the boundary delimiting the focused element is enhanced (Féry, 2012). This restructuring/enhancement is accompanied by an interruption of the otherwise continuous melody due to insertion of a pause and/or a glottal stop. Besides declination and intonation, raising of H tones applies in Akan. H raising is analyzed as a local anticipatory planning effect, employed at the phonetic level, which enhances the perceptual distance between low and high tones. Low tones are raised if they are wedged between two high tones. L raising is argued to be a local carryover effect (co-articulation). Further, it is demonstrated that global anticipatory raising takes place. It is shown that Akan speakers anticipate the length of an IP. Preplanning (anticipatory raising) is argued to be an important process at the level of pitch implementation. It serves to ensure that declination can be maintained throughout the IP, which prevents pitch resetting.
The melody of an Akan sentence is largely determined by the choice of words. The inventory of post-lexical tones is small. It consists of post-lexical register tones, which trigger declination, and post-lexical intonational tones, which signal sentence type. The overall melodic shape is falling. At the local level, H raising and L raising occur. At the global level, initial low and high tones are realized higher if they occur in a long and/or complex sentence. This dissertation shows that many factors, which emerge at different levels of the tone production process, contribute to the surface form of tones in Akan.
In the context of ecological risk assessment of chemicals, individual-based population models hold great potential to increase the ecological realism of current regulatory risk assessment procedures. However, developing and parameterizing such models is time-consuming and often ad hoc. Using standardized, tested submodels of individual organisms would make individual-based modelling more efficient and coherent. In this thesis, I explored whether Dynamic Energy Budget (DEB) theory is suitable for being used as a standard submodel in individual-based models, both for ecological risk assessment and theoretical population ecology. First, I developed a generic implementation of DEB theory in an individual-based modeling (IBM) context: DEB-IBM. Using the DEB-IBM framework, I tested the ability of DEB theory to predict population-level dynamics from the properties of individuals. We used Daphnia magna as a model species, where data at the individual level were available to parameterize the model, and population-level predictions were compared against independent data from controlled population experiments. We found that DEB theory successfully predicted population growth rates and peak densities of experimental Daphnia populations in multiple experimental settings, but failed to capture the decline phase, when the available food per Daphnia was low. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detecting gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology. In addition to theoretical explorations, we tested the potential of DEB theory combined with IBMs to extrapolate effects of chemical stress from the individual to the population level. For this we used information at the individual level on the effect of 3,4-dichloroaniline on Daphnia. The individual data suggested direct effects on reproduction but no significant effects on growth. Assuming such direct effects on reproduction, the model was able to accurately predict the population response to increasing concentrations of 3,4-dichloroaniline. We conclude that DEB theory combined with IBMs holds great potential for standardized ecological risk assessment based on ecological models.
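The following is a minimal, purely illustrative sketch of the individual-based architecture into which an energy-budget submodel can be plugged; deb_step() is a placeholder for the standard DEB equations, and all rate constants and rules are assumptions for illustration, not the DEB-IBM parameterization for Daphnia magna.

# Minimal IBM scaffold with an abstract energy-budget submodel per individual.
from dataclasses import dataclass
import random

@dataclass
class Individual:
    reserve: float       # energy reserve
    structure: float     # structural biomass
    repro_buffer: float  # energy committed to reproduction

def deb_step(ind, food, dt):
    """Placeholder for the standard DEB update (assimilation, mobilisation,
    growth, reproduction); returns the amount of food eaten this step."""
    eaten = 0.1 * food * dt                        # illustrative functional response
    ind.reserve += eaten - 0.05 * ind.structure * dt
    ind.structure += max(0.0, 0.02 * ind.reserve * dt)
    ind.repro_buffer += 0.01 * ind.reserve * dt
    return eaten

def run(n0=10, food0=100.0, steps=200, dt=1.0):
    population = [Individual(1.0, 1.0, 0.0) for _ in range(n0)]
    food = food0
    for _ in range(steps):
        random.shuffle(population)                 # avoid processing-order artefacts
        for ind in list(population):
            food = max(food - deb_step(ind, food, dt), 0.0)
            if ind.repro_buffer > 1.0:             # illustrative reproduction rule
                ind.repro_buffer -= 1.0
                population.append(Individual(0.5, 0.5, 0.0))
            if random.random() < 0.001 * dt:       # background mortality
                population.remove(ind)
        food += 5.0 * dt                           # resource renewal
    return len(population)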
Permafrost-affected ecosystems, including peat wetlands, are among the most obvious regions in which current microbial controls on organic matter decomposition are likely to change as a result of global warming. Wet tundra ecosystems in particular are ideal sites for increased methane production because of the waterlogged, anoxic conditions that prevail in seasonally increasing thawed layers. This doctoral research project focused on investigating the abundance and distribution of the methane-cycling microbial communities in four different polygons on Herschel Island and the Yukon Coast. Despite the relevance of the Canadian Western Arctic in the global methane budget, the permafrost microbial communities there have thus far remained insufficiently characterized. Through the study of methanogenic and methanotrophic microbial communities involved in the decomposition of permafrost organic matter and their potential reaction to rising environmental temperatures, the overarching goal of this thesis is to fill the current gap in understanding the fate of the organic carbon currently stored in Arctic environments and its implications regarding the methane cycle in permafrost environments. To attain this goal, a multiproxy approach including community fingerprinting analysis, cloning, quantitative PCR and next-generation sequencing was used to describe the bacterial and archaeal community present in the active layer of four polygons and to scrutinize the diversity and distribution of methane-cycling microorganisms at different depths. These methods were combined with soil property analyses in order to identify the main physico-chemical variables shaping these communities. In addition, a climate warming simulation experiment was carried out on intact active-layer cores retrieved from Herschel Island in order to investigate the changes in the methane-cycling communities associated with an increase in soil temperature and to help better predict future methane fluxes from polygonal wet tundra environments in the context of climate change. Results showed that the microbial community found in the water-saturated and carbon-rich polygons on Herschel Island and the Yukon Coast was diverse and displayed a similar distribution with depth in all four polygons sampled. Specifically, the methanogenic community identified resembled the communities found at other similar Arctic study sites and showed comparable potential methane production rates, whereas the methane-oxidizing bacterial community differed from what has been found so far, being dominated by type-II rather than type-I methanotrophs. After being subjected to strong increases in soil temperature, the active-layer microbial community demonstrated the ability to adapt quickly, and as a result shifts in community composition could be observed. These results contribute to the understanding of carbon dynamics in Arctic permafrost regions and allow an assessment of the potential impact of climate change on methane-cycling microbial communities. This thesis constitutes the first in-depth study of methane-cycling communities in the Canadian Western Arctic, striving to advance our understanding of these communities in degrading permafrost environments by establishing an important new observatory in the Circum-Arctic.
Climatic variations and human activity now, and increasingly in the future, cause land cover changes and introduce perturbations in the terrestrial carbon reservoirs in vegetation, soil and detritus. Optical remote sensing and in particular Imaging Spectroscopy has shown the potential to quantify land surface parameters over large areas, which is accomplished by taking advantage of the characteristic interactions of incident radiation and the physico-chemical properties of a material. The objective of this thesis is to quantify key soil parameters, including soil organic carbon, using field and Imaging Spectroscopy. Organic carbon, iron oxides and clay content are selected to be analyzed to provide indicators for ecosystem function in relation to land degradation, and additionally to facilitate a quantification of carbon inventories in semiarid soils. The semiarid Albany Thicket Biome in the Eastern Cape Province of South Africa is chosen as the study site. It provides a regional example of a semiarid ecosystem that currently undergoes land changes due to unadapted management practices and furthermore has to face climate-change-induced land changes in the future. The thesis is divided into three methodical steps. Based on reflectance spectra measured in the field and chemically determined constituents of the upper topsoil, physically based models are developed to quantify soil organic carbon, iron oxides and clay content. Taking account of the benefits and limitations of existing methods, the approach is based on the direct application of known diagnostic spectral features and their combination with multivariate statistical approaches. It benefits from the collinearity of several diagnostic features and a number of their properties to reduce signal disturbances by influences of other spectral features. In a following step, the acquired hyperspectral image data are prepared for an analysis of soil constituents. The data show a large spatial heterogeneity that is caused by the patchiness of the natural vegetation in the study area, a patchiness that is inherent to most semiarid landscapes. Spectral mixture analysis is performed and used to deconvolve non-homogeneous pixels into their constituent components. For soil-dominated pixels, the subpixel information is used to remove the spectral influence of vegetation and to approximate the pure spectral signature coming from the soil. This step is an integral part when working in natural non-agricultural areas where pure bare soil pixels are rare. It is identified as the largest benefit within the multi-stage methodology, providing the basis for a successful and unbiased prediction of soil constituents from hyperspectral imagery. With the proposed approach it is possible (1) to significantly increase the spatial extent of derived information on soil constituents to areas with about 40 % vegetation coverage and (2) to reduce the influence of materials such as vegetation on the quantification of soil constituents to a minimum. Subsequently, soil parameter quantities are predicted by the application of the feature-based soil prediction models to the maps of locally approximated soil signatures. Thematic maps showing the spatial distribution of the three considered soil parameters in October 2009 are produced for the Albany Thicket Biome of South Africa. The maps are evaluated for their potential to detect erosion-affected areas as effects of land changes and to identify degradation hot spots in order to support local restoration efforts.
A regional validation, carried out using available ground truth sites, suggests that remaining factors disturb the correlation between spectral characteristics and chemically determined soil constituents. The approach is developed for semiarid areas in general and is not adapted to specific conditions in the study area. All processing steps of the developed methodology are implemented in software modules, where crucial steps of the workflow are fully automated. The transferability of the methodology is shown for simulated data of the future EnMAP hyperspectral satellite. Soil parameters are successfully predicted from these data despite intense spectral mixing within the lower-spatial-resolution EnMAP pixels. This study shows an innovative approach to using Imaging Spectroscopy for the mapping of key soil constituents, including soil organic carbon, over large areas in a non-agricultural ecosystem and under consideration of partial vegetation cover. It can contribute to a better assessment of soil constituents that describe ecosystem processes relevant for detecting and monitoring land changes. The maps further provide an assessment of the current carbon inventory in soils, valuable for carbon balances and carbon mitigation products.
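As an illustration of the linear spectral mixture analysis used to deconvolve partially vegetated pixels into their constituent components (a generic sketch: the endmember matrix, the soft sum-to-one weight, and the soil approximation are assumptions for illustration, not the thesis implementation):

# Linear spectral unmixing with non-negative fractions and a soft sum-to-one constraint.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=100.0):
    """pixel: (n_bands,) reflectance spectrum; endmembers: (n_bands, n_endmembers)
    matrix of endmember spectra (e.g. soil, green vegetation, dry vegetation)."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# For soil-dominated pixels, the vegetation contribution can then be removed to
# approximate the bare-soil signature, e.g.
# soil_approx = (pixel - f_veg * veg_spectrum) / f_soil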
The Arctic is considered as a focal region in the ongoing climate change debate. The currently observed and predicted climate warming is particularly pronounced in the high northern latitudes. Rising temperatures in the Arctic cause progressive deepening and longer duration of permafrost thawing during the arctic summer, creating an ‘active layer’ with high bioavailability of nutrients and labile carbon for microbial consumption. The microbial mineralization of permafrost carbon creates large amounts of greenhouse gases, including carbon dioxide and methane, which can be released to the atmosphere, creating a positive feedback to global warming. However, to date, the microbial communities that drive the overall carbon cycle and specifically methane production in the Arctic are poorly constrained. To assess how these microbial communities will respond to the predicted climate changes, such as an increase in atmospheric and soil temperatures causing increased bioavailability of organic carbon, it is necessary to investigate the current status of this environment, but also how these microbial communities reacted to climate changes in the past. This PhD thesis investigated three records from two different study sites in the Russian Arctic, including permafrost, lake shore and lake deposits from Siberia and Chukotka. A combined stratigraphic approach of microbial and molecular organic geochemical techniques was used to identify and quantify characteristic microbial gene and lipid biomarkers. Based on these data it was possible to characterize and identify the climate response of microbial communities involved in past carbon cycling during the Middle Pleistocene and the Late Pleistocene to Holocene. It is shown that previous warmer periods were associated with an expansion of bacterial and archaeal communities throughout the Russian Arctic, similar to present-day conditions. Different from this situation, past glacial and stadial periods experienced a substantial decrease in the abundance of Bacteria and Archaea. This trend can also be confirmed for the community of methanogenic archaea, which were highly abundant and diverse during warm and particularly wet conditions. For the terrestrial permafrost, a direct effect of the temperature on the microbial communities is likely. In contrast, it is suggested that the temperature rise in the scope of the glacial-interglacial climate variations led to an increase of the primary production in the Arctic lake setting, as can be seen in the corresponding biogenic silica distribution. The availability of this algae-derived carbon is suggested to be a driver for the observed pattern in the microbial abundance. This work demonstrates the effect of climate changes on the community composition of methanogenic archaea. Methanosarcina-related species were abundant throughout the Russian Arctic and were able to adapt to changing environmental conditions. In contrast, members of Methanocellales and Methanomicrobiales were not able to adapt to past climate changes. This PhD thesis provides first evidence that past climatic warming led to an increased abundance of microbial communities in the Arctic, closely linked to the cycling of carbon and methane production. With the predicted climate warming, it may therefore be anticipated that microbial communities will develop extensively.
Increasing temperatures in the Arctic will affect the temperature-sensitive parts of the current microbial communities, possibly leading to a suppression of cold-adapted species and the prevalence of methanogenic archaea that tolerate or adapt to increasing temperatures. These changes in the composition of methanogenic archaea will likely increase the methane production potential of high-latitude terrestrial regions, changing the Arctic from a carbon sink to a carbon source.
Numerical simulations of galaxy formation and observational Galactic Astronomy are two fields of research that study the same objects from different perspectives. Simulations try to understand galaxies like our Milky Way from an evolutionary point of view, while observers try to disentangle the current structure and the building blocks of our Galaxy. Due to great advances in computational power as well as in massive stellar surveys, we are now able to compare resolved stellar populations in simulations and in observations. In this thesis we use a number of approaches to relate the results of the two fields to each other. The major observational data set we refer to for this work comes from the Radial Velocity Experiment (RAVE), a massive spectroscopic stellar survey that observed almost half a million stars in the Galaxy. In a first study we use three different models of the Galaxy to generate synthetic stellar surveys that can be directly compared to the RAVE data. To do this we evaluate the RAVE selection function in great detail. Among the Galaxy models is the widely used Besancon model, which performs well when individual parameter distributions are considered, but fails when we study chemodynamic correlations. The other two models are based on distributions of mass particles instead of analytical distribution functions. This is the first time that such models are converted to the space of observables and are compared to a stellar survey. We show that these models can be competitive with, and in some aspects superior to, analytic models, because of their self-consistent dynamic history. In the case of a full cosmological simulation of disk galaxy formation we can recover features in the synthetic survey that relate to the known issues of the model and hence prove that our technique is sensitive to the global structure of the model. We argue that the next generation of cosmological galaxy formation simulations will deliver valuable models for our Galaxy. Testing these models with our approach will provide a direct connection between stellar Galactic astronomy and physical cosmology. In the second part of the thesis we use a sample of high-velocity halo stars from the RAVE data to estimate the Galactic escape speed and the virial mass of the Milky Way. In the course of this study cosmological simulations of galaxy formation also play a crucial role. Here we use them to calibrate and extensively test our analysis technique. We find the local Galactic escape speed to be 533 (+54/-41) km/s (90% confidence). Combining this result with a simple mass model of the Galaxy, we then construct an estimate of the virial mass of the Galaxy. For the mass profile of the dark matter halo we use two extreme models, a pure Navarro, Frenk & White (NFW) profile and an adiabatically contracted NFW profile. When we use statistics on the concentration parameter of these profiles taken from large dissipationless cosmological simulations, we obtain an estimate of the virial mass that is almost independent of the choice of the halo profile. For the mass M_340 enclosed within R_340 = 180 kpc we find 1.3 (+0.4/-0.3) x 10^12 M_sun. This value is in very good agreement with a number of other mass estimates in the literature that are based on independent data sets and analysis techniques. In the last part of this thesis we investigate a new possible channel to generate the population of Hypervelocity stars (HVSs) that is observed in the stellar halo.
Commonly, it is assumed that the velocities of these stars originate from an interaction with the super-massive black hole in the Galactic center. It was suggested recently that stars stripped off a disrupted satellite galaxy could reach similar velocities and leave the Galaxy. Here we study in detail the kinematics of tidal debris stars to investigate the probability that the observed sample of HVSs could partly originate from such a galaxy collision. We use a suite of N-body simulations following the encounter of a satellite galaxy with its Milky Way-type host galaxy. We quantify the typical pattern in angular and phase space formed by the debris stars and develop a simple model that predicts the kinematics of stripped-off stars. We show that the distribution of orbital energies in the tidal debris has a typical form that can be described quite accurately by a simple function. The main parameter determining the maximum energy kick a tidal debris star can get is the initial mass of the satellite and, only to a lesser extent, its orbit. The main contributors to an unbound stellar population created in this way are massive satellites (M_sat > 10^9 M_sun). The probability that the observed HVS population is significantly contaminated by tidal debris stars appears small in the light of our results.
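For context, the escape speed quoted above is tied to the total Galactic potential Phi through the textbook relation
v_esc(r) = sqrt(2 |Phi(r)|),
so the local measurement of v_esc, combined with a parametrized mass model of the Galaxy (for example an NFW or an adiabatically contracted NFW halo), constrains the potential and hence enclosed masses such as M_340.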
An important strand of research has investigated the question of how children acquire a morphological system using offline data from spontaneous or elicited child language. Most of these studies have found dissociations in how children apply regular and irregular inflection (Marcus et al. 1992, Weyerts & Clahsen 1994, Rothweiler & Clahsen 1993). These studies have considerably deepened our understanding of how linguistic knowledge is acquired and organised in the human mind. Their methodological procedures, however, do not involve measurements of how children process morphologically complex forms in real time. To date, little is known about how children process inflected word forms. The aim of this study is to investigate children’s processing of inflected words in a series of on-line reaction time experiments. We used a cross-modal priming experiment to test for decompositional effects on the central level. We used a speeded production task and a lexical decision task to test for frequency effects on the access level in production and recognition. Children’s behaviour was compared to adults’ behaviour towards three participle types (-t participles, e.g. getanzt ‘danced’ vs. -n participles with stem change, e.g. gebrochen ‘broken’ vs. -n participles without stem change, e.g. geschlafen ‘slept’). For the central level, results indicate that -t participles but not -n participles have decomposed representations. For the access level, results indicate that -t participles are represented according to their morphemes and additionally as full forms, at least from the age of nine years onwards (Pinker 1999 and Clahsen et al. 2004). Further evidence suggested that -n participles are represented as full-form entries on the access level and that -n participles without stem change may encode morphological structure (cf. Clahsen et al. 2003). Our data also suggest that processing strategies for -t participles are applied differently in recognition and production. These results provide evidence that children (within the age range tested) employ the same mechanisms for processing participles as adults. The child lexicon grows as children form additional full-form representations for -t participles on the access level and elaborate their full-form lexical representations of -n participles on the central level. These results are consistent with processing as explained in dual-system theories.
Organic semiconductors combine the benefits of organic materials, i.e., low-cost production, mechanical flexibility, lightweight, and robustness, with the fundamental semiconductor properties of light absorption, light emission, and electrical conductivity. This class of materials has several advantages over conventional inorganic semiconductors that have led, for instance, to the commercialization of organic light-emitting diodes, which can nowadays be found in the displays of TVs and smartphones. Moreover, organic semiconductors will possibly lead to new electronic applications which rely on the unique mechanical and electrical properties of these materials. In order to push the development and the success of organic semiconductors forward, it is essential to understand the fundamental processes in these materials. This thesis concentrates on understanding how the charge transport in thiophene-based semiconductor layers depends on the layer morphology and how the charge transport properties can be intentionally modified by doping these layers with a strong electron acceptor. By means of optical spectroscopy, the layer morphologies of poly(3-hexylthiophene), P3HT, P3HT-fullerene bulk heterojunction blends, and oligomeric polyquaterthiophene, oligo-PQT-12, are studied as a function of temperature, molecular weight, and processing conditions. The analyses rely on the decomposition of the absorption contributions from the ordered and the disordered parts of the layers. The ordered-phase spectra are analyzed using Spano’s model. It is found that the fraction of aggregated chains and the interconnectivity of these domains is fundamental to a high charge carrier mobility. In P3HT layers, such structures can be grown with high-molecular-weight, long P3HT chains. Low and medium molecular weight P3HT layers also contain a significant amount of chain aggregates with high intragrain mobility; however, intergranular connectivity and, therefore, efficient macroscopic charge transport are absent. In P3HT-fullerene blend layers, a highly crystalline morphology that favors the hole transport and the solar cell efficiency can be induced by annealing procedures and the choice of a high-boiling-point processing solvent. Based on scanning near-field and polarization optical microscopy, the morphology of oligo-PQT-12 layers is found to be highly crystalline, which explains the rather high field-effect mobility in this material as compared to low molecular weight polythiophene fractions. On the other hand, crystalline dislocations and grain boundaries are identified which clearly limit the charge carrier mobility in oligo-PQT-12 layers. The charge transport properties of organic semiconductors can be widely tuned by molecular doping. Indeed, molecular doping is a key to highly efficient organic light-emitting diodes and solar cells. Despite this vital role, it is still not understood how mobile charge carriers are induced in the bulk semiconductor upon doping. This thesis contains a detailed study of the doping mechanism and the electrical properties of P3HT layers which have been p-doped by the strong molecular acceptor tetrafluorotetracyanoquinodimethane, F4TCNQ. The density of doping-induced mobile holes, their mobility, and the electrical conductivity are characterized in a broad range of acceptor concentrations.
A long-standing debate on the nature of the charge transfer between P3HT and F4TCNQ is resolved by showing that almost every F4TCNQ acceptor undergoes a full-electron charge transfer with a P3HT site. However, only 5% of these charge transfer pairs can dissociate and induce a mobile hole in P3HT, which contributes to electrical conduction. Moreover, it is shown that the left-behind F4TCNQ ions broaden the density-of-states distribution for the doping-induced mobile holes, which is due to the long-range Coulomb attraction in the low-permittivity organic semiconductors.
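For reference, when Spano's model is applied to the ordered-phase (aggregate) absorption of P3HT in the weak excitonic coupling limit, the free-exciton bandwidth W is commonly estimated from the ratio of the 0-0 and 0-1 vibronic peaks; the following standard literature approximation is quoted here only for orientation and is not taken from this thesis:
A_0-0 / A_0-1 ≈ [ (1 − 0.24 W/E_p) / (1 + 0.073 W/E_p) ]^2,
where E_p is the energy of the dominant intramolecular vibration (about 0.18 eV for the C=C stretch in P3HT).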
Multi tenancy for cloud-based in-memory column databases : workload management and data placement
(2013)
Multi-messenger constraints and pressure from dark matter annihilation into electron-positron pairs
(2013)
Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has so far escaped any direct or indirect detection. Therefore, proving its existence and revealing its nature is one of the most intriguing challenges of present-day cosmology and particle physics. The present work investigates the nature of dark matter through indirect signatures from dark matter annihilation into electron-positron pairs in two different ways: pressure from dark matter annihilation, and multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach, where all the electrons and positrons are injected with the same initial energy E_0 ~ m_dm*c^2. The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate of this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E_0 < 1 GeV. The exact value depends somewhat on the astrophysical parameters, and it changes dramatically with the slope of the dark matter density profile. For very steep slopes, as those expected from adiabatic contraction, the rotation curves of spiral galaxies would be affected on kiloparsec scales for most values of E_0. By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably, the inner slope of the halo density profile, as well as the mass and the annihilation cross-section of dark matter particles into electron-positron pairs). In the second part, upper limits on the dark matter annihilation cross-section into electron-positron pairs are obtained by combining observed data at different wavelengths (from Haslam, WMAP, and Fermi all-sky intensity maps) with recent measurements of the electron and positron spectra in the solar neighbourhood by PAMELA, Fermi, and H.E.S.S. We consider synchrotron emission in the radio and microwave bands, as well as inverse Compton scattering and final-state radiation at gamma-ray energies. For most values of the model parameters, the tightest constraints are imposed by the local positron spectrum and by synchrotron emission from the central regions of the Galaxy. According to our results, the annihilation cross-section should not be higher than the canonical value for a thermal relic if the mass of the dark matter candidate is smaller than a few GeV.
In addition, we derive a stringent upper limit on the inner logarithmic slope α of the density profile of the Milky Way dark matter halo (α < 1 if m_dm < 5 GeV, α < 1.3 if m_dm < 100 GeV, and α < 1.5 if m_dm < 2 TeV), assuming a dark matter annihilation cross-section into electron-positron pairs of (σv) = 3*10^−26 cm^3 s^−1, as predicted for thermal relics from the Big Bang.
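For orientation, a standard steady-state form of the diffusion-loss equation commonly used in such indirect-detection studies can be sketched as

\nabla \cdot \big[ D(E,\mathbf{r}) \, \nabla n_e(E,\mathbf{r}) \big] + \frac{\partial}{\partial E} \big[ b(E,\mathbf{r}) \, n_e(E,\mathbf{r}) \big] + Q(E,\mathbf{r}) = 0, \qquad Q(E,\mathbf{r}) \propto \frac{\langle\sigma v\rangle}{2} \left( \frac{\rho_{\rm dm}(\mathbf{r})}{m_{\rm dm}} \right)^{2} \delta(E - E_0),

where n_e is the electron-positron number density per unit energy, D the spatial diffusion coefficient, b the total energy-loss rate (inverse Compton, synchrotron, Coulomb, bremsstrahlung, ionization), and Q the monochromatic injection term; the prefactor of Q depends on whether the dark matter particle is self-conjugate. For an ultra-relativistic electron-positron gas, the pressure then follows from the spectrum as P \approx \tfrac{1}{3} \int E \, n_e(E) \, dE. This is a generic sketch of the standard formalism; the exact coefficients, boundary conditions, and prefactors adopted in the thesis are not specified in the abstract.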
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared via a template process: as a first step, monodisperse silica particles were vertically deposited onto glass slides. The resulting colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. As a second step, the template was embedded in a matrix of biocompatible, thermoresponsive hydrogels. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystals as a template. The space between the template particles was filled with the monomer solution and the hydrogel was cured via UV polymerisation. The particles were then chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems were denoted inverse hydrogel opals. A pore diameter of several hundred nanometres, as well as interconnections between the pores, should facilitate the diffusion of larger (bio)molecules, which has been a challenge in previously presented systems. The copolymer composition was chosen to give a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a coupling site for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay. When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, so the binding event was visualised. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the spacing of the crystal planes, which are responsible for the colour of the reflection. These findings open the possibility of creating sensor materials for further biomolecules in the size range of avidin.
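As a brief, hedged illustration of the optical read-out (the specific optical model used in the thesis is not given in the abstract), the reflection wavelength of an opal-like lattice is commonly estimated with the Bragg-Snell relation,

\lambda_{\max} = 2\, d_{111} \sqrt{n_{\rm eff}^{2} - \sin^{2}\theta},

where d_{111} is the spacing of the (111) planes of the pore lattice, n_{\rm eff} the effective refractive index of the hydrogel-water composite, and \theta the angle of incidence. Swelling or shrinking of the thermoresponsive hydrogel changes d_{111}, shifting \lambda_{\max} and hence the observed colour of the reflection.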
Various synthetic approaches were explored towards the preparation of poly(N-substituted glycine) homo- and co-polymers (a.k.a. polypeptoids). In particular, monomers that would facilitate the preparation of bio-relevant polymers via either chain- or step-growth polymerization were targeted. A three-step synthetic approach towards N-substituted glycine N-carboxyanhydrides (NNCAs) was developed and optimized, allowing for an efficient gram-scale preparation of the aforementioned monomer (chain-growth). After exploring several solvents and various conditions, a reproducible and efficient ring-opening polymerization (ROP) of NNCAs was developed in benzonitrile (PhCN). However, achieving molecular weights greater than 7 kDa required long reaction times (>4 weeks), which subsequently allowed undesirable competing side reactions to occur (e.g. zwitterionic monomer mechanisms). A bulk-polymerization strategy provided molecular weights of up to 11 kDa within 24 hours but suffered from low monomer conversions (ca. 25%). Likewise, a preliminary study of alcohol-promoted ROP of NNCAs suffered from impurities and a suspected alternative activated monomer mechanism (AAMM), providing poor incorporation of the initiator and leading to multi-modal, dispersed polymeric systems. The post-modification of poly(N-allyl glycine) via thiol-ene photo-addition was quantitative when photo-initiators were used, and enabled the first glycopeptoid prepared under environmentally benign conditions. Furthermore, poly(N-allyl glycine) demonstrated thermo-responsive behavior and could be prepared as a semi-crystalline, bio-relevant polymer from solution (i.e. by annealing). Initial efforts to prepare these polymers via standard polycondensation protocols were unsuccessful (step-growth). However, a thermally induced side product, diallyl diketopiperazine (DKP), afforded the opportunity to explore photo-induced thiol-ene and acyclic diene metathesis (ADMET) polymerizations. Thiol-ene polymerization readily led to low-molecular-weight polymers (<2.5 kDa) that were insoluble in most solvents except heated amide solvents (e.g. DMF), whereas ADMET polymerization with diallyl DKP was unsuccessful, presumably due to a six-membered complexation/deactivation state of the catalyst. This understanding prompted the preparation of elongated DKPs, most notably dibutenyl DKP. SEC data support this interpretation, but further optimization is required, both in the preparation of the DKP monomers and in the subsequent ADMET polymerization. This work was supported by NMR, GC-MS, FT-IR, SEC-IR, and MALDI-ToF MS characterization. Polymer properties were measured by UV-Vis spectroscopy, TGA, and DSC.
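As a hedged aside on the reported molecular weights (a generic textbook relation, not taken from the thesis), for an ideal living chain-growth polymerization such as NNCA ROP the number-average molar mass scales with monomer conversion p,

M_n \approx \frac{[M]_0}{[I]_0} \; p \; M_{\rm repeat} + M_{\rm end},

where [M]_0/[I]_0 is the initial monomer-to-initiator ratio and M_{\rm repeat} the molar mass of the repeat unit. Side reactions such as the activated monomer mechanism break this linear dependence and broaden the molecular weight distribution, consistent with the multi-modal systems observed for the alcohol-promoted ROP described above.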
On Particular n-Clones
(2013)