This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation, and conceptualization. LVCs are analyzed as a phenomenon of mismatch in argument structure. The processing implications of this mismatch are investigated experimentally, using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs does not differ from that of other structures, their semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
In the past decades, development cooperation (DC) led by conventional bi- and multilateral donors has been joined by a large number of small, private or public-private donors. This pluralism of actors raises the question of whether these new donors are able to implement projects more or less effectively than their conventional counterparts. In contrast to their predecessors, the new donors have committed themselves to being more pragmatic, innovative, and flexible in their development cooperation measures. However, they are also criticized for weakening the function of local civil society and have the reputation of being an opaque and often controversial alternative to public services. With additional financial resources and their new approach to development, the new donors have been described in the literature as playing a controversial role in transforming development cooperation. This dissertation compares the effectiveness of initiatives by new and conventional donors with regard to the provision of public goods and services to the poor in the water and sanitation sector in India.
India is an emerging country, but it experiences high poverty rates and poor water supply in predominantly rural areas. It lends itself to analyzing this research theme, as it is currently confronted by a large number of actors and approaches that aim to find solutions to these challenges.
In the theoretical framework of this dissertation, four governance configurations are derived from the interaction of varying actor types with regard to hierarchical and non-hierarchical steering of their interactions. These four governance configurations differ in decision-making responsibilities, accountability, and the delegation of tasks or direction of information flow. The assumptions about actor relationships and steering are supplemented by possible alternative explanations in the empirical investigation, such as resource availability, the inheritance of structures and institutions from previous projects in a project context, gaining acceptance among beneficiaries (local legitimacy) as a door opener, and asymmetries of power in the project context.
Case study evidence from seven projects reveals that the actors' relationship is important for successful project delivery. Additionally, the results show a systematic difference between conventional and new donors. Projects led by conventional donors were consistently more successful, owing to an actor relationship that placed responsibility in the hands of the recipient actors and benefited from the trust and reputation built through long-term cooperation. This trust and reputation of conventional donors was always backed at the federal level and trickled down to local-level implementation. Furthermore, charismatic leaders, as well as structures and institutions acquired from predecessor projects, also proved to be positive factors for successful project implementation.
Despite the mixed results of the seven case studies, central recommendations for action can be derived for the various actors involved in development cooperation. For example, new donors could fulfill a supplementary function alongside conventional donors by developing innovative project approaches through pilot studies and then implementing them as a supplement to conventional donors' projects on the ground. In return, conventional donors would have to make room for the new donors by integrating their approaches into existing programs in order to promote donor harmonization. It is also important to identify and occupy niches for activities and to promote harmonization among donors at the state and federal levels.
The empirical results demonstrate the need for a harmonization strategy of different donor types in order to prevent duplication, over-experimentation and the failure of development programs. A transformation to successful and sustainable development cooperation can only be achieved through more coordination processes and national self-responsibility.
In the context of an aging population and a shift of the medical paradigm in health care towards individualized medicine, nanostructured lanthanide-doped sodium yttrium fluoride (NaYF4) represents an exciting class of upconversion nanomaterials (UCNM) suitable for advancing biomedicine and biodetection. Although lanthanide-doped NaYF4 is among the most studied fluoride-based upconversion (UC) phosphors, many questions remain open concerning the interplay of the population routes of the sensitizer and activator electronic states involved in the upconversion luminescence photophysics, as well as the role of phonon coupling. This collective work aims at a detailed understanding of the upconversion mechanism in nanoscale NaYF4-based materials co-doped with several lanthanides, from Yb3+/Er3+ as the "standard" type of upconversion nanoparticles (UCNP) up to advanced UCNP incorporating Gd3+ and Nd3+. In particular, the impact of the crystal lattice structure and of the resulting lattice phonons on the upconversion luminescence was investigated in detail, based on different mixtures of cubic and hexagonal nanoscale NaYF4 crystals. Three synthesis methods were employed, depending on the respective central spectroscopic question: NaYF4-based upconversion nanoparticles doped with several combinations of lanthanides (Yb3+, Er3+, Gd3+ and Nd3+) were synthesized successfully using a hydrothermal method under mild conditions, a co-precipitation technique, and a high-temperature co-precipitation technique. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), Raman spectroscopy, and inductively coupled plasma optical emission spectrometry (ICP-OES). The results were discussed in detail in relation to the spectroscopic findings.
A variable spectroscopic setup was developed for multi-parameter upconversion luminescence studies at temperatures from 4 K to 328 K. In particular, the study of the thermal behavior of the upconversion luminescence, together with time-resolved area-normalized emission spectra, was a prerequisite for a detailed understanding of intramolecular deactivation processes, of structural changes upon annealing or with Gd3+ concentration, and of the role of phonon coupling for the upconversion efficiency. On this basis it became possible to synthesize UCNP with tailored upconversion luminescence properties. Finally, the potential of UCNP for life science applications is discussed in the context of current needs and improvements of nanomaterial-based optical sensors, with the "standard" UCNP design adapted to the special conditions of the biological matrix, namely better biocompatibility due to a lower impact on biological tissue and a higher penetrability for the excitation light. A first step in this direction was the use of Nd3+ ions as a new sensitizer in tridoped NaYF4-based UCNP, whose achieved absolute and relative temperature sensitivities are comparable to other types of local temperature sensors reported in the literature.
Understanding the rates and processes of denudation is key to unraveling the dynamic processes that shape active orogens. This includes decoding the roles of tectonic and climate-driven processes in the long-term evolution of high-mountain landscapes in regions with pronounced tectonic activity and steep climatic and surface-process gradients. Well-constrained denudation rates can be used to address a wide range of geologic problems. In steady-state landscapes, denudation rates are argued to be proportional to tectonic or isostatic uplift rates and provide valuable insight into the tectonic regimes underlying surface denudation. Catchment-mean denudation rates derived from terrestrial cosmogenic nuclides (TCN) such as 10Be have become a widely used quantification tool. Because such measurements average over timescales of 10² to 10⁵ years, they are less susceptible to stochastic changes than shorter-term denudation rate estimates (e.g., from suspended sediment measurements) and are therefore considered more reliable for comparison with long-term processes that operate on geologic timescales. However, the impact of various climatic, biotic, and surface processes on 10Be concentrations and the resulting denudation rates remains unclear and is subject to ongoing discussion. In this thesis, I explore the interaction of climate, the biosphere, topography, and geology in forcing and modulating denudation rates on catchment to orogen scales.
There are many processes in highly dynamic active orogens that may affect 10Be concentrations in modern river sands and therefore bias 10Be-derived denudation rates. The calculation of denudation rates from 10Be concentrations, moreover, requires a suite of simplifying assumptions that may not be valid or applicable in many orogens. I investigate how these processes affect 10Be concentrations in the Arun Valley of eastern Nepal, using 34 new 10Be measurements from the main stem of the Arun River and its tributaries. The Arun Valley is characterized by steep gradients in climate and topography, with elevations ranging from <100 m asl in the foreland basin to >8,000 m asl in the high sectors to the north, coupled with a five-fold increase in mean annual rainfall across strike of the orogen. Denudation rates from tributary samples increase toward the core of the orogen, from <0.2 to >5 mm/yr from the Lesser to the Higher Himalaya. Very high denudation rates (>2 mm/yr), however, are likely the result of 10Be dilution by surface and climatic processes, such as large landslides and glaciation, and thus may not be representative of long-term denudation rates. Mainstem Arun denudation rates increase downstream from ~0.2 mm/yr at the border with Tibet to 0.91 mm/yr at the outlet into the Sapt Kosi. However, the downstream 10Be concentrations may not be representative of the entire upstream catchment. Instead, I document evidence for downstream fining of grains from the Tibetan Plateau, resulting in an order-of-magnitude apparent decrease in the measured 10Be concentration.
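For context, a catchment-mean denudation rate can be back-calculated from a measured 10Be concentration under the standard steady-state erosion assumption (spallogenic production only). The sketch below uses illustrative parameter values for the production rate, rock density, and attenuation length; they are assumptions for demonstration, not the scaling scheme used in the thesis.

```python
import math

def denudation_rate_mm_per_yr(N, P, rho=2.7, Lambda=160.0,
                              half_life_yr=1.387e6):
    """Steady-state denudation rate from a 10Be concentration.

    N      : measured 10Be concentration (atoms/g quartz)
    P      : catchment-mean 10Be production rate (atoms/g/yr)
    rho    : rock density (g/cm^3)
    Lambda : spallation attenuation length (g/cm^2)
    """
    lam = math.log(2) / half_life_yr            # 10Be decay constant (1/yr)
    eps_cm_per_yr = (Lambda / rho) * (P / N - lam)
    return eps_cm_per_yr * 10.0                 # convert cm/yr to mm/yr

# Hypothetical Higher Himalaya sample: high scaled production, low concentration
print(round(denudation_rate_mm_per_yr(N=1.0e4, P=40.0), 2))  # ~2.37 mm/yr
```

Low 10Be concentrations thus translate into high apparent denudation rates, which is why dilution by landslide-derived sediment inflates the estimates mentioned above.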
In the Arun Valley and across the Himalaya, topography, climate, and vegetation are strongly interrelated. The observed increase in denudation rates at the transition from the Lesser to the Higher Himalaya corresponds to abrupt increases in elevation, hillslope gradient, and mean annual rainfall. Thus, across strike (N-S), it is difficult to decipher the potential impacts of climate and vegetation cover on denudation rates. To further evaluate these relationships, I instead took advantage of an along-strike, west-to-east increase of mean annual rainfall and vegetation density in the Himalaya. An analysis of 136 published 10Be denudation rates from along strike of the orogen revealed that median denudation rates do not vary considerably along the ~1500 km E-W extent of the Himalaya. However, the range of denudation rates generally decreases from west to east, with more variable denudation rates in the northwestern regions of the orogen than in the eastern regions. This denudation rate variability decreases as vegetation density increases (R = -0.90) and increases proportionately with the annual seasonality of vegetation (R = 0.99). Moreover, rainfall and vegetation modulate the relationship between topographic steepness and denudation rates such that in the wet, densely vegetated regions of the Himalaya, topography responds more linearly to changes in denudation rates than in dry, sparsely vegetated regions, where the response of topographic steepness to denudation rates is highly nonlinear. Understanding the relationships between denudation rates, topography, and climate is also critical for interpreting sedimentary archives. However, how terrestrial organic matter is transported out of orogens and into sedimentary archives remains poorly understood. Plant wax lipid biomarkers derived from terrestrial and marine sedimentary records are commonly used as paleohydrologic proxies to help elucidate these problems.
I address the issue of how to interpret the biomarker record by using the plant wax isotopic composition of modern suspended and riverbank organic matter to identify and quantify organic matter source regions in the Arun Valley. Topographic and geomorphic analysis, informed by the 10Be catchment-mean denudation rates, reveals that a combination of topographic steepness (as a proxy for denudation) and vegetation density is required to capture organic matter sourcing in the Arun River.
My studies highlight the importance of a rigorous and careful interpretation of denudation rates in tectonically active orogens that are furthermore characterized by strong climatic and biotic gradients. Unambiguous information about these issues is critical for correctly decoding and interpreting the possible tectonic and climatic forces that drive erosion and denudation, and the manifestation of the erosion products in sedimentary archives.
In this thesis, the two prototype catalysts Fe(CO)₅ and Cr(CO)₆ are investigated with time-resolved photoelectron spectroscopy at a high harmonic setup. In both of these metal carbonyls, a UV photon can induce the dissociation of one or more ligands of the complex. The mechanism of the dissociation has been debated over the last decades. The electronic dynamics of the first dissociation occur on the femtosecond timescale.
For the experiment, an existing high harmonic setup was moved to a new location, extended, and characterized. The modified setup can induce dynamics in gas-phase samples with photon energies of 1.55 eV, 3.10 eV, and 4.65 eV. The valence electronic structure of the samples can be probed with photon energies between 20 eV and 40 eV. The temporal resolution is 111 fs to 262 fs, depending on the combination of the two photon energies.
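The three pump energies are consistent with the fundamental and the second and third harmonics of an 800 nm driving laser; the wavelength is an assumption here, since the abstract does not state it. A quick conversion check:

```python
# Photon energy E[eV] = h*c / lambda, with h*c ≈ 1239.84 eV·nm.
# Assuming an 800 nm fundamental (e.g., Ti:sapphire; not stated in the
# abstract), the pump energies are the first three harmonics.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

fundamental = photon_energy_ev(800.0)
for n in (1, 2, 3):
    print(f"harmonic {n}: {n * fundamental:.2f} eV")  # 1.55, 3.10, 4.65 eV
```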
The electronically excited intermediates of the two complexes, as well as of the reaction product Fe(CO)₄, were observed with photoelectron spectroscopy in the gas phase for the first time. However, photoelectron spectroscopy gives access only to the final ionic states, and corresponding calculations to simulate these spectra are still in development. The peak energies and their evolution in time with respect to the initiating pump pulse were determined, and the peaks were assigned based on literature data. The spectra of the two complexes show clear differences. The dynamics were interpreted under the assumption that the motion of peaks in the spectra relates to the movement of the wave packet in the multidimensional energy landscape. The results largely confirm existing models of the reaction pathways: in both metal carbonyls, the pathway involves a direct excitation of the wave packet to a metal-to-ligand charge transfer state and a subsequent crossing to a dissociative ligand field state. The coupling of the electronic dynamics to the nuclear dynamics could explain the slower dissociation in Fe(CO)₅ compared to Cr(CO)₆.
Complementing the well-established zwitterionic monomers 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate ("SPE") and 3-((3-methacrylamidopropyl)dimethylammonio)propane-1-sulfonate ("SPP"), a series of closely related sulfobetaine monomers was synthesized and polymerized by reversible addition-fragmentation chain transfer (RAFT) polymerization, using a fluorophore-labeled RAFT agent. The polyzwitterions of systematically varied molar mass were characterized with respect to their solubility in water, deuterated water, and aqueous salt solutions. These poly(sulfobetaine)s show thermoresponsive behavior in water, exhibiting upper critical solution temperatures (UCST). Phase transition temperatures depend notably on the molar mass and polymer concentration, and are much higher in D2O than in H2O. Also, the phase transition temperatures are effectively modulated by the addition of salts. The individual effects can in part be correlated with the Hofmeister series for the anions studied. Still, they depend in a complex way on the concentration and nature of the added electrolytes on the one hand, and on the detailed structure of the zwitterionic side chain on the other. For polymers with the same zwitterionic side chain, methacrylamide-based poly(sulfobetaine)s exhibit higher UCST-type transition temperatures than their methacrylate analogs. Extending the distance between the polymerizable unit and the zwitterionic group from 2 to 3 methylene units decreases the UCST-type transition temperatures. Poly(sulfobetaine)s derived from aliphatic esters show higher UCST-type transition temperatures than their analogs featuring cyclic ammonium cations. The UCST-type transition temperatures increase markedly when the spacer separating the cationic and anionic moieties is extended from 3 to 4 methylene units.
Thus, apparently small variations of their chemical structure strongly affect the phase behavior of the polyzwitterions in specific aqueous environments.
Water-soluble block copolymers were prepared from the zwitterionic monomers and the non-ionic monomer N-isopropylmethacrylamide ("NIPMAM") by RAFT polymerization. Such block copolymers with two hydrophilic blocks exhibit twofold thermoresponsive behavior in water: the poly(sulfobetaine) block shows a UCST, whereas the poly(NIPMAM) block exhibits a lower critical solution temperature (LCST). This constellation induces a structure inversion of the solvophobic aggregates, called "schizophrenic micelles". Depending on the relative positions of the two phase transitions, the block copolymer passes through a molecularly dissolved or an insoluble intermediate regime, which can be modulated by the polymer concentration or by the addition of salt. At low temperature, the poly(sulfobetaine) block forms polar aggregates that are kept in solution by the poly(NIPMAM) block; at high temperature, the poly(NIPMAM) block forms hydrophobic aggregates that are kept in solution by the poly(sulfobetaine) block. Thus, aggregates can be prepared in water that reversibly switch their "inside" to the "outside", and vice versa.
It is commonly recognized that soil moisture exhibits spatial heterogeneities occurring in a wide range of scales. These heterogeneities are caused by different factors ranging from soil structure at the plot scale to land use at the landscape scale. There is an urgent need for efficient approaches to deal with soil moisture heterogeneity at large scales, where management decisions are usually made. The aim of this dissertation was to test innovative approaches for making efficient use of standard soil hydrological data in order to assess seepage rates and main controls on observed hydrological behavior, including the role of soil heterogeneities.
As a first step, the applicability of a simplified Buckingham-Darcy method for estimating deep seepage fluxes from point information on soil moisture dynamics was assessed. This was done in a numerical experiment considering a broad range of soil textures and textural heterogeneities. The method performed well for most soil texture classes. However, in pure sand, where seepage fluxes were dominated by heterogeneous flow fields, it turned out not to be applicable, because it simply neglects the effect of water flow heterogeneity. This study identified a need for new, efficient approaches to handle heterogeneities in one-dimensional water flux models.
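A minimal sketch of the unit-gradient simplification that underlies such a Buckingham-Darcy approach, with the unsaturated conductivity parameterized after Mualem-van Genuchten. All parameter values below are illustrative assumptions, not taken from the thesis:

```python
def k_unsat(theta, theta_r=0.05, theta_s=0.40, Ks=1.0e-5, n=2.0, L=0.5):
    """Unsaturated hydraulic conductivity K(theta) in m/s after
    Mualem-van Genuchten; parameter values are illustrative only."""
    m = 1.0 - 1.0 / n
    Se = (theta - theta_r) / (theta_s - theta_r)    # effective saturation
    Se = min(max(Se, 1e-9), 1.0)                    # clamp to valid range
    return Ks * Se**L * (1.0 - (1.0 - Se**(1.0 / m))**m) ** 2

# Unit-gradient simplification of the Buckingham-Darcy law: below the
# root zone, the deep seepage flux q is approximated by the conductivity
# at the measured water content.
q = k_unsat(theta=0.25)                             # m/s
```

Because q depends only on the local water content under this simplification, heterogeneous (e.g., preferential) flow fields, as in the pure-sand case above, violate its assumptions.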
As a further step, an approach was presented that turns the problem of soil moisture heterogeneity into a solution: principal component analysis was applied to exploit the variability among soil moisture time series for analyzing apparently complex soil hydrological systems. It can be used to identify the main controls on the hydrological behavior, quantify their relevance, and describe their particular effects by functional averaged time series. The approach was first tested with soil moisture time series simulated for different texture classes in homogeneous and heterogeneous model domains. Afterwards, it was applied to 57 moisture time series measured in a multifactorial long-term field experiment in Northeast Germany.
The dimensionality of both data sets was rather low, because more than 85 % of the total moisture variance could already be explained by the hydrological input signal and by signal transformation with soil depth. The perspective of signal transformation, i.e. analyzing how hydrological input signals (e.g., rainfall, snow melt) propagate through the vadose zone, turned out to be a valuable supplement to the common mass flux considerations. Neither different textures nor spatial heterogeneities affected the general kind of signal transformation, showing that complex spatial structures do not necessarily evoke complex hydrological behavior. In the case of the field-measured data, another 3.6 % of the total variance was unambiguously explained by different cropping systems. Additionally, it was shown that different soil tillage practices did not affect the soil moisture dynamics at all.
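The idea can be illustrated on synthetic data (not the thesis dataset): applying principal component analysis to a set of lagged, damped copies of one hydrological input signal recovers the shared forcing as the dominant component, which is why a few components suffice to explain most of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)
forcing = np.sin(2 * np.pi * t / 365)        # shared hydrological input signal

# 57 synthetic soil moisture probes: lagged, damped copies of the forcing
# plus measurement noise (all values purely illustrative)
lags = rng.integers(0, 30, 57)
gains = rng.uniform(0.5, 1.0, 57)
X = np.stack([np.roll(forcing, lag) * gain + 0.05 * rng.standard_normal(t.size)
              for lag, gain in zip(lags, gains)])

Xc = X - X.mean(axis=1, keepdims=True)       # center each time series
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance fraction per component
print(f"PC1 explains {100 * explained[0]:.0f} % of the total variance")
```

The leading component captures the common forcing; the lag structure (signal transformation with depth) loads on the next component, mirroring the two dominant sources of variance described above.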
The presented approach does not require a priori assumptions about the nature of physical processes, and it is not restricted to specific scales. Thus, it opens various possibilities to incorporate the key information from monitoring data sets into the modeling exercise and thereby reduce model uncertainties.
The horse is a fascinating animal, symbolizing power, beauty, strength, and grace. Among all domesticated animal species, the horse has had the largest impact on the course of human history, owing to its importance for warfare and transportation. Studying the process of horse domestication contributes to our knowledge about the history of horses and even of our own species.
Research based on molecular methods has increasingly focused on the genetic basis of horse domestication. Mitochondrial DNA (mtDNA) analyses of modern and ancient horses detected immense maternal diversity, probably due to many mares that contributed to the domestic population. However, mtDNA does not provide an informative phylogeographic structure. In contrast, Y chromosome analyses displayed almost complete uniformity in modern stallions but relatively high diversity in a few ancient horses. Further molecular markers that seem to be well suited to infer the domestication history of horses or genetic and phenotypic changes during this process are loci associated with phenotypic traits.
This doctoral thesis consists of three parts, for which I analyzed various single nucleotide polymorphisms (SNPs) associated with coat color, locomotion, or Y chromosomal variation in horses. These SNPs were genotyped in 350 ancient horses from the Chalcolithic (5,000 BC) to the Middle Ages (11th century). The samples range in distribution from China to the Iberian Peninsula and Iceland. Applying multiplexed next-generation sequencing (NGS), I sequenced short amplicons covering the relevant positions: i) eight coat-color-associated mutations in six genes, to deduce the coat color phenotype; ii) the so-called ’Gait-keeper’ SNP in the DMRT3 gene, to screen for the ability to amble; iii) 16 SNPs previously detected in ancient horses, to infer the corresponding Y chromosome haplotype. Based on these data, I investigated the occurrence and frequencies of alleles underlying the respective phenotypes, as well as Y chromosome haplotypes, at different times and in different regions. Selection coefficients for several Y chromosome lineages and phenotypes were also estimated.
Concerning coat color differences in ancient horses, my work constitutes the most comprehensive study to date. I detected an increase in chestnut horses in the Middle Ages, as well as differential selection for spotted and solid phenotypes over time, which reflects changing human preferences.
With regard to ambling horses, the corresponding allele was present in medieval English and Icelandic horses. Based on these results, I argue that Norse settlers, who frequently invaded parts of Britain, brought ambling individuals to Iceland from the British Isles, which can be regarded as the origin of this trait. Moreover, these settlers appear to have selected for ambling in Icelandic horses.
Relating to the third aspect, paternal diversity, these findings represent the largest ancient dataset of Y chromosome variation in non-humans. I demonstrated the existence of several Y chromosome haplotypes in early domestic horses. The decline of Y chromosome variation coincides first with the movement of nomadic peoples from the Eurasian steppes and later with different breeding practices in the Roman period.
In conclusion, positive selection was estimated for several phenotypes and lineages in different regions or times, indicating that these were preferred by humans. Furthermore, I was able to infer the distribution and dispersal of horses in association with human movements and actions, thereby gaining a better understanding of the influence of people on the changing appearance and genetic diversity of domestic horses. My results also emphasize the close relationship between ancient genetics and archeology or history, and that well-founded conclusions can be reached only in combination.
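Selection coefficients from allele-frequency change over time can be estimated, for example, under a simple deterministic genic-selection model in which the logit of the allele frequency changes linearly per generation. The numbers below are hypothetical, and this textbook estimator is not necessarily the one used in the thesis:

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def selection_coefficient(p0: float, p1: float, generations: float) -> float:
    """Per-generation selection coefficient s under a deterministic
    genic-selection model: logit(p) increases linearly at rate s."""
    return (logit(p1) - logit(p0)) / generations

# Hypothetical example: an allele rises from 10 % to 40 % over 200 generations
s = selection_coefficient(0.10, 0.40, 200)
print(f"s ≈ {s:.4f}")  # ≈ 0.0090
```

Even small per-generation advantages like this compound into large frequency shifts over archaeological timescales, which is why preferred phenotypes can sweep through domestic populations.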
This doctoral thesis seeks to elaborate how Wittgenstein’s very sparse writings on ethics and ethical thought, together with his later work on the more general problem of normativity and his approach to philosophical problems as a whole, can be applied to contemporary meta-ethical debates about the nature of moral thought and language and the sources of moral obligation. I begin with a discussion of Wittgenstein’s early “Lecture on Ethics”, distinguishing the thesis of a strict fact/value dichotomy that Wittgenstein defends there from the related thesis that all ethical discourse is essentially and intentionally nonsensical, an attempt to go beyond the limits of sense. The first chapter discusses and defends Wittgenstein’s argument that moral valuation always goes beyond any ascertaining of fact; the second chapter seeks to draw out the valuable insights from Wittgenstein’s (early) insistence that value discourse is nonsensical while also arguing that this thesis is ultimately untenable and incompatible with the later Wittgensteinian understanding of language. On the basis of this discussion I then take up the writings of the American philosopher Cora Diamond, who has worked out an ethical approach in a closely Wittgensteinian spirit, and show how this approach shares many of the valuable insights of the moral expressivism and constructivism of contemporary authors such as Blackburn and Korsgaard while suggesting a way to avoid some of the problems and limitations of their approaches. Subsequently I turn to a criticism of the attempts by Lovibond and McDowell to enlist Wittgenstein in support of a non-naturalist moral realism. A concluding chapter treats the ways that a broadly Wittgensteinian conception expands the subject of metaethics itself by questioning the primacy of discursive argument in moral thought and of moral propositions as the basic units of moral belief.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures result from an interaction of multiple cognitive levels and processing strategies and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing, such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side, and models of sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the model of cue-based retrieval in sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of individually simple assumptions results in predictions of complex, adaptive behavior.
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
This study presents new insights into null subjects, topic drop and the interpretation of topic-dropped elements. Besides providing an empirical data survey, it offers explanations for well-known problems, e.g. syncretisms in the context of null-subject licensing or the marginality of dropping an element which carries oblique case. The book constitutes a valuable source for both empirically and theoretically oriented (generative) linguists.
Thermophony in real gases
(2016)
A thermophone is an electrical device for sound generation. The advantages of thermophones over conventional sound transducers such as electromagnetic, electrostatic or piezoelectric transducers are their operating principle, which does not require any moving parts, their resonance-free behavior, their simple construction and their low production costs.
In this PhD thesis, a novel theoretical model of thermophonic sound generation in real gases has been developed. The model is experimentally validated in a frequency range from 2 kHz to 1 MHz by testing more than fifty thermophones of different materials (including carbon nano-wires, titanium and indium tin oxide) and of different sizes and shapes, for sound generation in gases such as air, argon, helium, oxygen, nitrogen and sulfur hexafluoride.
Unlike previous approaches, the presented model can be applied to different kinds of thermophones and various gases, taking into account the thermodynamic properties of thermophone materials and of adjacent gases, degrees of freedom and the volume occupied by the gas atoms and molecules, as well as sound attenuation effects, the shape and size of the thermophone surface and the reduction of the generated acoustic power due to photonic emission. As a result, the model features better prediction accuracy than the existing models by a factor of up to 100. Moreover, the new model explains previous experimental findings on thermophones which cannot be explained with the existing models.
The acoustic properties of the thermophones have been tested in several gases using unique, highly precise experimental setups comprising a Laser-Doppler-Vibrometer combined with a thin polyethylene film which acts as a broadband and resonance-free sound-pressure detector. Several outstanding properties of the thermophones have been demonstrated for the first time, including the ability to generate arbitrarily shaped acoustic signals, a greater acoustic efficiency compared to conventional piezoelectric and electrostatic airborne ultrasound transducers, and applicability as powerful and tunable sound sources with a bandwidth up to the megahertz range and beyond.
Additionally, new applications of thermophones are discussed and experimentally demonstrated, such as the study of physical properties of gases, thermo-acoustic gas spectroscopy, broadband characterization of transfer functions of sound and ultrasound detection systems, and applications in non-destructive materials testing.
Widespread landscape changes are presently observed in the Arctic and are most likely to accelerate in the future, in particular in permafrost regions which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia, the terrestrial land mass connecting Eurasia and North America during glacial sea-level low stands. In order to investigate development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the following research questions:
1. When did permafrost degradation and thermokarst lake development take place and what were enhancing and inhibiting environmental factors?
2. What are the dominant processes during thermokarst lake development and how are they reflected in proxy records?
3. How did, and still do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and the carbon cycle?
Methods applied in this study are based upon a multi-proxy approach combining sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as analyses of stable isotopes and hydrochemistry of pore-water and ice. Modern field observations of water quality and basin morphometrics complete the environmental investigations.
The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to Mid Wisconsin, with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland development was terminated by the accumulation of a 1-meter-thick airfall tephra, most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surroundings of the site, which forms a yedoma upland to this day. The thermokarst lake forming GG basin initiated 230 ± 30 cal a BP and drained in spring 2005. Four years after drainage, the lake talik was still unfrozen below 268 cm depth.
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP, a shallow and more dynamic thermokarst lake developed with actively eroding shorelines and potentially intermediate shallow-water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt%) than in Mama Rhonda deposits (mean TOC of 7.9 wt%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in different lake generations by thawing and mobilization of organic carbon into the lake system.
Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed about 1,400 years of recent thermokarst dynamics along a depositional gradient, based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources, from upland deposits and re-deposited sediments from drained thaw lake basins, depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential of thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments.
The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, the steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity in pore-water. Alternation of marine to freshwater ostracods and foraminifera confirms decreasing salinity as well, but also reflects episodic re-deposition of allochthonous marine sediments.
Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns. This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.
Intracontinental deformation usually is a result of tectonic forces associated with distant plate collisions. In general, the evolution of mountain ranges and basins in this environment is strongly controlled by the distribution and geometries of preexisting structures. Thus, predictive models usually fail in forecasting the deformation evolution in these kinds of settings. Detailed information on each range and basin-fill is vital to comprehend the evolution of intracontinental mountain belts and basins. In this dissertation, I have investigated the complex Cenozoic tectonic evolution of the western Tien Shan in Central Asia, which is one of the most active intracontinental ranges in the world. The work presented here combines a broad array of datasets, including thermo- and geochronology, paleoenvironmental interpretations, sediment provenance and subsurface interpretations in order to track changes in tectonic deformation. Most of the identified changes are connected and can be related to regional-scale processes that governed the evolution of the western Tien Shan.
The NW-SE trending Talas-Fergana fault (TFF) separates the western from the central Tien Shan and constitutes a world-class example of the influence of preexisting anisotropies on the subsequent structural development of a contractional orogen. While to the east most of the ranges and basins have a sub-parallel E-W trend, the triangular-shaped Fergana basin forms a substantial feature in the western Tien Shan morphology, with ranges on all three sides. In this thesis, I present 55 new thermochronologic ages (apatite fission track and zircon (U-Th)/He) used to constrain exhumation histories of several mountain ranges in the western Tien Shan. At the same time, I analyzed the Fergana basin-fill, looking for progressive changes in sedimentary paleoenvironments, source areas and stratal geometrical configurations in the subsurface and outcrops.
The data presented in this thesis suggest that low cooling rates (<1°C Myr-1), calm depositional environments, and low depositional rates (<10 m Myr-1) were widely distributed across the western Tien Shan, describing a quiescent tectonic period throughout the Paleogene. Increased cooling rates in the late Cenozoic occurred diachronously and with variable magnitudes in different ranges. This rapid cooling stage is interpreted to represent increased erosion caused by active deformation and constrains the onset of Cenozoic deformation in the western Tien Shan. Time-temperature histories derived from the northwestern Tien Shan samples show an increase in cooling rates by ~25 Ma. This event is correlated with a synchronous pulse
in the South Tien Shan. I suggest that strike-slip motion along the TFF commenced at the Oligo-Miocene boundary, facilitating CCW rotation of the Fergana basin and enabling exhumation of the linked horsetail splays. Higher depositional rates (~150 m Myr-1) in the Oligo-Miocene section (Massaget Fm.) of the Fergana basin suggest synchronous deformation in the surrounding ranges. The central Alai Range also experienced rapid cooling around this time, suggesting that the onset of intramontane basin fragmentation and isolation is coeval. These results point to deformation starting simultaneously in the late Oligocene – early Miocene in geographically distant mountain ranges. I suggest that these early uplifts are controlled by reactivated structures (like the TFF), which are probably the frictionally weakest and most suitably oriented for accommodating and transferring N-S horizontal shortening along the western Tien Shan.
Afterwards, in the late Miocene (~10 Ma), a period of renewed rapid cooling affected the Tien Shan, and most mountain ranges and inherited structures started to actively deform. This episode is widely distributed, and an increase in exhumation is interpreted in most of the sampled ranges. Moreover, the Pliocene section in the basin subsurface shows the highest depositional rates (>180 m Myr-1) and higher-energy facies. The deformation and exhumation increase further contributed to intramontane basin partitioning. Overall, the interpretation is that the Tien Shan and much of Central Asia experienced a widespread increase in the rate of horizontal crustal shortening. Previously, stress transfer along the rigid Tarim block or Pamir indentation has been proposed to account for Himalayan hinterland deformation. However, the extent of the episode requires a different and broader geodynamic driver.
Among the bloom-forming and potentially harmful cyanobacteria, the genus Microcystis represents a highly diverse taxon, at the genomic as well as at the morphological and secondary metabolite levels. Microcystis communities are composed of a variety of diversified strains. The focus of this study lies on potential interactions between Microcystis representatives and the roles of secondary metabolites in these interaction processes.
The role of secondary metabolites as signaling molecules in the investigated interactions is demonstrated using the example of the prevalent hepatotoxin microcystin. The extracellular and intracellular roles of microcystin are tested in microarray-based transcriptomic approaches. While an extracellular effect of microcystin on Microcystis transcription is confirmed and connected to a specific gene cluster of another secondary metabolite in this study, intracellularly occurring microcystin is related to several pathways of the primary metabolism. A clear correlation between a microcystin knockout and the SigE-mediated regulation of carbon metabolism is found. Based on the acquired transcriptional data, a model is proposed that postulates a regulating effect of microcystin on transcriptional regulators such as the alternative sigma factor SigE, which in turn plays an essential role in sugar catabolism and redox-state regulation.
For the purpose of simulating community conditions as found in the field, Microcystis colonies were isolated from eutrophic lakes near Potsdam, Germany, and established as stable cultures under laboratory conditions. In co-habitation simulations, the recently isolated field strain FS2 is shown to specifically induce nearly immediate aggregation reactions in the axenic lab strain Microcystis aeruginosa PCC 7806. In transcriptional studies via microarrays, the induced expression program in PCC 7806 after aggregation induction is shown to involve the reorganization of cell envelope structures, a highly altered nutrient uptake balance and the reorientation of the aggregating cells to heterotrophic carbon utilization, e.g. via glycolysis. These transcriptional changes are discussed as mechanisms of niche adaptation and acclimation in order to prevent competition for resources.
Flood generation at the scale of large river basins is triggered by the interaction of the hydrological pre-conditions and the meteorological event conditions at different spatial and temporal scales. This interaction controls diverse flood generating processes and results in floods varying in magnitude and extent, duration as well as socio-economic consequences. For a process-based understanding of the underlying cause-effect relationships, systematic approaches are required. These approaches have to cover the complete causal flood chain, including the flood triggering meteorological event in combination with the hydrological (pre-)conditions in the catchment, runoff generation, flood routing, possible floodplain inundation and finally flood losses.
In this thesis, a comprehensive probabilistic process-based understanding of the causes and effects of floods is advanced. The spatial and temporal dynamics of flood events as well as the geophysical processes involved in the causal flood chain are revealed and the systematic interconnections within the flood chain are deciphered by means of the classification of their associated causes and effects. This is achieved by investigating the role of the hydrological pre-conditions and the meteorological event conditions with respect to flood occurrence, flood processes and flood characteristics as well as their interconnections at the river basin scale.
Broadening the knowledge about flood triggers, which up to now has been limited to linking large-scale meteorological conditions to flood occurrence, the influence of large-scale pre-event hydrological conditions on flood initiation is investigated. Using the Elbe River basin as an example, a classification of soil moisture, a key variable of pre-event conditions, is developed and a probabilistic link between patterns of soil moisture and flood occurrence is established. The soil moisture classification is applied to continuously simulated soil moisture data which is generated using the semi-distributed conceptual rainfall-runoff model SWIM. Applying successively a principal component analysis and a cluster analysis, days of similar soil moisture patterns are identified in the period November 1951 to October 2003.
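The pattern-classification step described here — dimensionality reduction followed by clustering of days with similar soil moisture fields — can be sketched in a few lines. The following minimal, pure-Python k-means (with invented two-dimensional "principal component scores" per day) only illustrates the clustering principle, not the thesis' actual SWIM-based pipeline:

```python
import math

def kmeans(points, centroids, iters=20):
    """Minimal k-means: group days with similar soil-moisture patterns.
    `points` are per-day vectors, here assumed to be already reduced to a
    few principal-component scores (as in the PCA step of the analysis)."""
    for _ in range(iters):
        # assignment step: each day joins its nearest centroid
        groups = [[] for _ in centroids]
        for p in points:
            d = [math.dist(p, c) for c in centroids]
            groups[d.index(min(d))].append(p)
        # update step: move each centroid to the mean of its group
        centroids = [
            tuple(sum(coord) / len(g) for coord in zip(*g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

# Invented PC scores: a "wet" cluster and a "dry" cluster of days
days = [(0.9, 0.8), (1.0, 1.1), (0.8, 1.0),        # wet-pattern days
        (-0.9, -1.0), (-1.1, -0.8), (-1.0, -1.2)]  # dry-pattern days
centroids, groups = kmeans(days, centroids=[(1.0, 1.0), (-1.0, -1.0)])
assert len(groups[0]) == 3 and len(groups[1]) == 3
```

In the thesis itself, the clustered objects are daily basin-wide soil moisture patterns simulated for 1951–2003; linking cluster membership to subsequent flood occurrence then yields the probabilistic connection described above.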
The investigation of flood triggers is complemented by including meteorological conditions described by a common weather pattern classification that represents the main modes of atmospheric state variability. The newly developed soil moisture classification thereby provides the basis to study the combined impact of hydrological pre-conditions and large-scale meteorological event conditions on flood occurrence at the river basin scale.
A process-based understanding of flood generation and its associated probabilities is attained by classifying observed flood events into process-based flood types such as snowmelt floods or long-rain floods. Subsequently, the flood types are linked to the soil moisture and weather patterns. Further understanding of the processes is gained by modeling of the complete causal flood chain, incorporating a rainfall-runoff model, a 1D/2D hydrodynamic model and a flood loss model. A reshuffling approach based on weather patterns and the month of their occurrence is developed to generate synthetic data fields of meteorological conditions, which drive the model chain, in order to increase the flood sample size. From the large number of simulated flood events, the impact of hydro-meteorological conditions on various flood characteristics is detected through the analysis of conditional cumulative distribution functions and regression trees.
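The regression tree analysis mentioned above rests on an elementary step: choosing, at each node, the predictor split that most reduces the variance of the response. A minimal single-split sketch (with invented event data; not the thesis' model chain) illustrates how such an analysis can indicate which hydro-meteorological variable controls a flood characteristic:

```python
def best_split(x, y):
    """Exhaustive search for the single split of predictor x that most
    reduces the sum of squared errors of y -- the elementary step of a
    regression tree."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_t, best_score = None, sse(y)
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = sse(left) + sse(right)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Invented events: flood magnitude tracks event rainfall more closely than
# antecedent soil moisture, so splitting on rainfall explains more variance.
rain = [10, 20, 60, 80, 15, 70]            # event rainfall [mm]
soil = [0.3, 0.8, 0.4, 0.9, 0.6, 0.5]      # antecedent soil saturation [-]
magn = [1.0, 1.2, 3.0, 3.4, 1.1, 3.1]      # flood magnitude (arbitrary units)
t_rain, sse_rain = best_split(rain, magn)
t_soil, sse_soil = best_split(soil, magn)
assert sse_rain < sse_soil   # rainfall is the stronger splitter here
```

Applied recursively to many predictors and the large simulated event set, this variance-reduction principle is what lets a regression tree attribute flood magnitude primarily to the meteorological event and flood extent primarily to soil moisture.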
The results show the existence of catchment-scale soil moisture patterns, which comprise large-scale seasonal wetting and drying components as well as smaller-scale variations related to spatially heterogeneous catchment processes. Soil moisture patterns frequently occurring before the onset of floods are identified. In winter, floods are initiated by catchment-wide high soil moisture, whereas in summer the flood-initiating soil moisture patterns are diverse and the soil moisture conditions are less stable in time. The combined study of both soil moisture and weather patterns shows that the flood-favoring hydro-meteorological patterns as well as their interactions vary seasonally. In the analysis period, 18 % of the weather patterns only result in a flood in the case of preceding soil saturation. The classification of 82 past events into flood types reveals seasonally varying flood processes that can be linked to hydro-meteorological patterns. For instance, the highest flood potential for long-rain floods is associated with a weather pattern that is often detected in the presence of so-called ‘Vb’ cyclones. Rain-on-snow and snowmelt floods are associated with westerly and north-westerly wind directions. The flood characteristics vary among the flood types and can be reproduced by the applied model chain. In total, 5970 events are simulated. They reproduce the observed event characteristics between September 1957 and August 2002 and provide information on flood losses. A regression tree analysis relates the flood processes of the simulated events to the hydro-meteorological (pre-)event conditions and highlights the fact that flood magnitude is primarily controlled by the meteorological event, whereas flood extent is primarily controlled by the soil moisture conditions.
Describing flood occurrence, processes and characteristics as a function of hydro-meteorological patterns, this thesis is part of a paradigm shift towards a process-based understanding of floods. The results highlight that soil moisture patterns as well as weather patterns are not only beneficial to a probabilistic conception of flood initiation but also provide information on the involved flood processes and the resulting flood characteristics.
Background: Aggression is a severe behavioral problem that interferes with many developmental challenges individuals face in middle childhood and adolescence. Particularly in the peer and in the academic domain, aggression inhibits the individual from making important learning experiences that are predictive for a healthy transition into adulthood. Furthermore, the resulting developmental deficits have the propensity to feedback and to promote aggression at later developmental stages. The aim of the present PhD thesis was to investigate pathways and processes involved in the etiology of aggression by examining the interrelation between multiple developmental problems in the peer and in the academic domain. More specifically, the relevance of affiliation with deviant peers as a driving mechanism for the development of aggression, factors promoting the affiliation with deviant peers (social rejection; academic failure), and mechanisms by which affiliation with deviant peers leads to aggression (external locus of control) were investigated.
Method: The research questions were addressed by three studies. Three data waves were available for the first study; the second and third studies were based on two data waves. The first study specified pathways to antisocial behavior by investigating the temporal interrelation between social rejection, academic failure, and affiliation with deviant peers in a sample of 1,657 male and female children and adolescents aged between 6 and 15 years. The second study examined the role of external control beliefs as a potential mediator in the link between affiliation with deviant peers and aggression in a sample of 1,466 children and adolescents aged 9 to 19 years, employing a half-longitudinal design. The third study aimed to expand the findings of Study 1 and Study 2 by examining the differential predictivity of combinations of developmental risks for different functions of aggression, using a sample of 1,479 participants aged between 9 and 19 years. First, profiles of social rejection, academic failure, and affiliation with deviant peers were identified, using latent profile analysis. Second, prospective pathways between risk profiles and reactive and proactive aggression were investigated, using latent path analysis.
Results: The first study revealed that antisocial behavior at T1 was associated with social rejection and academic failure at T2. Both mechanisms promoted affiliation with deviant peers at the same data wave, which predicted deviancy at T3. Furthermore, both an indirect pathway via social rejection and affiliation with deviant peers and an indirect pathway via academic failure and affiliation with deviant peers significantly mediated the link between antisocial behavior at the first and the third data wave. Additionally, the proposed pathways generalized across genders and different age groups. The second study showed that external control beliefs significantly mediated the link between affiliation with deviant peers and aggression, with affiliation with deviant peers at T1 predicting external control beliefs at T2 and external control beliefs at T1 predicting aggressive behavior at T2. Again, the analyses provided no evidence for gender- and age-specific variations in the proposed pathways. In the third study, three distinct risk groups were identified, made up of a large non-risk group with low scores on all risk measures, a group characterized by high scores on social rejection (SR group), and a group with the highest scores on measures of affiliation with deviant peers and academic failure (APAF group). Importantly, risk group membership was differentially associated with reactive and proactive aggression. Only membership in the SR group at T1 was associated with the development of reactive aggression at T2, and only membership in the APAF group at T1 predicted proactive aggression at T2. Additionally, proactive aggression at T1 predicted membership in the APAF group at T2, indicating a reciprocal relationship between both constructs.
Conclusion: The results demonstrated that aggression causes severe behavioral deficits in the social and academic domains, which promote future aggression by increasing individuals’ tendency to affiliate with deviant peers. The stimulation of external control beliefs provides an explanation for deviant peers’ effect on the progression and intensification of aggression. Finally, multiple developmental risks were shown to co-occur within individuals and to be differentially predictive of reactive and proactive aggression. The findings of this doctoral dissertation have possible implications for the conceptualization of prevention and intervention programs aimed at reducing aggression in middle childhood and adolescence.
Buyer-seller negotiations have a significant impact on a company’s profitability, which makes practitioners aim at maximizing their performance. One lever for increasing bargaining performance is to pursue a clearly defined aspiration, i.e. one’s most desired outcome. In this context, the author explores the role of such aspirations in the three negotiation phases: preparation, bargaining, and striking a deal. She investigates determinants of aspirations, unintended consequences such as unethical bargaining behavior, and the consequences of overly ambitious aspirations. As a result, she not only closes existing gaps in negotiation research, but also derives valuable implications for practitioners.
The cytoskeleton is an essential component of living cells. It is composed of different types of protein filaments that form complex, dynamically rearranging, and interconnected networks. The cytoskeleton serves a multitude of cellular functions which further depend on the cell context. In animal cells, the cytoskeleton prominently shapes the cell's mechanical properties and movement. In plant cells, in contrast, the presence of a rigid cell wall as well as their larger sizes highlight the role of the cytoskeleton in long-distance intracellular transport. As it provides the basis for cell growth and biomass production, cytoskeletal transport in plant cells is of direct environmental and economical relevance. However, while knowledge about the molecular details of the cytoskeletal transport is growing rapidly, the organizational principles that shape these processes on a whole-cell level remain elusive.
This thesis is devoted to the following question: How does the complex architecture of the plant cytoskeleton relate to its transport functionality? The answer requires a systems level perspective of plant cytoskeletal structure and transport. To this end, I combined state-of-the-art confocal microscopy, quantitative digital image analysis, and mathematically powerful, intuitively accessible graph-theoretical approaches.
This thesis summarizes five of my publications that shed light on the plant cytoskeleton as a transportation network: (1) I developed network-based frameworks for accurate, automated quantification of cytoskeletal structures, applicable in, e.g., genetic or chemical screens; (2) I showed that the actin cytoskeleton displays properties of efficient transport networks, hinting at its biological design principles; (3) Using multi-objective optimization, I demonstrated that different plant cell types sustain cytoskeletal networks with cell-type specific and near-optimal organization; (4) By investigating actual transport of organelles through the cell, I showed that properties of the actin cytoskeleton are predictive of organelle flow and provided quantitative evidence for a coordination of transport at a cellular level; (5) I devised a robust, optimization-based method to identify individual cytoskeletal filaments from a given network representation, allowing the investigation of single filament properties in the network context. The developed methods were made publicly available as open-source software tools.
Altogether, my findings and proposed frameworks provide quantitative, system-level insights into intracellular transport in living cells. Despite my focus on the plant cytoskeleton, the established combination of experimental and theoretical approaches is readily applicable to different organisms. While detailed molecular studies remain necessary, only a complementary, systemic perspective, as presented here, enables both an understanding of cytoskeletal function in its evolutionary context and its future technological control and utilization.
BACKGROUND: The etiology of low back pain (LBP), one of the most prevalent and costly diseases of our time, is accepted to be multi-causal, placing functional factors in the focus of research. In this context, pain models suggest a centrally controlled strategy of trunk stiffening in LBP. However, supporting biomechanical evidence is mostly limited to static measurements during maximum voluntary contractions (MVC), probably influenced by psychological factors in LBP. Alternatively, repeated findings indicate that the neuromuscular efficiency (NME), characterized by the strength-to-activation relationship (SAR), of lower back muscles is impaired in LBP. Therefore, a dynamic SAR protocol, consisting of normalized trunk muscle activation recordings during submaximal loads (SMVC), seems to be relevant. This thesis aimed to investigate the influence of LBP on the NME and activation pattern of trunk muscles during dynamic trunk extensions.
METHODS: The SAR protocol consisted of an initial MVC reference trial (MVC1), followed by SMVCs at 20, 40, 60 and 80% of MVC1 load. An isokinetic trunk dynamometer (Con-Trex TP, ROM: 45° flexion to 10° extension, velocity: 45°/s) and a trunk surface EMG setup (myon, up to 12 leads) were used. Extension torque output [Nm] and muscular activation [V] were assessed in all trials. Finally, another MVC trial was performed (MVC2) for reliability analysis. For SAR evaluation, the SMVC trial values were normalized [%MVC1] and compared inter- and intra-individually.
The methodological validity of the approach was tested in an isometric SAR single-case pilot study (S1a: N = 2, female LBP patient vs. healthy male). In addition, the validity of the MVC reference method was verified by comparing different contraction modes (S1b: N = 17, healthy individuals). Next, the isokinetic protocol was validated in terms of content for its applicability to display known physiological differences between sexes in a cross-sectional study (S2: n = 25 healthy males and n = 25 healthy females). Finally, the influence of acute pain on NME was investigated longitudinally by comparing N = 8 acute LBP patients with a retest after remission of pain (S3). The SAR analysis focused on normalized agonistic extensor activation and abdominal and synergistic extensor co-activation (t-tests, ANOVA, α = .05) as well as on the reliability of MVC1/2 outcomes.
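The normalization at the heart of the SAR protocol can be stated compactly: each submaximal trial's EMG amplitude is expressed as a percentage of the MVC1 reference. The sketch below uses invented numbers (the disproportionate rise at high load loosely mirrors the acute-LBP pattern reported in S3), not patient data:

```python
def normalize_to_mvc(smvc_emg, mvc_emg):
    """Express submaximal-trial peak EMG amplitudes as %MVC1 -- the
    normalization underlying the strength-to-activation relationship."""
    return [100.0 * v for v in (x / mvc_emg for x in smvc_emg)]

# Invented peak EMG [V] of one extensor lead at 20/40/60/80% MVC1 load
mvc1 = 0.50
smvc = [0.11, 0.22, 0.35, 0.48]
sar = normalize_to_mvc(smvc, mvc1)          # normalized activation [%MVC1]

# Activation needed per unit load: a rise at high loads would indicate
# reduced neuromuscular efficiency at those loads.
loads = (20, 40, 60, 80)
activation_per_load = [a / l for a, l in zip(sar, loads)]
```

Comparing such normalized activation-per-load profiles between trials, individuals, or pain states is what allows the SAR to quantify NME independently of absolute EMG amplitude.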
RESULTS: During the methodological validation of the protocol (S1a), the isometric SAR was found to be descriptively different between individuals. Whereas torque output was highest during eccentric MVC, no relevant difference in peak EMG activation was found between contraction modes (S1b). The isokinetic SAR sex comparison (S2), though showing no significant overall effects, revealed higher normalized extensor activation at moderate submaximal loads in females (13 ± 4%), primarily caused by pronounced thoracic activation. Similarly, co-activation analysis showed significantly higher antagonistic activation at moderate loads compared to males (33 ± 9%). The intra-individual analysis of the SAR in LBP patients (S3) identified a significant effect of pain status, manifesting as increased normalized EMG activation of the extensors during acute LBP (11 ± 8%), particularly at high load. Abdominal co-activation tended to be elevated (27 ± 11%), and the thoracic extensor parts appeared to take over a portion of the lumbar activation. Altogether, the behaviour of the M. erector spinae during the SAR protocol was rather linear, with a tendency to rise exponentially at high loads. The level of normalized EMG activation during SMVCs showed a clear increasing trend from healthy males to females and on to non-acute and acute LBP patients. This was accompanied by elevated antagonistic activation and a shift of synergistic towards lumbar extensor activation. The MVC data revealed overall good reliability, with clearly higher variability during acute LBP.
DISCUSSION: The present thesis demonstrates that the NME of lower back muscles is impaired in LBP patients, especially during an acute pain episode. A new dynamic protocol has been developed that makes it possible to display the underlying SAR using normalized trunk muscle EMG during submaximal isokinetic loads. The protocol shows promise as a biomechanical tool for the diagnostic analysis of NME in LBP patients and for monitoring rehabilitation progress. Furthermore, it is not the reliability of maximum strength but rather that of peak EMG in MVC measurements that seems to be decreased in LBP patients. Moreover, the findings of this thesis largely substantiate the assumptions of the recently presented ‘motor adaptation to pain’ model, suggesting a pain-related intra- and intermuscular activation redistribution affecting movement and stiffness of the trunk. Further research is needed to distinguish the grade of NME impairment between LBP subgroups.
In this Thesis, the properties of aqueous hemicellulose polysaccharides are investigated using computer simulations. The high swelling capacity of materials composed of these molecules allows the generation of directed motion in plant materials entirely controlled by water uptake.
To explore the molecular origin of this swelling capacity, a computational model with atomistic resolution for hemicellulose polysaccharides is built and validated against experiments. Using this model, simulations of small polysaccharides are employed to gain an understanding of the interactions of these molecules with water, the influence of water on their conformational freedom, and the swelling capacity quantified in terms of osmotic pressure. The simulations reveal that branched hemicellulose polysaccharides show different hydration characteristics than linear polysaccharides.
To study swelling properties on length and time scales that exceed the limitations of atomistic simulations, a procedure to obtain transferable coarse-grain models is developed. The coarse-grain models are shown to be transferable across both different degrees of polymerization and different solute concentrations. The procedure thus allows the construction of large coarse-grained systems based on small atomistic reference systems. Finally, the coarse-grain model is applied to demonstrate that linear and branched polysaccharides show different swelling behavior when coupled to a water bath.
The impact of soil microbiota on plant species performance and diversity in semi-natural grasslands
(2016)
The aim of this work is the evaluation of the geothermal potential of Luxembourg. The approach consists of a joint interpretation of the different types of information necessary for a first, rather qualitative assessment of deep geothermal reservoirs in Luxembourg and the adjoining regions in the surrounding countries of Belgium, France and Germany. For the identification of geothermal reservoirs by exploration, geological, thermal, hydrogeological and structural data are necessary. Until recently, however, reliable information about the thermal field and the regional geology, and thus about potential geothermal reservoirs, was lacking. Before a proper evaluation of the geothermal potential can be performed, a comprehensive survey of the geology and an assessment of the thermal field are required.
As a first step, the geology and basin structure of the Mesozoic Trier–Luxembourg Basin (TLB) are reviewed and updated using recently published information on the geology and structures, as well as borehole data available in Luxembourg and the adjoining regions. A Bouguer map is used to gain insight into the depth, morphology and structures of the Variscan basement buried beneath the Trier–Luxembourg Basin. The geological section of the old Cessange borehole is reinterpreted and provides, in combination with the available borehole data, consistent information for the production of isopach maps. The latter visualize the synsedimentary evolution of the Trier–Luxembourg Basin. In addition, basin-wide cross sections illustrate the evolution and structure of the Trier–Luxembourg Basin. The knowledge gained does not support the old concept of the Weilerbach Mulde. The basin-wide cross sections, as well as the structural and sedimentological observations in the Trier–Luxembourg Basin, suggest that the latter probably formed above a zone of weakness related to a buried Rotliegend graben. The inferred graben structure, designated the SE-Luxembourg Graben (SELG), is located in direct southwestern continuation of the Wittlicher Rotliegend-Senke.
The lack of deep boreholes and of subsurface temperature prognoses at depth is circumvented by using thermal modelling to infer the geothermal resource at depth. This approach requires detailed structural, geological and petrophysical input data. Conceptual geological cross sections encompassing the entire crust are constructed and further simplified and extended to lithospheric scale for their use as thermal models. The 2-D steady-state and conductive models are parameterized by means of measured petrophysical properties, including thermal conductivity, radiogenic heat production and density. A surface heat flow of 75 ± 7 (2σ) mW m–2 could be determined in the area for verification of the thermal models. The models are further constrained by the geophysically estimated depth of the lithosphere–asthenosphere boundary (LAB), defined by the 1300 °C isotherm. A LAB depth of 100 km, as seismically derived for the Ardennes, provides the best fit with the measured surface heat flow. The resulting mantle heat flow amounts to ∼40 mW m–2. Modelled temperatures are in the range of 120–125 °C at 5 km depth and of 600–650 °C at the crust/mantle discontinuity (Moho). Possible thermal consequences of the 10–20 Ma old Eifel plume, which apparently caused upwelling of the asthenospheric mantle to 50–60 km depth, were modelled in a steady-state thermal scenario, resulting in a surface heat flow of at least 91 mW m–2 (for a plume top at 60 km) in the Eifel region. Available surface heat-flow values are significantly lower (65–80 mW m–2) and indicate that the plume-related heating has not yet entirely reached the surface.
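The thesis uses 2-D models, which are beyond an abstract-level sketch, but the 1-D steady-state conductive geotherm, T(z) = T0 + q0·z/k − A·z²/(2k), illustrates how the quantities named above (surface heat flow, thermal conductivity, radiogenic heat production) combine. All parameter values below are illustrative, not the thesis's calibrated inputs:

```python
# 1-D steady-state conductive geotherm for a layer with uniform radiogenic
# heat production. Parameter values are illustrative placeholders.

def geotherm(z, t0=10.0, q0=0.075, k=3.0, a=1.0e-6):
    """Temperature [°C] at depth z [m], given surface temperature t0 [°C],
    surface heat flow q0 [W m^-2], thermal conductivity k [W m^-1 K^-1]
    and radiogenic heat production a [W m^-3]."""
    return t0 + q0 * z / k - a * z * z / (2.0 * k)

for depth_km in (1, 3, 5):
    print(depth_km, "km:", round(geotherm(depth_km * 1000.0), 1), "°C")
```

With these placeholder values the 5 km temperature comes out in the same broad range as the thesis's modelled 120–125 °C, though the agreement is coincidental rather than a reproduction of the 2-D results.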
Once conceptual geological models are established and the thermal regime is assessed, the geothermal potential of Luxembourg and the surrounding areas is evaluated by additional consideration of the hydrogeology, the stress field and tectonically active regions. On the one hand, low-enthalpy hydrothermal reservoirs in Mesozoic reservoirs in the Trier–Luxembourg Embayment (TLE) are considered. On the other hand, petrothermal reservoirs in the Lower Devonian basement of the Ardennes and Eifel regions are considered for exploitation by Enhanced/Engineered Geothermal Systems (EGS). Among the Mesozoic aquifers, the Buntsandstein aquifer characterized by temperatures of up to 50 °C is a suitable hydrothermal reservoir that may be exploited by means of heat pumps or provide direct heat for various applications. The most promising area is the zone of the SE–Luxembourg Graben. The aquifer is warmest underneath the upper Alzette River valley and the limestone plateau in Lorraine, where the Buntsandstein aquifer lies below a thick Mesozoic cover. At the base of an inferred Rotliegend graben in the same area, temperatures of up to 75 °C are expected. However, geological and hydraulic conditions are uncertain. In the Lower Devonian basement, thick sandstone-/quartzite-rich formations with temperatures >90 °C are expected at depths >3.5 km and likely offer the possibility of direct heat use. The setting of the Südeifel (South Eifel) region, including the Müllerthal region near Echternach, as a tectonically active zone may offer the possibility of deep hydrothermal reservoirs in the fractured Lower Devonian basement. Based on the recent findings about the structure of the Trier–Luxembourg Basin, the new concept presents the Müllerthal–Südeifel Depression (MSD) as a Cenozoic structure that remains tectonically active and subsiding, and therefore is relevant for geothermal exploration. 
Beyond direct use of geothermal heat, the expected modest temperatures at 5 km depth (about 120 °C) and increased permeability by EGS in the quartzite-rich Lochkovian could prospectively enable combined geothermal heat production and power generation in Luxembourg and the western realm of the Eifel region.
Background: The engagement in aggressive behavior in middle childhood is linked to the development of severe problems in later life. Thus, identifying factors and processes that contribute to the continuity and increase of aggression in middle childhood is essential in order to facilitate the development of intervention programs. The present PhD thesis aimed at expanding the understanding of the development of aggression in middle childhood by examining risk factors in the intrapersonal and interpersonal domains as well as the interplay between these factors: maladaptive anger regulation was examined as an intrapersonal risk factor; processes that occur in the peer context (social rejection and peer socialization) were included as interpersonal risk factors. In addition, in order to facilitate the in situ assessment of anger regulation strategies, an observational measure of anger regulation was developed and validated.
Method: The research aims were addressed within the scope of four articles. Data from two measurement time points about ten months apart were available for the analyses. Participants were elementary school children aged 6 to 10 years at T1 and 7 to 11 years at T2. The first article was based on cross-sectional analyses including only the first time point; in the remaining three articles, longitudinal associations across the two time points were analyzed. The first two articles were concerned with the development and the cross-sectional as well as longitudinal validation of an observational measure of anger regulation in middle childhood in a sample of 599 children. Using the same sample, the third article investigated the longitudinal link between maladaptive anger regulation and aggression, considering social rejection as a mediating variable. The frequency as well as different functions of aggression (reactive and proactive) were included as outcome measures. The fourth article examined the influence of class-level aggression on the development of different forms of aggression (relational and physical) over time, under consideration of differences in initial individual aggression, in a sample of 1,284 children. In addition, it was analyzed whether the path from aggression to social rejection varies as a function of class-level aggression.
Results: The first two articles revealed that the observational measure of anger regulation developed for the purpose of this research was cross-sectionally related to anger reactivity, aggression and social rejection as well as longitudinally related to self-reported anger regulation. In the third article it was found that T1 maladaptive anger regulation showed no direct link to T2 aggression, but an indirect link through T1 social rejection. This indirect link was found for the frequency of aggression as well as for reactive and proactive aggression. The fourth article revealed that with regard to relational aggression, a high level of classroom aggression predicted an increase of individual aggression only among children with initially low levels of aggression. For physical aggression, it was found that the overall level of aggression in the class affected all children equally. In addition, physical aggression increased the likelihood of social rejection irrespective of the class-level of aggression, whereas relational aggression caused social rejection only in classes with a generally low level of relational aggression. The analyses of gender-specific effects showed that children were mainly influenced by their same-gender peers and that the effect on the opposite gender was higher if children engaged in gender-atypical forms of aggressive behavior.
Conclusion: The results provided evidence for the construct and criterion validity of the observational measure of maladaptive anger regulation that was developed within the scope of this research. Furthermore, the findings indicated that maladaptive anger regulation constitutes an important risk factor for aggression, operating through social rejection. Finally, the results demonstrated that the level of aggression among classmates is relevant for the development of individual aggression over time and that children's evaluation of relationally aggressive behavior varies as a function of the normativity of relational aggression in the class. The study findings have implications for the measurement of anger regulation in middle childhood as well as for the prevention of aggression and social rejection.
Variations in the distribution of mass within an orogen may lead to transient sediment storage, which in turn might affect the state of stress and the level of fault activity. Distinguishing between the different forcing mechanisms causing variations in sediment flux and tectonic activity is therefore one of the most challenging tasks in understanding the spatiotemporal evolution of active mountain belts.
The Himalayan mountain belt is one of the most significant Cenozoic collisional mountain belts, formed by the collision between the northward-moving Indian Plate and the Eurasian Plate during the last 55-50 Ma. Ongoing convergence of these two tectonic plates is accommodated by faulting and folding within the arc-shaped Himalayan orogen and by the continued lateral and vertical growth of the Tibetan Plateau, the mountain belts adjacent to the plateau, and regions farther north. Growth of the Himalayan orogen is manifested by the development of successive south-vergent thrust systems. These thrust systems divide the orogen into different morphotectonic domains. From north to south these thrusts are the Main Central Thrust (MCT), the Main Boundary Thrust (MBT) and the Main Frontal Thrust (MFT). The growing topography interacts with moisture-bearing monsoonal winds, which results in pronounced gradients in rainfall, weathering, erosion and sediment transport toward the foreland and beyond. However, a fraction of this sediment is trapped and transiently stored within the intermontane valleys, or ‘duns’, within the lower-elevation foothills of the range. An improved understanding of the spatiotemporal evolution of these sediment archives could provide a unique opportunity to decipher the triggers of variations in sediment production, delivery and storage in an actively deforming mountain belt and support efforts to test linkages between sediment volumes in intermontane basins and changes in the shallow crustal stress field. As sediment redistribution in mountain belts on timescales of 10^2-10^4 years can affect cultural characteristics and infrastructure in the intermontane valleys and may even impact the seismotectonics of a mountain belt, there is a heightened interest in understanding sediment-routing processes and causal relationships between tectonism, climate and topography.
My investigation focuses on this intersection between tectonic processes and superposed climatic and sedimentary processes in the Himalayan orogenic wedge. The study area is the intermontane Kangra Basin in the northwestern Sub-Himalaya, because the characteristics of the different Himalayan morphotectonic provinces are well developed there, the area is part of a region strongly influenced by monsoonal forcing, and the numerous fluvial terraces provide excellent strain markers for assessing deformation processes within the Himalayan orogenic wedge. In addition, being located in front of the Dhauladhar Range, the region is characterized by pronounced gradients in past and present-day erosion and sediment processes associated with repeatedly changing climatic conditions. In light of these conditions I analysed climate-driven late Pleistocene-Holocene sediment cycles in this tectonically active region, which may be responsible for triggering the tectonic re-organization within the Himalayan orogenic wedge that has led to out-of-sequence thrusting at least since the early Holocene.
The Kangra Basin is bounded by the MBT in the north and the Sub-Himalayan Jwalamukhi Thrust (JMT) in the south and transiently stores sediments derived from the Dhauladhar Range. The basin contains ~200-m-thick conglomerates reflecting two distinct aggradation phases; following aggradation, several fluvial terraces were sculpted into these fan deposits. 10Be CRN surface-exposure dating of these terrace levels provides an age of 53.4±3.2 ka for the highest-preserved terrace (AF1); subsequently, this surface was incised until ~15 ka, when the second fan (AF2) began to form. AF2 fan aggradation was superseded by episodic Holocene incision, creating at least four terrace levels. We find a correlation between variations in sediment transport and δ18O records from regions affected by the Indian Summer Monsoon (ISM). During strengthened ISM phases and post-LGM glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux, whereas periods of a weakened ISM coupled with lower sediment supply coincided with renewed re-incision.
However, the evolution of fluvial terraces along Sub-Himalayan streams in the Kangra sector is also forced by tectonic processes. Back-tilted, folded terraces clearly document tectonic activity of the JMT. Offset of one of the terrace levels indicates a shortening rate of 5.6±0.8 to 7.5±1.0 mm a-1 over the last ~10 ka. Importantly, my study reveals that late Pleistocene/Holocene out-of-sequence thrusting accommodates 40-60% of the total 14±2 mm a-1 shortening partitioned throughout the Sub-Himalaya. Moreover, the observation that the JMT records shortening at a lower rate over longer timescales hints at out-of-sequence activity within the Sub-Himalaya. Re-activation of the JMT could be related to changes in the tectonic stress field caused by large-scale sediment removal from the basin. I speculate that the deformation processes of the Sub-Himalaya behave according to the predictions of the critical-wedge model and assume the following: while >200 m of sediment aggradation would trigger foreland-ward propagation of the deformation front, re-incision and removal of most of the stored sediments (nearly 80-85% of the optimum basin fill) would again create a sub-critical condition of the wedge taper and trigger the retreat of the deformation front.
While tectonism is responsible for the longer-term processes of erosion associated with steepening hillslopes, sediment cycles in this environment are mainly the result of climatic forcing. My new 10Be cosmogenic nuclide exposure dates and a synopsis of previous studies show that the late Pleistocene to Holocene alluvial fills and fluvial terraces studied here record periodic fluctuations of sediment supply and transport capacity on timescales of 10^3-10^5 years. To further evaluate the potential influence of climate change on these fluctuations, I compared the timing of aggradation and incision phases recorded within remnant alluvial fans and terraces with continental climate archives, such as speleothems, in neighboring regions affected by monsoonal precipitation. Together with previously published OSL ages constraining the timing of aggradation, I find a correlation between variations in sediment transport and oxygen-isotope records from regions affected by the Indian Summer Monsoon (ISM). Accordingly, during periods of increased monsoon intensity (the transitions from dry and cold to wet and warm periods, MIS4 to MIS3 and MIS2 to MIS1; MIS = marine isotope stage) and post-Last Glacial Maximum glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux. Conversely, periods of weakened monsoon intensity or lower sediment supply coincide with re-incision of the existing basin fill.
Finally, my study entails part of a low-temperature thermochronology study to assess the youngest exhumation history of the Dhauladhar Range. Zircon helium (ZHe) ages and existing low-temperature data sets (ZHe, apatite fission track (AFT)) across this range, together with 3D thermokinematic modeling (PECUBE), provide constraints on the exhumation and activity of the range-bounding Main Boundary Thrust (MBT) since at least mid-Miocene time. The modeling results indicate mean slip rates on the MBT fault ramp of ~2 – 3 mm a-1 since its activation. This has led to the growth of the >5-km-high frontal Dhauladhar Range and to continuous deep-seated exhumation and erosion. The obtained results also provide interesting constraints on deformation patterns and their variation along strike. The results point towards the absence of the time-transient ‘mid-crustal ramp’ in the basal decollement and of duplexing of the Lesser Himalayan sequence, unlike in nearby regions or even the central Nepal domain. A fraction of the convergence (~10-15%) is accommodated along the deep-seated MBT ramp, most likely merging into the MHT. This finding is crucial for a rigorous assessment of the overall level of tectonic activity in the Himalayan morphotectonic provinces, as it contradicts recently published geodetic shortening estimates, in which it has been proposed that the total Himalayan shortening in the NW Himalaya is accommodated within the Sub-Himalaya, with no tectonic activity assigned to the MBT.
Complex networks are ubiquitous in nature and society. They appear in vastly different domains, for instance as social networks, biological interactions or communication networks. Yet in spite of their different origins, these networks share many structural characteristics. For instance, their degree distribution typically follows a power law. This means that the fraction of vertices of degree k is proportional to k^(−β) for some constant β, making these networks highly inhomogeneous. Furthermore, they also typically have high clustering, meaning that a link between two nodes is more likely to appear if they have a neighbor in common.
To mathematically study the behavior of such networks, they are often modeled as random graphs. Many popular models, like inhomogeneous random graphs or preferential attachment, excel at producing a power-law degree distribution. Clustering, on the other hand, is either absent from these models or artificially enforced.
Hyperbolic random graphs bridge this gap by assuming an underlying geometry to the graph: Each vertex is assigned coordinates in the hyperbolic plane, and two vertices are connected if they are nearby. Clustering then emerges as a natural consequence: Two nodes joined by an edge are close by and therefore have many neighbors in common. On the other hand, the exponential expansion of space in the hyperbolic plane naturally produces a power law degree sequence. Due to the hyperbolic geometry, however, rigorous mathematical treatment of this model can quickly become mathematically challenging.
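The model just described admits a compact generator. The sketch below (standard library only) samples node coordinates in a hyperbolic disk and connects every pair within hyperbolic distance R; the disk radius R = 2 ln n and the exponent relation α = (β−1)/2 are common conventions, used here as assumptions rather than taken from the thesis:

```python
import math
import random

def hyperbolic_random_graph(n, beta, seed=0):
    """Sample a hyperbolic random graph: n vertices in a disk of radius
    R = 2 ln n, radii drawn with density ~ sinh(alpha*r) for alpha=(beta-1)/2;
    two vertices are adjacent iff their hyperbolic distance is at most R."""
    rng = random.Random(seed)
    alpha = (beta - 1.0) / 2.0
    R = 2.0 * math.log(n)
    # Inverse-CDF sampling: F(r) = (cosh(alpha*r)-1) / (cosh(alpha*R)-1)
    pts = []
    for _ in range(n):
        u = rng.random()
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        theta = rng.uniform(0.0, 2.0 * math.pi)
        pts.append((r, theta))
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            r1, t1 = pts[i]
            r2, t2 = pts[j]
            dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2.0 * math.pi))
            # Hyperbolic law of cosines; clamp guards against rounding below 1.
            arg = (math.cosh(r1) * math.cosh(r2)
                   - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
            if math.acosh(max(1.0, arg)) <= R:
                edges.add((i, j))
    return pts, edges

pts, edges = hyperbolic_random_graph(200, 2.5, seed=42)
print(len(pts), "vertices,", len(edges), "edges")
```

Vertices near the disk center become high-degree hubs, which is how the exponential expansion of space produces the power-law degree sequence mentioned above.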
In this thesis, we improve the understanding of hyperbolic random graphs by studying their structural and algorithmic properties. Our main contribution is threefold. First, we analyze the emergence of cliques in this model. We find that whenever the power-law exponent β satisfies 2 < β < 3, there exists a clique of size polynomial in n. For β >= 3, on the other hand, the size of the largest clique is logarithmic, which contrasts sharply with previous models, where the largest clique has constant size in this case. We also provide efficient algorithms for finding cliques if the hyperbolic node coordinates are known. Second, we analyze the diameter, i.e., the longest shortest path in the graph. We find that it is of order O(polylog(n)) if 2 < β < 3 and O(log n) if β > 3. To complement these findings, we also show that the diameter is of order at least Ω(log n). Third, we provide an algorithm for embedding a real-world graph into the hyperbolic plane using only its graph structure. To ensure good quality of the embedding, we perform extensive computational experiments on generated hyperbolic random graphs. Further, as a proof of concept, we embed the Amazon product recommendation network and observe that products from the same category are mapped close together.
This book examines why Japan has one of the highest enrolment rates in cram schools and private tutoring worldwide. It sheds light on the causes of this high dependence on ‘shadow education’ and its implications for social inequalities. The book provides a deep and extensive understanding of the role of this kind of education in Japan. It shows new ways to theoretically and empirically address this issue, and offers a comprehensive perspective on the impact of shadow education on social inequality formation that is based on reliable and convincing empirical analyses.
Contrary to earlier studies, the book shows that shadow education does not inevitably result in increasing or persisting inequalities, but can also enable students to overcome their status-specific disadvantages and contribute to more opportunities in education. Against the background of the continuous expansion and the convergence of shadow education systems across the globe, the findings of this book call for similar works in other national contexts, particularly Western societies without traditional large-scale shadow education markets. The book emphasizes the importance and urgency of dealing with the modern excesses of educational expansion and of education as an institution, in which the shadow education industry has made itself (seemingly) indispensable.
This book:
• Is the first comprehensive empirical work on the implications of shadow education for educational and social inequalities.
• Draws on quantitative and qualitative data and uses mixed-methods.
• Has major implications for sociological, international and comparative research on the topic.
• Introduces a general theoretical frame to help future research in approaching this under-theorized field.
Software-as-a-Service (SaaS) offers several advantages to both service providers and users. Service providers can benefit from the reduction of Total Cost of Ownership (TCO), better scalability, and better resource utilization. On the other hand, users can use the service anywhere and anytime, and minimize upfront investment by following the pay-as-you-go model. Despite the benefits of SaaS, users still have concerns about the security and privacy of their data. Due to the nature of SaaS and the Cloud in general, the data and the computation are beyond the users' control, and hence data security becomes a vital factor in this new paradigm. Furthermore, in multi-tenant SaaS applications, the tenants become more concerned about the confidentiality of their data since several tenants are co-located onto a shared infrastructure.
To address these concerns, we start by protecting the data from the provisioning process onward, by controlling how tenants are placed in the infrastructure. We present SecPlace, a resource allocation algorithm designed to minimize the risk posed by co-resident tenants. It enables the SaaS provider to control the resource (i.e., database instance) allocation process while taking the security of tenants into account as a requirement.
Due to the design principles of the multi-tenancy model, tenants share resources to some degree at both the application and infrastructure levels. Thus, strong security isolation must be in place. Therefore, we develop SignedQuery, a technique that prevents one tenant from accessing others' data. We use the signing concept to create a signature with which the tenant's request is signed; the server can then verify the signature, recognize the requesting tenant, and hence ensure that the data to be accessed belongs to the legitimate tenant.
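The abstract does not specify SignedQuery's exact construction; a minimal sketch of the general idea, using an HMAC over the tenant identifier and query with a per-tenant secret key, might look as follows. All names and the message format are illustrative assumptions:

```python
# Sketch of request signing and server-side verification with per-tenant
# HMAC keys. Construction details are illustrative, not SignedQuery's spec.
import hmac
import hashlib

TENANT_KEYS = {"tenant-a": b"secret-key-a", "tenant-b": b"secret-key-b"}

def sign_request(tenant_id, query):
    """Tenant side: MAC the tenant id and query with the tenant's key."""
    mac = hmac.new(TENANT_KEYS[tenant_id],
                   f"{tenant_id}|{query}".encode(), hashlib.sha256)
    return mac.hexdigest()

def verify_request(tenant_id, query, signature):
    """Server side: recompute the MAC and compare in constant time."""
    expected = sign_request(tenant_id, query)
    return hmac.compare_digest(expected, signature)

sig = sign_request("tenant-a", "SELECT * FROM employees")
print(verify_request("tenant-a", "SELECT * FROM employees", sig))  # True
```

A request signed under one tenant's key fails verification for any other tenant or any altered query, which captures the isolation property the abstract describes.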
Finally, data confidentiality remains a critical concern because data in the Cloud resides outside the users' premises and is hence beyond their control. Cryptography is increasingly proposed as a potential approach to address this challenge. Therefore, we present SecureDB, a system designed to run SQL-based applications over an encrypted database. SecureDB captures the schema design and analyzes it to understand the internal structure of the data (i.e., the relationships between the tables and their attributes). Moreover, we determine the appropriate partially homomorphic encryption scheme for each attribute so that computation is possible even when the data is encrypted.
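The abstract does not name the concrete schemes SecureDB selects. As an illustration of what "partially homomorphic" means, here is a textbook toy implementation of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The key sizes are deliberately tiny and insecure, for demonstration only:

```python
# Toy Paillier cryptosystem (g = n+1 variant). Insecure demo key sizes;
# for illustrating additive homomorphism only, not SecureDB's actual scheme.
import math
import random

def paillier_keygen(p=10007, q=10009):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                               # modular inverse
    return n, lam, mu

def encrypt(n, m):
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n, lam, mu, c):
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

n, lam, mu = paillier_keygen()
c1, c2 = encrypt(n, 42), encrypt(n, 58)
print(decrypt(n, lam, mu, (c1 * c2) % (n * n)))  # 100 = 42 + 58
```

This is what makes aggregates such as SUM feasible over an encrypted column: the server combines ciphertexts without ever seeing the plaintext values.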
To evaluate our work, we conduct extensive experiments with different settings. The main use case in our work is a popular open-source HRM application called OrangeHRM. The results show that our multi-layered approach is practical, provides enhanced security and isolation among tenants, and has moderate complexity in terms of processing encrypted data.
Savannas cover a broad geographical range across continents and are a biome best described by a mix of herbaceous and woody plants. The former create a more or less continuous layer, while the latter must be sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root-niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process of tree cover expansion known as encroachment. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. By this definition, an open savanna and a tree-dominated savanna can be classified as alternative states, since both can occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I will cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, the overwhelming focus on competition and disturbance has left another important ecological process neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. For the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process to be used as a prevention tool for savanna conservation. To perform this work, I developed a mathematical ecohydrological model of Ordinary Differential Equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
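As an illustration of what a three-variable ecohydrological ODE model of this kind can look like, the sketch below integrates a toy soil-moisture/grass/tree system with SciPy. The equations and every parameter value are invented stand-ins chosen only to show the structure (grasses boosting infiltration, space-limited growth, a grass-fed fire term on trees); they are not the model developed in the thesis.

```python
# Toy three-variable savanna model: soil moisture M, grass cover G, tree
# cover T. All functional forms and rates are illustrative assumptions.
from scipy.integrate import solve_ivp

def savanna(t, y, rain=0.8):
    M, G, T = y
    # Moisture: rainfall (infiltration enhanced by plant cover), loss, uptake.
    dM = rain * (1 + 0.5 * (G + T)) - 0.6 * M - M * (0.8 * G + 0.9 * T)
    # Grass: moisture-driven growth limited by free space, mortality/grazing.
    dG = 0.9 * M * G * (1 - G - T) - 0.3 * G
    # Trees: growth, mortality, and a fire term fueled by grass cover.
    dT = 0.5 * M * T * (1 - T) - 0.1 * T - 0.2 * G * T
    return [dM, dG, dT]

sol = solve_ivp(savanna, (0, 500), [0.5, 0.3, 0.1])
M, G, T = sol.y[:, -1]
print(f"final cover: grass={G:.2f}, tree={T:.2f}")
```

Sweeping a parameter such as the grazing rate (here folded into the grass mortality term) in a loop over `solve_ivp` runs is the usual way such models are used to map out alternative states and encroachment thresholds.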
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and a general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied, either. Conflicting results appear as a result of the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a ‘temporal niche’. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, such as the one presented in this work, could protect both the savanna biome and the societies that depend on it for (economic) survival. All studies which follow are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena, be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
This thesis presents new SAR methods and their application to tectonically active systems and related surface deformation. Two case studies are presented across three publications:
(1) The coseismic deformation related to the Nura earthquake (5 October 2008, magnitude Mw 6.6) at the eastern termination of the intramontane Alai valley, located between the southern Tien Shan and the northern Pamir. The coseismic surface displacements are analysed using SAR (Synthetic Aperture Radar) data. The results show clear gradients in the vertical and horizontal directions along a complex pattern of surface ruptures and active faults. To integrate and interpret these observations in the context of regional active tectonics, the SAR data analysis is complemented with seismological data and geological field observations. The main moment release of the Nura earthquake appears to be on the Pamir Frontal thrust, while the main surface displacements and surface rupture occurred in the footwall and along the NE–SW-striking Irkeshtam fault. With InSAR data from ascending and descending satellite tracks along with pixel offset measurements, the Nura earthquake source is modelled as a segmented rupture. One fault segment corresponds to high-angle brittle faulting at the Pamir Frontal thrust, and two more fault segments show moderate-angle and low-friction thrusting at the Irkeshtam fault. The integrated analysis of the coseismic deformation argues for rupture segmentation and strain partitioning associated with the earthquake. It possibly activated an orogenic wedge in the easternmost segment of the Pamir-Alai collision zone. Further, the style of the segmentation may be associated with the presence of Paleogene evaporites.
(2) The second focus is on slope instabilities and consequent landslides in the area of the prominent topographic transition between the Fergana basin and the high-relief Alai range. The Alai range constitutes an active orogenic wedge of the Pamir–Tien Shan collision zone that is described as a progressively northward-propagating fold-and-thrust belt. The interferometric analysis of ALOS/PALSAR radar data covers a period of 4 years (2007-2010) and is based on the Small Baseline Subset (SBAS) time-series technique to assess surface deformation with millimeter accuracy of surface change. 118 interferograms are analyzed to observe spatially continuous movements with downslope velocities up to 71 mm/yr. The obtained rates indicate slow movement of the deep-seated landslides during the observation time. We correlated these movements with precipitation and seismic records. The results suggest that the deformation peaks correlate with rainfall in the 3 preceding months and with one earthquake event. In the next step, to understand the spatial pattern of landslide processes, the tectonic, morphologic, and lithologic settings are combined with the patterns of surface deformation. We demonstrate that the lithological and tectonic structural patterns are the main controlling factors for landslide occurrence and surface deformation magnitudes. Furthermore, active contractional deformation at the front of the orogenic wedge is the main mechanism to sustain relief. Some of the slower but continuously moving slope instabilities are directly related to tectonically active faults and unconsolidated young Quaternary syn-orogenic sedimentary sequences. The slow-moving landslides observed by InSAR represent active deep-seated gravitational slope deformation phenomena, observed here for the first time in the Tien Shan mountains.
Our approach offers a new combination of InSAR techniques and tectonic aspects to localize and understand enhanced slope instabilities in tectonically active mountain fronts in the Kyrgyz Tien Shan.
Over the past decades, rapid and constant advances have brought GNSS technology to the point of monitoring transient ground motions with mm to cm accuracy in real time. As a result, the potential of using real-time GNSS for natural hazard prediction and early warning has been exploited intensively in recent years, e.g., for monitoring landslides and volcanic eruptions. Of particular note, compared with traditional seismic instruments, GNSS does not saturate or tilt when retrieving co-seismic displacements, which makes it especially valuable for early warning of earthquakes and earthquake-induced tsunamis. In this thesis, we focus on the application of real-time GNSS to fast seismic source inversion and tsunami early warning.
Firstly, we present a new approach to obtain precise co-seismic displacements using cost-effective single-frequency receivers. With regard to high-precision positioning, the main obstacle for single-frequency GPS receivers is the ionospheric delay. Considering that the change in ionospheric delay is almost linear over a few minutes, we constructed a linear model for each satellite to predict the ionospheric delay. The effectiveness of this method has been validated by an outdoor experiment and by the 2011 Tohoku event, confirming the feasibility of using dense GPS networks for geo-hazard early warning at an affordable cost.
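The per-satellite linear-prediction idea can be sketched in a few lines: fit a line to the delay estimated over the last few minutes of epochs, then extrapolate it forward. The numbers below are synthetic; real processing derives the delay from carrier-phase observables rather than taking it as given.

```python
# Sketch: ionospheric delay is nearly linear over a few minutes, so a
# per-satellite linear fit can predict it ahead. Values are invented.
import numpy as np

t = np.arange(0, 120, 30.0)   # past epochs [s], one fit per satellite
iono = 5.00 + 0.002 * t + np.array([0.001, -0.002, 0.002, -0.001])  # delay [m] + noise

slope, intercept = np.polyfit(t, iono, 1)   # linear model for this satellite
predicted = intercept + slope * 150.0       # extrapolate 30 s past the window
print(f"predicted ionospheric delay: {predicted:.3f} m")
```

In a dense network, refreshing such a fit every epoch lets a single-frequency receiver correct its observations without a second frequency.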
Secondly, we extended temporal point positioning from GPS-only to GPS/GLONASS and assessed the potential benefits of multi-GNSS for co-seismic displacement determination. Outdoor experiments reveal that when observations are conducted in an adverse environment, adding a couple of GLONASS satellites can provide more reliable results. The case study of the 2015 Mw 8.3 Illapel earthquake shows that the biases between co-seismic displacements derived from GPS-only and GPS/GLONASS vary from station to station, and can reach up to 2 cm in the horizontal direction and almost 3 cm in the vertical direction. Furthermore, slips inverted from GPS/GLONASS co-seismic displacements using a layered crust structure on a curved plane are shallower and larger for the Illapel event.
Thirdly, we tested different inversion tools and discussed the uncertainties of using real-time GNSS for tsunami early warning. Specifically, centroid moment tensor inversion, uniform slip inversion using a single Okada fault, and distributed slip inversion in a layered crust on a curved plane were conducted using co-seismic displacements recorded during the 2014 Pisagua earthquake. While the inversion results give similar magnitudes and rupture centers, there are significant differences in depth, strike, dip and rake angles, which lead to different tsunami propagation scenarios. Even so, the resulting tsunami forecasts along the Chilean coast are close to each other for all three models.
Finally, given that the positioning performance of BDS is now equivalent to that of GPS in the Asia-Pacific area and that the Manila subduction zone has been identified as a zone of potential tsunami hazard, we suggest a conceptual BDS/GPS network for tsunami early warning in the South China Sea. Numerical simulations with two earthquakes (Mw 8.0 and Mw 7.5) and their induced tsunamis demonstrate the viability of this network. In addition, the advantage of BDS/GPS over a single GNSS system in source inversion grows with decreasing earthquake magnitude.
Rapidly uplifting coastlines are frequently associated with convergent tectonic boundaries, like subduction zones, which repeatedly rupture in giant megathrust earthquakes. The coastal relief along tectonically active realms is shaped by the effect of sea-level variations and by heterogeneous patterns of permanent tectonic deformation, accumulated over several cycles of megathrust earthquakes. However, the correlation between earthquake deformation patterns and the sustained long-term segmentation of forearcs, particularly in Chile, remains poorly understood. Furthermore, the methods used to estimate permanent deformation from geomorphic markers, like marine terraces, have remained qualitative and based on unrepeatable procedures. This contrasts with the increasing resolution of digital elevation models, such as Light Detection and Ranging (LiDAR) and high-resolution bathymetric surveys.
Throughout this thesis I study permanent deformation in a holistic manner: from the methods used to assess deformation rates to the processes involved in its accumulation. My research focuses particularly on two aspects: developing methodologies to assess permanent deformation using marine terraces, and comparing permanent deformation with seismic-cycle deformation patterns at different spatial scales along the rupture zone of the M8.8 Maule earthquake (2010). Two methods are developed to determine deformation rates from wave-built and wave-cut terraces, respectively. I selected an archetypal example of a wave-built terrace at Santa Maria Island, studying its stratigraphy and recognizing sequences of reoccupation events tied with eleven radiocarbon (14C) ages. I developed a method to link patterns of reoccupation with sea-level proxies by iterating relative sea-level curves for a range of uplift rates. I find the best fit between relative sea level and the stratigraphic patterns for an uplift rate of 1.5 ± 0.3 m/ka.
A Graphical User Interface named TerraceM® was developed in Matlab®. This novel software tool determines shoreline angles in wave-cut terraces under different geomorphic scenarios. To validate the methods, I selected test sites in areas with available high-resolution LiDAR topography along the Maule earthquake rupture zone and in California, USA. The software determines the 3D location of the shoreline angle, which is a proxy for the estimation of permanent deformation rates. The method is based on linear interpolations that define the paleo-platform and cliff on swath profiles; the shoreline angle is then located by intersecting these interpolations. The accuracy and precision of TerraceM® were tested by comparing its results with previous assessments, and through an experiment with students in a computer lab setting at the University of Potsdam.
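The core geometric step of the shoreline-angle method (fit a line to the paleo-platform, fit a line to the paleo-cliff, intersect them) can be sketched as follows. The swath-profile values and the point picks are invented for illustration and are not TerraceM® output.

```python
# Sketch of the shoreline-angle estimate: intersect linear fits to the
# platform and cliff segments of a swath profile. Data are invented.
import numpy as np

x = np.array([0., 10, 20, 30, 40, 50, 60, 70])           # distance along profile [m]
z = np.array([2., 2.5, 3.1, 3.6, 9.0, 14.2, 19.1, 24.0]) # elevation [m]

# Analyst picks: first four points = platform, last four = cliff.
mp, bp = np.polyfit(x[:4], z[:4], 1)   # platform line z = mp*x + bp
mc, bc = np.polyfit(x[4:], z[4:], 1)   # cliff line     z = mc*x + bc

x_sa = (bc - bp) / (mp - mc)           # intersection = shoreline angle
z_sa = mp * x_sa + bp
print(f"shoreline angle at x={x_sa:.1f} m, elevation={z_sa:.2f} m")
```

Dividing the shoreline-angle elevation by the age of the corresponding sea-level highstand then yields an uplift rate, which is the quantity compared across profiles.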
I combined the methods developed to analyze wave-built and wave-cut terraces to assess regional patterns of permanent deformation along the 2010 Maule earthquake rupture. Wave-built terraces are tied using 12 infrared stimulated luminescence (IRSL) ages, and shoreline angles in wave-cut terraces are estimated from 170 aligned swath profiles. The comparison of coseismic slip, interseismic coupling, and permanent deformation reveals three areas of high permanent uplift, terrace warping, and sharp fault offsets. These three areas correlate with regions of high slip and low coupling, as well as with the spatial limits of at least eight historical megathrust ruptures (M8-9.5). I propose that the zones of upwarping at Arauco and Topocalma reflect changes in the frictional properties of the megathrust, which result in discrete boundaries for the propagation of mega earthquakes.
To explore the application of geomorphic markers and quantitative morphology in offshore areas, I performed a local study of patterns of permanent deformation inferred from hitherto unrecognized drowned shorelines at Arauco Bay, in the southern part of the 2010 Maule earthquake rupture zone. A multidisciplinary approach, including morphometry, sedimentology, paleontology, 3D morphoscopy, and a landscape evolution model, is used to recognize, map, and assess local rates and patterns of permanent deformation in submarine environments. The permanent deformation patterns are then reproduced using elastic models to assess deformation rates of an active submarine splay fault, defined as the Santa Maria Fault System (SMFS). The best fit suggests a reverse structure with a slip rate of 3.7 m/ka for the last 30 ka. The record of land-level changes during the earthquake cycle at Santa Maria Island suggests that most of the deformation may be accrued through splay-fault reactivation during mega earthquakes, like the 2010 Maule event. Considering a recurrence time of 150 to 200 years, as determined from historical and geological observations, slip between 0.3 and 0.7 m per event would be required to account for the 3.7 m/ka millennial slip rate. However, if the SMFS slips only every ~1000 years, representing a few megathrust earthquakes, then a slip of ~3.5 m per event would be required to account for the long-term rate. Such an event would be equivalent to a magnitude ~6.7 earthquake, capable of generating a local tsunami.
The results of this thesis provide novel and fundamental information regarding the amount of permanent deformation accrued in the crust and the mechanisms responsible for this accumulation at millennial time scales along the rupture zone of the M8.8 2010 Maule earthquake. Furthermore, the results highlight the application of quantitative geomorphology and of repeatable methods to determine permanent deformation, improving the accuracy of marine terrace assessments and of estimates of vertical deformation rates in tectonically active coastal areas. This is vital information for adequate coastal-hazard assessments and for anticipating realistic earthquake and tsunami scenarios.
This work reports new high-resolution imaging and spectroscopic observations of solar type III radio bursts at low radio frequencies in the range from 30 to 80 MHz. Solar type III radio bursts are understood as the result of the beam-plasma interaction of electron beams in the corona. The Sun provides a unique opportunity to study these plasma processes of an active star. Its activity appears in eruptive events like flares, coronal mass ejections and radio bursts, which are all accompanied by enhanced radio emission. Solar radio emission therefore carries important information about plasma processes associated with the Sun’s activity. Moreover, the Sun’s atmosphere is a unique plasma laboratory, with plasma processes under conditions not found in terrestrial laboratories. Because of the Sun’s proximity to Earth, it can be studied in greater detail than any other star, and new knowledge about the Sun can be transferred to other stars. This “solar stellar connection” is important for the understanding of processes on other stars.
The novel radio interferometer LOFAR provides imaging and spectroscopic capabilities to study these processes at low frequencies. Here it was used for solar observations.
LOFAR, the characteristics of its solar data, and the processing and analysis of the latter with the Solar Imaging Pipeline and Solar Data Center are described. The Solar Imaging Pipeline is the central software that allows using LOFAR for solar observations; its development was therefore necessary for the analysis of solar LOFAR data and was realized here. Moreover, a new density model with heat conduction and Alfvén waves was developed that provides the distance of radio bursts to the Sun from dynamic radio spectra.
Its application to the dynamic spectrum of a type III burst observed on March 16, 2016 by LOFAR shows a nonuniform radial propagation velocity of the radio emission. The analysis of an imaging observation of type III bursts on June 23, 2012 resolves a burst as a bright, compact region localized in the corona, propagating in the radial direction along magnetic field lines with an average velocity of 0.23c. A nonuniform propagation velocity is revealed. A new beam model is presented that explains the nonuniform motion of the radio source as a propagation effect of an electron ensemble with a spread velocity distribution, and rules out a monoenergetic electron distribution. The coronal electron number density is derived in the region from 1.5 to 2.5 R☉ and fitted with the newly developed density model, which determines the plasma density for the interplanetary space between Sun and Earth. The values correspond to a 1.25- and 5-fold Newkirk model for harmonic and fundamental emission, respectively. In comparison to data from other radio instruments, the LOFAR data show high sensitivity and resolution in space, time and frequency.
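The connection between observing frequency and source height via a fold of the Newkirk model can be sketched numerically. The sketch below uses the classic Newkirk form n_e(r) = a · 4.2×10⁴ · 10^(4.32/r) cm⁻³ (r in solar radii, fold factor a) and the standard plasma-frequency relation f_p ≈ 8980·√n_e Hz; it is an illustration of the scaling, not the thesis's refined density model with heat conduction and Alfvén waves.

```python
# Sketch: fundamental plasma emission heights for a folded Newkirk model.
import math

def newkirk_density(r, fold=1.0):
    """Electron density [cm^-3] at r solar radii for a fold-times Newkirk model."""
    return fold * 4.2e4 * 10 ** (4.32 / r)

def plasma_freq_mhz(n_e):
    """Fundamental plasma frequency [MHz] for density n_e in cm^-3."""
    return 8980 * math.sqrt(n_e) / 1e6

for r in (1.5, 2.0, 2.5):
    f = plasma_freq_mhz(newkirk_density(r, fold=5.0))  # 5-fold, fundamental
    print(f"r = {r} R_sun -> f_p = {f:.1f} MHz")
```

For the 5-fold model this places fundamental emission near 30 MHz at roughly 2.5 solar radii, consistent with the 30-80 MHz LOFAR band probing the 1.5-2.5 R☉ region discussed above.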
The new results from LOFAR’s high resolution imaging spectroscopy are consistent with current theories of solar type III radio bursts and demonstrate its capability to track fast moving radio sources in the corona. LOFAR solar data is found to be a valuable source for solar radio physics and opens a new window for studying plasma processes associated with highly energetic electrons in the solar corona.
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival and to adapt production to different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments. Malfunction in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the work that follows uses a common mathematical model framework based on Markov chain models to test hypotheses, predict system dynamics, or elucidate network topology. Our work lies at the intersection of mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
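A standard minimal example of the Markov chain framework referred to above is the two-state (telegraph) model of transcription, in which a promoter switches on and off and mRNA is produced and degraded stochastically. The sketch below simulates it with Gillespie's algorithm; the rate constants are illustrative, not fitted to any data set from the dissertation.

```python
# Two-state (telegraph) gene expression model, Gillespie simulation.
# All rates are illustrative assumptions.
import random

random.seed(1)

def gillespie_telegraph(k_on=0.1, k_off=0.2, k_tx=5.0, k_deg=1.0, t_end=200.0):
    t, gene_on, mrna = 0.0, 0, 0
    while t < t_end:
        rates = [k_on * (1 - gene_on),  # promoter switches on
                 k_off * gene_on,       # promoter switches off
                 k_tx * gene_on,        # transcription (only while on)
                 k_deg * mrna]          # mRNA degradation
        total = sum(rates)
        t += random.expovariate(total)  # exponential waiting time
        r = random.uniform(0, total)    # pick which reaction fires
        if r < rates[0]:
            gene_on = 1
        elif r < rates[0] + rates[1]:
            gene_on = 0
        elif r < rates[0] + rates[1] + rates[2]:
            mrna += 1
        else:
            mrna -= 1
    return mrna

print("mRNA copies at t_end:", gillespie_telegraph())
```

Comparing the simulated copy-number distribution with measured single-cell data is one typical way such Markov chain models are used to test hypotheses about regulation.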
Porous Membranes from Imidazolium- and Pyridinium-based Poly(ionic liquid)s with Targeted Properties
(2016)
The global carbon cycle is closely linked to Earth’s climate. In the context of continuously unchecked anthropogenic CO₂ emissions, the importance of natural CO₂ fixation and carbon storage is increasing. An important biogenic mechanism of natural atmospheric CO₂ drawdown is photosynthetic carbon fixation in plants and the subsequent long-term deposition of plant detritus in sediments.
The main objective of this thesis is to identify factors that control the mobilization and transport of plant organic matter (pOM) through rivers towards sedimentation basins. I investigated this aspect in the eastern Nepalese Arun Valley. The trans-Himalayan Arun River is characterized by a strong elevation gradient (205 − 8848 m asl) accompanied by strong changes in ecology and climate, ranging from wet tropical conditions in the Himalayan foreland to high alpine tundra on the Tibetan Plateau. The Arun is therefore an excellent natural laboratory, allowing the investigation of the effects of vegetation cover, climate, and topography on plant organic matter mobilization and export in tributaries along the gradient.
Based on hydrogen isotope measurements of plant waxes sampled along the Arun River and its tributaries, I first developed a model that allows an indirect quantification of the pOM contributed to the mainstem by the Arun’s tributaries. To determine the role of climatic and topographic parameters of the sampled tributary catchments, I looked for significant statistical relations between the amount of tributary pOM export and tributary characteristics (e.g. catchment size, plant cover, annual precipitation or runoff, topographic measures). On the one hand, I demonstrated that pOM sourced from the Arun is not uniformly derived from its entire catchment area. On the other, I showed that dense vegetation is a necessary, but not sufficient, criterion for high tributary pOM export. Instead, I identified erosion, rainfall, and runoff as the key factors controlling pOM sourcing in the Arun Valley. This finding is supported by terrestrial cosmogenic nuclide concentrations measured on river sands along the Arun and its tributaries to quantify catchment-wide denudation rates. The highest denudation rates corresponded well with maximum pOM mobilization and export, also suggesting a link between erosion and pOM sourcing.
The second part of this thesis focuses on the applicability of stable isotope records, such as plant wax n-alkanes in sediment archives, as a qualitative and quantitative proxy for the variability of past Indian Summer Monsoon (ISM) strength. First, I determined how ISM strength affects the hydrogen and oxygen stable isotopic composition (reported as δD and δ18O values vs. Vienna Standard Mean Ocean Water) of precipitation in the Arun Valley, and whether this amount effect (Dansgaard, 1964) is strong enough to be recorded in potential paleo-ISM isotope proxies. Second, I investigated whether potential isotope records across the Arun catchment reflect ISM-strength-dependent precipitation δD values only, or whether the ISM isotope signal is superimposed by winter precipitation or glacial melt. Furthermore, I tested whether δD values of plant waxes in fluvial deposits reflect δD values of environmental waters in the respective catchments.
I showed that surface water δD values in the Arun Valley and precipitation δD from south of the Himalaya both changed similarly during two consecutive years (2011 & 2012) with distinct ISM rainfall amounts (~20% less in 2012). To evaluate the effect of other water sources (winter westerly precipitation, glacial melt) and of evapotranspiration in the Arun Valley, I analysed satellite remote sensing data of rainfall distribution (TRMM 3B42V7), snow cover (MODIS MOD10C1), glacial coverage (GLIMS database, Global Land Ice Measurements from Space), and evapotranspiration (MODIS MOD16A2). In addition to the predominant ISM across the entire catchment, stable isotope analysis of surface waters indicated a considerable amount of glacial melt derived from high-altitude tributaries and the Tibetan Plateau. Remotely sensed snow cover data revealed that the upper portion of the Arun also receives considerable winter precipitation, but the effect of snow melt on the Arun Valley hydrology could not be evaluated, as it takes place in early summer, several months prior to our sampling campaigns. However, I infer that plant wax records and other potential stable isotope proxy archives below the snowline are well suited for qualitative, and potentially quantitative, reconstructions of past changes in ISM strength.
In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocity field; for each I cover the theoretical formalism and the results of its application to cosmological simulations and to a galaxy redshift survey. Our method rests on a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter “Implementation of argo”, which introduces sampling methods, paying special attention to Markov chain Monte Carlo techniques. In the chapter “Phase-Space Reconstructions with N-body Simulations”, I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter, which I decompose into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, modeled by a negative binomial (NB) likelihood function that captures deviations from Poissonity. Both bias components had already been studied theoretically, but had never before been tested in a reconstruction algorithm.
I test these new contributions against N-body simulations to quantify improvements and show that, compared to state-of-the-art methods, the stochastic bias is inevitable at wave numbers of k ≥ 0.15 h Mpc^−1 in the power spectrum in order to obtain unbiased results from the reconstructions. In the second part of the chapter “Phase-Space Reconstructions with N-body Simulations” I describe and validate our approach to infer the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and the two-dimensional power spectrum. Finally, in the chapter “Phase-Space Reconstructions with Galaxy Redshift Surveys”, I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for observational selection effects inside the reconstruction algorithm. I also demonstrate that the renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. The various refinements yield unbiased results for the dark matter down to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function. We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities.
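One way to picture how a negative binomial likelihood captures "deviations from Poissonity" is as a Poisson distribution whose rate is itself Gamma-distributed, which inflates the variance from λ to λ + λ²/β. The sketch below checks this numerically with invented parameter values; the thesis's actual NB parametrization and its role in the posterior are not reproduced here.

```python
# NB counts as a Gamma-Poisson mixture (illustrative parametrization only).
import numpy as np

rng = np.random.default_rng(0)
lam, beta = 4.0, 2.0   # mean occupation and dispersion parameter (invented)

# A Gamma(shape=beta, scale=lam/beta) rate has mean lam and variance lam^2/beta,
# so the mixed Poisson counts have variance lam + lam^2/beta instead of lam.
rates = rng.gamma(shape=beta, scale=lam / beta, size=100_000)
counts = rng.poisson(rates)

print(counts.mean())   # close to lam
print(counts.var())    # close to lam + lam**2 / beta, i.e. over-dispersed
```

This over-dispersion is exactly the extra scatter in galaxy counts per cell that a pure Poisson likelihood cannot absorb at small scales.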
The applications of both the density field without redshift-space distortions and the velocity reconstructions are very broad; they can be used for improved analyses of the baryon acoustic oscillations, environmental studies of the cosmic web, and the kinematic Sunyaev-Zel’dovich or integrated Sachs-Wolfe effects.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This
thesis addresses the following two computer-security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated using popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet, and the nodes create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers that have to store them. Malware (software that serves a malicious purpose) affects web servers, client computers via active content, and client computers through executable files. Without the help of malware detection systems, it would be easy for malware creators to collect sensitive information or to infiltrate computers.
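The template-grammar mechanism can be sketched in a few lines (a hypothetical toy grammar; real campaign grammars and their dissemination tools are of course more elaborate): each non-terminal is expanded by picking one of its productions at random, so every botnet node emits a syntactically different message from the same template.

```python
import random

# Hypothetical message template in the spirit described above;
# all symbols and strings here are illustrative, not real campaign data.
GRAMMAR = {
    "MSG": [["GREETING", " ", "OFFER", " ", "LINK"]],
    "GREETING": [["Dear customer,"], ["Hello friend,"]],
    "OFFER": [["Save 90% on meds"], ["You won a prize"]],
    "LINK": [["http://example.test/a"], ["http://example.test/b"]],
}

def instantiate(symbol, grammar, rng):
    """Expand a non-terminal by picking one production at random;
    terminals (symbols not in the grammar) are emitted as-is."""
    if symbol not in grammar:
        return symbol
    production = rng.choice(grammar[symbol])
    return "".join(instantiate(s, grammar, rng) for s in production)

print(instantiate("MSG", GRAMMAR, random.Random(42)))
```

Every instantiation shares the template's invariant skeleton, which is exactly the structure that campaign detection tries to recover from observed messages.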
The detection of threats (such as email-spam messages, phishing messages, or malware) is an adversarial and therefore intrinsically
difficult problem. Threats vary greatly and evolve over time. The detection of threats based on manually designed rules is therefore
difficult and requires a constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine-learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer-security problems under investigation: email-campaign detection and malware detection.
The first part of this thesis addresses email-campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible to security engineers and administrators, and they can be applied as a detection mechanism in an extremely efficient manner. In this setting, a prediction model is provided with exemplary messages from an email-spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and trusts enough to use for blacklisting further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces which can be solved using standard cutting-plane methods. To this end, we develop an appropriate loss function and derive a decoder for the resulting optimization problem.
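The flavor of the task can be conveyed with a deliberately naive sketch (this is not the thesis's structured-prediction method, which uses a learned loss and a cutting-plane solver; it is only an illustration of what "generalizing exemplary messages into a regular expression" means): keep tokens shared by all messages, and generalize the positions that differ.

```python
import re

def infer_regex(messages):
    """Naive template inference: tokens identical across all messages are
    kept literally, differing positions become \\S+ wildcards.  Assumes
    equal token counts per message; purely illustrative."""
    token_lists = [m.split() for m in messages]
    assert len({len(t) for t in token_lists}) == 1
    parts = []
    for position in zip(*token_lists):
        if len(set(position)) == 1:
            parts.append(re.escape(position[0]))   # campaign-invariant token
        else:
            parts.append(r"\S+")                   # campaign-variable slot
    return r"\s+".join(parts)

msgs = ["Buy cheap meds at http://a.test now",
        "Buy cheap meds at http://b.test now"]
pattern = infer_regex(msgs)
# The inferred pattern matches unseen members of the same campaign:
assert re.fullmatch(pattern, "Buy cheap meds at http://c.test now")
```

The learned method replaces this heuristic alignment with an optimization over a structured output space, balancing specificity (low false positives) against generality (covering the whole campaign).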
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static features, dynamic features, or both. In fully dynamic analysis, the software or script is executed in a sandbox environment and observed for malicious behavior. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier overcomes the problem of obfuscated malicious code, since the deobfuscated code is again amenable to static analysis, but it also increases the computational cost of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features, including token $n$-grams, orthogonal sparse bigrams, subroutine hashings, and syntax-tree features, and study the robustness of the detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
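The control flow of such a cascade can be sketched as follows (a schematic, not the thesis's implementation; the classifier, deobfuscator, and threshold are toy stand-ins): the cheap static classifier runs first, and the expensive deobfuscation step is paid only when its confidence is low.

```python
def cascaded_classify(code, static_clf, deobfuscate, threshold=0.8):
    """Sketch of a cascaded malware detector: classify the raw code;
    only if the confidence is below the threshold, deobfuscate and
    classify again.  static_clf returns (label, confidence)."""
    label, confidence = static_clf(code)
    if confidence >= threshold:
        return label                       # confident on raw code: done
    plain = deobfuscate(code)              # expensive step, used sparingly
    label, _ = static_clf(plain)
    return label

# Toy stand-ins for the two components (illustrative only):
def toy_static_clf(code):
    if "eval(" in code:
        return "malicious", 0.9
    if "\\x" in code:                      # escape sequences: likely obfuscated
        return "benign", 0.3               # low confidence triggers stage two
    return "benign", 0.95

def toy_deobfuscate(code):
    return code.encode().decode("unicode_escape")

print(cascaded_classify("eval(payload)", toy_static_clf, toy_deobfuscate))
```

The design point is the same as in the thesis: most files are decided in the fast first stage, so the order-of-magnitude cost of deobfuscation is incurred only for the suspicious minority.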
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to an email spam campaign leads to models with a high true-positive rate at a very low false-positive rate that is an order of magnitude lower than that of a commercial content-based filter. The presented system, REx-SVM, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 100,000 JavaScript files. From our case study we conclude that the system can quickly and accurately process large data collections at a low false-positive rate.
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
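Schematically, the time-slicing definition described above can be written as follows (a sketch in our own notation for the scalar heat semigroup; the thesis treats general Laplace type operators acting on sections of a vector bundle):

```latex
% Time-slicing approximation (schematic; notation illustrative).
% For a partition \tau = \{0 = \tau_0 < \tau_1 < \dots < \tau_N = t\},
% let H_{x;\tau}(M) denote the (finite-dimensional) space of paths
% \gamma with \gamma(0) = x that are geodesic on each subinterval
% [\tau_{j-1}, \tau_j].  Then, formally,
\[
  \bigl(e^{t\Delta/2} f\bigr)(x)
  \;=\;
  \lim_{|\tau| \to 0}\;
  Z_\tau^{-1}
  \int_{H_{x;\tau}(M)}
  e^{-E(\gamma)}\, f\bigl(\gamma(t)\bigr)\, \mathrm{d}\gamma ,
\]
% where E(\gamma) = \tfrac{1}{2}\int_0^t |\dot\gamma(s)|^2 \,\mathrm{d}s
% is the energy functional, Z_\tau is a suitable normalization constant,
% and |\tau| denotes the mesh of the partition.
```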
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e., at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which gives relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability. Such a process behaves as the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves as the underlying diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process.
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex properties. Thanks to this representation we write explicitly the transition densities of the two-skew Brownian motion with constant drift as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one finally samples directly from the law without any approximation, except for machine precision.
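The classical building block can be sketched as follows (this is the basic rejection sampler recalled above, not the thesis's generalized variant; the normal/Laplace example and the bound 1.32 are illustrative choices): draw from the instrumental density and accept with probability proportional to the density ratio.

```python
import math
import random

def rejection_sample(target_pdf, proposal_pdf, proposal_sampler, bound, rng):
    """Classical rejection sampling: draw Y from the proposal, accept with
    probability target(Y) / (bound * proposal(Y)).  Requires
    target/proposal <= bound everywhere.  The generalized method of the
    thesis relaxes the need to evaluate the target density exactly."""
    while True:
        y = proposal_sampler(rng)
        if rng.random() * bound * proposal_pdf(y) <= target_pdf(y):
            return y

# Toy example: sample a standard normal using a Laplace(0, 1) proposal.
def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def laplace_pdf(x):
    return 0.5 * math.exp(-abs(x))

def laplace_sampler(rng):
    u = rng.random() - 0.5                 # inverse-CDF sampling of Laplace(0, 1)
    return -math.copysign(math.log(1 - 2 * abs(u)), u)

rng = random.Random(0)
# sup_x normal_pdf / laplace_pdf = sqrt(2e/pi) ~ 1.3155, so 1.32 is a valid bound.
samples = [rejection_sample(normal_pdf, laplace_pdf, laplace_sampler, 1.32, rng)
           for _ in range(1000)]
```

In the thesis's application the target is the two-skew Brownian transition density (known only as a series) and the instrumental density is the drifted Brownian transition density, with the uniform bound on the ratio provided explicitly.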
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) has been treated. The theoretical method we give allows us to deal with any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally non-sequence-controlled and thus less complex than proteins. As such, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble and form hierarchical or structural supramolecular assemblies in water, and thus often require rational design. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG / BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
In recent years, entire industries and their participants have been affected by disruptive technologies, resulting in dramatic market changes and challenges to firms' business logic and thus their business models (BMs). Firms from mature industries are increasingly realizing that BMs that worked successfully for years have become insufficient to stay on track in today's "move fast and break things" economy. Firms must scrutinize the core logic that informs how they do business, which means exploring novel ways to engage customers and get them to pay. This can lead to a complete renewal of existing BMs or to the innovation of completely new BMs.
BMs have emerged as a popular object of research within the last decade. Despite the popularity of the BM, the theoretical and empirical foundation underlying the concept is still weak. In particular, the innovation process for BMs has been developed and implemented in firms, but understanding of the mechanisms behind it is still lacking. Business model innovation (BMI) is a complex and challenging management task that requires more than just novel ideas. Systematic studies to generate a better understanding of BMI and support incumbents with appropriate concepts to improve BMI development are in short supply. Further, there is a lack of knowledge about appropriate research practices for studying BMI and generating valid data sets in order to meet expectations in both practice and academia.
This paper-based dissertation aims to contribute to research practice in the field of BM and BMI and foster better understanding of the BM concept and BMI processes in incumbent firms from mature industries. The overall dissertation presents three main results. The first result is a new perspective, or the systems thinking view, on the BM and BMI. With the systems thinking view, the fuzzy BM concept is clearly structured and a BMI framework is proposed. The second result is a new research strategy for studying BMI. After analyzing current research practice in the areas of BMs and BMI, it is obvious that there is a need for better research on BMs and BMI in terms of accuracy, transparency, and practical orientation. Thus, the action case study approach combined with abductive methodology is proposed and proven in the research setting of this thesis. The third result stems from three action case studies in incumbent firms from mature industries employed to study how BMI occurs in practice. The new insights and knowledge gained from the action case studies help to explain BMI in such industries and increase understanding of the core of these processes.
By studying these issues, the articles compiled in this thesis contribute conceptually and empirically to the recently consolidated but still growing literature on the BM and BMI. The conclusions and implications are intended to foster further research and improve managerial practices for achieving BMI in a dramatically changing business environment.
In this thesis, a route to temperature-, pH-, solvent-, 1,2-diol-, and protein-responsive sensors made of biocompatible and low-fouling materials is established. These sensor devices are based on the sensitive modulation of the visual band gap of a photonic crystal (PhC), which is induced by the selective binding of analytes, triggering a volume phase transition.
The PhCs introduced by this work show a high sensitivity not only for small biomolecules, but also for large analytes, such as glycopolymers or proteins. This enables the PhC to act as a sensor that detects analytes without the need of complex equipment.
Due to their periodic dielectric structure, PhCs prevent the propagation of specific wavelengths. A change of the periodicity parameters is thus indicated by a change in the reflected wavelengths. In the case explored, the PhC sensors are implemented as periodically structured responsive hydrogels in the form of an inverse opal.
The stimuli-sensitive inverse opal hydrogels (IOHs) were prepared using a sacrificial opal template of monodisperse silica particles. First, monodisperse silica particles were assembled into a hexagonally packed structure via vertical deposition onto glass slides. The obtained silica crystals, also named colloidal crystals (CCs), exhibit structural color. Subsequently, the CC templates were embedded in a polymer matrix with low-fouling properties. The polymer matrices were composed of oligo(ethylene glycol) methacrylate derivatives (OEGMAs) that render the hydrogels thermoresponsive. Finally, the silica particles were etched to produce highly porous hydrogel replicas of the CCs. Importantly, the inner structure, and thus the ability for light diffraction, of the IOHs formed was maintained.
The IOH membrane was shown to have interconnected pores, with both the pore diameters and the interconnections between the pores measuring several hundred nanometers. This enables not only the detection of small analytes, but also the detection of even large analytes that can diffuse into the nanostructured IOH membrane. Various recognition unit – analyte model systems, such as benzoboroxole – 1,2-diols, biotin – avidin and mannose – concanavalin A, were studied by incorporating functional comonomers of benzoboroxole, biotin and mannose into the copolymers. The incorporated recognition units specifically bind to certain low and high molar mass biomolecules, namely to certain saccharides, catechols, glycopolymers or proteins.
Their specific binding strongly changes the overall hydrophilicity, thus modulating the swelling of the IOH matrices and, in consequence, drastically changing their internal periodicity. This swelling is amplified by the thermoresponsive properties of the polymer matrix. The shift of the interference band gap due to the specific molecular recognition is easily visible to the naked eye (shifts of up to 150 nm). Moreover, preliminary trials were conducted to detect even larger entities. To this end, antibodies were immobilized on hydrogel platforms via polymer-analogous esterification. These platforms incorporate comonomers made of tri(ethylene glycol) methacrylate end-functionalized with a carboxylic acid. In these model systems, the bacterial analytes are too big to penetrate into the IOH membranes and can only interact with their surfaces. The selected model bacteria, such as Escherichia coli, show a specific affinity to antibody-functionalized hydrogels. Surprisingly, in the case of the functionalized IOHs, this study produced only weak color shifts; this possibly opens a path to directly detecting living organisms, which will need further investigation.
During the last decade, high intensity interval training (HIIT) has been used as an alternative to endurance (END) exercise, since it requires less time to produce similar physiological adaptations. Previous literature has focused on HIIT-induced changes in aerobic metabolism and cardiorespiratory fitness; however, there are currently no studies focusing on its neuromuscular adaptations.
Therefore, this thesis aimed to compare the neuromuscular adaptations to HIIT and END after a two-week training intervention, using a novel technology called high-density surface electromyography (HDEMG) motor unit decomposition. This project consisted of two experiments, in which healthy young men (aged 18 to 35 years) were recruited. In experiment one, the reliability of HDEMG motor unit variables (mean discharge rate, peak-to-peak amplitude, conduction velocity and discharge rate variability) was tested (Study 1), a new method to track the same motor units longitudinally was proposed (Study 2), and the level of low (<5 Hz) and high (>5 Hz) frequency motor unit coherence between the vastus medialis (VM) and lateralis (VL) knee extensor muscles was measured (Study 4). In experiment two, a two-week HIIT and END intervention was conducted, in which cardiorespiratory fitness parameters (e.g. peak oxygen uptake) and motor unit variables from the VM and VL muscles were assessed pre and post intervention (Study 3).
The results showed that HDEMG is reliable for monitoring changes in motor unit activity and also allows the tracking of the same motor units across different testing sessions. As expected, both HIIT and END improved cardiorespiratory fitness parameters similarly. However, the neuromuscular adaptations to the two types of training differed after the intervention. HIIT produced a significant increase in knee extensor muscle strength, accompanied by increased VM and VL motor unit discharge rates and HDEMG amplitude at the highest force levels (50 and 70% of the maximum voluntary contraction force, MVC), while END training induced a marked increase in time to task failure at lower force levels (30% MVC), without any influence on HDEMG amplitude or discharge rates. Additionally, the results showed that the VM and VL muscles share most of their synaptic input, since they present a large amount of low and high frequency motor unit coherence, which can explain the findings of the training intervention, in which both muscles showed similar changes in HDEMG amplitude and discharge rates.
Taken together, the findings of the current thesis show that despite similar improvements in cardiopulmonary fitness, HIIT and END induced opposite adjustments in motor unit behavior. These results suggest that HIIT and END show specific neuromuscular adaptations, possibly related to their differences in exercise load intensity and training volume.
Meter and syntax have overlapping elements in the music and speech domains, and individual differences have been documented in both meter perception and syntactic comprehension paradigms. Previous evidence hinted at but never fully explored the relationship that metrical structure has to syntactic comprehension, the comparability of these processes across the music and language domains, and the respective role of individual differences. This dissertation aimed to investigate neurocognitive entrainment to meter in music and language, the impact that neurocognitive entrainment has on syntactic comprehension, and whether individual differences in musical expertise, temporal perception and working memory play a role during these processes.
A theoretical framework was developed, which linked neural entrainment, cognitive entrainment, and syntactic comprehension while detailing previously documented effects of individual differences on meter perception and syntactic comprehension. The framework was developed in both music and language domains and was tested using behavioral and EEG methods across three studies (seven experiments). In order to satisfy empirical evaluation of neurocognitive entrainment and syntactic aspects of the framework, original melodies and sentences were composed. Each item had four permutations: regular and irregular metricality, based on the hierarchical organization of strong and weak notes and syllables, and preferred and non-preferred syntax, based on structurally alternate endings. The framework predicted — for both music and language domains — greater neurocognitive entrainment in regular compared to irregular metricality conditions, and accordingly, better syntactic integration in regular compared to irregular metricality conditions. Individual differences among participants were expected for both entrainment and syntactic processes.
Altogether, the dissertation was able to support a holistic account of neurocognitive entrainment to musical meter and its subsequent influence on the syntactic integration of melodies, with musician participants. The theoretical predictions were not upheld in the language domain with musician participants, but initial behavioral evidence in combination with previous EEG evidence suggests that non-musician language EEG data might support the framework's predictions. The musicians' deviation from the hypothesized results in the language domain was suspected to reflect a heightened perception of acoustic features stemming from musical training, which caused the current 'overly' regular stimuli to distract the cognitive system. The individual-differences approach was vindicated by the emergence of two factor scores, Verbal Working Memory and Time and Pitch Discrimination, which in turn correlated with multiple experimental data across the three studies.
The collision of bathymetric anomalies, such as oceanic spreading centers, at convergent plate margins can profoundly affect subduction dynamics, magmatism, and the structural and geomorphic evolution of the overriding plate. The Southern Patagonian Andes of South America are a prime example of sustained oceanic ridge collision and the successive formation and widening of an extensive asthenospheric slab window since the Middle Miocene. Several of the predicted upper-plate geologic manifestations of such deep-seated geodynamic processes have been studied in this region, but many topics remain highly debated. One of the main controversial topics is the interpretation of the regional low-temperature thermochronology exhumational record and its relationship with tectonic and/or climate-driven processes, ultimately manifested and recorded in the landscape evolution of the Patagonian Andes. The prominent along-strike variance in the topographic characteristics of the Andes, combined with coupled trends in low-temperature thermochronometer cooling ages, has been interpreted in very contrasting ways, invoking either purely climatic (i.e. glacial erosion) or geodynamic (slab-window related) controlling factors.
This thesis focuses on two main aspects of these controversial topics. First, based on field observations and bedrock low-temperature thermochronology data, the thesis addresses an existing research gap with respect to the neotectonic activity of the upper plate in response to ridge collision - a mechanism that has been shown to affect the upper plate topography and exhumational patterns in similar tectonic settings. Secondly, the qualitative interpretation of my new and existing thermochronological data from this region is extended by inverse thermal modelling to define thermal histories recorded in the data and evaluate the relative importance of surface vs. geodynamic factors and their possible relationship with the regional cooling record.
My research is centered on the Northern Patagonian Icefield (NPI) region of the Southern Patagonian Andes. This site is located inboard of the present-day location of the Chile Triple Junction - the juncture between the colliding Chile Rise spreading center and the Nazca and Antarctic Plates along the South American convergent margin. As such, this study area represents the region of most recent oceanic-ridge collision and associated slab window formation. Importantly, this location also coincides with the abrupt rise in summit elevations and relief characteristics in the Southern Patagonian Andes. Field observations, based on geological, structural and geomorphic mapping, are combined with bedrock apatite (U-Th)/He and apatite fission track (AHe and AFT) cooling ages sampled along elevation transects across the orogen. These new data reveal the existence of hitherto unrecognized neotectonic deformation along the flanks of the range capped by the NPI.
This deformation is associated with the closely spaced oblique collision of successive oceanic-ridge segments in this region over the past 6 Ma. I interpret that this has caused a crustal-scale partitioning of deformation and the decoupling, margin-parallel migration, and localized uplift of a large crustal sliver (the NPI block) along the subduction margin. The location of this uplift coincides with a major increase of summit elevations and relief at the northern edge of the NPI massif. This mechanism is compatible with possible extensional processes along the topographically subdued trailing edge of the NPI block as documented by very recent and possibly still active normal faulting. Taken together, these findings suggest a major structural control on short-wavelength variations in topography in the Southern Patagonian Andes - the region affected by ridge collision and slab window formation.
The second research topic addressed here focuses on using my new and existing bedrock low-temperature cooling ages in forward and inverse thermal modeling. The data were implemented in the HeFTy and QTQt modeling platforms to constrain the late Cenozoic thermal history of the Southern Patagonian Andes in the region of the most recent upper-plate sectors of ridge collision. The data set combines AHe and AFT data from three elevation transects in the region of the Northern Patagonian Icefield. Previous similar studies claimed that far-reaching thermal effects of the approaching ridge collision and slab window are expressed as patterns of Late Miocene reheating in the modelled thermal histories. In contrast, my results show that the currently available data can be explained with a simpler thermal history than previously proposed. Accordingly, a reheating event is not needed to reproduce the observations. Instead, the analyzed ensemble of modelled thermal histories defines protracted Late Miocene cooling and Pliocene-to-recent stepwise exhumation. These findings agree with the geological record of this region. Specifically, this record indicates an Early Miocene phase of active mountain building associated with surface uplift and an active fold-and-thrust belt, followed by a period of stagnating deformation, peneplanation, and a lack of synorogenic deposition in the Patagonian foreland. The subsequent period of stepwise exhumation likely resulted from a combination of pulsed glacial erosion and coeval neotectonic activity. The differences between the present and previously published interpretations of the cooling record can be attributed to important inconsistencies in the previously used model setups. These include mainly the insufficient convergence of the models and improper assumptions regarding the geothermal conditions in the region.
This analysis puts a methodological emphasis on the prime importance of the model setup and the need for its thorough examination to evaluate the robustness of the final outcome.
Background: Low back pain (LBP) is one of the worldwide leading causes of limited activity and disability. Impaired motor control has been found to be one of the possible factors related to the development or persistence of LBP. In particular, motor control strategies seem to be altered in situations requiring reactive responses of the trunk counteracting sudden external forces. However, muscular responses have mostly been assessed in (quasi-)static testing situations under simplified laboratory conditions. Comprehensive investigations of motor control strategies during dynamic everyday situations are lacking. The present research project aimed to investigate muscular compensation strategies following unexpected gait perturbations in people with and without LBP. A novel treadmill stumbling protocol was tested for its validity and reliability in provoking muscular reflex responses at the trunk and the lower extremities (study 1). Thereafter, motor control strategies in response to sudden perturbations were compared between people with LBP and asymptomatic controls (CTRL) (study 2). In accordance with more recent concepts of motor adaptation to pain, it was hypothesized that pain may have profound consequences on motor control strategies in LBP. Therefore, it was investigated whether differences in compensation strategies consisted of changes local to the painful area at the trunk, or were also present in remote areas such as the lower extremities.
Methods: All investigations were performed on a custom-built split-belt treadmill simulating trip-like events by unexpected rapid deceleration impulses (amplitude: 2 m/s; duration: 100 ms; 200 ms after heel contact) at 1 m/s baseline velocity. A total of 5 (study 1) and 15 (study 2) right-sided perturbations were applied during walking trials. Muscular activity was assessed by surface electromyography (EMG), recorded at 12 trunk muscles and 10 (study 1) or 5 (study 2) leg muscles, respectively. EMG onset latencies [ms] were retrieved by a semi-automatic detection method. EMG amplitudes (root mean square, RMS) were assessed within 200 ms post perturbation and normalized to full strides prior to any perturbation [RMS%]. Latency and amplitude analyses were performed for each muscle individually, as well as for pooled data of muscles grouped by location. Characteristic pain intensity scores (CPIS; 0-100 points, von Korff), based on mean intensity ratings reported for current, worst and average pain over the last three months, were used to allocate participants to LBP (≥30 points) or CTRL (≤10 points). Test-retest reproducibility between measurements was determined by a compilation of measures of reliability. Differences in muscular activity between LBP and CTRL were analysed descriptively for individual muscles; differences based on grouped muscles were tested statistically using a multivariate analysis of variance (MANOVA, α = 0.05).
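The two EMG outcome measures described above (onset latency and the 200 ms RMS window normalized to unperturbed strides) can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the onset rule (rectified EMG exceeding mean + 3 SD of a baseline segment) and the synthetic signals are assumptions, not the semi-automatic pipeline actually used in the studies.

```python
import numpy as np

def rms(x):
    """Root mean square of a signal segment."""
    return np.sqrt(np.mean(np.square(x)))

def onset_latency_ms(emg, fs, perturbation_idx, k=3.0):
    """First post-perturbation sample (ms after perturbation) where the
    rectified EMG exceeds mean + k*SD of the pre-perturbation baseline.
    Returns None if no onset is detected (hypothetical threshold rule)."""
    baseline = np.abs(emg[:perturbation_idx])
    thresh = np.mean(baseline) + k * np.std(baseline)
    post = np.abs(emg[perturbation_idx:])
    above = np.nonzero(post > thresh)[0]
    if above.size == 0:
        return None
    return above[0] / fs * 1000.0

def normalized_amplitude(emg, fs, perturbation_idx, stride_rms):
    """RMS within 200 ms post perturbation, as % of unperturbed-stride RMS."""
    n = int(0.2 * fs)  # 200 ms window at sampling rate fs
    window = emg[perturbation_idx:perturbation_idx + n]
    return 100.0 * rms(window) / stride_rms
```

Pooling by location would then simply average these per-muscle outcomes over the muscles of a group, as done for the MANOVA.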
Results: Thirteen individuals were included in the analysis of study 1. EMG latencies revealed reflex muscle activity following the perturbation (mean: 89 ms). The respective EMG amplitudes were on average five times those assessed in unperturbed strides, though characterized by high inter-subject variability. Test-retest reliability of muscle latencies showed high reproducibility, both for trunk and leg muscles. In contrast, the reproducibility of amplitudes was only weak to moderate for individual muscles, but increased when assessed as a location-specific summary of grouped muscles. Seventy-six individuals were eligible for data analysis in study 2. Group allocation according to CPIS resulted in n=25 for LBP and n=29 for CTRL. Descriptive analysis of activity onsets revealed longer delays for all muscles in LBP compared to CTRL (trunk muscles: mean 10 ms; leg muscles: mean 3 ms). Onset latencies of grouped muscles revealed statistically significant differences between LBP and CTRL for the right (p=0.009) and left (p=0.007) abdominal muscle groups. EMG amplitude analysis showed high variability in activation levels between individuals, independent of group assignment or location. Statistical testing of grouped muscles indicated no significant difference in amplitudes between LBP and CTRL.
Discussion: The present research project showed that perturbed treadmill walking is suitable to provoke comprehensive reflex responses at the trunk and lower extremities, both in terms of sudden onsets and amplitudes of reflex activity. Moreover, it demonstrated that sudden loading under dynamic conditions provokes altered reflex timing of the muscles surrounding the trunk in people with LBP compared to CTRL. In line with previous investigations, compensation strategies seemed to be deployed in a task-specific manner, with differences between LBP and CTRL evident predominantly on the ventral side. No muscular alterations beyond the trunk were found when assessed under the automated task of locomotion. While rehabilitation programs tailored to LBP are still under debate, it is tempting to urge the implementation of dynamic sudden trunk-loading incidents to enhance motor control and thereby improve spinal protection. Moreover, with respect to the consistently observed task specificity of muscular compensation strategies, such a rehabilitation program should be rich in variety.
Foam fractionation of surfactant and protein solutions is a process for separating surface-active molecules from each other according to their differences in surface activity. The process is based on forming bubbles in a mixed solution, followed by the detachment and rise of the bubbles through a certain volume of this solution, and consequently on the formation of a foam layer on top of the solution column. A systematic analysis of this process therefore comprises, first, investigations of the formation and growth of single bubbles in solution, which is equivalent to the main principle of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rise in the respective aqueous solution. The third and final stage of the process is the formation and stabilization of the foam created by these bubbles: the adsorption layer formed at the growing bubble surface is carried up and modified during the bubble's rise, and finally ends up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH values and ionic strengths in order to describe the accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow an understanding of the adsorption mechanism, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities have been observed in the protein adsorption kinetics at sufficiently short adsorption times. First, at short adsorption times the surface tension remains constant for a while before it decreases, as expected, due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions, the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and is discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to some mN/m, is instead explained by simultaneous changes in the molar area required by the adsorbed proteins and the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
The experiments on the local velocity of rising air bubbles in solution were performed over a broad range of BLG concentrations, pH values and ionic strengths. Additionally, rising-bubble experiments were done with surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on time scales of seconds, while dynamic surface tensions do not yet show any measurable changes at this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity for protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present due to the complexity of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum of the profile as a characteristic parameter for the dynamic adsorption layer at the bubble surface more quantitatively.
The studies with protein-surfactant mixtures demonstrate impressively that the complexes formed by the two compounds change the surface activity compared to the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules, which, however, can differ between the rising-bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet, but they stimulate discussion about the structure of the protein adsorption layer under dynamic conditions as opposed to the equilibrium state.
The third main stage of the discussed fractionation process is the formation and characterization of protein foams from BLG solutions at different pH values and ionic strengths. Of course, a minimum BLG concentration is required to form foams. This minimum protein concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although at the isoelectric point, at about pH 5 for BLG, the hydrophobicity and hence the surface activity should be highest, the concentration and ionic strength effects on the rising velocity profile as well as on foamability and foam stability do not show a maximum there. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt the respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamellae – two adsorption layers enclosing a liquid core – the foamate taken from foaming experiments should be enriched in the stabilizing molecules. To determine the concentration of the foamate, the very sensitive bubble rising velocity profile method was again applied, which works for any type of surface-active material. This also includes technical surfactants or protein isolates whose exact composition is unknown.
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of an appropriate model depends on the outcome and the research objectives: for example, linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and all cases (all members of the cohort who develop the event of interest during follow-up).
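The case-cohort sampling scheme described above can be made concrete in a few lines. The sketch below is purely illustrative: the function name and the bookkeeping are assumptions, and in a real study the returned set is the group for which covariate and follow-up data would actually be collected.

```python
import random

def case_cohort_sample(cohort, is_case, subcohort_fraction, seed=0):
    """Draw a case-cohort sample: a random subcohort plus all cases.

    `cohort` is a list of subject ids; `is_case` maps an id to a bool.
    Expensive covariate and follow-up data are then collected only for
    the subjects in the returned set (hypothetical helper).
    """
    rng = random.Random(seed)
    m = int(round(subcohort_fraction * len(cohort)))
    subcohort = set(rng.sample(cohort, m))       # random subcohort
    cases = {s for s in cohort if is_case(s)}    # all cases, always kept
    return subcohort | cases
```

Varying `subcohort_fraction` corresponds to the "several proportions of the subcohort" examined in the simulations.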
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
Clearly, logistic regression is sufficient when the binary outcome is available for all subjects over a fixed time interval. In practice, however, the observations in clinical trials are frequently collected over different time periods, and subjects may drop out or be lost to other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection, or has not experienced the event of interest by the end of the study. Such observations are called censored observations. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed this model, which focuses on the hazard function. The Cox model is assumed to be
λ(t|x) = λ0(t) exp(β^Tx)
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates, and β is a p-dimensional vector of coefficients.
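The structure of this hazard model is easy to make concrete. The helper below is a hypothetical sketch (names and the callable baseline are assumptions), evaluating λ(t|x) = λ0(t) exp(βᵀx) directly from the formula above.

```python
import math

def cox_hazard(t, x, beta, baseline_hazard):
    """Evaluate lambda(t|x) = lambda0(t) * exp(beta^T x).

    `baseline_hazard` is a callable t -> lambda0(t); `x` and `beta`
    are equal-length sequences (the covariate and coefficient vectors).
    """
    linear_predictor = sum(b * xi for b, xi in zip(beta, x))
    return baseline_hazard(t) * math.exp(linear_predictor)
```

Note the proportional-hazards property: the ratio of the hazards for two covariate vectors is exp(βᵀ(x₁ − x₂)), independent of t and of the unspecified λ0.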
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and shown in examples. In a sensitivity analysis, the efficiency of given covariates is calculated, and efficiencies are determined for neighborhoods of the exponential models. It appears that for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for the coefficient function β(·) is described. Based on this estimator, we formulate a new procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a Chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test, defined only for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a special alternative, giving quite good results for the chosen underlying model.
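The decision rule of such a test can be sketched generically: standardize the d-dimensional score vector by its covariance matrix and compare the quadratic form with a Chi-squared quantile. This is a minimal illustration, not the thesis's actual test; the names and the hard-coded 95% quantile for d = 2 degrees of freedom are assumptions.

```python
import numpy as np

def quadratic_form_test(score, cov, critical_value):
    """T = U^T Sigma^{-1} U for a d-dimensional score vector U with
    covariance estimate Sigma. Under H0, T is asymptotically
    Chi-squared with d degrees of freedom, so H0 is rejected when
    T exceeds the chosen Chi-squared quantile."""
    t = float(score @ np.linalg.solve(cov, score))
    return t, t > critical_value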
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large samplestudy. Ann. Statist., 10(4):1100{1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187{220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1{11, 1986.
Recently, due to an increasing demand on functionality and flexibility, beforehand isolated systems have become interconnected to gain powerful adaptive Systems of Systems (SoS) solutions with an overall robust, flexible and emergent behavior. The adaptive SoS comprises a variety of different system types ranging from small embedded to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy and optimizes its behavior to reach its goals. On the other hand, systems must cooperate with each other to enrich the overall functionality to jointly perform on the SoS level reaching global goals, which cannot be satisfied by one system alone. Due to difficulties of local and global behavior optimizations conflicts may arise between systems that have to be solved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by considering the adaptation capabilities in form of feedback loops as first class entities. Moreover, this thesis adopts the Models@runtime approach to integrate the available knowledge in the systems as runtime models into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS to reason about individual system functionality and how it emerges via collaborations to an overall joint SoS behavior. Therefore, the modeling language approach enables the specification of local adaptive system behavior, the integration of knowledge in form of runtime models and the joint interactions via collaboration to place the available adaptive behavior in an overall layered, adaptive SoS architecture.
Beside the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enables the detection of architectural patterns as well as design flaws and pinpoints to possible system threats. Moreover, a simulation framework is presented, which allows the direct execution of the modeled SoS architecture. Therefore, the analysis rules and the simulation framework can be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state of the art standard from the automotive domain and thus, showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling up to date research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
The relationship between climate and forest productivity is an intensively studied subject in forest science. This Thesis is embedded within the general framework of future forest growth under climate change and its implications for the ongoing forest conversion. My objective is to investigate the future forest productivity at different spatial scales (from a single specific forest stand to aggregated information across Germany) with focus on oak-pine forests in the federal state of Brandenburg. The overarching question is: how are the oak-pine forests affected by climate change described by a variety of climate scenarios. I answer this question by using a model based analysis of tree growth processes and responses to different climate scenarios with emphasis on drought events. In addition, a method is developed which considers climate change uncertainty of forest management planning.
As a first 'screening' of climate change impacts on forest productivity, I calculated the change in net primary production on the base of a large set of climate scenarios for different tree species and the total area of Germany. Temperature increases up to 3 K lead to positive effects on the net primary production of all selected tree species. But, in water-limited regions this positive net primary production trend is dependent on the length of drought periods which results in a larger uncertainty regarding future forest productivity. One of the regions with the highest uncertainty of net primary production development is the federal state of Brandenburg.
To enhance the understanding and ability of model based analysis of tree growth sensitivity to drought stress two water uptake approaches in pure pine and mixed oak-pine stands are contrasted. The first water uptake approach consists of an empirical function for root water uptake. The second approach is more mechanistic and calculates the differences of soil water potential along a soil-plant-atmosphere continuum. I assumed the total root resistance to vary at low, medium and high total root resistance levels. For validation purposes three data sets on different tree growth relevant time scales are used. Results show that, except the mechanistic water uptake approach with high total root resistance, all transpiration outputs exceeded observed values. On the other hand high transpiration led to a better match of observed soil water content. The strongest correlation between simulated and observed annual tree ring width occurred with the mechanistic water uptake approach and high total root resistance. The findings highlight the importance of severe drought as a main reason for small diameter increment, best supported by the mechanistic water uptake approach with high root resistance. However, if all aspects of the data sets are considered no approach can be judged superior to the other. I conclude that the uncertainty of future productivity of water-limited forest ecosystems under changing environmental conditions is linked to simulated root water uptake.
Finally my study aimed at the impacts of climate change combined with management scenarios on an oak-pine forest to evaluate growth, biomass and the amount of harvested timber. The pine and the oak trees are 104 and 9 years old respectively. Three different management scenarios with different thinning intensities and different climate scenarios are used to simulate the performance of management strategies which explicitly account for the risks associated with achieving three predefined objectives (maximum carbon storage, maximum harvested timber, intermediate). I found out that in most cases there is no general management strategy which fits best to different objectives. The analysis of variance in the growth related model outputs showed an increase of climate uncertainty with increasing climate warming. Interestingly, the increase of climate-induced uncertainty is much higher from 2 to 3 K than from 0 to 2 K.
Numerous reports of relatively rapid climate changes over the past century make a clear case of the impact of aerosols and clouds, identified as sources of largest uncertainty in climate projections. Earth’s radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used such properties linked to aerosol type, and radiative forcing. Highly advanced lidars (light detection and ranging) have reduced aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). This model offers a reasonable treatment for spherically approximated particles, it no longer provides, though, a viable description for other naturally occurring arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is classified as a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods and quantifies the ill-posedness. The latter will improve our understanding on the main cause fomenting instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, associated with particle non-sphericity. However, the generalization of LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is computationally so costly, that would limit a retrieval analysis to an unpractical point. Moreover the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations which are further time consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software (SphInX: Spheroidal Inversion eXperiments) especially developed for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals are performed simulating various atmospheric scenarios in order to test the efficiency of different regularization methods. The gap in contemporary literature in providing full sets of uncertainties in a wide variety of numerical instances is of major concern here. For this, the most appropriate methods are identified through a thorough analysis on an overall-behavior basis regarding accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments and the reconstruction quality depends on data error level. Moreover, the need for more or less depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases giving further insight for future algorithm improvements.
It is the intention of this study to contribute to further rethinking and innovating in the Microcredit business which stands at a turning point – after around 40 years of practice it is endangered to fail as a tool for economic development and to become a doubtful finance product with a random scope instead. So far, a positive impact of Microfinance on the improvement of the lives of the poor could not be confirmed. Over-indebtment of borrowers due to the pre-dominance of consumption Microcredits has become a widespread problem. Furthermore, a rising number of abusive and commercially excessive practices have been reported.
In fact, the Microfinance sector appears to suffer from a major underlying deficit: there does not exist a coherent and transparent understanding of its meaning and objectives so that Microfinance providers worldwide follow their own approaches of Microfinance which tend to differ considerably from each other.
In this sense the study aims at consolidating the multi-faced and very often confusingly different Microcredit profiles that exist nowadays. Subsequently, in this study, the Microfinance spectrum will be narrowed to one clear-cut objective, in fact away from the mere monetary business transactions to poor people it has gradually been reduced to back towards a tool for economic development as originally envisaged by its pioneers.
Hence, the fundamental research question of this study is whether, and under which conditions, Microfinance may attain a positive economic impact leading to an improvement of the living of the poor.
The study is structured in five parts: the three main parts (II.-IV.) are surrounded by an introduction (I.) and conclusion (V.). In part II., the Microfinance sector is analysed critically aiming to identify the challenges persisting as well as their root causes. In the third part, a change to the macroeconomic perspective is undertaken in oder to learn about the potential and requirements of small-scale finance to enhance economic development, particularly within the economic context of less developed countries. By consolidating the insights gained in part IV., the elements of a new concept of Microfinance with the objecitve to achieve economic development of its borrowers are elaborated.
Microfinance is a rather sensitive business the great fundamental idea of which is easily corruptible and, additionally, the recipients of which are predestined victims of abuse due to their limited knowledge in finance. It therefore needs to be practiced responsibly, but also according to clear cut definitions of its meaning and objectives all institutions active in the sector should be devoted to comply with. This is especially relevant as the demand for Microfinance services is expected to rise further within the years coming. For example, the recent refugee migration movement towards Europe entails a vast potential for Microfinance to enable these people to make a new start into economic life. This goes to show that Microfinance may no longer mainly be associated with a less developed economic context, but that it will gain importance as a financial instrument in the developed economies, too.
The increase in atmospheric methane concentration, which is determined by an imbalance between its sources and sinks, has led to investigations of the methane cycle in various environments. Aquatic environments are of an exceptional interest due to their active involvement in methane cycling worldwide and in particular in areas sensitive to climate change. Furthermore, being connected with each other aquatic environments form networks that can be spread on vast areas involving marine, freshwater and terrestrial ecosystems. Thus, aquatic systems have a high potential to translate local or regional environmental and subsequently ecosystem changes to a bigger scale. Many studies neglect this connectivity and focus on individual aquatic or terrestrial ecosystems.
The current study focuses on environmental controls of the distribution and aerobic oxidation of methane at the example of two aquatic ecosystems. These ecosystems are Arctic fresh water bodies and the Elbe estuary which represent interfaces between freshwater-terrestrial and freshwater-marine environments, respectively.
Arctic water bodies are significant atmospheric sources of methane. At the same time the methane cycle in Arctic water bodies is strongly affected by the surrounding permafrost environment, which is characterized by high amounts of organic carbon. The results of this thesis indicate that the methane concentrations in Arctic lakes and streams substantially vary between each other being regulated by local landscape features (e.g. floodplain area) and the morphology of the water bodies (lakes, streams and river). The highest methane concentrations were detected in the lake outlets and in a floodplain lake complex. In contrast, the methane concentrations measured at different sites of the Lena River did not vary substantially. The lake complexes in comparison to the Lena River, thus, appear as more individual and heterogeneous systems with a pronounced imprint of the surrounding soil environment. Furthermore, connected with each other Arctic aquatic environments have a large potential to transport methane from methane-rich water bodies such as streams and floodplain lakes to aquatic environments relatively poor in methane such as the Lena River.
Estuaries represent hot spots of oceanic methane emissions and intermediate zones between methane-rich river water and methane-depleted marine water. As this thesis substantiates for the Elbe estuary, however, the methane distribution in estuaries cannot be entirely described by the conservative mixing model, i.e. a gradual decrease from the freshwater end-member to the marine end-member. In addition to the methane-rich water from the mouth of the Elbe River, substantial methane input occurs from tidal flats, areas of significant interaction between aquatic and terrestrial environments. This study thus demonstrates the complex interactions within estuaries and their consequences for the methane distribution, and reveals the importance of investigating estuaries at larger spatial scales.
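As an illustration (not part of the thesis itself), the conservative mixing baseline against which such excess methane is detected is simply a linear interpolation between the two end-member concentrations along the salinity gradient; the function and end-member values below are hypothetical placeholders:

```python
def conservative_mixing(salinity, c_fresh, c_marine, s_fresh=0.0, s_marine=35.0):
    """Concentration expected if mixing between two end-members were purely
    conservative: linear interpolation along the salinity gradient."""
    f = (salinity - s_fresh) / (s_marine - s_fresh)  # fraction of marine water
    return c_fresh + f * (c_marine - c_fresh)

# A measured methane value above this baseline at a given salinity indicates
# an additional source, e.g. input from tidal flats.
expected = conservative_mixing(salinity=10.0, c_fresh=100.0, c_marine=5.0)
```

Deviations of observations from this straight mixing line are the standard diagnostic for in-estuary sources or sinks.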
Methane oxidation (MOX) rates are commonly correlated with methane concentrations, as shown in previous studies and substantiated by the present thesis. Specifically, the highest MOX rates in the Arctic water bodies were detected in methane-rich streams and in the floodplain area, while in the Elbe estuary the highest MOX rates were observed at the coastal stations. In these bordering environments, however, MOX rates are not regulated by methane concentrations alone. The MOX rates in the Arctic lakes were shown to depend also on the abundance and community composition of methane-oxidising bacteria (MOB), which in turn are controlled by local landscape features (regardless of methane concentrations) and by the transport of MOB between neighbouring environments. In the Elbe estuary, MOX rates are, in addition to methane concentrations, largely affected by salinity, which is in turn regulated by the mixing of fresh and marine waters. The magnitude of the salinity impact on MOX rates depends on the MOB community composition and on the rate of salinity change.
This study extends our knowledge of environmental controls of methane distribution and aerobic methane oxidation in aquatic environments. It also illustrates how important it is to investigate complex ecosystems rather than individual ecosystems to better understand the functioning of whole biomes.
Dietary approaches contribute to the prevention and treatment of type 2 diabetes. High protein diets have been shown to exert beneficial as well as adverse effects on metabolism; however, it is unclear whether the protein origin plays a role in these effects. The LeguAN study investigated in detail the effects of two high protein diets, of either plant or animal origin, in type 2 diabetic patients. Both diets contained 30 EN% protein, 40 EN% carbohydrates, and 30 EN% fat. Fiber content, glycemic index, and composition of dietary fats were similar in both diets. Compared with the patients' previous dietary habits, fat was exchanged for protein, while carbohydrate intake was not modified. Overall, both high protein diets led to improvements in glycemic control, insulin sensitivity, liver fat, and cardiovascular risk markers, without marked differences between the protein types.
Fasting glucose together with indices of insulin resistance were ameliorated by both interventions to varying extents, but without significant differences between protein types. The decline in HbA1c was more pronounced in the plant protein group, whereas the improvement in insulin sensitivity was more pronounced in the animal protein group. The high protein intake had only a slight influence on postprandial metabolism, as seen for free fatty acids and indices of insulin secretion, sensitivity and degradation. Except for GIP release, ingestion of animal and plant protein meals did not provoke differential metabolic and hormonal responses, despite diverse circulating amino acid levels.
The animal protein diet led to a selective increase of fat-free mass and decrease of total fat mass, which was not significantly different from the plant protein diet. Moreover, the high protein diets potently decreased liver fat content by 42% on average, which was linked to significantly diminished lipogenesis, free fatty acid flux and lipolysis in adipose tissue. A moderate decline of circulating liver enzymes was induced by both interventions. The liver fat reduction was associated with improved glucose homeostasis and insulin sensitivity, which underlines the protective effect of the diets.
Blood lipid profiles improved in all subjects, probably related to the lower fat intake. Reductions in uric acid and markers of inflammation further argued for metabolic benefits of both high protein diets. Systolic and diastolic blood pressure declined only in the plant protein (PP) group, pointing to a possible role of arginine.
Kidney function was not altered by high protein consumption over 6 weeks. The rapid decrease of serum creatinine in the PP group was noteworthy and should be further investigated. Protein type did not seem to play a role, but long-term studies are warranted to fully elucidate the safety of high protein regimens.
Varying the source of dietary protein did not affect the mTOR pathway in adipose tissue or blood cells in either acute or chronic settings. The enhancement of whole-body insulin sensitivity also suggested no alteration of mTOR signaling and no impairment of insulin sensitivity in skeletal muscle.
A remarkable outcome was the extensive reduction of FGF21, a critical regulator of metabolic processes, by approximately 50%, independently of protein type. Whether hepatic ER stress, ammonia flux or rather macronutrient preferences lie behind this paradoxical finding remains to be investigated in detail.
Contrary to initial expectations and previous reports, the plant protein based diet had no clear advantage over the animal protein diet. The pronounced beneficial effect of animal protein on insulin homeostasis, despite high BCAA and methionine intake, was certainly unexpected, suggesting that more complex metabolic adaptations occur upon prolonged consumption. In addition, the reduced fat intake may also have contributed to the overall improvements in both groups.
Taking these results into account, a short-term diet containing 30 EN% protein (from either plant or animal origin), 40 EN% carbohydrates, and 30 EN% fat with a lower SFA amount leads to metabolic improvements in diabetic patients, regardless of protein source.
The cell interior is a highly packed environment in which biological macromolecules evolve and function. This crowded medium affects many biological processes, such as protein-protein binding, gene regulation, and protein folding. Biochemical reactions that take place under such crowded conditions thus differ from those under dilute test-tube conditions, and considerable effort has been invested in understanding these differences.
In this work, we combine different computational tools to disentangle the effects of molecular crowding on biochemical processes. First, we propose a lattice model to study the implications of molecular crowding for enzymatic reactions. We provide a detailed picture of how crowding affects binding and unbinding events and how the separate effects of crowding on binding equilibrium act together. Then, we implement a lattice model to study the effects of molecular crowding on facilitated diffusion. We find that obstacles on the DNA impair facilitated diffusion; the extent of this effect, however, depends on how dynamic the obstacles on the DNA are. For the scenario in which crowders are present only in the bulk solution, we find that under some conditions the presence of crowding agents can enhance specific DNA binding. Finally, we use structure-based techniques to examine the impact of crowders on the folding of a protein. We find that polymeric crowders have stronger effects on protein stability than spherical crowders, and that the strength of this effect increases as the polymeric crowders become longer. The methods we propose here are general and can also be applied to more complicated systems.
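To make the flavour of such lattice simulations concrete, a minimal sketch (not the thesis model; all parameter names and values are illustrative assumptions) of facilitated diffusion with static obstacles might look as follows: a walker slides along a 1D lattice representing DNA, obstacles block sliding, and occasional detachment/rebinding events mimic 3D excursions through the bulk:

```python
import random

def search_time(L=200, target=100, obstacle_density=0.1, p_jump=0.05, seed=1):
    """First-passage time (in lattice steps) for a walker to locate a target
    site on a periodic 1D lattice ("DNA"). The walker slides +/-1; static
    obstacles block sliding moves. With probability p_jump per step it
    detaches and rebinds at a random obstacle-free site, mimicking the 3D
    excursions that make the diffusion 'facilitated'."""
    rng = random.Random(seed)
    obstacles = {i for i in range(1, L)
                 if i != target and rng.random() < obstacle_density}
    free_sites = [i for i in range(L) if i not in obstacles]
    pos, t = 0, 0
    while pos != target:
        if rng.random() < p_jump:
            pos = rng.choice(free_sites)     # 3D excursion: rebind anywhere
        else:
            nxt = (pos + rng.choice((-1, 1))) % L
            if nxt not in obstacles:         # sliding move blocked by obstacle?
                pos = nxt
        t += 1
    return t
```

Averaging `search_time` over many seeds at different obstacle densities would reproduce the qualitative observation in the text: more obstacles on the DNA lengthen the search, while rebinding events let the walker escape blocked segments.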
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits through the accumulation of drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly against the background of treatment failure having occurred previously with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a certain drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent of their application to drug-specific clinical mutation data for assessing potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effects mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and to assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors.
Additionally, we address another important aspect of the clinical management of HIV disease, namely drug pharmacokinetics. Time-dependent changes in in vivo drug concentration can have an impact on the antiviral effect and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
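For orientation (this is standard pharmacokinetic theory, not the thesis framework itself), the steady-state accumulation ratio under repeated dosing with first-order elimination follows directly from the elimination rate constant and the dosing interval, R = 1 / (1 - exp(-k*tau)) with k = ln(2) / t_half; the function name and numbers below are illustrative:

```python
import math

def accumulation_ratio(dose_interval_h, half_life_h):
    """Steady-state accumulation ratio for repeated dosing with first-order
    elimination: R = 1 / (1 - exp(-k * tau)), where k = ln(2) / half-life."""
    k = math.log(2) / half_life_h            # elimination rate constant (1/h)
    return 1.0 / (1.0 - math.exp(-k * dose_interval_h))

# Dosing once per half-life accumulates to twice the single-dose exposure.
print(accumulation_ratio(dose_interval_h=12, half_life_h=12))  # → 2.0
```

Lengthening the interval relative to the half-life reduces accumulation but deepens the trough, which is the trade-off any dosing-interval decision must balance.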
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
This volume takes a novel approach to the corpus-based variationist sociolinguistic study of contemporary urban western Irish English. Based on qualitative data as well as on linguistic features extracted from the Corpus of Galway City Spoken English, this study approaches the major sociolinguistic characteristics of (th) and (dh) variability in Galway City English. It demonstrates the diverse local patterns of variability and change in the phonetic realisation of the dental fricatives and establishes a considerable degree of divergence from traditional accounts of Irish English. This volume suggests that the linguistic stratification of variants of (th) and (dh) in Galway correlates both with the social stratification of the city itself and with the stratification of speakers by social status, sex/gender and age group.
Light-triggered release of bioactive compounds from HA/PLL multilayer films for stimulation of cells
(2016)
The concept of targeting cells and tissues by controlled delivery of molecules is essential in the field of biomedicine. The layer-by-layer (LbL) technology for the fabrication of polymer multilayer films is widely implemented as a powerful tool to assemble tailor-made materials for controlled drug delivery. LbL films can also be engineered to act as mimics of the natural cellular microenvironment. Thus, owing to the myriad possibilities offered by LbL films, such as controlled cellular adhesion and drug delivery, directing the fate of cells grown on the films becomes readily achievable.
The aim of this work was to develop an approach for non-invasive and precise control of the presentation of bioactive molecules to cells. The strategy is based on the employment of LbL films, which function as a support for cells and at the same time as reservoirs for bioactive molecules to be released in a controlled manner. UV light is used to trigger the release of the stored ATP with high spatio-temporal resolution. Both physico-chemical (competitive intermolecular interactions in the film) and biological aspects (cellular response and viability) are addressed in this study.
The biopolymers hyaluronic acid (HA) and poly-L-lysine (PLL) were chosen as the building blocks for the LbL film assembly. Native HA/PLL films showed poor cellular adhesion as well as significant degradation by cells within a few days. However, coating the films with gold nanoparticles not only improved cellular adhesion and protected the films from degradation, but also formed a size-exclusion barrier with an adjustable cut-off in the size range of a few tens of kDa.
The films were shown to have a high reservoir capacity for small charged molecules (reaching mM levels in the film). Furthermore, they were able to release the stored molecules in a sustained manner. The loading and release are explained by a mechanism based on interactions between the charges of the stored molecules and uncompensated charges of the biopolymers in the film; charge balance and polymer dynamics in the film play a pivotal role.
Finally, the concept of light-triggered release from the films was proven using caged ATP loaded into the films, from which ATP was released on demand. ATP induces a fast cellular response, i.e. an increase in intracellular [Ca2+], which was monitored in real time. The limitations of cellular stimulation by the proposed approach are highlighted by studying the stimulation as a function of irradiation parameters (time, distance, light power). Moreover, the caging molecules bind to the film more strongly than ATP does, which opens new perspectives for the use of the most diverse chemical compounds as caging molecules.
The employment of HA/PLL films as a novel support for cellular growth and as a host for bioactive molecules, together with the possibility of stimulating individual cells using focused light, renders this approach highly efficient and unique in terms of precision and spatio-temporal resolution among those previously described. With its high potential, the concept presented herein provides the foundation for the design of new intelligent materials for single-cell studies, with a focus on tissue engineering, diagnostics, and other cell-based applications.
Ionothermal carbon materials
(2016)
Alternative concepts for energy storage and conversion have to be developed, optimized and employed to fulfill the dream of a fossil-independent energy economy. Porous carbon materials play a major role in many energy-related devices. Among their different characteristics, distinct porosity features, e.g., specific surface area (SSA), total pore volume (TPV), and pore size distribution (PSD), are important to maximize performance in the final device. To approach the aim of synthesizing carbon materials with tailor-made porosity in a sustainable fashion, the present thesis focused on biomass-derived precursors, employing and further developing ionothermal carbonization.
During ionothermal carbonization, a salt melt serves simultaneously as solvent and porogen. Typically, eutectic mixtures containing zinc chloride are employed as the salt phase. The first topic of this thesis addressed the possibility of precisely tailoring the porosity of ionothermal carbon materials through an experimentally simple variation of the molar composition of the binary salt mixture. The developed pore-tuning tool allowed the synthesis of glucose-derived carbon materials with predictable SSAs in the range of ~900 to ~2100 m2 g-1. Moreover, the nucleobase adenine was employed as a precursor, introducing nitrogen functionalities into the final material. The chemical properties of the carbon materials are thereby varied, opening up new fields of application. Nitrogen-doped carbons (NDCs) are able to catalyze the oxygen reduction reaction (ORR), which takes place at the cathode of a fuel cell. The porosity tailoring developed here allowed the synthesis of adenine-derived NDCs with outstanding SSAs of up to 2900 m2 g-1 and a very large TPV of 5.19 cm3 g-1. Furthermore, the influence of porosity on the ORR could be directly investigated, enabling precise optimization of the porosity characteristics of NDCs for this application.
The second topic addressed the development of a new method to investigate the as-yet-unresolved mechanism of the oxygen reduction reaction using a rotating disc electrode setup. The focus was placed on noble-metal-free catalysts. The results showed that the reaction pathway of the investigated catalysts is pH-dependent, indicating different active species at different pH values.
The third topic addressed the expansion of the salts used for the ionothermal approach towards hydrated calcium and magnesium chlorides. Hydrated salt phases were shown to allow the introduction of a secondary templating effect, which was connected to the coexistence of liquid and solid salt phases.
This method enabled the synthesis of fibrous NDCs with SSAs of up to 2780 m2 g-1 and a very large TPV of 3.86 cm3 g-1. Moreover, the concept of active-site implementation by facile low-temperature metalation, employing the obtained NDCs as solid ligands, was shown for the first time in the context of the ORR.
Overall, this thesis may pave the way towards highly porous carbon materials with tailor-made porosity, prepared via an inexpensive and sustainable pathway, which can be applied in energy-related fields, thereby supporting the needed expansion of the renewable energy sector.