Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
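As an illustration of likelihood-based Bayesian inference at the level of an individual participant: the actual scan-path and drift models define their own likelihoods, so the log-normal toy likelihood, the priors, and all parameter names below are stand-ins, not the thesis's models.

```python
import numpy as np

def log_likelihood(theta, data):
    # Toy stand-in likelihood: fixation durations modeled as log-normal
    # with parameters theta = (mu, sigma). The real models are richer.
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    x = np.log(data)
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2
                  - np.log(sigma * data) - 0.5 * np.log(2 * np.pi))

def log_prior(theta):
    mu, sigma = theta
    # Weakly informative box priors; the bounds are assumptions.
    if -2 < mu < 10 and 0 < sigma < 5:
        return 0.0
    return -np.inf

def metropolis(data, n_iter=10000, step=0.05, seed=0):
    """Random-walk Metropolis over one participant's parameters."""
    rng = np.random.default_rng(seed)
    theta = np.array([5.5, 0.5])   # initial guess on a log-ms scale
    logp = log_prior(theta) + log_likelihood(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step, size=theta.size)
        logp_prop = log_prior(prop) + log_likelihood(prop, data)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Usage: synthetic fixation durations (ms) for a single participant.
durations = np.random.default_rng(1).lognormal(5.5, 0.4, size=300)
chain = metropolis(durations)
print(chain[2000:].mean(axis=0))   # marginal posterior means after burn-in
```

The marginal posterior samples produced this way are what analyses of interpretable parameters, model comparisons, and posterior predictive checks operate on.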
Nitrogen is often a limiting factor for plant growth due to its heterogeneous distribution in the soil and to seasonal and diurnal changes in growth rates. In most soils, NH4+ and NO3− are the predominant sources of inorganic nitrogen available for plant nutrition. In this context, plants have evolved mechanisms that enable them to optimize nitrogen acquisition, including transporters specialized in the uptake of nitrogen and subject to regulation that responds to nitrogen-limiting or excess conditions. Although the average NH4+ concentrations of soils are generally 100 to 1000 times lower than those of NO3− (Marschner, 1995), most plants preferentially take up NH4+ when both forms are present because, unlike NO3−, NH4+ does not have to be reduced prior to assimilation and thus requires less energy to assimilate (Bloom et al., 1992). Apart from high uptake rates in roots, high intracellular ammonium concentrations also result from the quantitatively important internal breakdown of amino acids (Feng et al., 1998), and NH4+ is generated in high quantities during photorespiration (Mattson et al., 1997; Pearson et al., 1998). Thus, NH4+ is a key component of nitrogen metabolism for all plants and can accumulate to varying concentrations in all compartments of the cell, including the cytosol, the vacuole and the apoplast (Wells and Miller, 2000; Nielsen and Schjoerring, 1998). Two related families of ammonium transporters (AMT1 and AMT2), containing six genes that encode transporter proteins specific for ammonium, had been identified prior to this thesis, and some genes had been partially characterised in Arabidopsis (Gazzarrini et al., 1999; Sohlenkamp et al., 2002; Kaiser et al., 2002). However, these studies were not sufficient to assign physiological functions to the individual transporters, and AMT1.4 and AMT1.5 had not been studied prior to this thesis. Given this background, it was considered desirable to acquire a deeper knowledge of the physiological functions of the six Arabidopsis ammonium transporters. To this end, tissue-specific expression profiles of the individual wildtype AtAMT genes were generated by quantitative real-time PCR (qRT-PCR) and promoter-GUS expression. Approaches such as T-DNA insertional mutants and RNAi hairpin constructs were employed to reduce the expression levels of AMT genes. Transcript levels were determined, and physiological, biochemical and developmental analyses such as growth tests on different media and 14C-MA and NH4+ uptake studies with the isolated insertional mutants and RNAi lines were performed to deepen the knowledge of the individual functions of the six AMTs in Arabidopsis. In addition, double mutants of the insertional mutants were created to investigate the extent to which homologous genes could compensate for lost transporter functions. The results described in this thesis show that the six AtAMT genes display a high degree of specificity in their tissue-specific expression and are likely to play complementary roles in ammonium uptake into roots, in shoots, and in flowers. AtAMT1.1 is likely to be a ‘work horse’ for cellular ammonium transport and reassimilation. A major role is probably the recapture of photorespiratory NH3/NH4+ escaping from the cytosol. In roots, it is likely to transport NH4+ from the apoplast into cortical cells. AtAMT1.3 and AtAMT1.5 appear to be specialised in the acquisition of external NH4+ from the soil.
Furthermore, AtAMT1.5 plays an additional role in the reassimilation of NH3/NH4+ released during the breakdown of storage proteins in the cotyledons of germinating seedlings. It was difficult to distinguish a specialisation between the transporters AtAMT1.2 and AtAMT1.1; however, their root- and flower-specific expression patterns differ and indicate alternative functions for both. AtAMT1.4 has a very distinct expression which is restricted to the vascular bundles of leaves and to pollen only, where it is likely to be involved in the loading of NH4+ into the cells. The AtAMT2.1 expression pattern is confined to vascular bundles and meristematically active tissues in leaves, where ammonium concentrations can reach very high levels. Additionally, the Vmax of AtAMT2 increases with increasing external pH, in contrast to AtAMT1.1. Thus, AtAMT2.1 might be specialised in ammonium transport in ammonium-rich environments, where the functions of other transporters are limited, enabling cells to take up NH4+ over a wide range of concentrations. The root hair expression suggests an additional role in NH3/NH4+ acquisition, where it possibly serves as a transporter able to acquire ammonium from basic soils where other transporters become less effective. RNAi lines showing a reduction in AtAMT gene mRNA levels and NH4+ transport kinetics grew more slowly, and flowering was delayed. This indicates that NH4+ is a crucial and limiting factor for plant growth.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed to analyse how migratory birds respond to various scenarios of altered food supply under global change. Both ecological and evolutionary time-scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effects of fine-scale temporal patterns in resource supply are studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
The earth’s ecosystems are undergoing considerable changes characterized by human-induced alterations of environmental factors. In order to develop conservation goals for vulnerable ecosystems, research on ecosystem functioning is required. It is therefore crucial to explore organismal interactions, such as trophic interactions or competition, which are decisive for key processes in ecosystems. These interactions are determined by the performance responses of organisms to environmental changes, which, in turn, are shaped by the organisms' functional traits. Exploring traits, their variation, and the environmental factors that act on them may provide insights into how ecological interactions affect populations, community structures and dynamics, and thus ecosystem functioning. In aquatic ecosystems, global warming intensifies phytoplankton blooms, which are more frequently dominated by cyanobacteria. As cyanobacteria are poor in polyunsaturated fatty acids (PUFA) and sterols, this compositional change alters the biochemical food quality of phytoplankton for consumer species, with potential effects on ecological interactions. Within this thesis, I studied the effects of biochemical food quality on consumer traits and performance responses at the phytoplankton-zooplankton interface, using different strains of two closely related generalist rotifer species, Brachionus calyciflorus and Brachionus fernandoi, and three phytoplankton species that differ in their biochemical food quality, i.e. in their content and composition of PUFA and sterols. In a series of laboratory feeding experiments, I found that biochemical food quality affected the rotifers' performance, i.e. fecundity, survival, and population growth, across a broad range of food quantities. Biochemical food quality constraints, which are often underestimated as environmental factors, had strong impacts on performance responses. I further explored the potential of biochemical food quality in mediating consumer response variation between species and among strains of one species. Co-limitation by food quantity and biochemical food quality resulted in differences in performance responses, which were more pronounced within than between rotifer species. Furthermore, I demonstrated that the body PUFA compositions of rotifer species and strains were differently affected by the dietary PUFA supply, which indicates inter- and intraspecific differences in physiological traits, such as PUFA retention, allocation, and/or bioconversion capacity, within the genus Brachionus. This indicates that dietary PUFA are involved in shaping traits and performance responses of rotifers. This thesis reveals that biochemical food quality is an environmental factor with strong effects on individual traits and performance responses of consumers. Biochemical food quality constraints can further mediate trait and response variation among species or strains. Consequently, they carry the potential to shape ecological interactions and evolutionary processes, with effects on community structures and dynamics. Trait-based approaches that include food quality research may thus provide further insights into the linkage between functional diversity and the maintenance of crucial ecosystem functions.
A phagocyte-specific Irf8 gene enhancer establishes early conventional dendritic cell commitment
(2011)
Haematopoietic development is a complex process that is strictly hierarchically organized. The phagocyte lineages form a very heterogeneous cell compartment with specialized functions in innate immunity and in the induction of adaptive immune responses. Their generation from a common precursor must be tightly controlled. Interference with lineage formation programs, for example by mutation or altered expression levels of transcription factors (TFs), can cause leukaemia. However, the molecular mechanisms driving specification into distinct phagocytes remain poorly understood. In the present study, I identify the transcription factor Interferon Regulatory Factor 8 (IRF8) as the specification factor of dendritic cell (DC) commitment in early phagocyte precursors. Employing an IRF8 reporter mouse, I showed the distinct Irf8 expression in haematopoietic lineage diversification and isolated a novel bone marrow resident progenitor which selectively differentiates into CD8α+ conventional dendritic cells (cDCs) in vivo. This progenitor strictly depends on Irf8 expression to properly establish its transcriptional DC program while suppressing a lineage-inappropriate neutrophil program. Moreover, I demonstrated that Irf8 expression during this cDC commitment step depends on a newly discovered myeloid-specific cis-enhancer which is controlled by the haematopoietic transcription factors PU.1 and RUNX1. Interference with their binding leads to abrogation of Irf8 expression and subsequently to disturbed cell fate decisions, demonstrating the importance of these factors for proper phagocyte development. Collectively, these data delineate a transcriptional program establishing cDC fate choice with IRF8 at its center.
This work has demonstrated the fabrication of 1D nanostrands composed of stimuli-responsive microgels. Microgels are well-known materials able to respond to various stimuli from their environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current [125]).
The aim of this work was to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen due to their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method which enables the fabrication of nanostructures in a reproducible manner and with high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was shown using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si-wafers and glass slides). The fabrication of nanoarrays proved feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Towards the fabrication of 1D microgel strands, interparticle connectivity was sought. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM copolymer, and be detached from the substrate, owing to their lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It was also shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach through the incorporation of cyclodextrin comonomers in a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After successfully testing the cross-linking capability in solution, the cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, the single arrays agglomerated once they came into contact with each other. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies were tried (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) but were unsuccessful. On closer consideration, entropy effects were identified that favor the release of complexed diazobenzene linker, leading to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results, with less pronounced agglomeration (Figure 77), so this approach could be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in zigzag formation provided the best compromise between stability in dispersion (see Figure 44a and Figure 51) and sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction after a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid cell setup. The strands required a higher load force than single microgels to be detached from the surface. However, with the AFM it was not possible to detach the strands in a controllable manner; attempts resulted either in the complete removal of single microgel particles or in tearing the strands off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, coating the surface of the substrates with a repulsive polymer film proved beneficial in hindering adsorption of the strands. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, these methods could not detect the anisotropic stimuli-responsive contraction of the free-moving microgel strands. To summarize, the flexibility of microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, having few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can in future be conducted to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coating [235]). This measure would make the discussed alignment methods more versatile. As shown herein, the assembly methods enable versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles; Figure 14) to expand the possibilities of microgel alignment and to precisely control aspect ratios (e.g. microgel rods with homogeneous size distributions).
In Germany, more than 200,000 people die of cancer every year, making it the second most common cause of death. Chemotherapy and radiation therapy are often combined to exploit a supra-additive effect, as some chemotherapeutic agents, like halogenated nucleobases, sensitize the cancerous tissue to radiation. The radiosensitizing action of certain therapeutic agents can be at least partly assigned to their interaction with secondary low-energy electrons (LEEs) that are generated along the track of the ionizing radiation. In cancer therapy, DNA is an important target, as severe DNA damage such as double strand breaks induces cell death. As there is only a limited number of radiosensitizing agents in clinical practice, which are often strongly cytotoxic, it would be beneficial to gain a deeper understanding of the interaction of less toxic potential radiosensitizers with secondary reactive species like LEEs. Beyond that, LEEs can be generated by laser-illuminated nanoparticles as applied in photothermal therapy (PTT) of cancer, an approach that treats cancer by increasing the temperature in the cells. However, the application of halogenated nucleobases in PTT has not been considered so far. In this thesis, the interaction of the potential radiosensitizer 8-bromoadenine (8BrA) with LEEs was studied. In a first step, dissociative electron attachment (DEA) in the gas phase was studied in a crossed electron-molecular beam setup. The main fragmentation pathway was revealed to be the cleavage of the C-Br bond. The formation of a stable parent anion was observed for electron energies around 0 eV. Furthermore, DNA origami nanostructures were used as platforms to determine electron-induced strand break cross sections of 8BrA-sensitized oligonucleotides and the corresponding non-sensitized sequence as a function of the electron energy. In this way, the influence of the DEA resonances observed for the free molecules on the DNA strand breaks was examined. As the surrounding medium influences the DEA, pulsed-laser-illuminated gold nanoparticles (AuNPs) were used as a nanoscale electron source in an aqueous environment. The dissociation of brominated and native nucleobases was tracked with UV-Vis absorption spectroscopy, and the generated fragments were identified with surface-enhanced Raman scattering (SERS). Besides the electron-induced damage, nucleobase analogues are decomposed in the vicinity of the laser-illuminated nanoparticles due to the high temperatures. In order to gain a deeper understanding of the different dissociation mechanisms, the thermal decomposition of the nucleobases in these systems was studied and the influence of the adsorption kinetics of the molecules was elucidated. In addition to the pulsed laser experiments, a dissociative electron transfer from plasmonically generated "hot electrons" to 8BrA was observed under low-energy continuous-wave laser illumination and tracked with SERS. The reaction was studied on AgNPs and AuNPs as a function of the laser intensity and wavelength. On dried samples, the dissociation of the molecule was described by fractal-like kinetics. In solution, the dissociative electron transfer was observed as well. It turned out that the timescale of the reaction was slightly below typical integration times of Raman spectra. Consequently, such reactions need to be taken into account in the interpretation of SERS spectra of electrophilic molecules.
The findings in this thesis help to understand the interaction of brominated nucleobases with plasmonically generated electrons and free electrons. This might help to evaluate the potential radiosensitizing action of such molecules in cancer radiation therapy and PTT.
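For orientation, fractal-like kinetics on restricted geometries is commonly written in the Kopelman form with a time-dependent rate coefficient; the specific parameter values fitted for 8BrA are the thesis's and are not reproduced in this summary, so what follows is only the generic textbook form:

```latex
k(t) = k_0\, t^{-h}, \qquad 0 \le h \le 1,
\qquad
\frac{dc}{dt} = -k(t)\, c
\;\Rightarrow\;
c(t) = c_0 \exp\!\left(-\frac{k_0}{1-h}\, t^{\,1-h}\right) \quad (h < 1),
```

where h = 0 recovers classical first-order kinetics and larger h reflects the increasingly restricted reaction geometry of a dried sample.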
Mathematical modeling of biological phenomena has experienced increasing interest since new high-throughput technologies give access to growing amounts of molecular data. Such modeling approaches are especially suited to testing hypotheses that are not yet experimentally accessible, or to guiding an experimental setup. One particular line of research investigates the evolutionary dynamics responsible for today's composition of organisms. Computer simulations either propose an evolutionary mechanism and thus reproduce a recent finding, or rebuild an evolutionary process in order to learn about its mechanism. The quest for evolutionary fingerprints in metabolic and gene-coexpression networks is the central topic of this cumulative thesis, which is based on four published articles. An understanding of the actual origin of life will probably remain an insoluble problem. However, one can argue that after a first simple metabolism had evolved, the further evolution of metabolism occurred in parallel with the evolution of the sequences of the catalyzing enzymes. Indications of such a coevolution can be found by correlating the change in sequence between two enzymes with their distance on the metabolic network, which is obtained from the KEGG database. We observe a small but significant correlation, primarily between nearest neighbors. This indicates that enzymes catalyzing subsequent reactions tend to be descended from the same precursor. Since this correlation is relatively small, one can at least assume that, even if new enzymes are not "genetic children" of the previous enzymes, they are certainly descended from one of the already existing ones. Following this hypothesis, we introduce a model of enzyme-pathway coevolution. By iteratively adding enzymes, this model explores the metabolic network in a manner similar to diffusion. By implementing a Gillespie-like algorithm, we are able to introduce a tunable parameter that controls the weight of sequence similarity when choosing a new enzyme. Furthermore, this method also defines a time difference between successive evolutionary innovations in terms of a new enzyme. Overall, these simulations generate putative time-courses of the evolutionary walk on the metabolic network. By a time-series analysis, we find that the acquisition of new enzymes appears in bursts, which are more pronounced when the influence of the sequence similarity is higher. This behavior strongly resembles punctuated equilibrium, which denotes the observation that new species also tend to appear in bursts rather than in a gradual manner. Thus, our model helps to establish a better understanding of punctuated equilibrium by giving a potential description at the molecular level. From the time-courses we also extract a tentative order of new enzymes, metabolites, and even organisms. The consistency of this order with previous findings provides evidence for the validity of our approach. While the sequence of a gene is directly subject to mutations, its expression profile might also change indirectly through evolutionary events in the cellular interplay. Gene coexpression data is readily accessible by microarray experiments and commonly illustrated using coexpression networks, where genes are nodes and get linked once they show a significant coexpression. Since the large number of genes makes an illustration of the entire coexpression network difficult, clustering helps to show the network on a meta-level. Various clustering techniques already exist.
However, we introduce a novel one which maintains control of the cluster sizes and thus assures proper visual inspection. An application of the method to Arabidopsis thaliana reveals that genes causing a severe phenotype often show a functional uniqueness in their network vicinity. This leads to 20 genes of so far unknown phenotype which are nevertheless suggested to be essential for plant growth. Of these, six indeed provoke such a severe phenotype, as shown by mutant analysis. By inspecting the degree distribution of the A. thaliana coexpression network, we identified two characteristics. The distribution deviates from the frequently observed power law by a sharp truncation which follows an over-representation of highly connected nodes. For a better understanding, we developed an evolutionary model which mimics the growth of a coexpression network by gene duplication, subject to a strong selection criterion, and by slight mutational changes in the expression profile. Despite the simplicity of our assumptions, we can reproduce the observed properties in A. thaliana as well as in E. coli and S. cerevisiae. The over-representation of high-degree nodes could be identified with mutually well connected genes of similar functional families: zinc fingers (PF00096), flagella, and ribosomes, respectively. In conclusion, these four manuscripts demonstrate the usefulness of mathematical models and statistical tools as a source of new biological insight. While the clustering approach to gene coexpression data leads to the phenotypic characterization of so far unknown genes and thus supports genome annotation, our model approaches offer explanations for observed properties of the coexpression network and, furthermore, substantiate punctuated equilibrium as an evolutionary process through a deeper understanding of an underlying molecular mechanism.
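A minimal sketch may make the Gillespie-like acquisition step described above concrete; the propensity function, the similarity weighting via beta, and all array names are illustrative stand-ins, not the published model:

```python
import numpy as np

def evolve(adjacency, similarity, beta, seed=0):
    """Iteratively add enzymes to a growing metabolic network.

    adjacency[i, j] = 1 if enzymes i and j catalyze neighboring reactions;
    similarity[i, j] = sequence similarity between enzymes i and j;
    beta tunes the weight of sequence similarity (the tunable parameter).
    Returns the acquisition order and the waiting time of each event.
    """
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    acquired, times, t = [0], [0.0], 0.0     # enzyme 0 seeds the network
    while len(acquired) < n:
        candidates = [i for i in range(n) if i not in acquired]
        # Propensity: reachability on the network, weighted by similarity.
        rates = np.array([
            adjacency[i, acquired].max()
            * np.exp(beta * similarity[i, acquired].max())
            for i in candidates
        ])
        total = rates.sum()
        if total == 0:
            break                             # nothing reachable anymore
        t += rng.exponential(1.0 / total)     # Gillespie waiting time
        new = candidates[rng.choice(len(candidates), p=rates / total)]
        acquired.append(new)
        times.append(t)
    return acquired, times
```

The inter-event times returned by such a scheme are what a burst (punctuated equilibrium) analysis would then operate on.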
The surface heat flow (qs) is paramount for modeling the thermal structure of the lithosphere. Changes in qs over a distinct lithospheric unit normally reflect changes in the crustal composition and thereby the radiogenic heat budget (e.g., Rudnick et al., 1998; Förster and Förster, 2000; Mareschal and Jaupart, 2004; Perry et al., 2006; Hasterok and Chapman, 2011, and references therein) or, less commonly, changes in the mantle heat flow (e.g., Pollack and Chapman, 1977). Knowledge of this physical property is therefore of great interest for both academic research and the energy industry. The present study focuses on the qs of central and southern Israel as part of the Sinai Microplate (SM). Having formed during Oligocene to Miocene rifting and break-up of the African and Arabian plates, the SM is characterized by a young and complex tectonic history. Because thermal diffusion through the lithosphere takes on the order of several tens of millions of years (e.g., Fowler, 1990), qs values of the area reflect conditions of pre-Oligocene times. The thermal structure of the lithosphere beneath the SM in general, and south-central Israel in particular, has remained poorly understood. To address this problem, the two parameters needed for the qs determination were investigated. Temperature measurements were made in ten pre-existing oil and water exploration wells, and the thermal conductivity of 240 drill core and outcrop samples was measured in the lab. The thermal conductivity is the sensitive parameter in this determination. Lab measurements were performed on both dry and water-saturated samples, which is labor- and time-consuming. Another possibility is to measure thermal conductivity in the dry state and convert it to a saturated value using mean-model approaches. The availability of a voluminous and diverse dataset of thermal conductivity values in this study allowed (1), in connection with the temperature gradient, the calculation of new, reliable qs values and their use in modeling the thermal pattern of the crust in south-central Israel prior to the young tectonic events, and (2), in connection with comparable datasets, an assessment of the quality of different mean-model approaches for the indirect determination of the bulk thermal conductivity (BTC) of rocks. The reliability of numerically derived BTC values appears to vary between different mean models and is also strongly dependent on sample lithology. Yet correction algorithms may significantly reduce the mismatch between measured and calculated conductivity values for the different mean models. Furthermore, the dataset allowed the derivation of lithotype-specific conversion equations to calculate the water-saturated BTC directly from dry-measured BTC and porosity data (e.g., well-log-derived porosity) without the use of any mean model, providing a suitable tool for fast analysis of large datasets. The results of the study indicate that qs in the study area is significantly higher than previously assumed. The newly presented qs values range between 50 and 62 mW m⁻². A weak trend of decreasing heat flow can be identified from east to west (55-50 mW m⁻²), and an increase from the Dead Sea Basin to the south (55-62 mW m⁻²). The observed range can be explained by variation in the composition (heat production) of the upper crust, accompanied by more systematic spatial changes in its thickness.
The new qs data can then be used, in conjunction with petrophysical data and information on the structure and composition of the lithosphere, to adjust a model of the pre-Oligocene thermal state of the crust in south-central Israel. A 2-D steady-state temperature model was calculated along an E-W traverse based on the DESIRE seismic profile (Mechie et al., 2009). The model comprises the entire lithosphere down to the lithosphere-asthenosphere boundary (LAB), incorporating the most recent knowledge of the lithosphere in pre-Oligocene time, i.e., prior to the onset of rifting and plume-related lithospheric thermal perturbations. The adjustment of modeled to measured qs allows conclusions about the pre-Oligocene LAB depth. The best fit yields a most likely depth of 150 km, which is consistent with estimates made in comparable regions of the Arabian Shield. This constitutes the first modelled pre-Oligocene LAB depth and provides important clues on the thermal state of the lithosphere before rifting. This, in turn, is vital for a better understanding of the (thermo-)dynamic processes associated with lithosphere extension and continental break-up.
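The lithotype-specific conversion equations themselves are derived in the thesis and not reproduced in this summary; as a generic illustration of the mean-model route they improve upon, a geometric-mean conversion from dry-measured BTC and porosity might be sketched as follows (the constants are standard room-temperature values for air and water):

```python
import numpy as np

LAMBDA_AIR = 0.026    # W m^-1 K^-1, air at room temperature
LAMBDA_WATER = 0.60   # W m^-1 K^-1, water at ~20 degC

def saturated_btc(lambda_dry, porosity):
    """Geometric-mean estimate of water-saturated bulk thermal conductivity.

    Back-calculates the matrix conductivity from the dry measurement,
    then re-mixes it with water instead of air in the pore space.
    """
    lam_dry = np.asarray(lambda_dry, dtype=float)
    phi = np.asarray(porosity, dtype=float)
    lambda_matrix = (lam_dry / LAMBDA_AIR**phi) ** (1.0 / (1.0 - phi))
    return lambda_matrix ** (1.0 - phi) * LAMBDA_WATER**phi

# Example: a rock with lambda_dry = 3.0 W/m/K and 15% porosity.
print(saturated_btc(3.0, 0.15))   # roughly 4.8 W/m/K
```

The thesis's point is precisely that such generic mixing laws misfit some lithologies, motivating lithology-dependent corrections and direct regressions.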
With his September 2015 speech “Breaking the tragedy of the horizon”, the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 presents a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate-economic models in their representation of financial markets. The subsequent part of this dissertation is therefore concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate the impacts of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
While the costs of climate action have been explored in great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches for integrating transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
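As a rough illustration of the event-study logic behind Chapter 3 (the actual specification, event dates, and datasets are the dissertation's; the market-model setup and all numbers below are generic stand-ins):

```python
import numpy as np

def abnormal_returns(stock, market, event_idx, est_win=120, evt_win=3):
    """Market-model event study around a climate-policy announcement.

    stock, market: arrays of daily returns; event_idx: announcement day.
    Estimates alpha/beta on a pre-event window, then cumulates abnormal
    returns over [-evt_win, +evt_win] around the event.
    """
    est = slice(event_idx - est_win - evt_win, event_idx - evt_win)
    beta, alpha = np.polyfit(market[est], stock[est], 1)
    evt = slice(event_idx - evt_win, event_idx + evt_win + 1)
    ar = stock[evt] - (alpha + beta * market[evt])
    return ar.sum()   # cumulative abnormal return (CAR)

# Usage with synthetic returns; a carbon-intensive stock might show a
# negative CAR after an ambitious climate-target announcement.
rng = np.random.default_rng(0)
mkt = rng.normal(0, 0.01, 500)
stk = 0.9 * mkt + rng.normal(0, 0.008, 500)
print(abnormal_returns(stk, mkt, event_idx=400))
```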
The Italian Army’s participation in Hitler’s war against the Soviet Union has remained unrecognized and understudied. Bastian Matteo Scianna offers a wide-ranging, in-depth corrective. Mining Italian, German and Russian sources, he examines the history of the Italian campaign in the East between 1941 and 1943, as well as how the campaign was remembered and memorialized in the domestic and international arena during the Cold War. Linking operational military history with memory studies, this book revises our understanding of the Italian Army in the Second World War.
Polypeptoid block copolymers
(2014)
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, the dietary regulation of both organokines and their role in tissue interaction need further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein 3 weeks prior to bariatric surgery. Before and after the intervention the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by MRS. During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects and strongly associated with intrahepatic lipids (IHL). In accordance, FGF21 serum concentrations increased with the severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independent of metabolic improvements. In accordance, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes identified nitrogen metabolites (NH4Cl and glutamine) as dose-dependent suppressors of FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and differently associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, while an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, encoding chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as with markers of macrophage infiltration in the subcutaneous fat depot. However, RARRES2 was not associated with any cytokine assessed in the obese subjects, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal of dietary protein balance. Lastly, the data revealed that chemerin is dysregulated in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
Context: Most solar and stellar dynamo models use the alpha-Omega scenario, where the magnetic field is generated by the interplay between differential rotation (the Omega effect) and a mean electromotive force due to helical turbulent convection flows (the alpha effect). There are, however, turbulent dynamo mechanisms that may complement the alpha effect or may be an alternative to it. Aims: We investigate models of solar-type dynamos where the alpha effect is completely replaced by two other turbulent dynamo mechanisms, namely the Omega x J effect and the shear-current effect, which both result from an inhomogeneity of the mean magnetic field. Methods: We studied axisymmetric mean-field dynamo models containing differential rotation, the Omega x J and shear-current effects, and a meridional circulation. The model calculations were carried out using the rotation profile of the Sun as obtained from helioseismic measurements and radial profiles of other quantities according to a standard model of the solar interior. Results: Without meridional flow, no satisfactory agreement of the models with the solar observations can be obtained. With a sufficiently strong meridional circulation included, however, the main properties of the large-scale solar magnetic field, namely its oscillatory behavior, its latitudinal drift towards the equator within each half cycle, and its dipolar parity with respect to the equatorial plane, are correctly reproduced. Conclusions: We have thereby constructed the first mean-field models of solar-type dynamos that do not use the alpha effect.
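For orientation, such mean-field models solve the standard mean-field induction equation; the concrete turbulence coefficients used in the paper are not reproduced in this abstract, so only the generic form is given:

```latex
\frac{\partial \overline{\mathbf{B}}}{\partial t}
 = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
 + \boldsymbol{\mathcal{E}}
 - \eta\, \nabla \times \overline{\mathbf{B}} \right),
```

where the mean electromotive force \(\boldsymbol{\mathcal{E}} = \overline{\mathbf{u}' \times \mathbf{b}'}\) here contains, instead of the usual \(\alpha \overline{\mathbf{B}}\) term, contributions proportional to \(\boldsymbol{\Omega} \times (\nabla \times \overline{\mathbf{B}}) \propto \boldsymbol{\Omega} \times \overline{\mathbf{J}}\) (the Omega x J effect) and to the mean shear acting on \(\overline{\mathbf{J}}\) (the shear-current effect).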
During sentence reading the eyes quickly jump from word to word to sample visual information with the high acuity of the fovea. Lexical properties of the currently fixated word are known to affect the duration of the fixation, reflecting an interaction of word processing with oculomotor planning. While low-level properties of words in the parafovea can likewise affect the current fixation duration, results concerning the influence of lexical properties have been ambiguous (Drieghe, Rayner, & Pollatsek, 2008; Kliegl, Nuthmann, & Engbert, 2006). Experimental investigations of such lexical parafoveal-on-foveal effects using the boundary paradigm have instead shown that lexical properties of parafoveal previews affect fixation durations on the upcoming target words (Risse & Kliegl, 2014). However, the results were potentially confounded with effects of preview validity.
The notion of parafoveal processing of lexical information challenges extant models of eye movements during reading. Models containing serial word processing assumptions have trouble explaining such effects, as they usually couple successful word processing to saccade planning, resulting in skipping of the parafoveal word. Although models with parallel word processing are less restricted, in the SWIFT model (Engbert, Longtin, & Kliegl, 2002) only processing of the foveal word can directly influence the saccade latency.
Here we combine the results of a boundary experiment (Chapter 2) with a predictive modeling approach using the SWIFT model, where we explore mechanisms of parafoveal inhibition in a simulation study (Chapter 4). We construct a likelihood function for the SWIFT model (Chapter 3) and utilize the experimental data in a Bayesian approach to parameter estimation (Chapter 3 & 4).
The experimental results show a substantial effect of parafoveal preview frequency on fixation durations on the target word, which can be clearly distinguished from the effect of preview validity. Using the eye movement data from the participants, we demonstrate the feasibility of the Bayesian approach even for a small set of estimated parameters, by comparing summary statistics of experimental and simulated data. Finally, we can show that the SWIFT model can account for the lexical preview effects, when a mechanism for parafoveal inhibition is added. The effects of preview validity were modeled best, when processing dependent saccade cancellation was added for invalid trials. In the simulation study only the control condition of the experiment was used for parameter estimation, allowing for cross validation. Simultaneously the number of free parameters was increased. High correlations of summary statistics demonstrate the capabilities of the parameter estimation approach. Taken together, the results advocate for a better integration of experimental data into computational modeling via parameter estimation.
Model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g. model transformations or code generation). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, caused essentially by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system, at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, it constitutes a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis concerns itself with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of composition: data-flow composition and context composition. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail.
In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations being composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
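A toy sketch may help picture data-flow composition; the classes and the scheduling loop below are invented for illustration and are far simpler than the megamodel formalism itself:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A domain-specific model (DSM), identified by name."""
    name: str
    content: dict = field(default_factory=dict)

@dataclass
class Operation:
    """A model operation coupled to others only via shared input/output DSMs."""
    name: str
    inputs: list
    outputs: list
    run: callable

def execute(operations, available):
    """Data-flow execution: run each operation once all its inputs exist."""
    pending = list(operations)
    while pending:
        ready = [op for op in pending if all(i in available for i in op.inputs)]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable data-flow composition")
        for op in ready:
            op.run(available)   # reads its input DSMs, produces its outputs
            available.update({o: Model(o) for o in op.outputs})
            pending.remove(op)

# Two heterogeneous operations composed purely by the shared DSM "design":
ops = [
    Operation("codegen", ["design"], ["code"], lambda env: None),
    Operation("m2m", ["requirements"], ["design"], lambda env: None),
]
execute(ops, {"requirements": Model("requirements")})
```

The point the sketch mirrors is that neither operation knows about the other; the composition emerges solely from the shared input and output models.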
Intentionality in Sellars
(2021)
This book argues that Sellars’ theory of intentionality can be understood as an advancement of a transcendental philosophical approach. It shows how Sellars develops his theory of intentionality through his engagement with the theoretical philosophy of Immanuel Kant.
The book delivers a provocative reinterpretation of one of the most problematic and controversial concepts of Sellars' philosophy: the picturing-relation. Sellars' theory of intentionality addresses the question of how to reconcile two aspects that seem opposed: the non-relational theory of intellectual and linguistic content and a causal-transcendental theory of representation inspired by the philosophy of the early Wittgenstein. The author explains how both parts cohere in a transcendental account of finite knowledge. He claims that this can only be achieved by reading Sellars as committed to a transcendental methodology inspired by Kant. In a final step, he brings his interpretation to bear on the contemporary metaphilosophical debate on pragmatism and expressivism.
Intentionality in Sellars will be of interest to scholars of Sellars and Kant, as well as researchers working in philosophy of mind, epistemology, and the history of nineteenth- and twentieth-century philosophy.
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This remains true if we pose additional delayable requirements, i.e. requirements that remain valid even if the hypothesis output is delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property to describe and collectively investigate a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to only change its last hypothesis in case it is inconsistent with the current training data. Such a learning behaviour is called conservative.
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria. Here we allow the hypothesized concept to differ from the concept to be learned by an increasing finite number of errors. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered a successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis space types in the relevant literature and provide the complete map.
In the second part, we model more efficient learning algorithms. These update their hypothesis based on the current datum, without direct access to past training data. We focus on iterative (hypothesis-based) and BMS (state-based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum in order to infer the new hypothesis.
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, for learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm, but not in a conservative way, had already been observed, showing that conservativeness is restrictive. An additional requirement arising from cognitive science research is U-shapedness, the phenomenon that the learning algorithm temporarily diverges from a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
In order to compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite amount of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs other parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. For proving the equivalence of the syntactic requirements, we refer to witness-based learning processes, in which every change of the hypothesis is justified by a witness from the training data that is classified correctly later on. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by the investigation of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed-point theorems).
One of the tremendous discoveries of the Cassini spacecraft has been the detection of propeller structures in Saturn's A ring. Although the generating moonlet is too small to be resolved by the cameras aboard Cassini, the density structure its gravity produces within the rings can be observed well. The largest observed propeller is called Blériot and has an azimuthal extent of several thousand kilometers. Thanks to its large size, Blériot could be identified in different images over a time span of more than 10 years, allowing the reconstruction of its orbital evolution. It turns out that Blériot deviates considerably, by several thousand kilometers in the azimuthal direction, from its expected Keplerian orbit. This excess motion can be well reconstructed by a superposition of three harmonics and therefore resembles the typical fingerprint of a resonantly perturbed body. This PhD thesis is devoted to the excess motion of Blériot. Resonant perturbations are known for some of the outer satellites of Saturn. Thus, in the first part of this thesis, we search for suitable resonance candidates near the propeller that might explain the observed periods and amplitudes. In numerical simulations, we show that resonances with Prometheus, Pandora and Mimas can indeed explain the libration periods in good agreement, but not the amplitudes. The amplitude problem is solved by the introduction of a propeller-moonlet interaction model, in which we assume a broken symmetry of the propeller caused by a small displacement of the moonlet. This results in a librating motion of the moonlet around the propeller's symmetry center due to the non-vanishing accelerations. The retardation of the reaction of the propeller structure to the motion of the moonlet causes the propeller to become asymmetric. Hydrodynamic simulations performed to test our analytical model confirm our predictions.

In the second part of this thesis, we consider a stochastic migration of the moonlet as an alternative hypothesis to explain the observed excess motion of Blériot. The mean longitude is a time-integrated quantity and thus introduces a correlation between the independent kicks of a random walk, smoothing the noise and making the residual look similar to the one observed for Blériot. We apply a diagonalization test to decorrelate the observed residuals for the propellers Blériot and Earhart and the ring-moon Daphnis. It turns out that the decorrelated distributions do not strictly follow the expected Gaussian distribution. Since the decorrelation method fails to distinguish a correlated random walk from a noisy libration, we provide an alternative study. Assuming the three-harmonic fit to be a valid representation of the excess motion of Blériot, independently of its origin, we test the likelihood that this excess motion can be created by a random walk. It turns out that neither an uncorrelated nor a correlated random walk is likely to explain the observed excess motion.
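The three-harmonic representation of the excess motion can be illustrated with a short fitting sketch. All numbers below are synthetic placeholders, not Blériot's measured amplitudes or periods; the sketch only shows the functional form being fitted.

```python
# A minimal sketch (synthetic data, hypothetical parameters) of representing
# an azimuthal residual as a superposition of three harmonics.
import numpy as np
from scipy.optimize import curve_fit

def three_harmonics(t, *p):
    """Sum of three sinusoids: p = (A1, P1, phi1, A2, P2, phi2, A3, P3, phi3)."""
    out = np.zeros_like(t)
    for A, P, phi in zip(p[0::3], p[1::3], p[2::3]):
        out += A * np.sin(2 * np.pi * t / P + phi)
    return out

t = np.linspace(0, 10 * 365.25, 400)             # ~10 years of epochs, in days
residual = three_harmonics(t, 1800, 4400, 0.3,   # synthetic "observed" residual, km
                           350, 700, 1.1, 120, 360, 2.0)
residual += np.random.normal(0, 30, t.size)      # observational scatter

p0 = (1500, 4000, 0, 300, 650, 0, 100, 400, 0)   # initial guesses for the fit
popt, _ = curve_fit(three_harmonics, t, residual, p0=p0)
print(popt.reshape(3, 3))  # fitted (amplitude, period, phase) per harmonic
```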
Hierarchical meso- and macropore architectures by liquid crystalline and polymer colloid templating
(2007)
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and they have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable to those from other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming, which makes large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results show that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
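For orientation, the TWI itself has the standard form TWI = ln(a / tan β), with a the specific upslope catchment area and β the local slope. The sketch below illustrates the thresholding idea with assumed arrays; it is not the study's implementation, and the simple agreement score stands in for the maximum likelihood calibration used there.

```python
# A minimal sketch (assumed input rasters) of TWI-based flood mapping:
# cells whose topographic wetness index exceeds a calibrated threshold tau
# are classified as flood-prone.
import numpy as np

def topographic_wetness_index(upslope_area, slope_rad, cell_size=1.0):
    """TWI = ln(a / tan(beta)), with a the specific catchment area."""
    a = upslope_area / cell_size                    # specific catchment area
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # avoid division by zero
    return np.log(np.maximum(a, 1e-6) / tan_beta)

def calibrate_tau(upslope_area, slope_rad, reference_mask, candidates):
    """Pick the tau that best reproduces a 2D hydrodynamic inundation map
    (a simple stand-in for the maximum likelihood calibration)."""
    twi = topographic_wetness_index(upslope_area, slope_rad)
    scores = [(np.mean((twi >= t) == reference_mask), t) for t in candidates]
    return max(scores)[1]

def flood_prone_mask(upslope_area, slope_rad, tau):
    """Binary inundation map from the calibrated TWI threshold tau."""
    return topographic_wetness_index(upslope_area, slope_rad) >= tau
```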
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results show that aspect is the most important predictive feature for the CNN models, and altitude is the most important for the other models.
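The benchmarking setup with the tabular models can be sketched as follows. The feature matrix is purely synthetic here (11 unnamed features standing in for the predictive features of the study), so the numbers carry no meaning beyond illustrating the workflow, including the feature importance inspection mentioned above.

```python
# A minimal sketch (synthetic feature table) of susceptibility benchmarking:
# a random forest maps predictive features to flood / no-flood labels from a
# flood inventory, and its feature importances can be inspected afterwards.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 11))          # 11 predictive features per cell
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=5000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("feature importances:", rf.feature_importances_.round(3))
```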
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability to predict urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings of the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, while CNN models can limit the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside the training domains.
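The transfer learning idea can be sketched as follows. The architecture is entirely hypothetical (the study's actual network is not reproduced here); the sketch only shows the generic mechanism of freezing early convolutional layers and fine-tuning the head on limited target-domain data.

```python
# A minimal sketch (hypothetical architecture) of transfer learning for
# depth regression: a CNN trained on one domain is adapted to a new city by
# freezing its early layers and fine-tuning only the head.
import tensorflow as tf

def build_depth_cnn(n_features=12):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(64, 64, n_features)),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="linear"),  # water depth in m
    ])

model = build_depth_cnn()
# ... assume `model` was already trained on the source domain ...
for layer in model.layers[:-2]:
    layer.trainable = False                  # keep the learned generic filters
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# model.fit(target_patches, target_depths, epochs=10)  # small target sample
```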
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models to map urban pluvial flooding. However, further studies are needed to develop methods that fully overcome the limitations of 2D hydrodynamic models.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality with respect to effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic renderings particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. The techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics, unlike photorealistic rendering, to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides seamless transitions between them according to user tasks, camera view, and image resolution.
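One classic building block of such illustrative abstraction, tone quantization with outline overlay, can be sketched on the CPU. The thesis implements its techniques as GPU shaders; the sketch below is only an illustration of the general principle, not the proposed techniques.

```python
# A minimal CPU sketch of a classic non-photorealistic abstraction:
# quantizing shading into discrete tones and overlaying detected edges
# ("toon" or cel shading), here on a plain RGB image.
import numpy as np

def toon_abstraction(rgb: np.ndarray, levels: int = 4,
                     edge_threshold: float = 0.15) -> np.ndarray:
    """rgb: float image in [0, 1], shape (H, W, 3)."""
    quantized = np.floor(rgb * levels) / (levels - 1)   # discrete tone steps
    quantized = np.clip(quantized, 0.0, 1.0)
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)                          # cheap edge detector
    edges = np.hypot(gx, gy) > edge_threshold
    quantized[edges] = 0.0                              # draw dark outlines
    return quantized
```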
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here used in the unusual sense of earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets and thus allows a discussion of the source inversion problem at different scales. The first application, dealing with mining-induced seismicity, addresses the determination of source parameters at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely the target of automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitude range of the target aftershocks extends down to MW 4.0. This dataset is considered here as a weak seismicity case, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, both using double couple (DC) and full moment tensor (MT) models, source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial. For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repetitive occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights some characteristic geometrical features of the fault planes, which show a general consistency with the orientation of the slab. The additional inversion for source duration allowed us to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2000 geomagnetic models were calculated. All models are based on the same data set but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to the agreement of their dipole coefficients with gufm1, more realistic uncertainty estimates were derived than those provided by the authors of the data.
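The ensemble construction can be sketched generically. The fitting routine is a placeholder (the actual spherical harmonic inversion is not reproduced here); the sketch only shows the randomization of the data within their assumed Gaussian uncertainties and the resulting per-coefficient scatter.

```python
# A minimal sketch (hypothetical fitting routine) of the ensemble approach:
# each model is fitted to the same data set after perturbing every datum
# with a draw from its assumed Gaussian uncertainty.
import numpy as np

def ensemble_models(data, sigma, fit_model, n_models=2000, seed=42):
    """data, sigma: arrays of observations and their uncertainties;
    fit_model: function returning Gauss coefficients for a perturbed data set."""
    rng = np.random.default_rng(seed)
    coeffs = [fit_model(data + rng.normal(0.0, sigma)) for _ in range(n_models)]
    return np.array(coeffs)   # one row of Gauss coefficients per model

# Stability can then be judged from the scatter per coefficient, e.g.:
# stable = coeffs.std(axis=0) < tolerance   # small scatter up to degree 4
```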
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge difference in data count, accuracy and coverage of these two very different data sources made it necessary to introduce a time-dependent spatial damping, which was constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by considering each data point as a Gaussian random variable, whose mean is the original value and whose standard deviation is its uncertainty. The final model arhimag1k is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the location of today's South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows a constant behaviour between 1600 and 1840 AD.
In the second part of the thesis, four new paleointensities from four different lava flows on the island of Fogo, which is part of Cape Verde, are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, a broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2’s information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which in turn facilitates comprehension, resulting in an appropriate understanding of what has been said. If a speaker doesn’t follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, there may be inappropriate information transfer within the discourse, and misunderstandings. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including by prosodic means, by syntactic constructions, and by lexical means. The general principles underlying information structure seem to contrast structurally across different languages, and languages can also differ in the way they express focus. In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use the information-structural benefit in the L2. The knowledge of information-structural requirements in the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners’ native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the level of the learners’ appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners’ ability to communicate effectively in the L2.
The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions. Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences, and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure for the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners’ memory.

Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3). Therefore, it was expected that prosodically marked words are better recognised than unmarked words, and that listeners can exploit accent structure better for accurate word recognition in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance in L1 listening was expected between different focus conditions (narrow focus > broad focus). Results of Experiment 1 show that words were better recognized in native listening than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of the experiments with spliced materials suggest that the surrounding prosodic sentence contour made listeners remember a target word, rather than the local prosodic realization of the word. Prosody seems indeed to direct listeners’ attention to the focus of the sentence (see Cutler, 1976).
Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage of items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed, in an English L2 production study, a trend towards an advantage of items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length showed an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003). Therefore, word length was included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition. Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first, and what is novel later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention.

Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners’ attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times. On the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to their reliance on 'safe' subject-prominent SVO structures. The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when focus effects of syntactic surface structure and coherence relations between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference is interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help the German learners in native or in nonnative word recall.
This could be attributed to the phonological similarity of the multiple choice options (Conrad & Hull, 1964), and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993).

Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which causes interpretations of context. Therefore, lexical focus marking was not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles was expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitualness was expected, resulting in better recall of focused words. The results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1. The presence of focus particles is likely to instantiate a complex discourse model which lets the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native language word recall in German L1 and English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of means that identify focus for a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This pleads for accent placement as extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech. There is evidence that additional context facilitates the processing of complex syntactic structures but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede better word recall, particularly in the L2.
Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for the listeners. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could enforce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits of, and also the need for, a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise forms and meanings of the L2. Information structure thus emerges as a valuable resource for advancing proficiency in a second language.
Mars is one of the best candidates among planetary bodies for supporting life. The presence of water in the form of ice and atmospheric vapour together with the availability of biogenic elements and energy are indicators of the possibility of hosting life as we know it. The occurrence of permanently frozen ground (permafrost) is a common phenomenon on Mars and it shows multiple morphological analogies with terrestrial permafrost. Despite the extreme inhospitable conditions, highly diverse microbial communities inhabit terrestrial permafrost in large numbers. Among these are methanogenic archaea, which are anaerobic chemotrophic microorganisms that meet many of the metabolic and physiological requirements for survival in the martian subsurface. Moreover, methanogens from Siberian permafrost are extremely resistant against different types of physiological stresses as well as simulated martian thermo-physical and subsurface conditions, making them promising model organisms for potential life on Mars. The main aims of this investigation are to assess the survival of methanogenic archaea under Mars conditions, focusing on methanogens from Siberian permafrost, and to characterize their biosignatures by means of Raman spectroscopy, a powerful technology for microbial identification that will be used in the ExoMars mission. For this purpose, methanogens from Siberian permafrost and non-permafrost habitats were subjected to simulated martian desiccation by exposure to an ultra-low subfreezing temperature (−80 °C) and to Mars regolith (S-MRS and P-MRS) and atmospheric analogues. They were also exposed to different concentrations of perchlorate, a strong oxidant found in martian soils. Moreover, the biosignatures of methanogens were characterized at the single-cell level using confocal Raman microspectroscopy (CRM). The results showed survival and methane production in all methanogenic strains under simulated martian desiccation. After exposure to subfreezing temperatures, Siberian permafrost strains had a faster metabolic recovery, whereas the membranes of non-permafrost methanogens remained intact to a greater extent. The strain Methanosarcina soligelidi SMA-21 from Siberian permafrost showed significantly higher methane production rates than all other strains after exposure to martian soil and atmospheric analogues, and all strains survived the presence of perchlorate at the concentration found on Mars. Furthermore, CRM analyses revealed remarkable differences in the overall chemical composition of permafrost and non-permafrost strains of methanogens, regardless of their phylogenetic relationship. The convergence of the chemical composition in non-sister permafrost strains may be the consequence of adaptations to the environment, and could explain their greater resistance compared to the non-permafrost strains. As part of this study, Raman spectroscopy was evaluated as an analytical technique for remote detection of methanogens embedded in a mineral matrix. This thesis contributes to the understanding of the survival limits of methanogenic archaea under simulated martian conditions to further assess the hypothetical existence of life similar to methanogens in the martian subsurface. In addition, the overall chemical composition of methanogens was characterized for the first time by means of confocal Raman microspectroscopy, with potential implications for astrobiological research.
The objective of this thesis is to provide new space compaction techniques for testing or concurrent checking of digital circuits. In particular, the work focuses on the design of space compactors that achieve a high compaction ratio and minimal loss of testability of the circuits. In the first part, compactors are designed for combinational circuits based on knowledge of the circuit structure. Several algorithms for analyzing circuit structures are introduced and discussed for the first time. The complexity of each design procedure is linear with respect to the number of gates of the circuit; thus, the procedures are applicable to large circuits. In the second part, the first structural approach to output compaction for sequential circuits is introduced; essentially, it extends the first part. The approach introduced in the third part assumes that the structure of the circuit and the underlying fault model are unknown. This space compaction approach requires only the knowledge of the fault-free test responses for a precomputed test set. The proposed compactor design guarantees zero-aliasing with respect to the precomputed test set.
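The zero-aliasing property can be illustrated with a generic sketch. The parity-tree compactor below is a standard textbook construction, not the thesis' design procedure; the sketch only shows what it means for a compactor to be zero-aliasing with respect to a precomputed test set.

```python
# A minimal sketch of the idea behind zero-aliasing space compaction:
# circuit outputs are merged into fewer signals (here by parity trees), and
# the compactor is zero-aliasing w.r.t. a test set if no modeled faulty
# response compacts to the fault-free signature.

def parity_compact(response, groups):
    """Compact an output response (tuple of bits) by XOR-ing output groups."""
    return tuple(sum(response[i] for i in g) % 2 for g in groups)

def zero_aliasing(fault_free, faulty_responses, groups):
    """Check that every faulty response differs from the fault-free one
    after compaction, for each pattern of the precomputed test set."""
    for pattern, good in fault_free.items():
        good_sig = parity_compact(good, groups)
        for bad in faulty_responses.get(pattern, []):
            if bad != good and parity_compact(bad, groups) == good_sig:
                return False   # aliasing: this fault would escape detection
    return True

# Example: four outputs compacted to two parity signals.
groups = [(0, 1), (2, 3)]
fault_free = {"p1": (0, 1, 1, 0)}
faulty = {"p1": [(1, 1, 1, 0), (0, 1, 0, 0)]}
print(zero_aliasing(fault_free, faulty, groups))  # True for this tiny case
```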
In today's world, many applications produce large amounts of data at an enormous rate. Analyzing such datasets for metadata is indispensable for effectively understanding, storing, querying, manipulating, and mining them. Metadata summarizes technical properties of a dataset, which range from basic statistics to complex structures describing data dependencies. One type of dependency is the inclusion dependency (IND), which expresses subset relationships between attributes of datasets. Inclusion dependencies are therefore important for many data management applications in terms of data integration, query optimization, schema redesign, or integrity checking. Thus, the discovery of inclusion dependencies in unknown or legacy datasets is at the core of any data profiling effort.
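As a minimal illustration of the concept: a unary IND A ⊆ B holds if every value of attribute A also occurs in attribute B. The naive quadratic check below is only for orientation; the algorithms developed in this thesis derive the same INDs far more efficiently and avoid such redundant set comparisons.

```python
# A minimal sketch of unary inclusion dependency (IND) detection: column A
# is included in column B if every value of A also occurs in B.
def unary_inds(columns: dict) -> list:
    """columns: {attribute_name: iterable of values} -> list of (A, B) INDs."""
    value_sets = {name: set(vals) for name, vals in columns.items()}
    return [(a, b)
            for a in value_sets for b in value_sets
            if a != b and value_sets[a] <= value_sets[b]]

print(unary_inds({"order.customer_id": [1, 2, 2],
                  "customer.id": [1, 2, 3]}))
# -> [('order.customer_id', 'customer.id')]
```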
For exhaustively detecting all INDs in large datasets, we developed S-indd++, a new algorithm that eliminates the shortcomings of existing IND-detection algorithms and significantly outperforms them. S-indd++ is based on a novel attribute clustering concept for efficiently deriving INDs. Inferring INDs from our attribute clustering eliminates all redundant operations incurred by other algorithms. S-indd++ is also based on a novel partitioning strategy that enables discarding a large number of candidates in early phases of the discovery process. Moreover, S-indd++ does not require a partition to fit into main memory--a highly appreciable property in the face of ever-growing datasets. S-indd++ reduces the runtime of the state-of-the-art approach by up to 50%.
None of the existing approaches for discovering INDs is appropriate for application on dynamic datasets; they cannot update the INDs after an update of the dataset without reprocessing it entirely. To this end, we developed the first approach for incrementally updating INDs in frequently changing datasets. We achieved this by reducing the problem of incrementally updating INDs to that of incrementally updating the attribute clustering from which all INDs are efficiently derivable. We realized the update of the clusters by designing new operations to be applied to the clusters after every data update. The incremental update of INDs reduces the time of a complete rediscovery by up to 99.999%.
All existing algorithms for discovering n-ary INDs are based on the principle of candidate generation--they generate candidates and test their validity in the given data instance. The major disadvantage of this technique is the exponentially growing number of database accesses in terms of SQL queries required for validation. We devised Mind2, the first approach for discovering n-ary INDs without candidate generation. Mind2 is based on a new mathematical framework developed in this thesis for computing the maximum INDs from which all other n-ary INDs are derivable. The experiments showed that Mind2 is significantly more scalable and effective than hypergraph-based algorithms.
This dissertation focuses on the understanding of the optical manipulation of microgels dispersed in an aqueous solution of an azobenzene-containing surfactant. The work consists of three parts, each a systematic investigation of (1) the photo-isomerization kinetics of the surfactant in complex with the microgel polymer matrix, (2) light-driven diffusioosmosis (LDDO) in microgels and (3) the photo-responsivity of microgels upon complexation with spiropyran.
The first part comprises three publications. The first one [P1] investigates the photo-isomerization kinetics, and the corresponding isomer composition at the photo-stationary state, of the photo-sensitive surfactant conjugated with charged polymers or micro-sized polymer networks, in order to understand the structural response of such photo-sensitive complexes. We report that the photo-isomerization of the azobenzene-containing cationic surfactant is slower in a polymer complex than when purely dissolved in aqueous solution. The surfactant aggregates near the polyelectrolyte chains at concentrations much lower than the bulk critical micelle concentration. This, along with the inhibition of the photo-isomerization kinetics due to steric hindrance within the densely packed aggregates, pushes the isomer ratio to a higher trans-isomer concentration for all irradiation wavelengths.
The second publication [P2] combines non-adiabatic dynamics simulations of the same surfactant molecules embedded in micelles with absorption spectroscopy measurements of micellar solutions to uncover the reasons for the slowdown of the photo-induced trans → cis azobenzene isomerization at concentrations above the critical micelle concentration (CMC). The simulations reveal a decrease of the isomerization quantum yields for molecules inside the micelles, and the measurements show a reduction of the extinction coefficients upon micellization. Together, these findings explain the deceleration of the trans → cis switching in micelles of azobenzene-containing surfactants.
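For orientation, the observed slowdown can be framed in the standard two-state photokinetic model. This generic form is a sketch, not the exact model of the publication:

```latex
% Two-state photokinetics for the trans fraction c(t) under irradiation:
\frac{\mathrm{d}c}{\mathrm{d}t} = -k_{t\to c}\,c + k_{c\to t}\,(1-c),
\qquad
c_{\mathrm{pss}} = \frac{k_{c\to t}}{k_{t\to c}+k_{c\to t}} .
% Each rate constant scales with the light intensity, the isomer's
% extinction coefficient and its isomerization quantum yield; lower
% quantum yields and extinction coefficients inside micelles therefore
% reduce k_{t->c}, slowing the trans -> cis switching and shifting the
% photostationary state (pss) toward the trans isomer.
```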
Finally, the third publication [P3] focuses on the kinetics of adsorption and desorption of the same surfactant within anionic microgels, in the dark and under continuous irradiation. Experimental data demonstrate that microgels can serve as a selective absorber of the trans isomers. The interaction of the isomers with the gel matrix induces a remotely controllable collapse or swelling at appropriate irradiation wavelengths. By measuring the kinetics of the microgel size response and knowing the exact isomer composition under light exposure, we calculate the adsorption rate of the trans-isomers.
The second part comprises two publications. The first publication [P4] reports on the phenomenon of light-driven diffusioosmotic (LDDO) long-range attractive and repulsive interactions between micro-sized objects, whose range extends to several times the size of the microparticles and which can be adjusted to point towards or away from the particle by varying irradiation parameters such as the intensity or wavelength of light. The phenomenon is fueled by the aforementioned photosensitive surfactant. The complex interplay of the dynamic exchange of isomers and the photo-isomerization rate yields relative concentration gradients of the isomers in the vicinity of a micro-sized object, inducing a local diffusioosmotic (DO) flow and thereby making the surface act as a micropump.
The second publication [P5] is exclusively aimed at the visualization and investigation of the DO flows generated by microgels, using small tracer particles. Similar to micro-sized objects, the flow is able to push adjacent tracers over distances several times larger than the microgel size. Here we report that the direction and the strength of the l-LDDO depend on the intensity, the irradiation wavelength and the amount of surfactant adsorbed by the microgel. For example, the flow pattern around a microgel is directed radially outward and can be maintained quasi-indefinitely under exposure at 455 nm, when the trans:cis ratio is 2:1, whereas irradiation at 365 nm generates a radially transient flow pattern, which inverts at lower intensities.
Lastly, the third part consists of one publication [P6] which, unlike the previous works, studies the kinetics of photo- and thermo-switching of a new surfactant, namely spiropyran, upon exposure to light of different wavelengths, and its interaction with p(NIPAM-AA) microgels. The surfactant, being an amphiphile, switches between its ring-closed spiropyran (SP) form and its ring-open merocyanine (MC) form, which results in a change in the hydrophilic-hydrophobic balance of the surfactant: the MC form is zwitterionic and, together with the charged head group, carries three charges on the molecule. The MC form of the surfactant is therefore more hydrophilic than the neutral SP state. Here, we investigate the initial shrinkage of the gel particles via charge compensation on first exposure to SP molecules, which results from the complex formation of the molecules with the gel matrix, triggering them to become photo-responsive. The size and VPTT of the microgels during irradiation are shown to be determined by a combination of the heating of the solution during light absorption by the surfactant (more pronounced in the case of UV irradiation) and the change in the hydrophobicity of the surfactant.
Regulation of potassium channels in plants : biophysical mechanisms and physiological implications
(2011)
The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor.
In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission. In gamma rays, these bubbles have been termed the Fermi bubbles, with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS survey and the Planck satellite and showed a clear spatial overlap. The X-ray counterparts of the Fermi bubbles were named eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large extended Galactic Halo Bubbles (GHB) and have stimulated interest in the hitherto little-explored Galactic halo.
In this thesis, a new toy model (the GHB model) for the magnetic field and the non-thermal electron distribution in the Galactic halo is proposed. The new toy model has been used to produce polarised synchrotron emission sky maps. A chi-square analysis was used to compare the synthetic sky maps with the Planck 30 GHz polarised sky maps. The obtained constraints on the field strength and azimuthal height were found to be in agreement with the S-PASS radio observations.
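The comparison statistic has the familiar pixel-wise form; the expression below is the generic version (the thesis' exact weighting and masking may differ):

```latex
% Pixel-wise chi-square between synthetic and observed polarised sky maps:
\chi^{2} = \sum_{i=1}^{N_{\mathrm{pix}}}
\frac{\left(S_{i}^{\mathrm{model}} - S_{i}^{\mathrm{Planck}}\right)^{2}}
     {\sigma_{i}^{2}} ,
% minimised over the GHB model parameters (e.g. field strength and
% azimuthal height of the bubbles).
```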
The upper, lower and best-fit values obtained from the above chi-squared analysis were used to generate three separate toy models. These three models were used to propagate ultra-high energy cosmic rays (UHECRs). This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival direction sky maps. The simulated arrival direction sky maps were found to be consistent with the hotspots of Centaurus A and NGC 253 seen in the observed arrival direction sky maps provided by the Pierre Auger Observatory (PAO).
The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by the PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The suppression of the dipole caused by the varying diffusion coefficient was quantified from the simulations. The results could also be compared with an analytical analogy from electrostatics. The simulations of the extragalactic dipole suppression are in agreement with similar studies carried out for galactic cosmic rays.
Advancements in computer vision techniques driven by machine learning have facilitated robust and efficient estimation of attributes such as depth, optical flow, albedo, and shading. To encapsulate all such underlying properties associated with images and videos, we evolve the concept of intrinsic images towards intrinsic attributes. Further, rapid hardware growth in the form of high-quality smartphone cameras, readily available depth sensors, mobile GPUs, or dedicated neural processing units have made image and video processing pervasive. In this thesis, we explore the synergies between the above two advancements and propose novel image and video processing techniques and systems based on them. To begin with, we investigate intrinsic image decomposition approaches and analyze how they can be implemented on mobile devices. We propose an approach that considers not only diffuse reflection but also specular reflection; it allows us to decompose an image into specularity, albedo, and shading on a resource constrained system (e.g., smartphones or tablets) using the depth data provided by the built-in depth sensors. In addition, we explore how on-device depth data can further be used to add an immersive dimension to 2D photos, e.g., showcasing parallax effects via 3D photography. In this regard, we develop a novel system for interactive 3D photo generation and stylization on mobile devices. Further, we investigate how adaptive manipulation of baseline-albedo (i.e., chromaticity) can be used for efficient visual enhancement under low-lighting conditions. The proposed technique allows for interactive editing of enhancement settings while achieving improved quality and performance. We analyze the inherent optical flow and temporal noise as intrinsic properties of a video. We further propose two new techniques for applying the above intrinsic attributes for the purpose of consistent video filtering. To this end, we investigate how to remove temporal inconsistencies perceived as flickering artifacts. One of the techniques does not require costly optical flow estimation, while both provide interactive consistency control. Using intrinsic attributes for image and video processing enables new solutions for mobile devices – a pervasive visual computing device – and will facilitate novel applications for Augmented Reality (AR), 3D photography, and video stylization. The proposed low-light enhancement techniques can also improve the accuracy of high-level computer vision tasks (e.g., face detection) under low-light conditions. Finally, our approach for consistent video filtering can extend a wide range of image-based processing for videos.
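The decomposition underlying the first contribution above can be written compactly. The notation here is ours, a generic dichromatic form rather than the thesis' exact formulation:

```latex
% Intrinsic decomposition of a pixel intensity I into diffuse and specular
% parts, with the diffuse part factored into albedo and shading:
I = A \odot S + C_{\mathrm{spec}} ,
% where A is the albedo, S the (depth-informed) shading, C_spec the
% specular component, and \odot a per-channel (Hadamard) product.
```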
Via their powerful radiation, stellar winds, and supernova explosions, massive stars (Mini ≳ 8 M☉) bear a tremendous impact on galactic evolution. It became clear in recent decades that the majority of massive stars reside in binary systems. This thesis sets as a goal to quantify the impact of binarity (i.e., the presence of a companion star) on massive stars. For this purpose, massive binary systems in the Local Group, including OB-type binaries, high mass X-ray binaries (HMXBs), and Wolf-Rayet (WR) binaries, were investigated by means of spectral, orbital, and evolutionary analyses.
The spectral analyses were performed with the non-local thermodynamic equilibrium (non-LTE) Potsdam Wolf-Rayet (PoWR) model atmosphere code. Thanks to critical updates in the calculation of the hydrostatic layers, the code became a state-of-the-art tool applicable to all types of hot massive stars (Chapter 2). The eclipsing OB-type triple system δ Ori served as an intriguing test case for the new version of the PoWR code and provided key insights regarding the formation of X-rays in massive stars (Chapter 3). We further analyzed two prototypical HMXBs, Vela X-1 and IGR J17544-2619, and obtained fundamental conclusions regarding the dichotomy of two basic classes of HMXBs (Chapter 4). We performed an exhaustive analysis of the binary R 145 in the Large Magellanic Cloud (LMC), which was claimed to host the most massive stars known. We were able to disentangle the spectrum of the system, and performed an orbital, polarimetric, and spectral analysis, as well as an analysis of the wind-wind collision region. The true masses of the binary components turned out to be significantly lower than suggested, impacting our understanding of the initial mass function and stellar evolution at low metallicity (Chapter 5). Finally, all known WR binaries in the Small Magellanic Cloud (SMC) were analyzed. Although it was theoretically predicted that virtually all WR stars in the SMC should form via mass transfer in binaries, we find that binarity was not important for the formation of the known WR stars in the SMC, implying a strong discrepancy between theory and observations (Chapter 6).
Vegetation change at high latitudes is one of the central issues today with respect to ongoing climate change and the feedbacks it may trigger. In high-latitude ecosystems, the expected changes include boreal treeline advance as well as changes in composition, phenology, plant physiology, biomass (phytomass) and productivity. However, the rate and the extent of these changes under a changing climate are still poorly understood, and projections are necessary for effective adaptation strategies and the timely minimisation of possible negative feedbacks.
The vegetation itself and the environmental conditions that play a great role in its development and distribution are diverse from the Subarctic to the Arctic. Among the least investigated areas is central Chukotka in north-eastern Siberia, Russia. Chukotka has mountainous terrain and a wide variety of vegetation types along the gradient from treeless tundra to northern taiga forests. The treeline there, in contrast to subarctic North America and north-western and central Siberia, is formed by a deciduous conifer, Larix cajanderi Mayr. The vegetation varies from prostrate lichen and Dryas octopetala L. tundra to open graminoid (hummock and non-hummock) tundra, tall Pinus pumila (Pall.) Regel shrublands, and sparse and dense larch forests.
Hence, this thesis presents investigations of recent compositional and above-ground biomass (AGB) changes, as well as potential future changes in AGB, in central Chukotka. The aim is to assess how tundra-taiga vegetation develops under changing climate conditions, particularly in Far East Russia, central Chukotka. Three main research questions were therefore considered:
1) What changes in vegetation composition have recently occurred in central Chukotka?
2) How have above-ground biomass (AGB) rates and distribution changed in central Chukotka?
3) What are the spatial dynamics and rates of tree AGB change in the upcoming millennia in the northern tundra-taiga of central Chukotka?
Remote sensing provides information on the spatial and temporal variability of vegetation. I used Landsat satellite data together with field data (foliage projective cover and AGB) from two expeditions to Chukotka in 2016 and 2018 to upscale vegetation types and AGB for the study area. More specifically, I used Landsat spectral indices (the Normalised Difference Vegetation Index (NDVI), Normalised Difference Water Index (NDWI) and Normalised Difference Snow Index (NDSI)) and constrained ordination (redundancy analysis, RDA) for a k-means-based land-cover classification, and generalised additive model (GAM)-based AGB maps for 2000/2001/2002 and 2016/2017. I also used TanDEM-X DEM data for a topographic correction of the Landsat satellite data and to derive slope, aspect, and Topographic Wetness Index (TWI) data for forecasting AGB.
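The three spectral indices have standard band-ratio forms, sketched below from generic reflectance bands. Note that NDWI exists in more than one formulation (the McFeeters green/NIR variant is shown); the thesis may use a different variant, and the band names here are placeholders.

```python
# A minimal sketch of the three Landsat spectral indices named above,
# computed from surface reflectance bands (generic band-name placeholders).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Vegetation greenness."""
    return (nir - red) / (nir + red)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Surface water / moisture (McFeeters formulation)."""
    return (green - nir) / (green + nir)

def ndsi(green: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Snow cover."""
    return (green - swir) / (green + swir)
```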
Firstly, in 2016, taxa-specific projective cover data were collected during a Russian-German expedition. I processed the field data and coupled them with Landsat spectral indices in the RDA model that was used for the k-means classification. I could establish four meaningful land-cover classes: (1) larch closed-canopy forest, (2) forest tundra and shrub tundra, (3) graminoid tundra and (4) prostrate herb tundra and barren areas, and accordingly I produced land-cover maps for 2000/2001/2002 and 2016/2017. Changes in land-cover classes between the beginning of the century (2000/2001/2002) and the present time (2016/2017) were estimated and interpreted as recent compositional changes in central Chukotka. The transition from graminoid tundra to forest tundra and shrub tundra was interpreted as shrubification and amounts to a 20% area increase in the tundra-taiga zone and a 40% area increase in the northern taiga. Major contributors to shrubification are alder, dwarf birch and some species of the heather family. Land-cover change from the forest tundra and shrub tundra class to the larch closed-canopy forest class is interpreted as tree infilling and is notable in the northern taiga. We find almost no land-cover changes in the presently treeless tundra.
Secondly, the state of and change in total AGB were investigated for the same areas. In addition to the total vegetation AGB, I provide estimates for the different taxa present at the field sites. As an outcome, AGB in the study region of central Chukotka ranged from 0 kg m-2 in barren areas to 16 kg m-2 in closed-canopy forests, with larch trees contributing the most. A comparison of changes in AGB within the investigated period from 2000 to 2016 shows that the greatest changes (up to 1.25 kg m-2 yr-1) occurred in the northern taiga and in areas where land cover changed to larch closed-canopy forest. Our estimates indicate a general increase in total AGB throughout the investigated tundra-taiga and northern taiga, whereas the tundra showed no evidence of change in AGB within the 15 years from 2002 to 2017.
In the third manuscript, potential future AGB changes were estimated based on simulations with the individual-based, spatially explicit vegetation model LAVESI under different climate scenarios, namely the Representative Concentration Pathways (RCPs) RCP 2.6, RCP 4.5 and RCP 8.5, with or without cooling after 2300 CE. LAVESI-based AGB was simulated from the current state until 3000 CE for the northern tundra-taiga study area for the larch species, because we expect the most notable changes to be associated with forest expansion in the treeline ecotone. The spatial distribution and current state of tree AGB were validated against AGB field data, AGB extracted from Landsat satellite data, and a high-spatial-resolution image with distinctive trees visible. The simulation results indicate plot-wise differences in tree AGB dynamics, depending on the distance to the current treeline. The simulated tree AGB dynamics are in concordance with fundamental ecological (migration and succession) processes: tree stand formation in the simulations starts with seed dispersion, followed by tree stand establishment, tree stand densification and episodic thinning. Our results suggest mostly densification of existing tree stands in the study region within the current century, and a lagged forest expansion (up to 39% of the total area under RCP 8.5) under all considered climate scenarios without cooling, in different local areas depending on the closeness to the current treeline. In scenarios with cooling air temperatures after 2300 CE, forests stopped expanding at 2300 CE (at up to 10% of the total area, RCP 8.5) and then gradually retreated to their pre-21st-century position. The average rates of tree AGB increase are strongest in the first 300 years of the simulation period. The rates depend on the RCP scenario and, as expected, are highest under RCP 8.5.
Overall, this interdisciplinary thesis demonstrates the successful integration of field data, satellite data and modelling for tracking recent and predicting future vegetation changes in mountainous subarctic regions. The obtained results are unique for the focus area in central Chukotka and, more generally, for mountainous high-latitude ecosystems.
Over the last decades, the world's population has been growing at an accelerating rate, resulting in increased urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, with an increasing tendency. The growth of cities results in a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn lead to high surface runoff during high-intensity storms and accelerate soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it contaminates aquatic bodies, reduces groundwater recharge and increases land degradation; it also damages urban infrastructure, including drainage systems, houses and roads. Understanding the problem of water erosion in urban settings is essential for the sustainable planning and management of cities prone to water erosion. However, despite the extensive scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for urban dryland environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. The study took a multidisciplinary approach to the problem of water erosion: an in-depth literature review of current research approaches and challenges of urban erosion, a field survey to quantify the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews with semi-structured questionnaires to analyse stakeholders' perceptions of urban erosion.
The review revealed that around 64% of the reviewed studies were conducted in the developed world, and very few were carried out in regions with extreme climates, including drylands. Furthermore, the applied methods for erosion quantification and monitoring do not account for typical urban features and are not specific to urban areas. The reviewed literature also lacked aspects addressing climate change and policies regarding erosion in cities. In the field study, the spatial extent and severity of erosion in the dryland city of Windhoek were quantified; the results show that nearly 56% of the city is affected by water erosion, showing signs of accelerated erosion in the form of rills and gullies, mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) revealed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varied widely across the city.
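For reference, the Universal Soil Loss Equation mentioned above combines five empirical factors multiplicatively; the sketch below shows the equation with illustrative placeholder values, not Windhoek data.

```python
# Sketch of the Universal Soil Loss Equation (USLE): A = R * K * LS * C * P.
# Factor values below are illustrative placeholders, not measured Windhoek data.

def usle_soil_loss(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1)."""
    return R * K * LS * C * P

A = usle_soil_loss(
    R=300.0,   # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
    K=0.03,    # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
    LS=1.2,    # slope length/steepness factor (dimensionless)
    C=0.25,    # cover-management factor (dimensionless)
    P=1.0,     # support-practice factor (dimensionless; 1 = no measures)
)
print(f"Estimated soil loss: {A:.2f} t/ha/yr")
```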
The study also analysed stakeholders' perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders using semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions of the causes and seriousness of erosion damages, and their attitudes towards responsibility for urban erosion. The results indicated that stakeholders are less aware of the erosion process itself than of erosion damages and the factors contributing to them. About 69% of the stakeholders considered erosion damages to range from moderate to very serious, although there were notable disparities between the private householder and public authority groups. The study further found that stakeholders have no clear understanding of their responsibilities for managing control measures and paying for damages. The private householders and local authority sectors each assigned the other the responsibility for paying for erosion damage and for putting up prevention measures. This reluctance to take responsibility could create a predicament for affected areas, specifically in informal settlements where land management is not carried out by the local authority and the land is not owned by its occupants.
The study concluded that, in order to combat urban erosion, it is crucial to understand the diverse dynamics aggravating erosion in urbanising areas at different scales. Accordingly, the study suggests an urgent need for the development of urban-specific approaches that aim at: (a) incorporating the diverse socio-economic and environmental aspects influencing erosion, (b) improving the natural cycles that govern water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) making use of high-resolution satellite images to improve the adopted methods for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts and German es-clefts, as well as with the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties that are cross-linguistically typical for clefts: an existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, the exhaustivity effects are not as strong as those in structures with the exclusive particle ‘only’ and require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of biclausality as well as of monoclausality. Besides, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft rather than the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for the bi- or monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new-information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant, but a topic expression: it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; and it plays an important role in the spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and presents evidence that the cleft pivot undergoes movement to a left-peripheral position. Èto is proposed to occupy TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to those for clefts in German, French and Akan. It is concluded that speakers use the different tools available in their languages to produce structures with similar interpretive properties.
Crustal deformation can result from volcanic and tectonic activity such as fault dislocation and magma intrusion, and it may precede and/or follow earthquakes and eruptions. To mitigate the associated hazards, continuous monitoring of crustal deformation has accordingly become an important task for geo-observatories and fast-response systems. Because crustal deformation fields behave highly non-linearly in time and space and are not always measurable using conventional geodetic methods (e.g., levelling), innovative techniques of monitoring and analysis are required. In this thesis I describe novel methods to improve precise and accurate mapping of the spatiotemporal surface deformation field using multiple acquisitions of satellite radar data. Furthermore, to better understand the sources of such spatiotemporal deformation fields, I present novel static and time-dependent model inversion approaches. Almost all interferograms include areas where the signal decorrelates and is distorted by atmospheric delay. In this thesis I detail new analysis methods that reduce the limitations of conventional InSAR by combining the benefits of advanced InSAR methods, such as persistent scatterer InSAR (PSI) and small baseline subsets (SBAS), with a wavelet-based data filtering scheme. This novel InSAR time series methodology is applied, for instance, to monitor the non-linear deformation processes at Hawaii Island. The radar phase change at Hawaii is found to be due to intrusions, eruptions, earthquakes and flank movement processes, superimposed by significant environmental artifacts (e.g., atmospheric delay). The deformation field obtained using the new InSAR analysis method is in good agreement with continuous GPS data. This provides an accurate spatiotemporal deformation field at Hawaii, which allows time-dependent source modeling. Conventional source modeling methods usually deal with static deformation fields, while retrieving the dynamics of the source requires more sophisticated time-dependent optimization approaches. I address this problem by combining Monte Carlo based optimization approaches with a Kalman filter, which yields model parameters of the deformation source that are consistent in time. I found numerous deformation sources at Hawaii Island that interact in space and time: volcano inflation is associated with changes in rifting behavior and is temporally linked to silent earthquakes. I applied these new methods to other tectonic and volcanic terrains, most of which reveal the importance of associated or coupled deformation sources. The findings are 1) the relation between deep and shallow hydrothermal and magmatic sources underneath the Campi Flegrei volcano, 2) gravity-driven deformation at Damavand volcano, 3) fault interaction associated with the 2010 Haiti earthquake, 4) independent block-wise flank motion at the Hilina Fault system, Kilauea, and 5) the interaction between a salt diapir and the 2005 Qeshm earthquake in southern Iran. This thesis, written in cumulative form and comprising 9 manuscripts published or under review in peer-reviewed journals, improves the techniques for InSAR time series analysis and source modeling and shows the mutual dependence between adjacent deformation sources. These findings allow a more realistic estimation of the hazard associated with complex volcanic and tectonic systems.
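To illustrate the idea of keeping source parameters consistent in time, here is a minimal sketch of a scalar Kalman filter, assuming a simple random-walk state model; the Monte Carlo optimization that would supply each epoch's parameter estimate is not shown, and all numbers are synthetic.

```python
# Sketch: a scalar Kalman filter tracking one source parameter (e.g. a
# volume change rate) over successive InSAR epochs. Assumes a random-walk
# state model; the Monte Carlo optimization providing each epoch's estimate
# (treated here as a noisy observation) is omitted.
import numpy as np

def kalman_track(observations, obs_var, process_var, x0=0.0, p0=1.0):
    x, p = x0, p0
    states = []
    for z in observations:
        p = p + process_var              # predict: random-walk state model
        k = p / (p + obs_var)            # Kalman gain
        x = x + k * (z - x)              # update with the new epoch's estimate
        p = (1.0 - k) * p
        states.append(x)
    return np.array(states)

rng = np.random.default_rng(1)
true_signal = np.cumsum(rng.normal(0, 0.1, 50))        # synthetic source evolution
noisy_estimates = true_signal + rng.normal(0, 0.5, 50)
smoothed = kalman_track(noisy_estimates, obs_var=0.25, process_var=0.01)
print(smoothed[-5:])
```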
A distance education or e-learning platform should be able to provide a virtual laboratory that lets participants gain hands-on exercise experience and practice their skills remotely. This is especially true for cybersecurity e-learning, where participants need to be able to attack or defend IT systems. For hands-on exercises, the virtual laboratory environment must be similar to the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory; a node is usually represented by a Virtual Machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning, because a VM needs a significant and fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability can be increased by using the available resources more efficiently and by providing more resources; increasing scalability means increasing the number of simultaneous users.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first is to replace virtual machines (VMs) with containers wherever possible. The second is to share the load with user-on-premise machines, where a user-on-premise machine represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources: using public cloud services, and gathering resources from the crowd, which we refer to as the Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud or an isolated group of VMs; in this thesis, we focus on VMs. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, and remove them from, the CRVL system automatically. A team placement algorithm is also investigated to optimize resource usage while giving the best service to the user. Because the CRVL system does not manage the contributor's host machine, it must ensure that the VM integration will not harm the contributor's system and that the training material is stored securely on the contributor's side, so that no one can take the training material away without permission. We investigate ways to handle these kinds of threats.
We propose three approaches to harden the VM against a malicious host admin. To verify the integrity of a VM before integration into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, a host admin could access the VM's data via Random Access Memory (RAM) by live memory dumping or by Spectre and Meltdown attacks. To make it harder for a malicious host admin to obtain sensitive data from RAM, we propose a method that continually moves sensitive data in RAM. We also propose a method to monitor the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. The use case in our approach is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning, which we use as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms: users' conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods that outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding-based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of the CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning-, and deep-learning-based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention. Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units (EDUs) of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and a Recursive Dirichlet Process.
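As a toy illustration of the lexicon-based paradigm compared in the third task, the sketch below scores a message by summing polarity values from a tiny, invented German lexicon and flipping the sign after a negation word; the actual systems of the thesis are far more elaborate.

```python
# Toy sketch of lexicon-based message-level polarity classification.
# The mini-lexicon and negation list are illustrative, not thesis resources.
LEXICON = {"gut": 1.0, "super": 1.5, "schlecht": -1.0, "furchtbar": -1.5}
NEGATORS = {"nicht", "kein", "keine"}

def message_polarity(tokens):
    score, negate = 0.0, False
    for tok in tokens:
        tok = tok.lower()
        if tok in NEGATORS:
            negate = True                 # flip polarity of the next lexicon hit
            continue
        if tok in LEXICON:
            score += -LEXICON[tok] if negate else LEXICON[tok]
            negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(message_polarity("Das Essen war nicht schlecht".split()))  # -> positive
```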
The ionosphere, which is strongly influenced by the Sun, is known to be also affected by meteorological processes. These processes, despite having their origin in the troposphere and stratosphere, interact with the upper atmosphere. Such an interaction between atmospheric layers is known as vertical coupling. During geomagnetically quiet times, when near-Earth space is not under the influence of solar storms, these processes become important drivers for ionospheric variability. Studying the link between these processes in the lower atmosphere and the ionospheric variability is important for our understanding of fundamental mechanisms in ionospheric and meteorological research.
A prominent example of vertical coupling between the stratosphere and the ionosphere are the so-called stratospheric sudden warming (SSW) events that occur usually during northern winters and result in an increase in the polar stratospheric temperature and a reversal of the circumpolar winds. While the phenomenon of SSW is confined to the northern polar stratosphere, its influence on the ionosphere can be observed even at equatorial latitudes. During SSW events, the connection between the polar stratosphere and the equatorial ionosphere is believed to be through the modulation of global atmospheric tides. These tides are fundamental for the ionospheric E-region wind dynamo that generates electric fields and currents in the ionosphere. Observations of ionospheric currents indicate a large enhancement of the semidiurnal lunar tide in response to SSW events. Thus, the semidiurnal lunar tide becomes an important driver of ionospheric variability during SSW events.
In this thesis, the ionospheric effect of SSW events is investigated in the equatorial region, where a narrow but intense E-region current known as the equatorial electrojet (EEJ) flows above the dip equator during the daytime. The day-to-day variability of the EEJ can be determined from magnetic field records at geomagnetic observatories close to the dip equator. Such magnetic data are available for several decades and allow the impact of SSW events on the EEJ to be investigated and, even more importantly, help in understanding the effects of SSW events on the equatorial ionosphere. An excellent long-term record of the geomagnetic field at the equator, from 1922 onwards, is available for the observatory Huancayo in Peru and is extensively utilized in this study.
The central subject of this thesis is the investigation of lunar tides in the EEJ during SSW events by analyzing long time series. This is done by estimating the lunar tidal amplitude in the EEJ from the magnetic records at Huancayo and by comparing it to measurements of the polar stratospheric wind and temperature, which identify the known SSW events from 1952 onwards. One goal of this thesis is to identify SSW events that predate 1952. To this end, superposed epoch analysis (SEA) is employed to establish a relationship between the lunar tidal power and the wind and temperature conditions in the lower atmosphere. A threshold value for the lunar tidal power is identified that is discriminative for the known SSW events. This threshold is then used to identify lunar tidal enhancements that are indicative of historic SSW events prior to 1952. It can be shown that the number of lunar tidal enhancements, and thus the occurrence frequency of historic SSW events between 1926 and 1952, is similar to the occurrence frequency of the known SSW events from 1952 onwards.
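A minimal sketch of the SEA step follows, assuming a regularly sampled tidal power series and known event epochs; windows around the events are stacked and averaged so that a systematic enhancement stands out against the background. All data are synthetic.

```python
# Sketch: superposed epoch analysis (SEA). Windows of a signal (e.g. lunar
# tidal power in the EEJ) centred on event epochs (e.g. SSW onsets) are
# stacked and averaged. Data below are synthetic.
import numpy as np

def superposed_epochs(signal, event_indices, half_window):
    windows = [
        signal[i - half_window : i + half_window + 1]
        for i in event_indices
        if half_window <= i < len(signal) - half_window
    ]
    return np.mean(windows, axis=0)      # composite response around the events

rng = np.random.default_rng(2)
signal = rng.normal(0, 1, 1000)
events = [100, 300, 550, 800]
for i in events:                          # implant an enhancement at each event
    signal[i - 5 : i + 6] += 3.0

composite = superposed_epochs(signal, events, half_window=20)
print(f"peak of composite: {composite.max():.2f} at lag {composite.argmax() - 20}")
```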
Next to the classic SSW definition, the concept of polar vortex weakening (PVW) is utilized in this thesis. PVW is defined for higher latitudes and altitudes (≈ 40 km) than the classic SSW definition (≈ 32 km). The correlation between the timing and magnitude of lunar tidal enhancements in the EEJ and the timing and magnitude of PVW is found to be better than for the classic SSW definition. This suggests that the lunar tidal enhancements in the EEJ are closely linked to the state of the middle atmosphere.
Geomagnetic observatories located at different longitudes along the dip equator allow investigation of the longitudinally dependent variability of the EEJ during SSW events. For this purpose, the lunar tidal enhancements in the EEJ are determined for the Peruvian and Indian sectors during the major SSW events of 2006 and 2009. The lunar tidal amplitude shows similar enhancements in the Peruvian sector during both SSW events, while the enhancements differ notably between the two events in the Indian sector.
In summary, this thesis shows that lunar tidal enhancements in the EEJ are indeed correlated with the occurrence of SSW events and should be considered a prominent driver of low-latitude ionospheric variability. Secondly, lunar tidal enhancements are found to be longitudinally variable. This suggests that regional effects, such as ionospheric conductivity and the geometry and strength of the geomagnetic field, also play an important role and have to be considered when investigating the mechanisms behind vertical coupling.
Natural extreme events are an integral part of nature on planet Earth. Usually these events are considered hazardous to humans only where humans are exposed; in that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential, in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. With an increasingly warm climate, an increase in extreme weather that potentially triggers natural hazards can also be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of the economic impacts. However, reliable methods for the estimation of economic impacts due to hydro-meteorological hazards are still missing. Therefore, this thesis deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts to companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. The models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts on individual objects. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models, which showed that direct economic impacts to individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; consequently, the predictions are unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered for the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
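A minimal sketch of the tree-based modelling idea follows, using a random forest; the predictor names and the synthetic data are hypothetical stand-ins for the survey variables. The spread of per-tree predictions also illustrates how a distribution, rather than a point estimate, can be reported.

```python
# Sketch: a tree-based flood damage model. Feature names are hypothetical
# stand-ins for the survey variables; all data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
water_level = rng.uniform(0, 3, n)            # m above building ground
building_area = rng.uniform(50, 5000, n)      # m^2
precaution = rng.integers(0, 4, n)            # ordinal precaution score
damage = (water_level * 20 + building_area * 0.01
          - precaution * 5 + rng.normal(0, 10, n)).clip(min=0)  # kEUR, synthetic

X = np.column_stack([water_level, building_area, precaution])
X_tr, X_te, y_tr, y_te = train_test_split(X, damage, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# The spread of per-tree predictions yields a distribution rather than a
# single point estimate, in the spirit of the recommendation above.
per_tree = np.stack([t.predict(X_te[:1]) for t in model.estimators_])
print(f"mean: {per_tree.mean():.1f}, 5-95% range: "
      f"{np.percentile(per_tree, 5):.1f}-{np.percentile(per_tree, 95):.1f}")
```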
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, e.g., the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly by means of land use data sets, which often show inconsistencies that introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimations at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, which allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed huge differences in the number of identified objects. Thus, more effort is needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
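A minimal sketch of the overlay step, assuming building footprints and a water mask as vector geometries; the toy coordinates are invented, and a spatial join marks buildings intersecting the mask as affected.

```python
# Sketch: identifying flood-affected buildings by overlaying footprints
# with a water mask. Geometries are toy examples.
import geopandas as gpd
from shapely.geometry import Polygon, box

buildings = gpd.GeoDataFrame(
    {"id": [1, 2, 3]},
    geometry=[box(0, 0, 1, 1), box(5, 5, 6, 6), box(2, 0, 3, 1)],
)
water_mask = gpd.GeoDataFrame(
    geometry=[Polygon([(-1, -1), (4, -1), (4, 2), (-1, 2)])]
)

# Buildings intersecting the water mask count as affected; as noted above,
# different masks can yield very different counts.
affected = gpd.sjoin(buildings, water_mask, predicate="intersects")
print(affected["id"].tolist())   # -> [1, 3]
```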
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of uncertainties associated with the direct economic impacts within the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on the false certainty that might be attributed to point estimates. Therefore, the developed approach facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus enables the representation of the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods giving results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented within the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for an increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.
Together with the gradual change in mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such mostly short-term extraordinary climate situations on terrestrial ecosystems are of central interest in recent climate change research, because it cannot be assumed per se that known dependencies between climate variables and ecosystems are linearly scalable. So far, however, a method to quantify such impacts in terms of simultaneities of event time series has been in high demand.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced, a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented, analyzing climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (subdaily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
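The core ECA statistic can be sketched as follows, assuming two event series on a common discrete time axis: the precursor coincidence rate is the fraction of A-events with at least one B-event within a tolerance window before them. Significance testing and the other rate variants belong to the published R implementation and are omitted here.

```python
# Sketch: Event Coincidence Analysis (ECA) coincidence rate. Given two
# event series on a common time axis, count the fraction of A-events that
# have at least one B-event within a tolerance window delta_t before them.
import numpy as np

def precursor_coincidence_rate(events_a, events_b, delta_t):
    events_a, events_b = np.asarray(events_a), np.asarray(events_b)
    hits = sum(
        np.any((events_b <= t) & (events_b >= t - delta_t)) for t in events_a
    )
    return hits / len(events_a)

# Synthetic example: B-events tend to precede A-events by ~2 time steps.
a = np.array([10, 25, 40, 60])
b = np.array([8, 23, 50, 58])
print(precursor_coincidence_rate(a, b, delta_t=3))  # -> 0.75
```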
The applications presented in this thesis refine already known relationships established with classical methods and also deliver substantial new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to be valid for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown not to hold for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis enables other researchers to investigate similar research questions further using Event Coincidence Analysis.
Among the multitude of geomorphological processes, aeolian shaping processes are of special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the Earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we unravel a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the very lower atmosphere (up to 4 m height), as well as measurements regarding aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of small-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles, and (c) the impact of wind gusts on particle emissions.
Among other considerations presented in this thesis, it could be shown in particular that, even though the small-scale topography does have a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. This measure shows significantly increased particle concentrations during points in time defined as gust events.
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) due to their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their function belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions, and optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the plant redoxome. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark, achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, confirmation of candidates at the single-protein level was carried out by a differential labelling approach: thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins, among them fructose 1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results on redox-regulated proteins in Arabidopsis leaves, roots and mitochondria, and specifically on S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as proof of concept that the enrichment experiments are effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, similarly to csp41b, yet in both mutants only several chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
Dryland vulnerability : typical patterns and dynamics in support of vulnerability reduction efforts
(2011)
The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. Due to their worldwide coverage, the global results allow the evaluation of a specific system’s vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. 
The model resulted in a cyclic trajectory encompassing four states with differing degrees of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as “Avança Brasil”. Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional art world and the originally anti-capitalist mindset of the movement make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework that finds a balance between culture and business while ensuring that the cultural mission remains the top priority.
Isolation and characterisation of ammonium transporters from the model legume : Lotus japonicus
(2001)
The increasing demand for energy in the current technological era and the recent political decisions to phase out nuclear energy have directed attention to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. Such properties have made the single-junction efficiency of halide perovskites comparable with silicon solar cells within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
Chalcogenide perovskites, on the other hand, are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique of this study proceeds in two stages: in the first stage, the binary compounds are deposited via a solution-processing method; in the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted in either binaries without perovskite formation or distorted perovskite structures; some of these materials are reported in the literature to be more stable in the needle-like non-perovskite configuration.
Lastly, the produced materials are evaluated in two categories: the first concerns the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation and impurities. In the second category, optoelectronic properties such as band gap, conductivity and surface photovoltage are measured and compared to those of an ideal absorber layer.
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival and to adapt production as necessary for different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments; malfunction in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the following work uses a common mathematical model framework based on Markov chain models to test hypotheses, predict system dynamics or elucidate network topology. Our work lies at the intersection between mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
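As one concrete instance of such a Markov chain framework, the sketch below simulates the classic two-state ("telegraph") model of transcription with a Gillespie-style update; the rate constants are illustrative, not fitted values from the dissertation.

```python
# Sketch: Gillespie simulation of the two-state ("telegraph") Markov model
# of transcription: a promoter switches ON/OFF; mRNA is produced when ON
# and degrades at a constant per-molecule rate. Rates are illustrative.
import numpy as np

rng = np.random.default_rng(4)
k_on, k_off, k_tx, k_deg = 0.1, 0.05, 2.0, 0.2   # per unit time
t, t_end, on, mrna = 0.0, 500.0, False, 0

while t < t_end:
    rates = np.array([
        k_off if on else k_on,     # promoter switch
        k_tx if on else 0.0,       # transcription (only when ON)
        k_deg * mrna,              # degradation of each mRNA
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # waiting time to next reaction
    event = rng.choice(3, p=rates / total)     # which reaction fires
    if event == 0:
        on = not on
    elif event == 1:
        mrna += 1
    else:
        mrna -= 1

print(f"mRNA copies at t={t_end}: {mrna}")
```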
The goal of this work was to study the binding of ions to polymers and lipid bilayer membranes in aqueous solutions. In the first part of this work, the influence of various inorganic salts and polyelectrolytes on the structure of water was studied using Isothermal Titration Calorimetry (ITC). The heat of dilution of the salts was used as a measure of the water-structure making and breaking of the ions, and the heats of dilution could be ordered according to the Hofmeister series. Following this, the binding of Ca2+ to poly(sodium acrylate) (NaPAA) was studied. ITC and a Ca2+ ion-selective electrode were used to measure the reaction enthalpy and the binding isotherm. Binding of Ca2+ ions to PAA was found to be highly endothermic and therefore solely driven by entropy. We then compared the binding of ions to the one-dimensional PAA polymer chain with the binding to lipid vesicles carrying the same functional groups. As for the polymer, Ca2+ binding was found to be endothermic, although binding of calcium to the lipid bilayer was weaker than to the polymer. In the context of these experiments, it was shown that Ca2+ binds not only to charged but also to zwitterionic lipid vesicles. Finally, we studied the interaction of two salts, KCl and NaCl, with a neutral polymer gel, PNIPAAM, and with the ionic polymer PAA. Combining calorimetry and a potassium-selective electrode, we observed that the ions interact with both polymers, whether charged or not.
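A minimal sketch of how a binding isotherm such as the Ca2+/PAA one can be quantified, assuming a simple one-site (Langmuir-type) model and synthetic data points; the thesis data and model details are not reproduced.

```python
# Sketch: fitting a one-site (Langmuir-type) binding isotherm to
# free-Ca2+ data such as those from an ion-selective electrode.
# Data points below are synthetic, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_free, K, theta_max):
    """Fraction of occupied binding sites vs. free ion concentration."""
    return theta_max * K * c_free / (1.0 + K * c_free)

c = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])            # mM free Ca2+
theta = np.array([0.08, 0.15, 0.26, 0.45, 0.60, 0.72, 0.82])  # synthetic

(K, theta_max), _ = curve_fit(langmuir, c, theta, p0=(1.0, 1.0))
print(f"K = {K:.2f} mM^-1, theta_max = {theta_max:.2f}")
```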
Numbers are omnipresent in daily life. They vary in display format and in their meaning, so it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information; they are, however, probably the one where the numerical content becomes most tangible. By investigating the role of fingers in adults’ mental number representations, the four presented studies also tested the Embodied Cognition hypothesis, which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., number concepts) stays an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, the consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Studies 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements as well as passive perception of tactile stimulation of specific fingers co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in the mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. Especially a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures that are habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research regarding mental number representations and for pedagogic practices regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
Functional analysis of selected DOF transcription factors in the model plant Arabidopsis thaliana
(2007)
Transcription factors (TFs) are global regulators of gene expression, playing essential roles in almost all biological processes, and are therefore of great scientific and biotechnological interest. This project focused on the functional characterisation of three DNA-binding-with-one-zinc-finger (DOF) TFs from the genetic model plant Arabidopsis thaliana, namely OBP1, OBP2 and AtDOF4;2. These genes were selected due to the severe growth phenotypes conferred by their constitutive over-expression. To identify the biological processes regulated by OBP1, OBP2 and AtDOF4;2 in detail, molecular and physiological characterization of transgenic plants with modified levels of OBP1, OBP2 and AtDOF4;2 expression (constitutive and inducible over-expression, RNAi) was performed using both targeted and profiling technologies. Additionally, the expression patterns of the studied TFs and their target genes were analyzed using promoter-GUS lines and publicly available microarray data. Finally, selected target genes were confirmed by chromatin immunoprecipitation and electrophoretic mobility shift assays. This combinatorial approach revealed distinct biological functions of OBP1, OBP2 and AtDOF4;2. Specifically, OBP2 controls indole glucosinolate / auxin homeostasis by directly regulating the enzyme at the branch point of these pathways, CYP83B1 (Skirycz et al., 2006). Glucosinolates are secondary compounds important for defence against herbivores and pathogens in the plant order Capparales (e.g. Arabidopsis, canola and broccoli), whilst auxin is an essential plant hormone; hence OBP2 is important both for the response to biotic stress and for plant growth. Similarly to OBP2, AtDOF4;2 is involved in the regulation of plant secondary metabolism and affects the production of various phenylpropanoid compounds in a tissue- and environment-specific manner. It was found that under certain stress conditions AtDOF4;2 negatively regulates flavonoid biosynthetic genes, whilst in certain tissues it activates hydroxycinnamic acid production. It was hypothesized that this dual function is most likely related to specific interactions with other proteins, perhaps other TFs (Skirycz et al., 2007). Finally, OBP1 regulates both cell proliferation and cell expansion. It was shown that OBP1 controls cell cycle activity by directly targeting the expression of core cell cycle genes (CYCD3;3 and KRP7), other TFs and components of the replication machinery. Evidence for OBP1-mediated activation of the cell cycle during embryogenesis and germination is presented. Additionally, and independently of its effects on cell proliferation, OBP1 negatively affects cell expansion via reduced expression of cell wall loosening enzymes. In summary, this work provides important input into our knowledge of DOF TF function. Future work will concentrate on establishing the exact regulatory networks of OBP1, OBP2 and AtDOF4;2 and their possible biotechnological applications.
Background: A growing body of research has documented negative effects of sexualization in the media on individuals’ self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and young female samples. Furthermore, longitudinal studies are scarce, and research on mediators of the relationship is missing. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim of this work was to examine the presumed processes within understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women’s self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional link between sexualized television use and self-objectification and the consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in these relations. The third study looked at the cross-sectional link between male and female sexualized images on Instagram and their associations with self-objectification in a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents’ self-objectification. Path models were conceptualized for the second, third and fourth studies, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Studies 2, 3, 4), muscular-ideal internalization (Studies 3, 4), and valuing appearance (all studies).
Results: The results of the experimental study revealed no effect of sexualized video game characters on women’s self-objectification and body satisfaction. No moderating effect of personalization emerged. Sexualized television use was associated with the consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: for boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification across a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results indicate that several mediating variables may be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
New ABC triblock copolymers were synthesized by controlled free-radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT). Compared to amphiphilic diblock copolymers, the prepared materials formed more complex self-assembled structures in water owing to their three different functional blocks. Two strategies were followed. The first approach relied on double-thermoresponsive triblock copolymers exhibiting Lower Critical Solution Temperature (LCST) behavior in water: while the first phase transition triggers the self-assembly of the triblock copolymers upon heating, the second one allows the self-assembled state to be modified. The stepwise self-assembly was followed by turbidimetry, dynamic light scattering (DLS) and 1H NMR spectroscopy, as these methods reflect the behavior on the macroscopic, mesoscopic and molecular scale. Although the first phase transition could easily be monitored due to the onset of self-assembly, it was difficult to identify the second phase transition unambiguously, as the changes are either marginal or coincide with the slow response of the self-assembled system to relatively fast changes of temperature. The second approach towards advanced polymeric micelles exploited the thermodynamic incompatibility of “triphilic” block copolymers – namely polymers bearing a hydrophilic, a lipophilic and a fluorophilic block – as the driving force for self-assembly in water. The self-assembly of these polymers in water produced polymeric micelles comprising a hydrophilic corona and a microphase-separated micellar core with lipophilic and fluorophilic domains – so-called multi-compartment micelles. The association of the triblock copolymers in water was studied by 1H NMR spectroscopy, DLS and cryogenic transmission electron microscopy (cryo-TEM). Direct imaging of the polymeric micelles in solution by cryo-TEM revealed different morphologies depending on the block sequence and the preparation conditions. While polymers with the sequence hydrophilic-lipophilic-fluorophilic formed core-shell-corona micelles with the fluorinated compartment as the core, block copolymers with the hydrophilic block in the middle formed spherical micelles in which single or multiple fluorinated domains “float” as disks on the surface of the lipophilic core. Increasing the temperature during micelle preparation, or annealing the aqueous solutions at higher temperatures after preparation, occasionally induced a change of the micelle morphology or the particle size distribution. RAFT polymerization not only gave access to the desired polymer architectures but also provided a valuable tool for molar mass characterization: the thiocarbonylthio moieties present at the chain ends of polymers prepared by RAFT absorb light in the UV and visible range and were employed for end-group analysis by UV-vis spectroscopy. A variety of dithiobenzoate and trithiocarbonate RAFT agents with differently substituted initiating R groups were synthesized. The investigation of their absorption characteristics showed that the intensity of the absorptions depends sensitively on the substitution pattern next to the thiocarbonylthio moiety and on the solvent polarity. Based on these results, the conditions for a reliable and convenient end-group analysis by UV-vis spectroscopy were optimized.
As end-group analysis by UV-vis spectroscopy is insensitive to the potential association of polymers in solution, it was advantageously exploited for the molar mass characterization of the prepared amphiphilic block copolymers.
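The quantitative basis of such an end-group method is the Beer-Lambert law; a minimal sketch, assuming exactly one absorbing thiocarbonylthio end group per chain and a known extinction coefficient:

```latex
% Beer-Lambert end-group analysis: A = absorbance, \varepsilon = molar
% extinction coefficient of the thiocarbonylthio chromophore, l = path
% length, c_mass = mass concentration of the polymer solution.
% Assumes exactly one end group per chain.
c_{\mathrm{end\,group}} \;=\; \frac{A}{\varepsilon\, l},
\qquad
M_n \;\approx\; \frac{c_{\mathrm{mass}}}{c_{\mathrm{end\,group}}}
          \;=\; \frac{c_{\mathrm{mass}}\, \varepsilon\, l}{A}.
```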
Introduction: Intestinal bacteria influence gut morphology by affecting epithelial cell proliferation, the development of the lamina propria, villus length and crypt depth [1]. Gut microbiota-derived factors have also been proposed to play a role in the development of the 30 % longer intestine that is characteristic of PRM/Alf mice compared to other mouse strains [2, 3]. Polyamines and SCFAs produced by gut bacteria are important growth factors, which possibly influence mucosal morphology, in particular villus length and crypt depth, and may play a role in gut lengthening in the PRM/Alf mouse; however, experimental evidence is lacking. Aim: The objective of this work was to clarify the role of bacterially produced polyamines in crypt depth, mucosa thickness and epithelial cell proliferation. For this purpose, C3H mice associated with a simplified human microbiota (SIHUMI) were compared with mice colonized with SIHUMI complemented by the polyamine-producing Fusobacterium varium (SIHUMI + Fv). In addition, the microbial impact on gut lengthening in PRM/Alf mice was characterized, and the contribution of SCFAs and polyamines to this phenotype was examined. Results: SIHUMI + Fv mice exhibited an up to 1.7-fold higher intestinal polyamine concentration compared with SIHUMI mice, mainly due to increased putrescine concentrations. However, no differences were observed in crypt depth, mucosa thickness or epithelial proliferation. In PRM/Alf mice, the intestine of conventional animals was 8.5 % longer than that of germfree animals. In contrast, the intestinal lengths of C3H mice were similar, independent of the colonization status. The comparison of PRM/Alf and C3H mice, both associated with SIHUMI + Fv, demonstrated that PRM/Alf mice had a 35.9 % longer intestine than C3H mice. However, the intestinal SCFA and polyamine concentrations of PRM/Alf mice were similar or even lower, except for N-acetylcadaverine, which was 3.1-fold higher in PRM/Alf mice. When germfree PRM/Alf mice were associated with a complex PRM/Alf microbiota, the intestine was one quarter longer than in PRM/Alf mice colonized with a C3H microbiota. This gut elongation correlated with the levels of the polyamine N-acetylspermine. Conclusion: The intestinal microbiota is able to influence intestinal length, depending on the microbial composition and on the mouse genotype. Although SCFAs do not contribute to gut elongation, an influence of the polyamines N-acetylcadaverine and N-acetylspermine is conceivable. In addition, the study clearly demonstrated that bacterial putrescine does not influence gut morphology in C3H mice.
The near-Earth space environment is a highly complex system composed of several regions and particle populations hazardous to satellite operations. The trapped particles in the radiation belts and ring current can cause significant damage to satellites during space weather events due to deep dielectric and surface charging. Closer to Earth lies another important region, the ionosphere, which delays the propagation of radio signals and can adversely affect navigation and positioning. In response to fluctuations in solar and geomagnetic activity, both the inner-magnetospheric and ionospheric populations can undergo drastic and sudden changes within minutes to hours, which creates a challenge for predicting their behavior. Given the increasing reliance of our society on satellite technology, improving our understanding and modeling of these populations is a matter of paramount importance.
In recent years, numerous spacecraft have been launched to study the dynamics of particle populations in the near-Earth space, transforming it into a data-rich environment. To extract valuable insights from the abundance of available observations, it is crucial to employ advanced modeling techniques, and machine learning methods are among the most powerful approaches available. This dissertation employs long-term satellite observations to analyze the processes that drive particle dynamics, and builds interdisciplinary links between space physics and machine learning by developing new state-of-the-art models of the inner-magnetospheric and ionospheric particle dynamics.
The first aim of this thesis is to investigate the behavior of electrons in Earth's radiation belts and ring current. Using ~18 years of electron flux observations from the Global Positioning System (GPS), we developed the first machine learning model of hundreds-of-keV electron flux at Medium Earth Orbit (MEO) that is driven solely by solar wind and geomagnetic indices and does not require auxiliary flux measurements as inputs. We then proceeded to analyze the directional distributions of electrons and, for the first time, used Fourier sine series to fit electron pitch angle distributions (PADs) in Earth's inner magnetosphere. We performed a superposed epoch analysis of 129 geomagnetic storms during the Van Allen Probes era and demonstrated that electron PADs have a strong energy-dependent response to geomagnetic activity. Additionally, we showed that the solar wind dynamic pressure can be used as a good predictor of the PAD dynamics. Using the observed dependencies, we created the first PAD model with a continuous dependence on L, magnetic local time (MLT) and activity, and developed two techniques to reconstruct near-equatorial electron flux observations from low-pitch-angle data using this model.
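As an illustration of the fitting approach, here is a minimal sketch of a Fourier-sine-series fit to a pitch angle distribution; the synthetic data, truncation order, and function names are assumptions for demonstration, not the thesis' implementation.

```python
import numpy as np

def fit_pad_sine_series(alpha, flux, n_terms=3):
    """Least-squares fit of a pitch angle distribution with a truncated
    Fourier sine series: flux(alpha) ~ sum_n A_n * sin(n * alpha)."""
    # Design matrix: one column per sine harmonic
    basis = np.column_stack([np.sin(n * alpha) for n in range(1, n_terms + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return coeffs

# Illustrative use with synthetic data (not mission data):
alpha = np.linspace(0.05, np.pi - 0.05, 30)            # pitch angles in radians
true_pad = 1.0 * np.sin(alpha) + 0.4 * np.sin(3 * alpha)  # pancake-like PAD
rng = np.random.default_rng(0)
flux = true_pad + rng.normal(0.0, 0.02, alpha.size)
print(fit_pad_sine_series(alpha, flux))                # ~ [1.0, 0.0, 0.4]
```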
The second objective of this thesis is to develop a novel model of the topside ionosphere. To achieve this goal, we collected observations from five of the most widely used ionospheric missions and intercalibrated these data sets. This allowed us to use these data jointly for model development, validation, and comparison with other existing empirical models. We demonstrated, for the first time, that ion density observations by Swarm Langmuir Probes exhibit overestimation (up to ~40-50%) at low and mid-latitudes on the night side, and suggested that the influence of light ions could be a potential cause of this overestimation. To develop the topside model, we used 19 years of radio occultation (RO) electron density profiles, which were fitted with a Chapman function with a linear dependence of scale height on altitude. This approximation yields 4 parameters, namely the peak density and height of the F2-layer and the slope and intercept of the linear scale height trend, which were modeled using feedforward neural networks (NNs). The model was extensively validated against both RO and in-situ observations and was found to outperform the International Reference Ionosphere (IRI) model by up to an order of magnitude. Our analysis showed that the most substantial deviations of the IRI model from the data occur at altitudes of 100-200 km above the F2-layer peak. The developed NN-based ionospheric model reproduces the effects of various physical mechanisms observed in the topside ionosphere and provides highly accurate electron density predictions.
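A minimal sketch of the profile shape described above, assuming an alpha-Chapman layer and a scale height linear in altitude; the parameter values and the exact parameterization of H(h) are illustrative assumptions, not the published model's fitted values.

```python
import numpy as np

def chapman_linear_h(h, nmf2, hmf2, slope, intercept):
    """Topside electron density from a Chapman layer whose scale height
    varies linearly with altitude: H(h) = slope * h + intercept.
    The four parameters correspond to the peak density (nmf2), peak
    height (hmf2), and the slope and intercept of the scale height."""
    H = slope * h + intercept
    z = (h - hmf2) / H
    return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Illustrative parameter values (not fitted to mission data):
h = np.linspace(300.0, 800.0, 6)  # altitude in km
print(chapman_linear_h(h, nmf2=1e12, hmf2=300.0, slope=0.1, intercept=30.0))
```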
This dissertation provides an extensive study of geospace dynamics, and the main results of this work contribute to the improvement of models of plasma populations in the near-Earth space environment.
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a corresponding model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge in an ever-changing business environment: once a process is changed, all its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction has emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction is an operation preserving essential process properties and leaving out insignificant details in order to retain the information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and on model transformations supporting these use cases. This thesis systematically approaches the problem of business process model abstraction and shapes the outcome into a framework. We investigate the current industry demand for abstraction and summarize it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case in which the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of the process model structure. Further, abstraction methods considering the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge; the designed abstraction methods are validated against sets of industrial process models, and implementation aspects are discussed. Second, we develop a novel model transformation which, combined with the discovery of related activities, allows flexible non-hierarchical abstraction. In this way, the thesis contributes novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
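To make the notion of abstraction concrete, here is a toy sketch of one elementary structural transformation: collapsing a sequential fragment into a single coarse-grained activity. The function and activity names are hypothetical, and the thesis' transformations operate on full process graphs rather than plain sequences.

```python
def abstract_sequence(process, fragment, coarse_label):
    """Replace a contiguous sequential fragment of a process with one
    coarse-grained activity, preserving the overall ordering constraints.
    process: list of activity labels in execution order;
    fragment: contiguous sub-list to aggregate;
    coarse_label: label of the resulting coarse-grained activity."""
    i = process.index(fragment[0])
    assert process[i:i + len(fragment)] == fragment, "fragment must be contiguous"
    return process[:i] + [coarse_label] + process[i + len(fragment):]

# Hypothetical order-handling process:
order = ["receive order", "check stock", "reserve items", "pack", "ship"]
print(abstract_sequence(order,
                        ["check stock", "reserve items", "pack"],
                        "prepare shipment"))
# ['receive order', 'prepare shipment', 'ship']
```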
More than a billion people rely on water from rivers sourced in High Mountain Asia (HMA), a significant portion of which is derived from snow and glacier melt. Rural communities are heavily dependent on the consistency of runoff, and are highly vulnerable to shifts in their local environment brought on by climate change. Despite this dependence, the impacts of climate change in HMA remain poorly constrained due to poor process understanding, complex terrain, and insufficiently dense in-situ measurements.
HMA's glaciers contain more frozen water than any region outside of the poles. Their extensive retreat is a highly visible and much studied marker of regional and global climate change. However, in many catchments, snow and snowmelt represent a much larger fraction of the yearly water budget than glacial meltwaters. Despite their importance, climate-related changes in HMA's snow resources have not been well studied.
Changes in the volume and distribution of snowpack have complex and extensive impacts on both local and global climates. Eurasian snow cover has been shown to impact the strength and direction of the Indian Summer Monsoon -- which is responsible for much of the precipitation over the Indian Subcontinent -- by modulating earth-surface heating. Shifts in the timing of snowmelt have been shown to limit the productivity of major rangelands, reduce streamflow, modify sediment transport, and impact the spread of vector-borne diseases. However, a large-scale regional study of climate impacts on snow resources had yet to be undertaken.
Passive Microwave (PM) remote sensing is a well-established empirical method of studying snow resources over large areas. Since 1987, there have been consistent daily global PM measurements which can be used to derive an estimate of snow depth, and hence snow-water equivalent (SWE) -- the amount of water stored in snowpack. The SWE estimation algorithms were originally developed for flat and even terrain -- such as the Russian and Canadian Arctic -- and have rarely been used in complex terrain such as HMA.
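As an example of this family of empirical PM retrievals, below is a minimal sketch of a classic Chang-type snow-depth regression converted to SWE; the coefficient, channel pair, and bulk snow density are illustrative of this class of algorithms, not necessarily those used in the dissertation.

```python
def swe_chang_type(tb19h, tb37h, snow_density=0.24):
    """Chang-type passive-microwave retrieval: snow depth (cm) from the
    19/37 GHz horizontally polarized brightness-temperature difference,
    converted to snow-water equivalent (mm) via an assumed bulk density
    (g/cm^3, i.e. relative to water)."""
    depth_cm = max(1.59 * (tb19h - tb37h), 0.0)  # negative difference -> no snow
    return depth_cm * 10.0 * snow_density        # cm depth -> mm SWE

# Hypothetical brightness temperatures: ~24 cm depth, ~57 mm SWE
print(swe_chang_type(tb19h=245.0, tb37h=230.0))
```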
This dissertation first examines factors present in HMA that could impact the reliability of SWE estimates. Forest cover, absolute snow depth, long-term average wind speeds, and hillslope angle were found to be the strongest controls on SWE measurement reliability. While forest density and snow depth are factors accounted for in modern SWE retrieval algorithms, wind speed and hillslope angle are not. Despite uncertainty in absolute SWE measurements and differences in the magnitude of SWE retrievals between sensors, single-instrument SWE time series were found to be internally consistent and suitable for trend analysis.
Building on this finding, this dissertation tracks changes in SWE across HMA using a statistical decomposition technique. An aggregate decrease in SWE was found (10.6 mm/yr), despite large spatial and seasonal heterogeneities. Winter SWE increased in almost half of HMA, despite general negative trends throughout the rest of the year. The elevation distribution of these negative trends indicates that while changes in SWE have likely impacted glaciers in the region, climate change impacts on these two pieces of the cryosphere are somewhat distinct.
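The dissertation's decomposition technique is not reproduced here; purely as a stand-in illustration, the sketch below recovers a linear SWE trend in the presence of an annual cycle via least squares, with synthetic data and an imposed trend as assumptions.

```python
import numpy as np

def linear_swe_trend(t_years, swe):
    """Fit SWE(t) = intercept + trend * t + annual harmonic by least
    squares and return the linear trend (a generic sketch; the
    dissertation's decomposition technique may differ)."""
    w = 2.0 * np.pi  # one cycle per year
    X = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(w * t_years), np.cos(w * t_years)])
    coef, *_ = np.linalg.lstsq(X, swe, rcond=None)
    return coef[1]  # mm/yr

# Synthetic monthly series with an imposed -10 mm/yr trend:
t = np.arange(0.0, 29.0, 1.0 / 12.0)
rng = np.random.default_rng(1)
swe = 120.0 - 10.0 * t + 40.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 5.0, t.size)
print(linear_swe_trend(t, swe))  # ~ -10
```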
Following the discussion of relative changes in SWE, this dissertation explores changes in the timing of the snowmelt season in HMA using a newly developed algorithm. The algorithm is shown to accurately track the onset and end of the snowmelt season (70% within 5 days of a control dataset, 89% within 10). Using a 29-year time series, changes in the onset, end, and duration of snowmelt are examined. While nearly the entirety of HMA has experienced an earlier end to the snowmelt season, large regions have seen a later start to it. The duration of the snowmelt period has also decreased in almost all of HMA, indicating that the snowmelt season is generally shortening and ending earlier.
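The newly developed algorithm itself is not reproduced here; purely as an illustration of the general idea behind PM melt detection, the following hypothetical sketch flags the melt season from diurnal amplitude variations (DAV) of brightness temperature, with an assumed threshold.

```python
import numpy as np

def melt_season_bounds(doy, tb_ascending, tb_descending, dav_threshold=10.0):
    """Generic melt-onset/end detector from passive-microwave diurnal
    amplitude variations, DAV = |ascending - descending| Tb. High DAV
    indicates daytime melt / nighttime refreeze cycling. The threshold
    and the approach are illustrative; the thesis' algorithm may differ."""
    dav = np.abs(np.asarray(tb_ascending) - np.asarray(tb_descending))
    melting_days = np.asarray(doy)[dav > dav_threshold]
    if melting_days.size == 0:
        return None
    return melting_days.min(), melting_days.max()  # (onset, end) as day-of-year

# Hypothetical brightness temperatures for four days:
print(melt_season_bounds([50, 51, 52, 53],
                         [260.0, 262.0, 268.0, 266.0],
                         [255.0, 250.0, 252.0, 251.0]))  # (51, 53)
```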
By examining shifts in both the spatio-temporal distribution of SWE and the timing of the snowmelt season across HMA, we provide a detailed accounting of changes in HMA's snow resources. The overall trend in HMA is towards less SWE storage and a shorter snowmelt season. However, long-term and regional trends conceal distinct seasonal, temporal, and spatial heterogeneity, indicating that changes in snow resources are strongly controlled by local climate and topography, and that inter-annual variability plays a significant role in HMA's snow regime.
Fault planes of large earthquakes incorporate inhomogeneous structures. This can be observed in teleseismic studies through the spatial distribution of slip and seismic moment release caused by the mainshock. Both parameters are often concentrated in patches on the fault plane with much higher values of slip and moment release than in adjacent areas. These patches are called asperities and evidently have a strong influence on the propagation of the mainshock rupture. The conditions and properties of the structures in the fault plane area that are responsible for the evolution of such asperities, as well as their significance for the damage distribution of future earthquakes, are still not well understood and are the subject of current geo-scientific studies. In the presented thesis, asperity structures are identified on the fault plane of the Mw=8.0 Antofagasta earthquake in northern Chile, which occurred on 30 July 1995. It was a thrust-type event in the seismogenic zone between the subducting Pacific Nazca plate and the overriding South American plate. In a cooperation between the German Task Force for Earthquakes and the CINCA'95 project, a network of up to 44 seismic stations was set up to record the aftershock sequence. The seaward extension of the network with 9 OBH stations significantly increased the precision of the hypocenter determinations. The aftershocks were distributed mainly on the fault plane itself, around the city of Antofagasta and the Mejillones Peninsula. The asperity structures were recognized by the spatial variations of local seismological parameters, first of all by the spatial distribution of the seismic b-value on the fault plane, derived from the Gutenberg-Richter magnitude-frequency relation. The correlation of this b-value map with other parameters, such as the mainshock source time function, the isostatic residual gravity anomalies, the distribution of radiated seismic energy of the aftershocks and the vp/vs ratios from a local earthquake tomography study, provided insights into the composition of the fault zone and the processes generating asperities. The investigation of 295 aftershock focal mechanism solutions supported the resulting fault plane structure and indicated a similar 3D stress state throughout the area of the Antofagasta fault plane.
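The b-value in the map described above follows from the Gutenberg-Richter relation log10 N(M) = a - b*M. Below is a minimal sketch of the standard Aki-Utsu maximum-likelihood estimator; the spatial binning across the fault plane is omitted, and the sample magnitudes are hypothetical.

```python
import numpy as np

def b_value_mle(magnitudes, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood estimate of the Gutenberg-Richter
    b-value for magnitudes at or above the completeness magnitude mc;
    dm corrects for magnitude binning."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Hypothetical aftershock magnitudes:
print(b_value_mle([2.1, 2.4, 2.2, 3.0, 2.6, 2.3, 2.8], mc=2.0))  # ~0.81
```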
Implementation of a plasmodesmata gatekeeper system, and its effect on intercellular transport (2016)
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify the underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and accumulates during the discovery & development process: before any drug is tested in humans, detailed knowledge about its PK in different animal species has to be collected. This drug-specific knowledge, together with general knowledge about the species' physiology, is exploited in mechanistic, physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic, physiologically based models aim to incorporate the relevant and known physiological processes that contribute to the overall process of interest. In comparison to data-driven models, they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, and reliable parameter estimation becomes more difficult or partly impossible. As a consequence, the integration of powerful statistical estimation approaches such as NLME modeling, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used to confirm a model instead of informing and building it.
A further obstacle to an integrated approach is that the details of the NLME methodology are not readily accessible, so that these approaches cannot easily be adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. The available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
In the present thesis, the mathematical methodology of NLME modeling is therefore systematically described and complemented into a comprehensive account, comprising the common theme from the underlying ideas and motivation to the final parameter estimation. New insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated, and similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm for determining the estimates of an NLME model is described.
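For orientation, a schematic of the generic EM iteration for an NLME model is given below; the notation is a common textbook form and an assumption here, not the thesis' exact formulation.

```latex
% Schematic EM iteration for an NLME model with latent individual
% random effects b_i and population parameters psi = (beta, Omega, Sigma).
\begin{align*}
  y_{ij} &= f(\theta_i, t_{ij}) + \varepsilon_{ij}, \qquad
            \theta_i = h(\beta, b_i), \quad
            b_i \sim \mathcal{N}(0,\Omega), \quad
            \varepsilon_{ij} \sim \mathcal{N}(0,\Sigma), \\[2pt]
  \text{E-step:} \quad
  Q\bigl(\psi \mid \psi^{(k)}\bigr)
         &= \mathbb{E}_{\,b \mid y;\,\psi^{(k)}}
            \bigl[\log p(y, b;\, \psi)\bigr], \\[2pt]
  \text{M-step:} \quad
  \psi^{(k+1)} &= \arg\max_{\psi}\; Q\bigl(\psi \mid \psi^{(k)}\bigr).
\end{align*}
```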
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. The lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of the parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, can be incorporated to further mechanistically drive the model development. Finally, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations in all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data for muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby allows the concentrations in the interstitial fluid of tissues to be approximated. Because the comparison of human in vivo tissue PK with PBPK predictions is not yet established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as the basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of Quantitative Psycholinguistics, analysing repeated eye movement data. Our approach gives new insights into the interpretation of these experiments and the processes behind them.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material belongs to the half-metals and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a specific density of states: majority electrons form a metal-like structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material well suited as a model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs, employing two excitation regimes with different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy towards the size of the minority gap, we explored the effect of the gap on the demagnetization dynamics. In this work, an optical parametric amplifier (OPA) was used for the first time to generate the laser irradiation in the long-wavelength regime; it was tested at the FEMTOSPEX beamline of the BESSY II electron storage ring. With this new technique we measured the wavelength-dependent demagnetization dynamics and found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We associate this result with the existence of the energy gap for minority electrons and explain it by Elliott-Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness, namely the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics. We used photon energies fixed at the L3 edges of the corresponding elements with circular polarization; the appropriate incidence angle was estimated from static measurements. Using this probe method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.